Y. Dong, Z. Li, M. Rezagholizadeh, J. C. K. Cheung. "EditNTS: A neural programmer-interpreter model for sentence simplification through explicit editing." Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019. Cited by 193.
M. Valipour, M. Rezagholizadeh, I. Kobyzev, A. Ghodsi. "DyLoRA: Parameter efficient tuning of pre-trained models using dynamic search-free low-rank adaptation." arXiv preprint arXiv:2210.07558, 2022. Cited by 162.
P. Passban, Y. Wu, M. Rezagholizadeh, Q. Liu. "ALP-KD: Attention-based layer projection for knowledge distillation." Proceedings of the AAAI Conference on Artificial Intelligence 35 (15), 13657–…, 2021. Cited by 129.
H. Arbab, B. Jazi, M. Rezagholizadeh. "A computer tracking system of solar dish with two-axis degree freedoms based on picture processing of bar shadow." Renewable Energy 34 (4), 1114–1118, 2009. Cited by 119.
A. Edalati, M. Tahaei, I. Kobyzev, V. P. Nia, J. J. Clark, M. Rezagholizadeh. "KronA: Parameter efficient tuning with Kronecker adapter." arXiv preprint arXiv:2212.10650, 2022. Cited by 111.
X. Zhang, N. Thakur, O. Ogundepo, E. Kamalloo, D. Alfonso-Hermelo, X. Li, et al. "MIRACL: A Multilingual Retrieval Dataset Covering 18 Diverse Languages." Transactions of the Association for Computational Linguistics 11, 1114–1131, 2023. Cited by 110*.
G. Prato, E. Charlaix, M. Rezagholizadeh. "Fully quantized transformer for machine translation." arXiv preprint arXiv:1910.10485, 2019. Cited by 108*.
M. A. Haidar, M. Rezagholizadeh. "TextKD-GAN: Text generation using knowledge distillation and generative adversarial networks." Canadian Conference on Artificial Intelligence, 107–118, 2019. Cited by 82.
A. Jafari, M. Rezagholizadeh, P. Sharma, A. Ghodsi. "Annealing knowledge distillation." arXiv preprint arXiv:2104.07163, 2021. Cited by 75.
M. Rezagholizadeh, M. A. Haidar. "Reg-GAN: Semi-supervised learning based on generative adversarial networks for regression." 2018. Cited by 53.
M. Rezagholizadeh, M. A. Haidar, A. Do-Omri, A. Rashid. "Systems and methods for multilingual text generation field." US Patent 11,151,334, 2021. Cited by 47*.
A. Ghaddar, P. Langlais, A. Rashid, M. Rezagholizadeh. "Context-aware adversarial training for name regularity bias in named entity recognition." Transactions of the Association for Computational Linguistics 9, 586–604, 2021. Cited by 44.
M. Tahaei, E. Charlaix, V. Nia, A. Ghodsi, M. Rezagholizadeh. "KroneckerBERT: Significant compression of pre-trained language models through Kronecker decomposition and knowledge distillation." Proceedings of the 2022 Conference of the North American Chapter of the …, 2022. Cited by 40*.
M. Rezagholizadeh, M. A. Haidar, D. Wu. "Semi-supervised regression with generative adversarial networks." US Patent 11,003,995, 2021. Cited by 40.
A. Rashid, V. Lioutas, M. Rezagholizadeh. "MATE-KD: Masked adversarial text, a companion to knowledge distillation." arXiv preprint arXiv:2105.05912, 2021. Cited by 38.
A. Edalati, M. Tahaei, A. Rashid, V. P. Nia, J. J. Clark, M. Rezagholizadeh. "Kronecker decomposition for GPT compression." arXiv preprint arXiv:2110.08152, 2021. Cited by 37.
X. Wang, M. Salmani, P. Omidi, X. Ren, M. Rezagholizadeh, A. Eshaghi. "Beyond the limits: A survey of techniques to extend the context length in large language models." arXiv preprint arXiv:2402.02244, 2024. Cited by 35.
A. Ghaddar, Y. Wu, S. Bagga, A. Rashid, K. Bibi, M. Rezagholizadeh, C. Xing, et al. "Revisiting pre-trained language models and their evaluation for Arabic natural language understanding." arXiv preprint arXiv:2205.10687, 2022. Cited by 35*.
A. Bie, B. Venkitesh, J. Monteiro, M. A. Haidar, M. Rezagholizadeh. "A simplified fully quantized transformer for end-to-end speech recognition." arXiv preprint arXiv:1911.03604, 2019. Cited by 35*.
A. Ghaddar, P. Langlais, M. Rezagholizadeh, A. Rashid. "End-to-end self-debiasing framework for robust NLU training." arXiv preprint arXiv:2109.02071, 2021. Cited by 34.