BLOOM: A 176B-parameter open-access multilingual language model. BigScience Workshop, TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, ... JMLR 2023. Cited by 1503* (2022).
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ... TMLR 2023. Cited by 1013 (2022).
StarCoder: May the source be with you! R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ... TMLR 2023. Cited by 689* (2023).
Crosslingual generalization through multitask finetuning. N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ... ACL 2023. Cited by 539 (2022).
A framework for few-shot language model evaluation. L Gao, J Tow, S Biderman, S Black, A DiPofi, C Foster, L Golding, J Hsu, ... GitHub, 2021. Cited by 502* (2021).
MTEB: Massive text embedding benchmark. N Muennighoff, N Tazi, L Magne, N Reimers. EACL 2023. Cited by 305 (2022).
C-Pack: Packaged resources to advance general Chinese embedding. S Xiao, Z Liu, P Zhang, N Muennighoff. SIGIR 2024. Cited by 199 (2023).
SantaCoder: Don't reach for the stars! LB Allal, R Li, D Kocetkov, C Mou, C Akiki, CM Ferrandis, N Muennighoff, ... ICLR 2023 DL4C Workshop (Best Paper Award). Cited by 185* (2023).
SGPT: GPT sentence embeddings for semantic search. N Muennighoff. arXiv, 2022. Cited by 160 (2022).
Scaling Data-Constrained Language Models. N Muennighoff, AM Rush, B Barak, TL Scao, A Piktus, N Tazi, S Pyysalo, ... NeurIPS 2023 (Oral, Outstanding Paper Runner-Up Award). Cited by 140 (2023).
KTO: Model alignment as prospect theoretic optimization. K Ethayarajh, W Xu, N Muennighoff, D Jurafsky, D Kiela. ICML 2024 (Spotlight). Cited by 117 (2024).
OctoPack: Instruction tuning code large language models. N Muennighoff, Q Liu, A Zebaze, Q Zheng, B Hui, TY Zhuo, S Singh, ... ICLR 2024 (Spotlight); NeurIPS 2023 Instruction Workshop. Cited by 105 (2023).
OLMo: Accelerating the science of language models. D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ... ACL 2024 (Best Theme Paper Award). Cited by 96* (2024).
What Language Model to Train if You Have One Million GPU Hours? TL Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, S Biderman, ... EMNLP 2022 Findings. Cited by 91 (2022).
StarCoder 2 and The Stack v2: The next generation. A Lozhkov, R Li, LB Allal, F Cassano, J Lamy-Poirier, N Tazi, A Tang, ... arXiv:2402.19173, 2024. Cited by 77 (2024).
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ... ACL 2024 (Best Resource Paper Award). Cited by 76* (2024).
NL-Augmenter: A framework for task-sensitive natural language augmentation. KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ... NEJLT 2023. Cited by 71 (2021).
Vilio: State-of-the-art Visio-Linguistic models applied to hateful memes. N Muennighoff. NeurIPS 2020 Competitions. Cited by 66 (2020).
The hateful memes challenge: Competition report. D Kiela, H Firooz, A Mohan, V Goswami, A Singh, CA Fitzpatrick, P Bull, ... NeurIPS 2020 Competitions. Cited by 64 (2021).
Aya model: An instruction finetuned open-access multilingual language model. A Üstün, V Aryabumi, ZX Yong, WY Ko, D D'souza, G Onilude, N Bhandari, ... ACL 2024 (Best Paper Award). Cited by 47 (2024).