Shayne Longpre
MIT, Stanford, Apple
Verified email at cs.stanford.edu
Title · Cited by · Year
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Cited by 2927 · 2024
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1622 · 2023
The Flan Collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
ICML 2023, 2023
Cited by 606 · 2023
Entity-based knowledge conflicts in question answering
S Longpre, K Perisetla, A Chen, N Ramesh, C DuBois, S Singh
EMNLP 2021, 2021
Cited by 190 · 2021
Question rewriting for conversational question answering
S Vakulenko, S Longpre, Z Tu, R Anantha
WSDM 2021, 355-363, 2021
Cited by 172 · 2021
The BigScience ROOTS corpus: A 1.6TB composite multilingual dataset
H Laurençon, L Saulnier, T Wang, C Akiki, A Villanova del Moral, ...
Advances in Neural Information Processing Systems 35, 31809-31826, 2022
Cited by 168 · 2022
Open-domain question answering goes conversational via question rewriting
R Anantha, S Vakulenko, Z Tu, S Longpre, S Pulman, S Chappidi
NAACL 2021, 2020
Cited by 163 · 2020
OctoPack: Instruction tuning code large language models
N Muennighoff, Q Liu, A Zebaze, Q Zheng, B Hui, TY Zhuo, S Singh, ...
arXiv preprint arXiv:2308.07124, 2023
Cited by 153 · 2023
MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering
S Longpre, Y Lu, J Daiber
TACL 2021, Vol 9, 2020
Cited by 133 · 2020
Prometheus: Inducing fine-grained evaluation capability in language models
S Kim, J Shin, Y Cho, J Jang, S Longpre, H Lee, S Yun, S Shin, S Kim, ...
The Twelfth International Conference on Learning Representations, 2023
Cited by 119 · 2023
You reap what you sow: On the challenges of bias evaluation under multilingual settings
Z Talat, A Névéol, S Biderman, M Clinciu, M Dey, S Longpre, S Luccioni, ...
Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives …, 2022
Cited by 104 · 2022
A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity
S Longpre, G Yauney, E Reif, K Lee, A Roberts, B Zoph, D Zhou, J Wei, ...
arXiv preprint arXiv:2305.13169, 2023
Cited by 101 · 2023
How Effective is Task-Agnostic Data Augmentation for Pretrained Transformers?
S Longpre, Y Wang, C DuBois
Findings of the Association for Computational Linguistics: EMNLP 2020, 2020
Cited by 101 · 2020
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, ..., E Chi, J Dean, J Devlin, A Roberts, D Zhou, QV Le, J Wei
2022
Cited by 96 · 2022
Aya model: An instruction finetuned open-access multilingual language model
A Üstün, V Aryabumi, ZX Yong, WY Ko, D D'souza, G Onilude, N Bhandari, ...
arXiv preprint arXiv:2402.07827, 2024
Cited by 92 · 2024
The foundation model transparency index
R Bommasani, K Klyman, S Longpre, S Kapoor, N Maslej, B Xiong, ...
arXiv preprint arXiv:2310.12941, 2023
Cited by 78 · 2023
Prometheus 2: An open source language model specialized in evaluating other language models
S Kim, J Suk, S Longpre, BY Lin, J Shin, S Welleck, G Neubig, M Lee, ...
arXiv preprint arXiv:2405.01535, 2024
Cited by 69 · 2024
A survey on data selection for language models
A Albalak, Y Elazar, SM Xie, S Longpre, N Lambert, X Wang, ...
arXiv preprint arXiv:2402.16827, 2024
Cited by 63 · 2024
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
arXiv preprint arXiv:2305.14705, 2023
Cited by 61 · 2023
Scaling instruction-finetuned language models (2022)
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
Cited by 53 · 2022
Articles 1–20