Ellie Pavlick
Verified email at brown.edu · Homepage
Title · Cited by · Year
BERT rediscovers the classical NLP pipeline
I Tenney, D Das, E Pavlick
arXiv preprint arXiv:1905.05950, 2019
Cited by 1747 · 2019
Bloom: A 176b-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1620 · 2023
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
RT McCoy, E Pavlick, T Linzen
arXiv preprint arXiv:1902.01007, 2019
Cited by 1285 · 2019
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 1169 · 2022
What do you learn from context? Probing for sentence structure in contextualized word representations
I Tenney, P Xia, B Chen, A Wang, A Poliak, RT McCoy, N Kim, ...
arXiv preprint arXiv:1905.06316, 2019
Cited by 924 · 2019
Optimizing statistical machine translation for text simplification
W Xu, C Napoles, E Pavlick, Q Chen, C Callison-Burch
Transactions of the Association for Computational Linguistics 4, 401-415, 2016
Cited by 681 · 2016
Openwebtext corpus
A Gokaslan, V Cohen, E Pavlick, S Tellex
Cited by 485 · 2019
PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification
E Pavlick, J Ganitkevitch, P Rastogi, B Van Durme, C Callison-Burch
Volume 2: Short Papers, 425, 2015
Cited by 395 · 2015
Do prompt-based models really understand the meaning of their prompts?
A Webson, E Pavlick
arXiv preprint arXiv:2109.01247, 2021
Cited by 365 · 2021
Inherent disagreements in human textual inferences
E Pavlick, T Kwiatkowski
Transactions of the Association for Computational Linguistics 7, 677-694, 2019
Cited by 282 · 2019
What happens to BERT embeddings during fine-tuning?
A Merchant, E Rahimtoroghi, E Pavlick, I Tenney
arXiv preprint arXiv:2004.14448, 2020
Cited by 204 · 2020
An empirical analysis of formality in online communication
E Pavlick, J Tetreault
Transactions of the Association for Computational Linguistics 4, 61-74, 2016
Cited by 170 · 2016
Collecting diverse natural language inference problems for sentence representation evaluation
A Poliak, A Haldar, R Rudinger, JE Hu, E Pavlick, AS White, B Van Durme
arXiv preprint arXiv:1804.08207, 2018
Cited by 166 · 2018
Mapping language models to grounded conceptual spaces
R Patel, E Pavlick
International Conference on Learning Representations, 2022
Cited by 138 · 2022
Can you tell me how to get past Sesame Street? Sentence-level pretraining beyond language modeling
A Wang, J Hula, P Xia, R Pappagari, RT McCoy, R Patel, N Kim, I Tenney, ...
arXiv preprint arXiv:1812.10860, 2018
Cited by 124 · 2018
Simple PPDB: A Paraphrase Database for Simplification
E Pavlick, C Callison-Burch
Cited by 121 · 2016
Measuring and reducing gendered correlations in pre-trained models
K Webster, X Wang, I Tenney, A Beutel, E Pitler, E Pavlick, J Chen, E Chi, ...
arXiv preprint arXiv:2010.06032, 2020
Cited by 118 · 2020
Can language models encode perceptual structure without grounding? A case study in color
M Abdou, A Kulmizev, D Hershcovich, S Frank, E Pavlick, A Søgaard
arXiv preprint arXiv:2109.06129, 2021
Cited by 113 · 2021
Probing what different NLP tasks teach machines about function word comprehension
N Kim, R Patel, A Poliak, A Wang, P Xia, RT McCoy, I Tenney, A Ross, ...
arXiv preprint arXiv:1904.11544, 2019
Cited by 106 · 2019