Paul Michel
Research Scientist, DeepMind
Verified email at deepmind.com
Title · Cited by · Year
Are Sixteen Heads Really Better than One?
P Michel, O Levy, G Neubig
NeurIPS 2019, 2019
Cited by 907 · 2019
DyNet: The Dynamic Neural Network Toolkit
G Neubig, C Dyer, Y Goldberg, A Matthews, W Ammar, A Anastasopoulos, ...
arXiv preprint arXiv:1701.03980, 2017
Cited by 439* · 2017
Gemini: a family of highly capable multimodal models
Gemini Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 348 · 2023
Weight Poisoning Attacks on Pre-trained Models
K Kurita, P Michel, G Neubig
ACL 2020, 2020
Cited by 313 · 2020
MTNT: A Testbed for Machine Translation of Noisy Text
P Michel, G Neubig
EMNLP 2018, 2018
Cited by 136 · 2018
On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models
P Michel, X Li, G Neubig, JM Pino
NAACL 2019, 2019
Cited by 132 · 2019
compare-mt: A Tool for Holistic Comparison of Language Generation Systems
G Neubig, ZY Dou, J Hu, P Michel, D Pruthi, X Wang
NAACL 2019 Demo, 2019
Cited by 124 · 2019
Extreme Adaptation for Personalized Neural Machine Translation
P Michel, G Neubig
ACL 2018, 2018
Cited by 107 · 2018
Findings of the first shared task on machine translation robustness
X Li, P Michel, A Anastasopoulos, Y Belinkov, N Durrani, O Firat, P Koehn, ...
WMT 2019, 2019
Cited by 64 · 2019
Examining and Combating Spurious Features under Distribution Shift
C Zhou, X Ma, P Michel, G Neubig
ICML 2021, 2021
Cited by 58 · 2021
Optimizing data usage via differentiable rewards
X Wang, H Pham, P Michel, A Anastasopoulos, J Carbonell, G Neubig
International Conference on Machine Learning, 9983-9995, 2020
Cited by 53 · 2020
Modeling the Second Player in Distributionally Robust Optimization
P Michel, T Hashimoto, G Neubig
ICLR 2021, 2021
Cited by 29 · 2021
Findings of the WMT 2020 shared task on machine translation robustness
L Specia, Z Li, J Pino, V Chaudhary, F Guzmán, G Neubig, N Durrani, ...
Proceedings of the Fifth Conference on Machine Translation, 76-91, 2020
Cited by 28 · 2020
Blind phoneme segmentation with temporal prediction errors
P Michel, O Räsänen, R Thiolliere, E Dupoux
ACL SRW 2017, 2016
Cited by 27* · 2016
Should we be pre-training? An argument for end-task aware training as an alternative
LM Dery, P Michel, A Talwalkar, G Neubig
ICLR 2022, 2021
Cited by 24 · 2021
Distributionally Robust Models with Parametric Likelihood Ratios
P Michel, T Hashimoto, G Neubig
ICLR 2022, 2022
Cited by 17 · 2022
Emergent communication: Generalization and overfitting in Lewis games
M Rita, C Tallec, P Michel, JB Grill, O Pietquin, E Dupoux, F Strub
Advances in neural information processing systems 35, 1389-1404, 2022
Cited by 16 · 2022
Does the Geometry of Word Embeddings Help Document Classification? A Case Study on Persistent Homology Based Representations
P Michel, A Ravichander, S Rijhwani
Proceedings of the 2nd Workshop on Representation Learning for NLP, 2017
Cited by 13 · 2017
AANG: Automating auxiliary learning
LM Dery, P Michel, M Khodak, G Neubig, A Talwalkar
arXiv preprint arXiv:2205.14082, 2022
Cited by 8 · 2022
Balancing average and worst-case accuracy in multitask learning
P Michel, S Ruder, D Yogatama
arXiv preprint arXiv:2110.05838, 2021
Cited by 6 · 2021
Articles 1–20