Melvin Johnson
Researcher, Google
Verified email at stanford.edu
Title | Cited by | Year
Google’s neural machine translation system: Bridging the gap between human and machine translation
Y Wu, M Schuster, Z Chen, QV Le, M Norouzi, W Macherey, M Krikun, ...
arXiv preprint arXiv:1609.08144, 2016
Cited by 8972 | 2016
Google’s multilingual neural machine translation system: Enabling zero-shot translation
M Johnson, M Schuster, QV Le, M Krikun, Y Wu, Z Chen, N Thorat, ...
Transactions of the Association for Computational Linguistics 5, 339-351, 2017
Cited by 2365 | 2017
Gemini: A family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 1396 | 2023
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by 1191 | 2023
Leveraging linguistic structure for open domain information extraction
G Angeli, MJJ Premkumar, CD Manning
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
Cited by 939 | 2015
XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation
J Hu, S Ruder, A Siddhant, G Neubig, O Firat, M Johnson
International Conference on Machine Learning, 4411-4421, 2020
Cited by 883 | 2020
Massively multilingual neural machine translation
R Aharoni, M Johnson, O Firat
arXiv preprint arXiv:1903.00089, 2019
Cited by 578 | 2019
The best of both worlds: Combining recent advances in neural machine translation
MX Chen, O Firat, A Bapna, M Johnson, W Macherey, G Foster, L Jones, ...
arXiv preprint arXiv:1804.09849, 2018
Cited by 532 | 2018
Massively multilingual neural machine translation in the wild: Findings and challenges
N Arivazhagan, A Bapna, O Firat, D Lepikhin, M Johnson, M Krikun, ...
arXiv preprint arXiv:1907.05019, 2019
Cited by 402 | 2019
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 311 | 2024
Direct speech-to-speech translation with a sequence-to-sequence model
Y Jia, RJ Weiss, F Biadsy, W Macherey, M Johnson, Z Chen, Y Wu
arXiv preprint arXiv:1904.06037, 2019
Cited by 233 | 2019
Lingvo: a modular and scalable framework for sequence-to-sequence modeling
J Shen, P Nguyen, Y Wu, Z Chen, MX Chen, Y Jia, A Kannan, T Sainath, ...
arXiv preprint arXiv:1902.08295, 2019
Cited by 209 | 2019
Machine learning in automatic speech recognition: A survey
J Padmanabhan, MJ Johnson Premkumar
IETE Technical Review 32 (4), 240-251, 2015
Cited by 205 | 2015
Leveraging weakly supervised data to improve end-to-end speech-to-text translation
Y Jia, M Johnson, W Macherey, RJ Weiss, Y Cao, CC Chiu, N Ari, ...
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 175 | 2019
Small and practical BERT models for sequence labeling
H Tsai, J Riesa, M Johnson, N Arivazhagan, X Li, A Archer
arXiv preprint arXiv:1909.00100, 2019
Cited by 151 | 2019
XTREME-R: Towards more challenging and nuanced multilingual evaluation
S Ruder, N Constant, J Botha, A Siddhant, O Firat, J Fu, P Liu, J Hu, ...
arXiv preprint arXiv:2104.07412, 2021
Cited by 136 | 2021
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 132 | 2023
Rethinking embedding coupling in pre-trained language models
HW Chung, T Fevry, H Tsai, M Johnson, S Ruder
arXiv preprint arXiv:2010.12821, 2020
Cited by 131 | 2020
The missing ingredient in zero-shot neural machine translation
N Arivazhagan, A Bapna, O Firat, R Aharoni, M Johnson, W Macherey
arXiv preprint arXiv:1903.07091, 2019
Cited by 110 | 2019
mSLAM: Massively multilingual joint pre-training for speech and text
A Bapna, C Cherry, Y Zhang, Y Jia, M Johnson, Y Cheng, S Khanuja, ...
arXiv preprint arXiv:2202.01374, 2022
Cited by 108 | 2022
Articles 1–20