On the sentence embeddings from pre-trained language models. B Li, H Zhou, J He, M Wang, Y Yang, L Li. arXiv preprint arXiv:2011.05864, 2020. | 688 | 2020 |
Deep semantic role labeling with self-attention. Z Tan, M Wang, J Xie, Y Chen, X Shi. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. | 399 | 2018 |
Contrastive learning for many-to-many multilingual neural machine translation. X Pan, M Wang, L Wu, L Li. arXiv preprint arXiv:2105.09501, 2021. | 191 | 2021 |
Towards Making the Most of BERT in Neural Machine Translation. J Yang, M Wang, H Zhou, C Zhao, W Zhang, Y Yu, L Li. | 175* | 2020 |
Glancing transformer for non-autoregressive neural machine translation. L Qian, H Zhou, Y Bao, M Wang, L Qiu, W Zhang, Y Yu, L Li. arXiv preprint arXiv:2008.07905, 2020. | 153 | 2020 |
Encoding source language with convolutional neural network for machine translation. F Meng, Z Lu, M Wang, H Li, W Jiang, Q Liu. arXiv preprint arXiv:1503.01838, 2015. | 148 | 2015 |
Pre-training multilingual neural machine translation by leveraging alignment information. Z Lin, X Pan, M Wang, X Qiu, J Feng, H Zhou, L Li. arXiv preprint arXiv:2010.03142, 2020. | 130 | 2020 |
A hierarchy-to-sequence attentional neural machine translation model. J Su, J Zeng, D Xiong, Y Liu, M Wang, J Xie. IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (3), 623-632, 2018. | 101 | 2018 |
Syntax-based deep matching of short texts. M Wang, Z Lu, H Li, Q Liu. arXiv preprint arXiv:1503.02427, 2015. | 101 | 2015 |
Imitation learning for non-autoregressive neural machine translation. B Wei, M Wang, H Zhou, J Lin, J Xie, X Sun. arXiv preprint arXiv:1906.02041, 2019. | 99 | 2019 |
STEMM: Self-learning with speech-text manifold mixup for speech translation. Q Fang, R Ye, L Li, Y Feng, M Wang. arXiv preprint arXiv:2203.10426, 2022. | 93 | 2022 |
Learning language specific sub-network for multilingual machine translation. Z Lin, L Wu, M Wang, L Li. arXiv preprint arXiv:2105.09259, 2021. | 88 | 2021 |
Memory-enhanced decoder for neural machine translation. M Wang, Z Lu, H Li, Q Liu. arXiv preprint arXiv:1606.02003, 2016. | 80 | 2016 |
Cross-modal contrastive learning for speech translation. R Ye, M Wang, L Li. arXiv preprint arXiv:2205.02444, 2022. | 77 | 2022 |
Learning shared semantic space for speech-to-text translation. C Han, M Wang, H Ji, L Li. arXiv preprint arXiv:2105.03095, 2021. | 77 | 2021 |
End-to-end speech translation via cross-modal progressive training. R Ye, M Wang, L Li. arXiv preprint arXiv:2104.10380, 2021. | 76 | 2021 |
Rethinking document-level neural machine translation. Z Sun, M Wang, H Zhou, C Zhao, S Huang, J Chen, L Li. arXiv preprint arXiv:2010.08961, 2020. | 67 | 2020 |
LightSeq: A high performance inference library for transformers. X Wang, Y Xiong, Y Wei, M Wang, L Li. arXiv preprint arXiv:2010.13887, 2020. | 65 | 2020 |
Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation. Q Dong, R Ye, M Wang, H Zhou, S Xu, B Xu, L Li. Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12749 …, 2021. | 61 | 2021 |
Deep neural machine translation with linear associative unit. M Wang, Z Lu, J Zhou, Q Liu. arXiv preprint arXiv:1705.00861, 2017. | 54 | 2017 |