Linjun Shou (寿林钧)
Principal Group Applied Scientist Manager, Edge ML, Microsoft China
Verified email at microsoft.com - Homepage
Title · Cited by · Year
CodeBERT: A pre-trained model for programming and natural languages
Z Feng, D Guo, D Tang, N Duan, X Feng, M Gong, L Shou, B Qin, T Liu, ...
arXiv preprint arXiv:2002.08155, 2020
Cited by 2137 · 2020
CodeXGLUE: A machine learning benchmark dataset for code understanding and generation
S Lu, D Guo, S Ren, J Huang, A Svyatkovskiy, A Blanco, C Clement, ...
arXiv preprint arXiv:2102.04664, 2021
Cited by 628 · 2021
XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation
Y Liang, N Duan, Y Gong, N Wu, F Guo, W Qi, M Gong, L Shou, D Jiang, ...
arXiv preprint arXiv:2004.01401, 2020
Cited by 287 · 2020
Unicoder: A universal language encoder by pre-training with multiple cross-lingual tasks
H Huang, Y Liang, N Duan, M Gong, L Shou, D Jiang, M Zhou
arXiv preprint arXiv:1909.00964, 2019
Cited by 224 · 2019
Graph-based reasoning over heterogeneous external knowledge for commonsense question answering
S Lv, D Guo, J Xu, D Tang, N Duan, M Gong, L Shou, D Jiang, G Cao, ...
Proceedings of the AAAI conference on artificial intelligence 34 (05), 8449-8456, 2020
Cited by 196 · 2020
TaskMatrix.AI: Completing tasks by connecting foundation models with millions of APIs
Y Liang, C Wu, T Song, W Wu, Y Xia, Y Liu, Y Ou, S Lu, L Ji, S Mao, ...
Intelligent Computing 3, 0063, 2024
Cited by 127 · 2024
Reinforced multi-teacher selection for knowledge distillation
F Yuan, L Shou, J Pei, W Lin, M Gong, Y Fu, D Jiang
Proceedings of the AAAI Conference on Artificial Intelligence 35 (16), 14284 …, 2021
Cited by 96 · 2021
Model compression with two-stage multi-teacher knowledge distillation for web question answering system
Z Yang, L Shou, M Gong, W Lin, D Jiang
Proceedings of the 13th International Conference on Web Search and Data …, 2020
Cited by 94 · 2020
WhiteningBERT: An easy unsupervised sentence embedding approach
J Huang, D Tang, W Zhong, S Lu, L Shou, M Gong, D Jiang, N Duan
arXiv preprint arXiv:2104.01767, 2021
Cited by 88 · 2021
CoSQA: 20,000+ web queries for code search and question answering
J Huang, D Tang, L Shou, M Gong, K Xu, D Jiang, M Zhou, N Duan
arXiv preprint arXiv:2105.13239, 2021
Cited by 75 · 2021
GLGE: A new general language generation evaluation benchmark
D Liu, Y Yan, Y Gong, W Qi, H Zhang, J Jiao, W Chen, J Fu, L Shou, ...
arXiv preprint arXiv:2011.11928, 2020
Cited by 68 · 2020
Improving readability for automatic speech recognition transcription
J Liao, S Eskimez, L Lu, Y Shi, M Gong, L Shou, H Qu, M Zeng
ACM Transactions on Asian and Low-Resource Language Information Processing …, 2023
Cited by 56 · 2023
Bridging the gap between indexing and retrieval for differentiable search index with query generation
S Zhuang, H Ren, L Shou, J Pei, M Gong, G Zuccon, D Jiang
arXiv preprint arXiv:2206.10128, 2022
Cited by 56 · 2022
LogicalFactChecker: Leveraging logical operations for fact checking with graph module network
W Zhong, D Tang, Z Feng, N Duan, M Zhou, M Gong, L Shou, D Jiang, ...
arXiv preprint arXiv:2004.13659, 2020
Cited by 53 · 2020
Syntax-enhanced pre-trained model
Z Xu, D Guo, D Tang, Q Su, L Shou, M Gong, W Zhong, X Quan, N Duan, ...
arXiv preprint arXiv:2012.14116, 2020
Cited by 47 · 2020
Graph fusion network for text classification
Y Dai, L Shou, M Gong, X Xia, Z Kang, Z Xu, D Jiang
Knowledge-based systems 236, 107659, 2022
Cited by 45 · 2022
Large language models are diverse role-players for summarization evaluation
N Wu, M Gong, L Shou, S Liang, D Jiang
CCF International Conference on Natural Language Processing and Chinese …, 2023
Cited by 37 · 2023
Retrieval enhanced model for commonsense generation
H Wang, Y Liu, C Zhu, L Shou, M Gong, Y Xu, M Zeng
arXiv preprint arXiv:2105.11174, 2021
Cited by 35 · 2021
Negative sampling for contrastive representation learning: A review
L Xu, J Lian, WX Zhao, M Gong, L Shou, D Jiang, X Xie, JR Wen
arXiv preprint arXiv:2206.00212, 2022
Cited by 30 · 2022
Articles 1–20