Andrew Dai
PaLM: Scaling language modeling with Pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
Cited by: 3759
Generating sentences from a continuous space
SR Bowman, L Vilnis, O Vinyals, AM Dai, R Jozefowicz, S Bengio
Proceedings of the 20th SIGNLL Conference on Computational Natural Language …, 2016
Cited by: 2722
Natural Questions: a benchmark for question answering research
T Kwiatkowski, J Palomaki, O Redfield, M Collins, A Parikh, C Alberti, ...
Transactions of the Association for Computational Linguistics 7, 453-466, 2019
Cited by: 2296
Finetuned language models are zero-shot learners
J Wei, M Bosma, VY Zhao, K Guu, AW Yu, B Lester, N Du, AM Dai, QV Le
arXiv preprint arXiv:2109.01652, 2021
Cited by: 2230
Scalable and accurate deep learning with electronic health records
A Rajkomar, E Oren, K Chen, AM Dai, N Hajaj, M Hardt, PJ Liu, X Liu, ...
NPJ digital medicine 1 (1), 1-10, 2018
Cited by: 2118
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Cited by: 1792
Semi-supervised sequence learning
AM Dai, QV Le
Advances in neural information processing systems 28, 2015
Cited by: 1578
HyperNetworks
D Ha, A Dai, QV Le
Proceedings of the International Conference on Learning Representations, 2017
Cited by: 1577
Adversarial Training Methods for Semi-Supervised Text Classification
T Miyato, AM Dai, I Goodfellow
Proceedings of the International Conference on Learning Representations, 2017
Cited by: 1261
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by: 951
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by: 813
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by: 782
MaskGAN: Better text generation via filling in the ______
W Fedus, I Goodfellow, AM Dai
arXiv preprint arXiv:1801.07736, 2018
Cited by: 609
Document embedding with paragraph vectors
AM Dai, C Olah, QV Le
NIPS 2014 Deep learning workshop, 2015
Cited by: 561
GLaM: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
International Conference on Machine Learning, 5547-5569, 2022
Cited by: 378
Many paths to equilibrium: GANs do not need to decrease a divergence at every step
W Fedus, M Rosca, B Lakshminarayanan, AM Dai, S Mohamed, ...
arXiv preprint arXiv:1710.08446, 2017
Cited by: 250
Who said what: Modeling individual labelers improves classification
M Guan, V Gulshan, A Dai, G Hinton
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
Cited by: 223
Gmail smart compose: Real-time assisted writing
MX Chen, BN Lee, G Bansal, Y Cao, S Zhang, J Lu, J Tsay, Y Wang, ...
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by: 221
Learning longer-term dependencies in RNNs with auxiliary losses
T Trinh, A Dai, T Luong, Q Le
International Conference on Machine Learning, 4965-4974, 2018
Cited by: 219
Learning the graphical structure of electronic health records with graph convolutional transformer
E Choi, Z Xu, Y Li, M Dusenberry, G Flores, E Xue, A Dai
Proceedings of the AAAI conference on artificial intelligence 34 (01), 606-613, 2020
Cited by: 210
Articles 1–20