Hao Peng
Classifying relations via long short term memory networks along shortest dependency paths
Y Xu, L Mou, G Li, Y Chen, H Peng, Z Jin
Proceedings of the 2015 conference on empirical methods in natural language …, 2015
Cited by 793, 2015
A convolutional attention network for extreme summarization of source code
M Allamanis, H Peng, C Sutton
International conference on machine learning, 2091-2100, 2016
Cited by 742, 2016
Random feature attention
H Peng, N Pappas, D Yogatama, R Schwartz, NA Smith, L Kong
arXiv preprint arXiv:2103.02143, 2021
Cited by 348, 2021
Complexity-based prompting for multi-step reasoning
Y Fu, H Peng, A Sabharwal, P Clark, T Khot
The Eleventh International Conference on Learning Representations, 2022
Cited by 314, 2022
Contextualized perturbation for textual adversarial attack
D Li, Y Zhang, H Peng, L Chen, C Brockett, MT Sun, B Dolan
arXiv preprint arXiv:2009.07502, 2020
Cited by 244, 2020
Specializing smaller language models towards multi-step reasoning
Y Fu, H Peng, L Ou, A Sabharwal, T Khot
International Conference on Machine Learning, 10421-10430, 2023
Cited by 181, 2023
Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation
J Kasai, N Pappas, H Peng, J Cross, NA Smith
arXiv preprint arXiv:2006.10369, 2020
Cited by 175, 2020
Building program vector representations for deep learning
H Peng, L Mou, G Li, Y Liu, L Zhang, Z Jin
Knowledge Science, Engineering and Management: 8th International Conference …, 2015
Cited by 173, 2015
Discriminative neural sentence modeling by tree-based convolution
L Mou, H Peng, G Li, Y Xu, L Zhang, Z Jin
arXiv preprint arXiv:1504.01106, 2015
Cited by 159, 2015
Deep multitask learning for semantic dependency parsing
H Peng, S Thomson, NA Smith
arXiv preprint arXiv:1704.06855, 2017
Cited by 150, 2017
Improving language model negotiation with self-play and in-context learning from AI feedback
Y Fu, H Peng, T Khot, M Lapata
arXiv preprint arXiv:2305.10142, 2023
Cited by 121, 2023
LM-Infinite: Simple on-the-fly length generalization for large language models
C Han, Q Wang, W Xiong, Y Chen, H Ji, S Wang
arXiv preprint arXiv:2308.16137, 2023
Cited by 95, 2023
MINT: Evaluating LLMs in multi-turn interaction with tools and language feedback
X Wang, Z Wang, J Liu, Y Chen, L Yuan, H Peng, H Ji
arXiv preprint arXiv:2309.10691, 2023
Cited by 86, 2023
Tailor: Generating and perturbing text with semantic controls
A Ross, T Wu, H Peng, ME Peters, M Gardner
arXiv preprint arXiv:2107.07150, 2021
Cited by 81, 2021
Classifying relations via long short term memory networks along shortest dependency path
X Yan, L Mou, G Li, Y Chen, H Peng, Z Jin
arXiv preprint arXiv:1508.03720, 2015
Cited by 75, 2015
Learning joint semantic parsers from disjoint data
H Peng, S Thomson, S Swayamdipta, NA Smith
arXiv preprint arXiv:1804.05990, 2018
Cited by 71, 2018
Text generation with exemplar-based adaptive decoding
H Peng, AP Parikh, M Faruqui, B Dhingra, D Das
arXiv preprint arXiv:1904.04428, 2019
Cited by 67, 2019
Executable code actions elicit better LLM agents
X Wang, Y Chen, L Yuan, Y Zhang, Y Li, H Peng, H Ji
arXiv preprint arXiv:2402.01030, 2024
Cited by 61, 2024
How does GPT obtain its ability? Tracing emergent abilities of language models to their sources
Y Fu, H Peng, T Khot
Yao Fu’s Notion, 2022
Cited by 56, 2022
Data engineering for scaling language models to 128k context
Y Fu, R Panda, X Niu, X Yue, H Hajishirzi, Y Kim, H Peng
arXiv preprint arXiv:2402.10171, 2024
Cited by 52, 2024
Articles 1–20