Hao Peng
Cited by
Classifying relations via long short term memory networks along shortest dependency paths
Y Xu, L Mou, G Li, Y Chen, H Peng, Z Jin
Proceedings of the 2015 conference on empirical methods in natural language …, 2015
A convolutional attention network for extreme summarization of source code
M Allamanis, H Peng, C Sutton
International conference on machine learning, 2091-2100, 2016
Random feature attention
H Peng, N Pappas, D Yogatama, R Schwartz, NA Smith, L Kong
arXiv preprint arXiv:2103.02143, 2021
Complexity-based prompting for multi-step reasoning
Y Fu, H Peng, A Sabharwal, P Clark, T Khot
The Eleventh International Conference on Learning Representations, 2022
Contextualized perturbation for textual adversarial attack
D Li, Y Zhang, H Peng, L Chen, C Brockett, MT Sun, B Dolan
arXiv preprint arXiv:2009.07502, 2020
Building program vector representations for deep learning
H Peng, L Mou, G Li, Y Liu, L Zhang, Z Jin
Knowledge Science, Engineering and Management: 8th International Conference …, 2015
Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation
J Kasai, N Pappas, H Peng, J Cross, NA Smith
arXiv preprint arXiv:2006.10369, 2020
Discriminative neural sentence modeling by tree-based convolution
L Mou, H Peng, G Li, Y Xu, L Zhang, Z Jin
arXiv preprint arXiv:1504.01106, 2015
Deep multitask learning for semantic dependency parsing
H Peng, S Thomson, NA Smith
arXiv preprint arXiv:1704.06855, 2017
Specializing smaller language models towards multi-step reasoning
Y Fu, H Peng, L Ou, A Sabharwal, T Khot
International Conference on Machine Learning, 10421-10430, 2023
Improving language model negotiation with self-play and in-context learning from ai feedback
Y Fu, H Peng, T Khot, M Lapata
arXiv preprint arXiv:2305.10142, 2023
Classifying relations via long short term memory networks along shortest dependency path
X Yan, L Mou, G Li, Y Chen, H Peng, Z Jin
arXiv preprint arXiv:1508.03720, 2015
Tailor: Generating and perturbing text with semantic controls
A Ross, T Wu, H Peng, ME Peters, M Gardner
arXiv preprint arXiv:2107.07150, 2021
Learning joint semantic parsers from disjoint data
H Peng, S Thomson, S Swayamdipta, NA Smith
arXiv preprint arXiv:1804.05990, 2018
Text generation with exemplar-based adaptive decoding
H Peng, AP Parikh, M Faruqui, B Dhingra, D Das
arXiv preprint arXiv:1904.04428, 2019
Lm-infinite: Simple on-the-fly length generalization for large language models
C Han, Q Wang, W Xiong, Y Chen, H Ji, S Wang
arXiv preprint arXiv:2308.16137, 2023
Mint: Evaluating llms in multi-turn interaction with tools and language feedback
X Wang, Z Wang, J Liu, Y Chen, L Yuan, H Peng, H Ji
arXiv preprint arXiv:2309.10691, 2023
Finetuning pretrained transformers into rnns
J Kasai, H Peng, Y Zhang, D Yogatama, G Ilharco, N Pappas, Y Mao, ...
arXiv preprint arXiv:2103.13076, 2021
How does gpt obtain its ability? tracing emergent abilities of language models to their sources
Y Fu, H Peng, T Khot
Yao Fu’s Notion, 2022
Rational recurrences
H Peng, R Schwartz, S Thomson, NA Smith
arXiv preprint arXiv:1808.09357, 2018