Raphael Shu
The University of Tokyo
Compressing word embeddings via deep compositional code learning
R Shu, H Nakayama
arXiv preprint arXiv:1711.01068, 2017
Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference
R Shu, J Lee, H Nakayama, K Cho
The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 2020
Generating diverse translations with sentence codes
R Shu, H Nakayama, K Cho
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Generating video description using sequence-to-sequence model with temporal attention
N Laokulrat, S Phan, N Nishida, R Shu, Y Ehara, N Okazaki, Y Miyao, ...
Proceedings of COLING 2016, the 26th International Conference on …, 2016
Iterative refinement in the continuous space for non-autoregressive neural machine translation
J Lee, R Shu, K Cho
arXiv preprint arXiv:2009.07177, 2020
GraphPlan: Story generation by planning with event graph
H Chen, R Shu, H Takamura, H Nakayama
arXiv preprint arXiv:2102.02977, 2021
Improving beam search by removing monotonic constraint for neural machine translation
R Shu, H Nakayama
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Later-stage minimum Bayes-risk decoding for neural machine translation
R Shu, H Nakayama
arXiv preprint arXiv:1704.03169, 2017
Residual stacking of RNNs for neural machine translation
R Shu, A Miura
Proceedings of the 3rd Workshop on Asian Translation (WAT2016), 223-229, 2016
Federated semi-supervised learning with prototypical networks
W Kim, K Park, K Sohn, R Shu, HS Kim
arXiv preprint arXiv:2205.13921, 2022
Dialog2API: Task-oriented dialogue with API description and example programs
R Shu, E Mansimov, T Alkhouli, N Pappas, S Romeo, A Gupta, S Mansour, ...
arXiv preprint arXiv:2212.09946, 2022
Reward optimization for neural machine translation with learned metrics
R Shu, KM Yoo, JW Ha
arXiv preprint arXiv:2104.07541, 2021
An empirical study of adequate vision span for attention-based neural machine translation
R Shu, H Nakayama
arXiv preprint arXiv:1612.06043, 2016
Conversation style transfer using few-shot learning
S Roy, R Shu, N Pappas, E Mansimov, Y Zhang, S Mansour, D Roth
arXiv preprint arXiv:2302.08362, 2023
Intent induction from conversations for task-oriented dialogue track at DSTC 11
J Gung, R Shu, E Moeng, W Rose, S Romeo, Y Benajiba, A Gupta, ...
arXiv preprint arXiv:2304.12982, 2023
User Simulation with Large Language Models for Evaluating Task-Oriented Dialogue
S Davidson, S Romeo, R Shu, J Gung, A Gupta, S Mansour, Y Zhang
arXiv preprint arXiv:2309.13233, 2023
DiactTOD: Learning generalizable latent dialogue acts for controllable task-oriented dialogue systems
Q Wu, J Gung, R Shu, Y Zhang
arXiv preprint arXiv:2308.00878, 2023
Real-time Neural-based Input Method
J Yao, R Shu, X Li, K Ohtsuki, H Nakayama
arXiv preprint arXiv:1810.09309, 2018
Pre-training intent-aware encoders for zero-and few-shot intent classification
M Sung, J Gung, E Mansimov, N Pappas, R Shu, S Romeo, Y Zhang, ...
arXiv preprint arXiv:2305.14827, 2023
Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks
R Yang, R Shu, H Nakayama
IEICE TRANSACTIONS on Information and Systems 105 (9), 1537-1545, 2022