Talking with ERICA, an autonomous android K Inoue, P Milhorat, D Lala, T Zhao, T Kawahara Proceedings of the 17th Annual Meeting of the Special Interest Group on …, 2016 | 68 | 2016 |
Attentive listening system with backchanneling, response generation and flexible turn-taking D Lala, P Milhorat, K Inoue, M Ishida, K Takanashi, T Kawahara SIGDIAL, 2017 | 56 | 2017 |
Prediction and Generation of Backchannel Form for Attentive Listening Systems T Kawahara, T Yamaguchi, K Inoue, K Takanashi, NG Ward INTERSPEECH, 2890-2894, 2016 | 44 | 2016 |
Evaluation of real-time deep learning turn-taking models for multiple dialogue scenarios D Lala, K Inoue, T Kawahara Proceedings of the 20th ACM International Conference on Multimodal …, 2018 | 29 | 2018 |
Prediction of Turn-taking Using Multitask Learning with Prediction of Backchannels and Fillers K Hara, K Inoue, K Takanashi, T Kawahara INTERSPEECH 162, 364, 2018 | 28 | 2018 |
A conversational dialogue manager for the humanoid robot ERICA P Milhorat, D Lala, K Inoue, T Zhao, M Ishida, K Takanashi, S Nakamura, ... | 27 | 2017 |
Analysis and prediction of morphological patterns of backchannels for attentive listening agents T Yamaguchi, K Inoue, K Yoshino, K Takanashi, NG Ward, T Kawahara Proc. 7th International Workshop on Spoken Dialogue Systems, 1-12, 2016 | 21 | 2016 |
Detection of social signals for recognizing engagement in human-robot interaction D Lala, K Inoue, P Milhorat, T Kawahara arXiv preprint arXiv:1709.10257, 2017 | 20 | 2017 |
Smooth turn-taking by a robot using an online continuous model to generate turn-taking cues D Lala, K Inoue, T Kawahara 2019 International Conference on Multimodal Interaction, 226-234, 2019 | 18 | 2019 |
Latent Character Model for Engagement Recognition Based on Multimodal Behaviors K Inoue, D Lala, K Takanashi, T Kawahara | 16 | 2018 |
Generation of diverse forms of backchannels based on linguistic and prosodic information for attentive listening dialogue systems T Yamaguchi, K Inoue, K Yoshino, K Takanashi, T Kawahara Transactions of the Japanese Society for Artificial Intelligence 31 (4), C-G31_1-10, 2016 | 16 | 2016 |
Expressing reactive emotion based on multimodal emotion recognition for natural conversation in human–robot interaction Y Li, CT Ishi, K Inoue, S Nakamura, T Kawahara Advanced Robotics 33 (20), 1030-1041, 2019 | 15 | 2019 |
Emotion recognition by combining prosody and sentiment analysis for expressing reactive emotion by humanoid robot Y Li, CT Ishi, N Ward, K Inoue, S Nakamura, K Takanashi, T Kawahara 2017 Asia-Pacific Signal and Information Processing Association Annual …, 2017 | 15 | 2017 |
Spoken dialogue system for the autonomous android Erica K Inoue, T Kawahara JSAI SIG Technical Reports, Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD), 75th meeting, 05, 2015 | 15 | 2015 |
An Attentive Listening System with Android ERICA: Comparison of Autonomous and WOZ Interactions K Inoue, D Lala, K Yamamoto, S Nakamura, K Takanashi, T Kawahara | 14 | 2020 |
Generating Fillers based on Dialog Act Pairs for Smooth Turn-Taking by Humanoid Robot R Nakanishi, K Inoue, S Nakamura, K Takanashi, T Kawahara | 14 | 2018 |
Turn-Taking Prediction Based on Detection of Transition Relevance Place K Hara, K Inoue, K Takanashi, T Kawahara Proc. Interspeech 2019, 4170-4174, 2019 | 13 | 2019 |
Engagement recognition by a latent character model based on multimodal listener behaviors in spoken dialogue K Inoue, D Lala, K Takanashi, T Kawahara APSIPA Transactions on Signal and Information Processing 7, 2018 | 12 | 2018 |
Social Signal Detection in Spontaneous Dialogue Using Bidirectional LSTM-CTC. H Inaguma, K Inoue, M Mimura, T Kawahara Interspeech, 1691-1695, 2017 | 12 | 2017 |
Annotation and analysis of listener's engagement based on multi-modal behaviors K Inoue, D Lala, S Nakamura, K Takanashi, T Kawahara Proceedings of the Workshop on Multimodal Analyses enabling Artificial …, 2016 | 11 | 2016 |