Riku Arakawa
DQN-TAMER: Human-in-the-loop reinforcement learning with intractable feedback
R Arakawa, S Kobayashi, Y Unno, Y Tsuboi, S Maeda
Proceedings of 2nd Workshop on Human-Robot Teaming Beyond Human Operational …, 2018
Implementation of DNN-based real-time voice conversion and its improvements by audio data augmentation and mask-shaped device
R Arakawa, S Takamichi, H Saruwatari
Proceedings of the 10th ISCA Speech Synthesis Workshop, 2019
A learning-from-observation framework: One-shot robot teaching for grasp-manipulation-release household operations
N Wake, R Arakawa, I Yanokura, T Kiyokawa, K Sasabuchi, J Takamatsu, ...
2021 IEEE/SICE International Symposium on System Integration (SII), 461-466, 2021
Pensight: Enhanced interaction with a pen-top camera
F Matulic, R Arakawa, B Vogel, D Vogel
Proceedings of the 2020 CHI conference on human factors in computing systems …, 2020
REsCUE: A framework for REal-time feedback on behavioral CUEs using multimodal anomaly detection
R Arakawa, H Yakura
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems …, 2019
Mindless Attractor: A False-Positive Resistant Intervention for Drawing Attention Using Auditory Perturbation
R Arakawa, H Yakura
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems …, 2021
INWARD: A Computer-Supported Tool for Video-Reflection Improves Efficiency and Effectiveness in Executive Coaching
R Arakawa, H Yakura
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020
Predicting winners and losers under time-of-use tariffs using smart meter data
Y Kiguchi, M Weeks, R Arakawa
Energy 236, 121438, 2021
Semantic constraints to represent common sense required in household actions for multi-modal learning-from-observation robot
K Ikeuchi, N Wake, R Arakawa, K Sasabuchi, J Takamatsu
arXiv preprint arXiv:2103.02201, 2021
Independent Control of Supernumerary Appendages Exploiting Upper Limb Redundancy
H Shimobayashi, T Sasaki, A Horie, R Arakawa, Z Kashino, M Inami
Augmented Humans Conference 2021, 19-30, 2021
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
VocabEncounter: NMT-powered Vocabulary Learning by Presenting Computer-Generated Usages of Foreign Words into Users’ Daily Lives
R Arakawa, H Yakura, S Kobayashi
CHI Conference on Human Factors in Computing Systems, 1-21, 2022
Reaction or Speculation: Building Computational Support for Users in Catching-Up Series Based on an Emerging Media Consumption Phenomenon
R Arakawa, H Yakura
Proceedings of the ACM on Human-Computer Interaction 5 (CSCW1), 1-28, 2021
Hand with Sensing Sphere: Body-Centered Spatial Interactions with a Hand-Worn Spherical Camera
R Arakawa, A Maekawa, Z Kashino, M Inami
Symposium on Spatial User Interaction, 1-10, 2020
Exploration of reinforcement learning for event camera using car-like robots
R Arakawa, S Shiba
Proceedings of Workshop on Unconventional Sensors in Robotics, 2020
TransVoice: Real-Time Voice Conversion for Augmenting Near-Field Speech Communication
R Arakawa, S Takamichi, H Saruwatari
The Adjunct Publication of the 32nd Annual ACM Symposium on User Interface …, 2019
Human-AI communication for human-human communication: Applying interpretable unsupervised anomaly detection to executive coaching
R Arakawa, H Yakura
Communication in Human-AI Interaction Workshop (CHAI'22) at IJCAI 2022, 2022
AI for human assessment: What do professional assessors need?
R Arakawa, H Yakura
Workshop on Trust and Reliance in AI-Human Teams (TRAIT’22) at CHI 2022, 2022
CalmResponses: Displaying Collective Audience Reactions in Remote Communication
K Maeda, R Arakawa, J Rekimoto
arXiv preprint arXiv:2204.02308, 2022
BeParrot: Efficient Interface for Transcribing Unclear Speech via Respeaking
R Arakawa, H Yakura, M Goto
27th International Conference on Intelligent User Interfaces, 832-840, 2022