Takuma Yagi
Ego4D: Around the world in 3,000 hours of egocentric video
K Grauman, A Westbury, E Byrne, Z Chavis, A Furnari, R Girdhar, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
Cited by 903
Future Person Localization in First-Person Videos
T Yagi, K Mangalam, R Yonetani, Y Sato
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
Cited by 224
Ego-Exo4D: Understanding skilled human activity from first- and third-person perspectives
K Grauman, A Westbury, L Torresani, K Kitani, J Malik, T Afouras, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Cited by 96
GO-Finder: a registration-free wearable system for assisting users in finding lost objects via hand-held object discovery
T Yagi, T Nishiyasu, K Kawasaki, M Matsuki, Y Sato
Proceedings of the 26th International Conference on Intelligent User Interfaces (IUI), 2021
Cited by 13
Fine-grained affordance annotation for egocentric hand-object interaction videos
Z Yu, Y Huang, R Furuta, T Yagi, Y Goutsu, Y Sato
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
Cited by 11
Foreground-aware stylization and consensus pseudo-labeling for domain adaptation of first-person hand segmentation
T Ohkawa, T Yagi, A Hashimoto, Y Ushiku, Y Sato
IEEE Access 9, 94644-94655, 2021
Cited by 10
Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction
T Yagi, MT Hasan, Y Sato
32nd British Machine Vision Conference (BMVC), 2021
Cited by 9
FineBio: A Fine-Grained Video Dataset of Biological Experiments with Hierarchical Annotation
T Yagi, M Ohashi, Y Huang, R Furuta, S Adachi, T Mitsuyama, Y Sato
arXiv preprint arXiv:2402.00293, 2024
Cited by 5
Learning Object States from Actions via Large Language Models
M Tateno, T Yagi, R Furuta, Y Sato
arXiv preprint arXiv:2405.01090, 2024
Cited by 2
Exo2EgoDVC: Dense Video Captioning of Egocentric Procedural Activities Using Web Instructional Videos
T Ohkawa, T Yagi, T Nishimura, R Furuta, A Hashimoto, Y Ushiku, Y Sato
arXiv preprint arXiv:2311.16444, 2023
Cited by 2
Style Adapted DataBase: Generalizing Hand Segmentation via Semantics-aware Stylization
T Ohkawa, T Yagi, Y Sato
IEICE Technical Report 120 (187), 26-31, 2020
Cited by 1
Label generation method, model generation method, label generation device, label generation program, model generation device, and model generation program
T Ohkawa, A Hashimoto, Y Ushiku, Y Sato, T Yagi
US Patent App. 18/685,966, 2024
PolarDB: Formula-Driven Dataset for Pre-Training Trajectory Encoders
S Miyamoto, T Yagi, Y Makimoto, M Ukai, Y Ushiku, A Hashimoto, N Inoue
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024
GO-Finder: A Registration-free Wearable System for Assisting Users in Finding Lost Hand-held Objects
T Yagi, T Nishiyasu, K Kawasaki, M Matsuki, Y Sato
ACM Transactions on Interactive Intelligent Systems 12 (4), 1-29, 2022
Precise Affordance Annotation for Egocentric Action Video Datasets
Z Yu, Y Huang, R Furuta, T Yagi, Y Goutsu, Y Sato
arXiv preprint arXiv:2206.05424, 2022
Object Instance Identification in Dynamic Environments
T Yagi, MT Hasan, Y Sato
arXiv preprint arXiv:2206.05319, 2022
Egocentric pedestrian motion prediction by separately modeling body pose and position
D Wu, T Yagi, Y Matsui, Y Sato
IEICE Technical Report 119 (481), 39-44, 2020
Human-Computer Interaction: A User Evaluation Perspective
T Yagi, S Shinagawa, K Akiyama, K Hirotaka, R Shimamura, T Matayoshi
IEICE Technical Report 118 (260), 1-4, 2018
Egocentric Pedestrian Motion Forecasting for Separately Modelling Pose and Location
D Wu, T Yagi, Y Matsui, Y Sato
Future Person Localization in First-Person Videos: Supplementary Material
T Yagi, K Mangalam, R Yonetani, Y Sato