Sho Yokoi
Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by: 196
Word Rotator's Distance
S Yokoi, R Takahashi, R Akama, J Suzuki, K Inui
arXiv preprint arXiv:2004.15003, 2020
Cited by: 56
Evaluation of Similarity-based Explanations
K Hanawa, S Yokoi, S Hara, K Inui
The International Conference on Learning Representations, 2021
Cited by: 46
Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, R Konno, K Inui
arXiv preprint arXiv:2004.14514, 2020
Cited by: 46
Incorporating residual and normalization layers into analysis of masked language models
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2109.07152, 2021
Cited by: 24
Efficient Estimation of Influence of a Training Instance
S Kobayashi, S Yokoi, J Suzuki, K Inui
arXiv preprint arXiv:2012.04207, 2020
Cited by: 17
Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness
R Akama, S Yokoi, J Suzuki, K Inui
arXiv preprint arXiv:2004.14008, 2020
Cited by: 17
Unsupervised Learning of Style-sensitive Word Vectors
R Akama, K Watanabe, S Yokoi, S Kobayashi, K Inui
arXiv preprint arXiv:1805.05581, 2018
Cited by: 10
Analyzing feed-forward blocks in transformers through the lens of attention map
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2302.00456, 2023
Cited by: 9*
Modeling Event Salience in Narratives via Barthes' Cardinal Functions
T Otake, S Yokoi, N Inoue, R Takahashi, T Kuribayashi, K Inui
arXiv preprint arXiv:2011.01785, 2020
Cited by: 8
Pointwise HSIC: A Linear-Time Kernelized Co-occurrence Norm for Sparse Linguistic Expressions
S Yokoi, S Kobayashi, K Fukumizu, J Suzuki, K Inui
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
Cited by: 6
Link Prediction in Sparse Networks by Incidence Matrix Factorization
S Yokoi, H Kajino, H Kashima
Journal of Information Processing 25, 477-485, 2017
Cited by: 6
Unbalanced Optimal Transport for Unbalanced Word Alignment
Y Arase, H Bao, S Yokoi
arXiv preprint arXiv:2306.04116, 2023
Cited by: 4
Transformer Language Models Handle Word Frequency in Prediction Head
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2305.18294, 2023
Cited by: 3
Norm of word embedding encodes information gain
M Oyama, S Yokoi, H Shimodaira
arXiv preprint arXiv:2212.09663, 2022
Cited by: 3
Why is sentence similarity benchmark not predictive of application-oriented task performance?
K Abe, S Yokoi, T Kajiwara, K Inui
Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems …, 2022
Cited by: 3
Instance-based neural dependency parsing
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, M Yoshikawa, ...
Transactions of the Association for Computational Linguistics 9, 1493-1507, 2021
Cited by: 3
Learning Co-Substructures by Kernel Dependence Maximization
S Yokoi, D Mochihashi, R Takahashi, N Okazaki, K Inui
The 26th International Joint Conference on Artificial Intelligence (IJCAI …, 2017
Cited by: 3
Computationally efficient Wasserstein loss for structured labels
A Toyokuni, S Yokoi, H Kashima, M Yamada
arXiv preprint arXiv:2103.00899, 2021
Cited by: 2
Improving word mover's distance by leveraging self-attention matrix
H Yamagiwa, S Yokoi, H Shimodaira
arXiv preprint arXiv:2211.06229, 2022
Cited by: 1
Articles 1–20