Sho Yokoi
Title · Cited by · Year
Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by 207 · 2020
Word Rotator's Distance
S Yokoi, R Takahashi, R Akama, J Suzuki, K Inui
arXiv preprint arXiv:2004.15003, 2020
Cited by 60 · 2020
Evaluation of Similarity-based Explanations
K Hanawa, S Yokoi, S Hara, K Inui
The International Conference on Learning Representations, 2021
Cited by 49 · 2021
Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, R Konno, K Inui
arXiv preprint arXiv:2004.14514, 2020
Cited by 48 · 2020
Incorporating residual and normalization layers into analysis of masked language models
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2109.07152, 2021
Cited by 30 · 2021
Efficient Estimation of Influence of a Training Instance
S Kobayashi, S Yokoi, J Suzuki, K Inui
arXiv preprint arXiv:2012.04207, 2020
Cited by 18 · 2020
Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness
R Akama, S Yokoi, J Suzuki, K Inui
arXiv preprint arXiv:2004.14008, 2020
Cited by 18 · 2020
Unsupervised Learning of Style-sensitive Word Vectors
R Akama, K Watanabe, S Yokoi, S Kobayashi, K Inui
arXiv preprint arXiv:1805.05581, 2018
Cited by 12 · 2018
Analyzing feed-forward blocks in transformers through the lens of attention map
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2302.00456, 2023
Cited by 10* · 2023
Modeling Event Salience in Narratives via Barthes' Cardinal Functions
T Otake, S Yokoi, N Inoue, R Takahashi, T Kuribayashi, K Inui
arXiv preprint arXiv:2011.01785, 2020
Cited by 8 · 2020
Unbalanced Optimal Transport for Unbalanced Word Alignment
Y Arase, H Bao, S Yokoi
arXiv preprint arXiv:2306.04116, 2023
Cited by 7 · 2023
Norm of word embedding encodes information gain
M Oyama, S Yokoi, H Shimodaira
arXiv preprint arXiv:2212.09663, 2022
Cited by 7 · 2022
Pointwise HSIC: A Linear-Time Kernelized Co-occurrence Norm for Sparse Linguistic Expressions
S Yokoi, S Kobayashi, K Fukumizu, J Suzuki, K Inui
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
Cited by 6 · 2018
Link Prediction in Sparse Networks by Incidence Matrix Factorization
S Yokoi, H Kajino, H Kashima
Journal of Information Processing 25, 477-485, 2017
Cited by 6 · 2017
Transformer Language Models Handle Word Frequency in Prediction Head
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2305.18294, 2023
Cited by 4 · 2023
Why is sentence similarity benchmark not predictive of application-oriented task performance?
K Abe, S Yokoi, T Kajiwara, K Inui
Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems …, 2022
Cited by 4 · 2022
Instance-based neural dependency parsing
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, M Yoshikawa, ...
Transactions of the Association for Computational Linguistics 9, 1493-1507, 2021
Cited by 4 · 2021
Learning Co-Substructures by Kernel Dependence Maximization
S Yokoi, D Mochihashi, R Takahashi, N Okazaki, K Inui
The 26th International Joint Conference on Artificial Intelligence (IJCAI …, 2017
Cited by 3 · 2017
Revisiting Additive Compositionality: AND, OR and NOT Operations with Word Embeddings
M Naito, S Yokoi, G Kim, H Shimodaira
arXiv preprint arXiv:2105.08585, 2021
Cited by 2 · 2021