Tatsuki Kuribayashi
Other names: 栗林樹生
MBZUAI
Verified email at mbzuai.ac.ae - Homepage
Title · Cited by · Year
Attention is not only a weight: Analyzing transformers with vector norms
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2004.10102, 2020
Cited by 220 · 2020
Lower perplexity is not always human-like
T Kuribayashi, Y Oseki, T Ito, R Yoshida, M Asahara, K Inui
arXiv preprint arXiv:2106.01229, 2021
Cited by 56 · 2021
Instance-based learning of span representations: A case study through named entity recognition
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, R Konno, K Inui
arXiv preprint arXiv:2004.14514, 2020
Cited by 56 · 2020
An empirical study of span representations in argumentation structure parsing
T Kuribayashi, H Ouchi, N Inoue, P Reisert, T Miyoshi, J Suzuki, K Inui
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 46 · 2019
Incorporating residual and normalization layers into analysis of masked language models
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2109.07152, 2021
Cited by 34 · 2021
Feasible annotation scheme for capturing policy argument reasoning using argument templates
P Reisert, N Inoue, T Kuribayashi, K Inui
Proceedings of the 5th Workshop on Argument Mining, 79-89, 2018
Cited by 25 · 2018
Context limitations make neural language models more human-like
T Kuribayashi, Y Oseki, A Brassard, K Inui
arXiv preprint arXiv:2205.11463, 2022
Cited by 24 · 2022
Diamonds in the rough: Generating fluent sentences from early-stage drafts for academic writing assistance
T Ito, T Kuribayashi, H Kobayashi, A Brassard, M Hagiwara, J Suzuki, ...
arXiv preprint arXiv:1910.09180, 2019
Cited by 23 · 2019
Langsmith: An interactive academic text revision system
T Ito, T Kuribayashi, M Hidaka, J Suzuki, K Inui
arXiv preprint arXiv:2010.04332, 2020
Cited by 19 · 2020
Analyzing feed-forward blocks in transformers through the lens of attention map
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2302.00456, 2023
Cited by 13* · 2023
TEASPN: Framework and protocol for integrated writing assistance environments
M Hagiwara, T Ito, T Kuribayashi, J Suzuki, K Inui
arXiv preprint arXiv:1909.02621, 2019
Cited by 10 · 2019
Modeling Event Salience in Narratives via Barthes' Cardinal Functions
T Otake, S Yokoi, N Inoue, R Takahashi, T Kuribayashi, K Inui
arXiv preprint arXiv:2011.01785, 2020
Cited by 9 · 2020
Use of an AI-powered Rewriting Support Software in Context with Other Tools: A Study of Non-Native English Speakers
T Ito, N Yamashita, T Kuribayashi, M Hidaka, J Suzuki, G Gao, J Jamieson, ...
Proceedings of the 36th Annual ACM Symposium on User Interface Software and …, 2023
Cited by 8 · 2023
Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
K Kudo, Y Aoki, T Kuribayashi, A Brassard, M Yoshikawa, K Sakaguchi, ...
arXiv preprint arXiv:2302.07866, 2023
Cited by 6 · 2023
Second language acquisition of neural language models
M Oba, T Kuribayashi, H Ouchi, T Watanabe
arXiv preprint arXiv:2306.02920, 2023
Cited by 5 · 2023
Transformer language models handle word frequency in prediction head
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2305.18294, 2023
Cited by 5 · 2023
Language models as an alternative evaluator of word order hypotheses: A case study in Japanese
T Kuribayashi, T Ito, J Suzuki, K Inui
arXiv preprint arXiv:2005.00842, 2020
Cited by 5 · 2020
Psychometric predictive power of large language models
T Kuribayashi, Y Oseki, T Baldwin
arXiv preprint arXiv:2311.07484, 2023
Cited by 4 · 2023
Assessing step-by-step reasoning against lexical negation: A case study on syllogism
M Ye, T Kuribayashi, J Suzuki, G Kobayashi, H Funayama
arXiv preprint arXiv:2310.14868, 2023
Cited by 4 · 2023
Instance-based neural dependency parsing
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, M Yoshikawa, ...
Transactions of the Association for Computational Linguistics 9, 1493-1507, 2021
Cited by 4 · 2021
Articles 1–20