Tatsuki Kuribayashi
Other names: 栗林樹生
MBZUAI
Verified email at mbzuai.ac.ae - Homepage
Title · Cited by · Year
Attention is not only a weight: Analyzing transformers with vector norms
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2004.10102, 2020
Cited by 196 · 2020
Instance-based learning of span representations: A case study through named entity recognition
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, R Konno, K Inui
arXiv preprint arXiv:2004.14514, 2020
Cited by 46 · 2020
Lower perplexity is not always human-like
T Kuribayashi, Y Oseki, T Ito, R Yoshida, M Asahara, K Inui
arXiv preprint arXiv:2106.01229, 2021
Cited by 45 · 2021
An empirical study of span representations in argumentation structure parsing
T Kuribayashi, H Ouchi, N Inoue, P Reisert, T Miyoshi, J Suzuki, K Inui
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 43 · 2019
Incorporating residual and normalization layers into analysis of masked language models
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2109.07152, 2021
Cited by 24 · 2021
Feasible annotation scheme for capturing policy argument reasoning using argument templates
P Reisert, N Inoue, T Kuribayashi, K Inui
Proceedings of the 5th Workshop on Argument Mining, 79-89, 2018
Cited by 22 · 2018
Context limitations make neural language models more human-like
T Kuribayashi, Y Oseki, A Brassard, K Inui
arXiv preprint arXiv:2205.11463, 2022
Cited by 19 · 2022
Diamonds in the rough: Generating fluent sentences from early-stage drafts for academic writing assistance
T Ito, T Kuribayashi, H Kobayashi, A Brassard, M Hagiwara, J Suzuki, ...
arXiv preprint arXiv:1910.09180, 2019
Cited by 19 · 2019
Langsmith: An interactive academic text revision system
T Ito, T Kuribayashi, M Hidaka, J Suzuki, K Inui
arXiv preprint arXiv:2010.04332, 2020
Cited by 12 · 2020
Analyzing feed-forward blocks in transformers through the lens of attention map
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2302.00456, 2023
Cited by 9* · 2023
TEASPN: Framework and protocol for integrated writing assistance environments
M Hagiwara, T Ito, T Kuribayashi, J Suzuki, K Inui
arXiv preprint arXiv:1909.02621, 2019
Cited by 9 · 2019
Modeling Event Salience in Narratives via Barthes' Cardinal Functions
T Otake, S Yokoi, N Inoue, R Takahashi, T Kuribayashi, K Inui
arXiv preprint arXiv:2011.01785, 2020
Cited by 8 · 2020
Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
K Kudo, Y Aoki, T Kuribayashi, A Brassard, M Yoshikawa, K Sakaguchi, ...
arXiv preprint arXiv:2302.07866, 2023
Cited by 4 · 2023
Towards exploiting argumentative context for argumentative relation identification
T Kuribayashi, P Reisert, N Inoue, K Inui
Proceedings of the Annual Meeting of the Association for Natural Language …, 2018
Cited by 4 · 2018
Transformer language models handle word frequency in prediction head
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2305.18294, 2023
Cited by 3 · 2023
Topicalization in Language Models: A Case Study on Japanese
R Fujihara, T Kuribayashi, K Abe, R Tokuhisa, K Inui
Proceedings of the 29th International Conference on Computational …, 2022
Cited by 3 · 2022
Instance-based neural dependency parsing
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, M Yoshikawa, ...
Transactions of the Association for Computational Linguistics 9, 1493-1507, 2021
Cited by 3 · 2021
Language models as an alternative evaluator of word order hypotheses: A case study in Japanese
T Kuribayashi, T Ito, J Suzuki, K Inui
arXiv preprint arXiv:2005.00842, 2020
Cited by 3 · 2020
Examining macro-level argumentative structure features for argumentative relation identification
T Kuribayashi, P Reisert, N Inoue, K Inui
IEICE Technical Report (信学技報) 117 (367), 37-42, 2017
Cited by 3 · 2017
Psychometric Predictive Power of Large Language Models
T Kuribayashi, Y Oseki, T Baldwin
arXiv preprint arXiv:2311.07484, 2023
Cited by 2 · 2023
Articles 1–20