Hitomi Yanaka
Verified email at is.s.u-tokyo.ac.jp - Homepage
Title · Cited by · Year
Can neural networks understand monotonicity reasoning?
H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos
arXiv preprint arXiv:1906.06448, 2019
Cited by 95 · 2019
HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning
H Yanaka, K Mineshima, D Bekki, K Inui, S Sekine, L Abzianidze, J Bos
arXiv preprint arXiv:1904.12166, 2019
Cited by 69 · 2019
Do neural models learn systematicity of monotonicity inference in natural language?
H Yanaka, K Mineshima, D Bekki, K Inui
arXiv preprint arXiv:2004.14839, 2020
Cited by 58 · 2020
Compositional evaluation on Japanese textual entailment and similarity
H Yanaka, K Mineshima
Transactions of the Association for Computational Linguistics 10, 1266-1284, 2022
Cited by 28 · 2022
On the multilingual ability of decoder-based pre-trained language models: Finding and controlling language-specific neurons
T Kojima, I Okimura, Y Iwasawa, H Yanaka, Y Matsuo
arXiv preprint arXiv:2404.02431, 2024
Cited by 27 · 2024
Acquisition of phrase correspondences using natural deduction proofs
H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki
arXiv preprint arXiv:1804.07656, 2018
Cited by 27 · 2018
Exploring transitivity in neural NLI models through veridicality
H Yanaka, K Mineshima, K Inui
arXiv preprint arXiv:2101.10713, 2021
Cited by 23 · 2021
Multimodal logical inference system for visual-textual entailment
R Suzuki, H Yanaka, M Yoshikawa, K Mineshima, D Bekki
arXiv preprint arXiv:1906.03952, 2019
Cited by 19 · 2019
Do grammatical error correction models realize grammatical generalization?
M Mita, H Yanaka
arXiv preprint arXiv:2106.03031, 2021
Cited by 18 · 2021
Assessing the generalization capacity of pre-trained language models through Japanese adversarial natural language inference
H Yanaka, K Mineshima
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting …, 2021
Cited by 16 · 2021
SyGNS: A systematic generalization testbed based on natural language semantics
H Yanaka, K Mineshima, K Inui
arXiv preprint arXiv:2106.01077, 2021
Cited by 15 · 2021
Determining semantic textual similarity using natural deduction proofs
H Yanaka, K Mineshima, P Martínez-Gómez, D Bekki
arXiv preprint arXiv:1707.08713, 2017
Cited by 9 · 2017
LLM-jp: A cross-organizational project for the research and development of fully open Japanese LLMs
A Aizawa, E Aramaki, B Chen, F Cheng, H Deguchi, R Enomoto, K Fujii, ...
arXiv preprint arXiv:2407.03963, 2024
Cited by 7 · 2024
Analyzing social biases in Japanese large language models
H Yanaka, N Han, R Kumon, J Lu, M Takeshita, R Sekizawa, T Kato, ...
arXiv preprint arXiv:2406.02050, 2024
Cited by 7 · 2024
Compositional semantics and inference system for temporal order based on Japanese CCG
T Sugimoto, H Yanaka
arXiv preprint arXiv:2204.09245, 2022
Cited by 6 · 2022
Topic modeling for short texts with large language models
T Doi, M Isonuma, H Yanaka
Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024
Cited by 5 · 2024
Jamp: Controlled Japanese temporal inference dataset for evaluating generalization capacity of language models
T Sugimoto, Y Onoe, H Yanaka
arXiv preprint arXiv:2306.10727, 2023
Cited by 5 · 2023
Neural sentence generation from formal semantics
K Manome, M Yoshikawa, H Yanaka, P Martínez-Gómez, K Mineshima, ...
Proceedings of the 11th International Conference on Natural Language …, 2018
Cited by 5 · 2018
Logical inference for counting on semi-structured tables
T Kurosawa, H Yanaka
arXiv preprint arXiv:2204.07803, 2022
Cited by 4 · 2022
Medical Visual Textual Entailment for Numerical Understanding of Vision-and-Language Models
H Yanaka, Y Nakamura, Y Chida, T Kurosawa
Proceedings of the 5th Clinical Natural Language Processing Workshop, 8-18, 2023
Cited by 3 · 2023
Articles 1–20