Exploring effectiveness of GPT-3 in grammatical error correction: A study on performance and controllability in prompt-based methods. M. Loem, M. Kaneko, S. Takase, N. Okazaki. arXiv preprint arXiv:2305.18156, 2023. Cited by 27.
Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities. K. Fujii, T. Nakamura, M. Loem, H. Iida, M. Ohi, K. Hattori, H. Shota, S. Mizuki, ... arXiv preprint arXiv:2404.17790, 2024. Cited by 11.
ExtraPhrase: Efficient Data Augmentation for Abstractive Summarization. M. Loem, S. Takase, M. Kaneko, N. Okazaki. arXiv preprint arXiv:2201.05313, 2022. Cited by 9.
SAIE Framework: Support Alone Isn't Enough--Advancing LLM Training with Adversarial Remarks. M. Loem, M. Kaneko, N. Okazaki. arXiv preprint arXiv:2311.08107, 2023. Cited by 2.
Building a Large Japanese Web Corpus for Large Language Models. N. Okazaki, K. Hattori, H. Shota, H. Iida, M. Ohi, K. Fujii, T. Nakamura, M. Loem, ... arXiv preprint arXiv:2404.17733, 2024. Cited by 1.
Likelihood-based Mitigation of Evaluation Bias in Large Language Models. M. Ohi, M. Kaneko, R. Koike, M. Loem, N. Okazaki. arXiv preprint arXiv:2402.15987, 2024. Cited by 1.
Are Neighbors Enough? Multi-Head Neural n-gram can be Alternative to Self-attention. M. Loem, S. Takase, M. Kaneko, N. Okazaki. arXiv preprint arXiv:2207.13354, 2022. Cited by 1.