Kazuki Osawa

Publications
Practical deep learning with Bayesian principles
K Osawa, S Swaroop, MEE Khan, A Jain, R Eschenhagen, RE Turner, ...
Advances in Neural Information Processing Systems 32, 2019
Cited by: 155
Large-Scale Distributed Second-Order Optimization Using Kronecker-Factored Approximate Curvature for Deep Convolutional Neural Networks
K Osawa, Y Tsuji, Y Ueno, A Naruse, R Yokota, S Matsuoka
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp …, 2019
Cited by: 103*
Scalable and practical natural gradient for large-scale deep learning
K Osawa, Y Tsuji, Y Ueno, A Naruse, CS Foo, R Yokota
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
Cited by: 16
Accelerating matrix multiplication in deep learning by using low-rank approximation
K Osawa, A Sekiya, H Naganuma, R Yokota
2017 International Conference on High Performance Computing & Simulation …, 2017
Cited by: 14
Understanding approximate Fisher information for fast convergence of natural gradient descent in wide neural networks
R Karakida, K Osawa
Advances in Neural Information Processing Systems 33, 10891-10901, 2020
Cited by: 11
Rich information is affordable: A systematic performance analysis of second-order optimization using K-FAC
Y Ueno, K Osawa, Y Tsuji, A Naruse, R Yokota
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020
Cited by: 8
Performance optimizations and analysis of distributed deep learning with approximated second-order optimization method
Y Tsuji, K Osawa, Y Ueno, A Naruse, R Yokota, S Matsuoka
Proceedings of the 48th International Conference on Parallel Processing …, 2019
Cited by: 6
Second-order Optimization Method for Large Mini-batch: Training ResNet-50 on ImageNet in 35 Epochs
K Osawa, Y Tsuji, Y Ueno, A Naruse, R Yokota, S Matsuoka
arXiv preprint arXiv:1811.12019, 2018
Cited by: 4
Evaluating the compression efficiency of the filters in convolutional neural networks
K Osawa, R Yokota
International Conference on Artificial Neural Networks, 459-466, 2017
Cited by: 3
Understanding approximate Fisher information for fast convergence of natural gradient descent in wide neural networks
R Karakida, K Osawa
Journal of Statistical Mechanics: Theory and Experiment 2021 (12), 124010, 2021
Improvement of speed using low precision arithmetic in deep learning and performance evaluation of accelerator
H Naganuma, A Sekiya, K Osawa, H Ootomo, Y Kuwamura, R Yokota
IEICE Technical Report 117 (238), 101-107, 2017
Accelerating Convolutional Neural Networks Using Low-Rank Tensor Decomposition
K Osawa, A Sekiya, H Naganuma, R Yokota
IEICE Technical Report 117 (238), 1-6, 2017