| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Diffusion models are minimax optimal distribution estimators | K Oko, S Akiyama, T Suzuki | International Conference on Machine Learning, 26517-26582, 2023 | 37 | 2023 |
| Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods | T Suzuki, S Akiyama | arXiv preprint arXiv:2012.03224, 2020 | 18 | 2020 |
| On learnability via gradient method for two-layer ReLU neural networks in teacher-student setting | S Akiyama, T Suzuki | International Conference on Machine Learning, 152-162, 2021 | 13 | 2021 |
| Excess risk of two-layer ReLU neural networks in teacher-student settings and its superiority to kernel methods | S Akiyama, T Suzuki | arXiv preprint arXiv:2205.14818, 2022 | 6 | 2022 |
| Reducing Communication in Nonconvex Federated Learning with a Novel Single-Loop Variance Reduction Method | K Oko, S Akiyama, T Murata, T Suzuki | OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022 | 1 | 2022 |
| Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning | K Oko, S Akiyama, T Murata, T Suzuki | arXiv preprint arXiv:2209.00361, 2022 | 1 | 2022 |
| Optimal design of lottery with cumulative prospect theory | S Akiyama, M Obara, Y Kawase | arXiv preprint arXiv:2209.00822, 2022 | | 2022 |
| Learning Sparse Representation of Graph Embedding with General Similarities Using Group Lasso and Luckiness Normalized Maximum Likelihood Code-Length | R Yuki, S Akiyama, A Suzuki, K Yamanishi | Available at SSRN 4663084 | | |