Shunta Akiyama
Verified email at mist.i.u-tokyo.ac.jp
Title · Cited by · Year
Diffusion models are minimax optimal distribution estimators
K Oko, S Akiyama, T Suzuki
International Conference on Machine Learning, 26517-26582, 2023
Cited by 37 · 2023
Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods
T Suzuki, S Akiyama
arXiv preprint arXiv:2012.03224, 2020
Cited by 18 · 2020
On learnability via gradient method for two-layer ReLU neural networks in teacher-student setting
S Akiyama, T Suzuki
International Conference on Machine Learning, 152-162, 2021
Cited by 13 · 2021
Excess risk of two-layer ReLU neural networks in teacher-student settings and its superiority to kernel methods
S Akiyama, T Suzuki
arXiv preprint arXiv:2205.14818, 2022
Cited by 6 · 2022
Reducing Communication in Nonconvex Federated Learning with a Novel Single-Loop Variance Reduction Method
K Oko, S Akiyama, T Murata, T Suzuki
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
Cited by 1 · 2022
Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning
K Oko, S Akiyama, T Murata, T Suzuki
arXiv preprint arXiv:2209.00361, 2022
Cited by 1 · 2022
Optimal design of lottery with cumulative prospect theory
S Akiyama, M Obara, Y Kawase
arXiv preprint arXiv:2209.00822, 2022
2022
Learning Sparse Representation of Graph Embedding with General Similarities Using Group Lasso and Luckiness Normalized Maximum Likelihood Code-Length
R Yuki, S Akiyama, A Suzuki, K Yamanishi
Available at SSRN 4663084
Articles 1–8