Runa Eschenhagen
Verified email at cam.ac.uk - Homepage
Title / Cited by / Year
Practical deep learning with Bayesian principles
K Osawa, S Swaroop, A Jain, R Eschenhagen, RE Turner, R Yokota, ...
NeurIPS 2019, 2019
Cited by 264
Laplace Redux -- Effortless Bayesian Deep Learning
E Daxberger*, A Kristiadi*, A Immer*, R Eschenhagen*, M Bauer, ...
NeurIPS 2021, 2021
Cited by 239
Continual deep learning by functional regularisation of memorable past
P Pan, S Swaroop, A Immer, R Eschenhagen, RE Turner, ME Khan
NeurIPS 2020, 2020
Cited by 129
Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
R Eschenhagen, E Daxberger, P Hennig, A Kristiadi
Bayesian Deep Learning Workshop, NeurIPS 2021, 2021
Cited by 20
Benchmarking neural network training algorithms
GE Dahl, F Schneider, Z Nado, N Agarwal, CS Sastry, P Hennig, ...
arXiv preprint arXiv:2306.07179, 2023
Cited by 10
Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures
R Eschenhagen, A Immer, RE Turner, F Schneider, P Hennig
NeurIPS 2023, 2023
Cited by 8
Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks
A Kristiadi, R Eschenhagen, P Hennig
NeurIPS 2022, 2022
Cited by 8
Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization
A Kristiadi, A Immer, R Eschenhagen, V Fortuin
AABI 2023, 2023
Cited by 6
Approximate Bayesian neural operators: Uncertainty quantification for parametric PDEs
E Magnani, N Krämer, R Eschenhagen, L Rosasco, P Hennig
arXiv preprint arXiv:2208.01565, 2022
Cited by 5
Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective
W Lin, F Dangel, R Eschenhagen, J Bae, RE Turner, A Makhzani
ICML 2024, 2024
Cited by 2
Natural Gradient Variational Inference for Continual Learning in Deep Neural Networks
R Eschenhagen
University of Osnabrück, 2019
Cited by 1
Structured Inverse-Free Natural Gradient Descent: Memory-Efficient & Numerically-Stable KFAC
W Lin, F Dangel, R Eschenhagen, K Neklyudov, A Kristiadi, RE Turner, ...
ICML 2024
Cited by 1*