Sanyam Kapoor
Verified email at nyu.edu - Homepage
Title / Cited by / Year
Multi-agent reinforcement learning: A report on challenges and approaches
S Kapoor
arXiv preprint arXiv:1807.09427, 2018
Cited by 43 · 2018
Backplay:" man muss immer umkehren"
C Resnick, R Raileanu, S Kapoor, A Peysakhovich, K Cho, J Bruna
arXiv preprint arXiv:1807.06919, 2018
Cited by 41 · 2018
PAC-Bayes compression bounds so tight that they can explain generalization
S Lotfi, M Finzi, S Kapoor, A Potapczynski, M Goldblum, AG Wilson
Advances in Neural Information Processing Systems 35, 31459-31473, 2022
Cited by 37 · 2022
Pre-train your loss: Easy Bayesian transfer learning with informative priors
R Shwartz-Ziv, M Goldblum, H Souri, S Kapoor, C Zhu, Y LeCun, ...
Advances in Neural Information Processing Systems 35, 27706-27715, 2022
Cited by 29 · 2022
On uncertainty, tempering, and data augmentation in Bayesian classification
S Kapoor, WJ Maddox, P Izmailov, AG Wilson
Advances in Neural Information Processing Systems 35, 18211-18225, 2022
Cited by 25 · 2022
Variational auto-regressive Gaussian processes for continual learning
S Kapoor, T Karaletsos, TD Bui
International Conference on Machine Learning, 5290-5300, 2021
Cited by 22 · 2021
Skiing on simplices: Kernel interpolation on the permutohedral lattice for scalable Gaussian processes
S Kapoor, M Finzi, KA Wang, AGG Wilson
International Conference on Machine Learning, 5279-5289, 2021
Cited by 12 · 2021
Function-space regularization in neural networks: A probabilistic perspective
TGJ Rudner, S Kapoor, S Qiu, AG Wilson
International Conference on Machine Learning, 29275-29290, 2023
Cited by 7 · 2023
When are Iterative Gaussian Processes Reliably Accurate?
WJ Maddox, S Kapoor, AG Wilson
arXiv preprint arXiv:2112.15246, 2021
Cited by 6 · 2021
Policy Gradients in a Nutshell
S Kapoor
Towards Data Science: A Medium publication sharing concepts, ideas, and codes, 2020
Cited by 6 · 2020
First-order preconditioning via hypergradient descent
T Moskovitz, R Wang, J Lan, S Kapoor, T Miconi, J Yosinski, A Rawal
arXiv preprint arXiv:1910.08461, 2019
Cited by 5 · 2019
A simple and fast baseline for tuning large XGBoost models
S Kapoor, V Perrone
arXiv preprint arXiv:2111.06924, 2021
Cited by 3 · 2021
Should We Learn Most Likely Functions or Parameters?
S Qiu, TGJ Rudner, S Kapoor, AG Wilson
Advances in Neural Information Processing Systems 36, 2024
Cited by 1 · 2024
Calibration-Tuning: Teaching Large Language Models to Know What They Don’t Know
S Kapoor, N Gruver, M Roberts, A Pal, S Dooley, M Goldblum, A Wilson
Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024 …, 2024
2024
Articles 1–14