Masatoshi Uehara
Title · Cited by · Year
Generative adversarial nets from a density ratio estimation perspective
M Uehara, I Sato, M Suzuki, K Nakayama, Y Matsuo
arXiv preprint arXiv:1610.02920, 2016
Cited by 51 · 2016
Double reinforcement learning for efficient off-policy evaluation in Markov decision processes
N Kallus, M Uehara
Journal of Machine Learning Research 21 (167), 1-63, 2020
Cited by 23 · 2020
Minimax weight and q-function learning for off-policy evaluation
M Uehara, N Jiang
arXiv preprint arXiv:1910.12809, 2019
Cited by 19 · 2019
Efficiently breaking the curse of horizon: Double reinforcement learning in infinite-horizon processes
N Kallus, M Uehara
stat 1050, 12, 2019
Cited by 17 · 2019
Intrinsically efficient, stable, and bounded off-policy evaluation for reinforcement learning
N Kallus, M Uehara
Advances in Neural Information Processing Systems, 3325-3334, 2019
Cited by 17 · 2019
Analysis of noise contrastive estimation from the perspective of asymptotic variance
M Uehara, T Matsuda, F Komaki
arXiv preprint arXiv:1808.07983, 2018
Cited by 6 · 2018
Statistically efficient off-policy policy gradients
N Kallus, M Uehara
arXiv preprint arXiv:2002.04014, 2020
Cited by 4 · 2020
A Unified Statistically Efficient Estimation Framework for Unnormalized Models
M Uehara, T Kanamori, T Takenouchi, T Matsuda
International Conference on Artificial Intelligence and Statistics, 809-819, 2020
Cited by 3* · 2020
Imputation estimators for unnormalized models with missing data
M Uehara, T Matsuda, JK Kim
International Conference on Artificial Intelligence and Statistics, 831-841, 2020
Cited by 2 · 2020
Double reinforcement learning for efficient and robust off-policy evaluation
N Kallus, M Uehara
Proceedings of the 37th International Conference on Machine Learning, 2020
Cited by 2 · 2020
Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies
N Kallus, M Uehara
arXiv preprint arXiv:2006.03900, 2020
Cited by 1 · 2020
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift
M Kato, M Uehara, S Yasui
arXiv preprint arXiv:2002.11642, 2020
Cited by 1 · 2020
Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning
N Kallus, M Uehara
arXiv preprint arXiv:2006.03886, 2020
2020
Localized Debiased Machine Learning: Efficient Estimation of Quantile Treatment Effects, Conditional Value at Risk, and Beyond
N Kallus, X Mao, M Uehara
arXiv preprint arXiv:1912.12945, 2019
2019
Information criteria for non-normalized models
T Matsuda, M Uehara, A Hyvärinen
arXiv preprint arXiv:1905.05976, 2019
2019
Semiparametric response model with nonignorable nonresponse
M Uehara, JK Kim
arXiv preprint arXiv:1810.12519, 2018
2018