Benjamin Eysenbach
CMU, Google
Verified email at google.com
Title · Cited by · Year
Diversity is all you need: Learning skills without a reward function
B Eysenbach, A Gupta, J Ibarz, S Levine
International Conference on Learning Representations, 2019
Cited by 264 · 2019
Clustervision: Visual supervision of unsupervised clustering
BC Kwon, B Eysenbach, J Verma, K Ng, C De Filippi, WF Stewart, A Perer
IEEE Transactions on Visualization and Computer Graphics 24 (1), 142-151, 2017
Cited by 73 · 2017
Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings
JD Co-Reyes, YX Liu, A Gupta, B Eysenbach, P Abbeel, S Levine
International Conference on Machine Learning, 2018
Cited by 66 · 2018
Leave no trace: Learning to reset for safe and autonomous reinforcement learning
B Eysenbach, S Gu, J Ibarz, S Levine
International Conference on Learning Representations, 2018
Cited by 56 · 2018
Unsupervised meta-learning for reinforcement learning
A Gupta, B Eysenbach, C Finn, S Levine
arXiv preprint arXiv:1806.04640, 2018
Cited by 51 · 2018
Search on the replay buffer: Bridging planning and reinforcement learning
B Eysenbach, RR Salakhutdinov, S Levine
Advances in Neural Information Processing Systems, 15246-15257, 2019
Cited by 50 · 2019
Efficient exploration via state marginal matching
L Lee, B Eysenbach, E Parisotto, E Xing, S Levine, R Salakhutdinov
arXiv preprint arXiv:1906.05274, 2019
Cited by 41 · 2019
Unsupervised curricula for visual meta-reinforcement learning
A Jabri, K Hsu, A Gupta, B Eysenbach, S Levine, C Finn
Advances in Neural Information Processing Systems, 2019
Cited by 22 · 2019
If MaxEnt RL is the answer, what is the question?
B Eysenbach, S Levine
arXiv preprint arXiv:1910.01913, 2019
Cited by 14 · 2019
Learning to reach goals without reinforcement learning
D Ghosh, A Gupta, J Fu, A Reddy, C Devin, B Eysenbach, S Levine
arXiv preprint arXiv:1912.06088, 2019
Cited by 8 · 2019
Who is mistaken?
B Eysenbach, C Vondrick, A Torralba
arXiv preprint arXiv:1612.01175, 2016
Cited by 5 · 2016
Rewriting history with inverse RL: Hindsight inference for policy improvement
B Eysenbach, X Geng, S Levine, R Salakhutdinov
arXiv preprint arXiv:2002.11089, 2020
Cited by 4 · 2020
f-IRL: Inverse reinforcement learning via state marginal matching
T Ni, H Sikchi, Y Wang, T Gupta, L Lee, B Eysenbach
arXiv preprint arXiv:2011.04709, 2020
Cited by 2 · 2020
Learning to be Safe: Deep RL with a Safety Critic
K Srinivasan, B Eysenbach, S Ha, J Tan, C Finn
arXiv preprint arXiv:2010.14603, 2020
Cited by 1 · 2020
Weakly-Supervised Reinforcement Learning for Controllable Behavior
L Lee, B Eysenbach, R Salakhutdinov, C Finn
arXiv preprint arXiv:2004.02860, 2020
Cited by 1 · 2020
Learning to reach goals via iterated supervised learning
D Ghosh, A Gupta, A Reddy, J Fu, C Devin, B Eysenbach, S Levine
arXiv preprint arXiv:1912.06088, 2019
Cited by 1 · 2019
Reinforcement learning with unknown reward functions
B Eysenbach, J Tyo, S Gu, R Salakhutdinov, Z Lipton, S Levine
Task-Agnostic Reinforcement Learning Workshop at ICLR 2019, 2019
Cited by 1 · 2019
Model-Based Visual Planning with Self-Supervised Functional Distances
S Tian, S Nair, F Ebert, S Dasari, B Eysenbach, C Finn, S Levine
arXiv preprint arXiv:2012.15373, 2020
2020
ViNG: Learning Open-World Navigation with Visual Goals
D Shah, B Eysenbach, G Kahn, N Rhinehart, S Levine
arXiv preprint arXiv:2012.09812, 2020
2020
C-Learning: Learning to Achieve Goals via Recursive Classification
B Eysenbach, R Salakhutdinov, S Levine
arXiv preprint arXiv:2011.08909, 2020
2020