| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Decision transformer: Reinforcement learning via sequence modeling | L Chen, K Lu, A Rajeswaran, K Lee, A Grover, M Laskin, P Abbeel, ... | Advances in Neural Information Processing Systems (NeurIPS) 34, 15084-15097 | 1640 | 2021 |
| Learning complex dexterous manipulation with deep reinforcement learning and demonstrations | A Rajeswaran, V Kumar, A Gupta, G Vezzani, J Schulman, E Todorov, ... | Robotics: Science and Systems (RSS) | 1164 | 2018 |
| Meta-Learning with Implicit Gradients | A Rajeswaran, C Finn, S Kakade, S Levine | Advances in Neural Information Processing Systems (NeurIPS) | 936 | 2019 |
| MOReL: Model-Based Offline Reinforcement Learning | R Kidambi, A Rajeswaran, P Netrapalli, T Joachims | Advances in Neural Information Processing Systems (NeurIPS) | 753 | 2020 |
| Online Meta-Learning | C Finn, A Rajeswaran, S Kakade, S Levine | International Conference on Machine Learning (ICML) | 526 | 2019 |
| R3M: A universal visual representation for robot manipulation | S Nair, A Rajeswaran, V Kumar, C Finn, A Gupta | arXiv preprint arXiv:2203.12601 | 488 | 2022 |
| COMBO: Conservative offline model-based policy optimization | T Yu, A Kumar, R Rafailov, A Rajeswaran, S Levine, C Finn | Advances in Neural Information Processing Systems (NeurIPS) 34, 28954-28967 | 431 | 2021 |
| EPOpt: Learning Robust Neural Network Policies Using Model Ensembles | A Rajeswaran, S Ghotra, B Ravindran, S Levine | International Conference on Learning Representations (ICLR) | 425 | 2017 |
| Towards generalization and simplicity in continuous control | A Rajeswaran, K Lowrey, EV Todorov, SM Kakade | Advances in Neural Information Processing Systems (NeurIPS) 30 | 359 | 2017 |
| Identifying topology of low voltage distribution networks based on smart meter data | SJ Pappu, N Bhatt, R Pasumarthy, A Rajeswaran | IEEE Transactions on Smart Grid 9 (5), 5113-5122 | 284 | 2017 |
| Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control | K Lowrey, A Rajeswaran, S Kakade, E Todorov, I Mordatch | International Conference on Learning Representations (ICLR) | 258 | 2019 |
| Dexterous manipulation with deep reinforcement learning: Efficient, general, and low-cost | H Zhu, A Gupta, A Rajeswaran, S Levine, V Kumar | International Conference on Robotics and Automation (ICRA) | 248 | 2019 |
| The unsurprising effectiveness of pre-trained vision models for control | S Parisi, A Rajeswaran, S Purushwalkam, A Gupta | International Conference on Machine Learning (ICML), 17359-17371 | 186 | 2022 |
| Variance reduction for policy gradient with action-dependent factorized baselines | C Wu, A Rajeswaran, Y Duan, V Kumar, AM Bayen, S Kakade, I Mordatch, ... | International Conference on Learning Representations (ICLR) | 174 | 2018 |
| A Game Theoretic Framework for Model Based Reinforcement Learning | A Rajeswaran, I Mordatch, V Kumar | International Conference on Machine Learning (ICML), 7953-7963 | 145 | 2020 |
| Offline reinforcement learning from images with latent space models | R Rafailov, T Yu, A Rajeswaran, C Finn | Learning for Dynamics and Control (L4DC), 1154-1168 | 130 | 2021 |
| Divide-and-conquer reinforcement learning | D Ghosh, A Singh, A Rajeswaran, V Kumar, S Levine | International Conference on Learning Representations (ICLR) | 128 | 2018 |
| Where are we in the search for an artificial visual cortex for embodied intelligence? | A Majumdar, K Yadav, S Arnaud, J Ma, C Chen, S Silwal, A Jain, ... | Advances in Neural Information Processing Systems (NeurIPS) 36, 655-677 | 123 | 2023 |
| Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system | K Lowrey, S Kolev, J Dao, A Rajeswaran, E Todorov | 2018 IEEE International Conference on Simulation, Modeling, and Programming … | 85 | 2018 |
| Can foundation models perform zero-shot task specification for robot manipulation? | Y Cui, S Niekum, A Gupta, V Kumar, A Rajeswaran | Learning for Dynamics and Control Conference (L4DC), 893-905 | 82 | 2022 |