Peng Xu
Google DeepMind
Do as I can, not as I say: Grounding language in robotic affordances
M Ahn, A Brohan, N Brown, Y Chebotar, O Cortes, B David, C Finn, C Fu, ...
arXiv preprint arXiv:2204.01691, 2022
RT-1: Robotics Transformer for real-world control at scale
A Brohan, N Brown, J Carbajal, Y Chebotar, J Dabis, C Finn, ...
arXiv preprint arXiv:2212.06817, 2022
Code as policies: Language model programs for embodied control
J Liang, W Huang, F Xia, P Xu, K Hausman, B Ichter, P Florence, A Zeng
2023 IEEE International Conference on Robotics and Automation (ICRA), 9493-9500, 2023
RT-2: Vision-language-action models transfer web knowledge to robotic control
A Brohan, N Brown, J Carbajal, Y Chebotar, X Chen, K Choromanski, ...
arXiv preprint arXiv:2307.15818, 2023
Do as I can, not as I say: Grounding language in robotic affordances
A Brohan, Y Chebotar, C Finn, K Hausman, A Herzog, D Ho, J Ibarz, ...
Conference on Robot Learning, 287-318, 2023
Learning to walk in the real world with minimal human effort
S Ha, P Xu, Z Tan, S Levine, J Tan
arXiv preprint arXiv:2002.08550, 2020
Language to rewards for robotic skill synthesis
W Yu, N Gileadi, C Fu, S Kirmani, KH Lee, MG Arenas, HTL Chiang, ...
arXiv preprint arXiv:2306.08647, 2023
Open X-Embodiment: Robotic learning datasets and RT-X models
A Padalkar, A Pooley, A Jain, A Bewley, A Herzog, A Irpan, A Khazatsky, ...
arXiv preprint arXiv:2310.08864, 2023
Robots that ask for help: Uncertainty alignment for large language model planners
AZ Ren, A Dixit, A Bodrova, S Singh, S Tu, N Brown, P Xu, L Takayama, ...
arXiv preprint arXiv:2307.01928, 2023
LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models
P Xu, W Shao, K Zhang, P Gao, S Liu, M Lei, F Meng, S Huang, Y Qiao, ...
arXiv preprint arXiv:2306.09265, 2023
Visual-locomotion: Learning to walk on complex terrains with vision
W Yu, D Jain, A Escontrela, A Iscen, P Xu, E Coumans, S Ha, J Tan, ...
5th Annual Conference on Robot Learning, 2021
RT-2: Vision-language-action models transfer web knowledge to robotic control
B Zitkovich, T Yu, S Xu, P Xu, T Xiao, F Xia, J Wu, P Wohlhart, S Welker, ...
Conference on Robot Learning, 2165-2183, 2023
OmniQuant: Omnidirectionally calibrated quantization for large language models
W Shao, M Chen, Z Zhang, P Xu, L Zhao, Z Li, K Zhang, P Gao, Y Qiao, ...
arXiv preprint arXiv:2308.13137, 2023
ImageBind-LLM: Multi-modality instruction tuning
J Han, R Zhang, W Shao, P Gao, P Xu, H Xiao, K Zhang, C Liu, S Wen, ...
arXiv preprint arXiv:2309.03905, 2023
Principles and guidelines for evaluating social robot navigation algorithms
A Francis, C Pérez-d'Arpino, C Li, F Xia, A Alahi, R Alami, A Bera, ...
arXiv preprint arXiv:2306.16740, 2023
Learning model predictive controllers with real-time attention for real-world navigation
X Xiao, T Zhang, K Choromanski, E Lee, A Francis, J Varley, S Tu, ...
arXiv preprint arXiv:2209.10780, 2022
Value function spaces: Skill-centric state abstractions for long-horizon reasoning
D Shah, P Xu, Y Lu, T Xiao, A Toshev, S Levine, B Ichter
arXiv preprint arXiv:2111.03189, 2021
Analysis of vibration monitoring data of flexible suspension lifting structure based on time-varying theory
Q Peng, P Xu, H Yuan, H Ma, J Xue, Z He, S Li
Sensors 20 (22), 6586, 2020
Tiny LVLM-eHub: Early multimodal experiments with Bard
W Shao, Y Hu, P Gao, M Lei, K Zhang, F Meng, P Xu, S Huang, H Li, ...
arXiv preprint arXiv:2308.03729, 2023
DiffRate: Differentiable compression rate for efficient vision transformers
M Chen, W Shao, P Xu, M Lin, K Zhang, F Chao, R Ji, Y Qiao, P Luo
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023