Dorsa Sadigh
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by: 3681 (2021)
Planning for autonomous cars that leverage effects on human actions
D Sadigh, S Sastry, SA Seshia, AD Dragan
Robotics: Science and systems 2, 1-9, 2016
Cited by: 608 (2016)
Active preference-based learning of reward functions
D Sadigh, A Dragan, S Sastry, S Seshia
Cited by: 403 (2017)
Toward verified artificial intelligence
SA Seshia, D Sadigh, SS Sastry
Communications of the ACM 65 (7), 46-55, 2022
Cited by: 386 (2022)
Reactive synthesis from signal temporal logic specifications
V Raman, A Donzé, D Sadigh, RM Murray, SA Seshia
Proceedings of the 18th international conference on hybrid systems …, 2015
Cited by: 346 (2015)
Open problems and fundamental limitations of reinforcement learning from human feedback
S Casper, X Davies, C Shi, TK Gilbert, J Scheurer, J Rando, R Freedman, ...
arXiv preprint arXiv:2307.15217, 2023
Cited by: 288 (2023)
Hierarchical game-theoretic planning for autonomous vehicles
JF Fisac, E Bronstein, E Stefansson, D Sadigh, SS Sastry, AD Dragan
2019 International conference on robotics and automation (ICRA), 9590-9596, 2019
Cited by: 273 (2019)
Multi-agent generative adversarial imitation learning
J Song, H Ren, D Sadigh, S Ermon
Advances in neural information processing systems 31, 2018
Cited by: 267 (2018)
Information gathering actions over human internal state
D Sadigh, SS Sastry, SA Seshia, A Dragan
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2016
Cited by: 240 (2016)
Planning for cars that coordinate with people: leveraging effects on human actions for planning and active information gathering over human internal state
D Sadigh, N Landolfi, SS Sastry, SA Seshia, AD Dragan
Autonomous Robots 42, 1405-1426, 2018
Cited by: 218 (2018)
A learning based approach to control synthesis of Markov decision processes for linear temporal logic specifications
D Sadigh, ES Kim, S Coogan, SS Sastry, SA Seshia
53rd IEEE Conference on Decision and Control, 1091-1096, 2014
Cited by: 209 (2014)
Language to rewards for robotic skill synthesis
W Yu, N Gileadi, C Fu, S Kirmani, KH Lee, MG Arenas, HTL Chiang, ...
arXiv preprint arXiv:2306.08647, 2023
Cited by: 184 (2023)
Open X-Embodiment: Robotic learning datasets and RT-X models
A Padalkar, A Pooley, A Jain, A Bewley, A Herzog, A Irpan, A Khazatsky, ...
arXiv preprint arXiv:2310.08864, 2023
Cited by: 178 (2023)
Safe control under uncertainty with probabilistic signal temporal logic
D Sadigh, A Kapoor
Proceedings of Robotics: Science and Systems XII, 2016
Cited by: 169 (2016)
Synthesis for human-in-the-loop control systems
W Li, D Sadigh, SS Sastry, SA Seshia
Tools and Algorithms for the Construction and Analysis of Systems: 20th …, 2014
Cited by: 156 (2014)
Learning reward functions by integrating human demonstrations and preferences
M Palan, NC Landolfi, G Shevchuk, D Sadigh
arXiv preprint arXiv:1906.08928, 2019
Cited by: 155 (2019)
Reward design with language models
M Kwon, SM Xie, K Bullard, D Sadigh
arXiv preprint arXiv:2303.00001, 2023
Cited by: 149 (2023)
Will UML 2.0 be agile or awkward?
C Kobryn
Communications of the ACM 45 (1), 107-110, 2002
Cited by: 147* (2002)
Robots that ask for help: Uncertainty alignment for large language model planners
AZ Ren, A Dixit, A Bodrova, S Singh, S Tu, N Brown, P Xu, L Takayama, ...
arXiv preprint arXiv:2307.01928, 2023
Cited by: 136 (2023)
Asking easy questions: A user-friendly approach to active reward learning
E Bıyık, M Palan, NC Landolfi, DP Losey, D Sadigh
arXiv preprint arXiv:1910.04365, 2019
Cited by: 135 (2019)