Dylan Slack
Title · Cited by · Year
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
D Slack, S Hilgard, E Jia, S Singh, H Lakkaraju
AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2020
Cited by 75* · 2020
Assessing the Local Interpretability of Machine Learning Models
D Slack, SA Friedler, C Scheidegger, C Dutta Roy
Workshop on Human Centric Machine Learning, NeurIPS, 2019
Cited by 20 · 2019
Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data
D Slack, S Friedler, E Givental
ACM Conference on Fairness, Accountability and Transparency (FAccT), 2020
Cited by 7 · 2020
How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations
D Slack, S Hilgard, S Singh, H Lakkaraju
arXiv preprint arXiv:2008.05030, 2020
Cited by 3 · 2020
Fair Meta-Learning: Learning How to Learn Fairly
D Slack, S Friedler, E Givental
NeurIPS HCML Workshop, 2019
Cited by 1 · 2019
Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy
D Slack, N Rauschmayr, K Kenthapadi
arXiv preprint arXiv:2102.06162, 2021
2021
Differentially Private Language Models Benefit from Public Pre-training
G Kerrigan, D Slack, J Tuyls
EMNLP PrivateNLP Workshop, 2020
2020
Expert-Assisted Transfer Reinforcement Learning
D Slack
Haverford College Thesis, 2019
2019