Pin-Yu Chen
Research Staff Member, IBM Research AI; MIT-IBM Watson AI Lab; RPI-IBM AIRC
Verified email at ibm.com
Title · Cited by · Year
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
PY Chen, H Zhang, Y Sharma, J Yi, CJ Hsieh
ACM CCS Workshop on AI and Security, 2017
Cited by 569 · 2017
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
PY Chen, Y Sharma, H Zhang, J Yi, CJ Hsieh
AAAI 2018, 2017
Cited by 278 · 2017
Smart attacks in smart grid communication networks
PY Chen, SM Cheng, KC Chen
IEEE Communications Magazine 50 (8), 24-29, 2012
Cited by 183 · 2012
Efficient Neural Network Robustness Certification with General Activation Functions
H Zhang, TW Weng, PY Chen, CJ Hsieh, L Daniel
NeurIPS 2018, 2018
Cited by 177 · 2018
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
TW Weng, H Zhang, PY Chen, J Yi, D Su, Y Gao, CJ Hsieh, L Daniel
ICLR 2018, 2018
Cited by 150 · 2018
Is Robustness the Cost of Accuracy?--A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
D Su, H Zhang, H Chen, J Yi, PY Chen, Y Gao
ECCV 2018, 2018
Cited by 142 · 2018
Query-efficient hard-label black-box attack: An optimization-based approach
M Cheng, T Le, PY Chen, J Yi, H Zhang, CJ Hsieh
ICLR 2019, 2018
Cited by 126 · 2018
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
A Dhurandhar, PY Chen, R Luss, CC Tu, P Ting, K Shanmugam, P Das
NeurIPS 2018, 2018
Cited by 121 · 2018
On modeling malware propagation in generalized social networks
SM Cheng, WC Ao, PY Chen, KC Chen
IEEE Communications Letters 15 (1), 25-27, 2010
Cited by 120 · 2010
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
CC Tu, P Ting, PY Chen, S Liu, H Zhang, J Yi, CJ Hsieh, SM Cheng
AAAI 2019 (oral presentation), 2018
Cited by 117 · 2018
Attacking visual language grounding with adversarial examples: A case study on neural image captioning
H Chen, H Zhang, PY Chen, J Yi, CJ Hsieh
ACL 2018 (Long Papers) 1, 2587-2597, 2018
Cited by 99* · 2018
Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples
M Cheng, J Yi, PY Chen, H Zhang, CJ Hsieh
AAAI 2020, 2018
Cited by 88 · 2018
Information Fusion to Defend Intentional Attack in Internet of Things
PY Chen, SM Cheng, KC Chen
IEEE Internet of Things Journal, 2014
Cited by 83 · 2014
Optimal Control of Epidemic Information Dissemination Over Networks
PY Chen, SM Cheng, KC Chen
IEEE Transactions on Cybernetics, 2014
Cited by 71 · 2014
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
K Xu, S Liu, P Zhao, PY Chen, H Zhang, D Erdogmus, Y Wang, X Lin
ICLR 2019, 2018
Cited by 69 · 2018
Attacking the Madry defense model with L1-based adversarial examples
Y Sharma, PY Chen
ICLR 2018 Workshop, 2017
Cited by 68* · 2017
One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques
V Arya, RKE Bellamy, PY Chen, A Dhurandhar, M Hind, SC Hoffman, ...
arXiv preprint arXiv:1909.03012, 2019
Cited by 61 · 2019
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
K Xu, H Chen, S Liu, PY Chen, TW Weng, M Hong, X Lin
IJCAI 2019, 2019
Cited by 59 · 2019
Word Mover's Embedding: From Word2Vec to Document Embedding
L Wu, IEH Yen, K Xu, F Xu, A Balakrishnan, PY Chen, P Ravikumar, ...
EMNLP 2018, 2018
Cited by 51 · 2018
Characterizing Audio Adversarial Examples Using Temporal Dependency
Z Yang, B Li, PY Chen, D Song
ICLR 2019, 2018
Cited by 50 · 2018
Articles 1–20