Jason D. Lee
Assistant Professor of Electrical Engineering, Princeton University
Verified email at princeton.edu
Title · Cited by · Year
Exact post-selection inference, with application to the lasso
JD Lee, DL Sun, Y Sun, JE Taylor
The Annals of Statistics 44 (3), 907-927, 2016
Cited by 298 · 2016
Matrix completion has no spurious local minimum
R Ge, JD Lee, T Ma
Advances in Neural Information Processing Systems, 2973-2981, 2016
Cited by 294 · 2016
Gradient descent only converges to minimizers
JD Lee, M Simchowitz, MI Jordan, B Recht
Conference on learning theory, 1246-1257, 2016
Cited by 218 · 2016
Matrix completion and low-rank SVD via fast alternating least squares
T Hastie, R Mazumder, J Lee, R Zadeh
Journal of Machine Learning Research, 2014
Cited by 168 · 2014
Proximal Newton-type methods for minimizing composite functions
JD Lee, Y Sun, MA Saunders
SIAM Journal on Optimization 24 (3), 1420-1443, 2014
Cited by 149 · 2014
Practical large-scale optimization for max-norm regularization
JD Lee, B Recht, N Srebro, J Tropp, RR Salakhutdinov
Advances in neural information processing systems, 1297-1305, 2010
Cited by 131 · 2010
Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
M Soltanolkotabi, A Javanmard, JD Lee
IEEE Transactions on Information Theory 65 (2), 742-769, 2018
Cited by 119 · 2018
Gradient descent converges to minimizers
JD Lee, M Simchowitz, MI Jordan, B Recht
arXiv preprint arXiv:1602.04915, 2016
Cited by 113 · 2016
A kernelized Stein discrepancy for goodness-of-fit tests
Q Liu, J Lee, M Jordan
International conference on machine learning, 276-284, 2016
Cited by 103 · 2016
Learning the structure of mixed graphical models
JD Lee, TJ Hastie
Journal of Computational and Graphical Statistics 24 (1), 230-253, 2015
Cited by 94 · 2015
Gradient descent finds global minima of deep neural networks
SS Du, JD Lee, H Li, L Wang, X Zhai
arXiv preprint arXiv:1811.03804, 2018
Cited by 93 · 2018
Proximal Newton-type methods for convex optimization
JD Lee, Y Sun, M Saunders
Advances in Neural Information Processing Systems, 827-835, 2012
Cited by 89 · 2012
Gradient descent learns one-hidden-layer CNN: Don't be afraid of spurious local minima
SS Du, JD Lee, Y Tian, B Poczos, A Singh
arXiv preprint arXiv:1712.00779, 2017
Cited by 86 · 2017
Learning one-hidden-layer neural networks with landscape design
R Ge, JD Lee, T Ma
arXiv preprint arXiv:1711.00501, 2017
Cited by 79 · 2017
Communication-efficient sparse regression
JD Lee, Q Liu, Y Sun, JE Taylor
The Journal of Machine Learning Research 18 (1), 115-144, 2017
Cited by 78* · 2017
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
H Monajemi, S Jafarpour, M Gavish, DL Donoho, ...
Proceedings of the National Academy of Sciences 110 (4), 1181-1186, 2013
Cited by 78 · 2013
First-order methods almost always avoid saddle points
JD Lee, I Panageas, G Piliouras, M Simchowitz, MI Jordan, B Recht
arXiv preprint arXiv:1710.07406, 2017
Cited by 69 · 2017
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, A Singh, B Poczos
Advances in neural information processing systems, 1067-1077, 2017
Cited by 68 · 2017
Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
JD Lee, Q Lin, T Ma, T Yang
The Journal of Machine Learning Research 18 (1), 4404-4446, 2017
Cited by 65* · 2017
l1-regularized neural networks are improperly learnable in polynomial time
Y Zhang, JD Lee, MI Jordan
International Conference on Machine Learning, 993-1001, 2016
Cited by 63 · 2016