Xiyu Zhai
Unknown affiliation
Verified email at mit.edu
Title · Cited by · Year
Gradient descent finds global minima of deep neural networks
S Du, J Lee, H Li, L Wang, X Zhai
International conference on machine learning, 1675-1685, 2019
Cited by 1289 · 2019
Gradient descent provably optimizes over-parameterized neural networks
SS Du, X Zhai, B Poczos, A Singh
arXiv preprint arXiv:1810.02054, 2018
Cited by 786 · 2018
Generalization bounds of SGLD for non-convex learning: Two theoretical viewpoints
W Mou, L Wang, X Zhai, K Zheng
Conference on Learning Theory, 605-638, 2018
Cited by 149 · 2018
On the multiple descent of minimum-norm interpolants and restricted lower isometry of kernels
T Liang, A Rakhlin, X Zhai
Conference on Learning Theory, 2683-2711, 2020
Cited by 123 · 2020
How many samples are needed to estimate a convolutional neural network?
SS Du, Y Wang, X Zhai, S Balakrishnan, RR Salakhutdinov, A Singh
Advances in Neural Information Processing Systems 31, 2018
Cited by 80 · 2018
Consistency of interpolation with Laplace kernels is a high-dimensional phenomenon
A Rakhlin, X Zhai
Conference on Learning Theory, 2595-2623, 2019
Cited by 79 · 2019
On the risk of minimum-norm interpolants and restricted lower isometry of kernels
T Liang, A Rakhlin, X Zhai
arXiv preprint arXiv:1908.10292, 2019
Cited by 28 · 2019
How many samples are needed to estimate a convolutional or recurrent neural network?
SS Du, Y Wang, X Zhai, S Balakrishnan, R Salakhutdinov, A Singh
arXiv preprint arXiv:1805.07883, 2018
Cited by 16 · 2018
Near optimal stratified sampling
T Yu, X Zhai, S Sra
arXiv preprint arXiv:1906.11289, 2019
Cited by 3 · 2019
Articles 1–9