Amit Daniely
Verified email at mail.huji.ac.il - Homepage
Title
Cited by
Year
Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity
A Daniely, R Frostig, Y Singer
Advances in neural information processing systems 29, 2016
383 · 2016
SGD learns the conjugate kernel class of the network
A Daniely
Advances in neural information processing systems 30, 2017
195 · 2017
Strongly adaptive online learning
A Daniely, A Gonen, S Shalev-Shwartz
International Conference on Machine Learning, 1405-1411, 2015
187 · 2015
Complexity theoretic limitations on learning halfspaces
A Daniely
Proceedings of the forty-eighth annual ACM symposium on Theory of Computing …, 2016
148 · 2016
Complexity theoretic limitations on learning DNFs
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 815-830, 2016
119 · 2016
From average case complexity to improper learning complexity
A Daniely, N Linial, S Shalev-Shwartz
Proceedings of the forty-sixth annual ACM symposium on Theory of computing …, 2014
117 · 2014
Depth separation for neural networks
A Daniely
Conference on Learning Theory, 690-696, 2017
98 · 2017
Optimal learners for multiclass problems
A Daniely, S Shalev-Shwartz
Conference on Learning Theory, 287-316, 2014
93 · 2014
Learning parities with neural networks
A Daniely, E Malach
Advances in Neural Information Processing Systems 33, 20356-20365, 2020
89 · 2020
Multiclass learnability and the ERM principle
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
J. Mach. Learn. Res. 16 (1), 2377-2404, 2015
88 · 2015
Multiclass learnability and the ERM principle
A Daniely, S Sabato, S Ben-David, S Shalev-Shwartz
Proceedings of the 24th Annual Conference on Learning Theory, 207-232, 2011
87 · 2011
The implicit bias of depth: How incremental learning drives generalization
D Gissin, S Shalev-Shwartz, A Daniely
arXiv preprint arXiv:1909.12051, 2019
75 · 2019
Multiclass learning approaches: A theoretical comparison with implications
A Daniely, S Sabato, S Shalev-Shwartz
Advances in Neural Information Processing Systems 25, 2012
56 · 2012
A PTAS for agnostically learning halfspaces
A Daniely
Conference on Learning Theory, 484-502, 2015
54 · 2015
Learning economic parameters from revealed preferences
MF Balcan, A Daniely, R Mehta, R Urner, VV Vazirani
Web and Internet Economics: 10th International Conference, WINE 2014 …, 2014
52 · 2014
Clustering is difficult only when it does not matter
A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4891, 2012
49 · 2012
On the practically interesting instances of MAXCUT
Y Bilu, A Daniely, N Linial, M Saks
arXiv preprint arXiv:1205.4893, 2012
46 · 2012
More data speeds up training time in learning halfspaces over sparse vectors
A Daniely, N Linial, S Shalev-Shwartz
Advances in Neural Information Processing Systems 26, 2013
43 · 2013
From local pseudorandom generators to hardness of learning
A Daniely, G Vardi
Conference on Learning Theory, 1358-1394, 2021
33 · 2021
Neural networks learning and memorization with (almost) no over-parameterization
A Daniely
Advances in Neural Information Processing Systems 33, 9007-9016, 2020
32 · 2020
Articles 1–20