Learners that use little information. R Bassily, S Moran, I Nachum, J Shafer, A Yehudayoff. Algorithmic Learning Theory, pp. 25–55, 2018. Cited by 114.

A direct sum result for the information complexity of learning. I Nachum, J Shafer, A Yehudayoff. Conference on Learning Theory, pp. 1547–1568, 2018. Cited by 20.

Average-case information complexity of learning. I Nachum, A Yehudayoff. Algorithmic Learning Theory, pp. 633–646, 2019. Cited by 14.

Fantastic generalization measures are nowhere to be found. M Gastpar, I Nachum, J Shafer, T Weinberger. arXiv preprint arXiv:2309.13658, 2023. Cited by 10*.

Finite Littlestone dimension implies finite information complexity. A Pradeep, I Nachum, M Gastpar. 2022 IEEE International Symposium on Information Theory (ISIT), pp. 3055–3060, 2022. Cited by 9.

A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs. I Nachum, J Hązła, M Gastpar, A Khina. International Conference on Learning Representations (ICLR), 2022. Cited by 9.

On symmetry and initialization for neural networks. I Nachum, A Yehudayoff. LATIN 2020: Theoretical Informatics, 14th Latin American Symposium, São …, 2020. Cited by 8.

Almost-Reed–Muller codes achieve constant rates for random errors. E Abbe, J Hązła, I Nachum. IEEE Transactions on Information Theory 67(12), pp. 8034–8050, 2021. Cited by 7.

On the perceptron's compression. S Moran, I Nachum, I Panasoff, A Yehudayoff. Beyond the Horizon of Computability: 16th Conference on Computability in …, 2020. Cited by 3.

Regularization by Misclassification in ReLU Neural Networks. E Cornacchia, J Hązła, I Nachum, A Yehudayoff. arXiv preprint arXiv:2111.02154, 2021. Cited by 1.

Which Algorithms Have Tight Generalization Bounds? M Gastpar, I Nachum, J Shafer, T Weinberger. arXiv preprint arXiv:2410.01969, 2024.

Past Research. I Nachum.

LINX. M Bondaschi, MB Dogan, AR Esposito, F Faille, C Feng, MC Gastpar, ...