Tianle Cai
Verified email at princeton.edu - Homepage
Title · Cited by · Year
Do Transformers Really Perform Badly for Graph Representation?
C Ying, T Cai, S Luo, S Zheng, G Ke, D He, Y Shen, TY Liu
NeurIPS 2021, arXiv preprint arXiv:2106.05234, 2021
950 · 2021
Adversarially robust generalization just requires more unlabeled data
R Zhai, T Cai, D He, C Dan, K He, J Hopcroft, L Wang
arXiv preprint arXiv:1906.00555, 2019
148 · 2019
GraphNorm: A principled approach to accelerating graph neural network training
T Cai, S Luo, K Xu, D He, T Liu, L Wang
ICML 2021, arXiv preprint arXiv:2009.03294, 2020
141 · 2020
Convergence of adversarial training in overparametrized neural networks
R Gao, T Cai, H Li, CJ Hsieh, L Wang, JD Lee
NeurIPS 2019 Spotlight, arXiv preprint arXiv:1906.07916, 13029-13040, 2019
135 · 2019
Large language models as tool makers
T Cai, X Wang, T Ma, X Chen, D Zhou
ICLR 2024, arXiv preprint arXiv:2305.17126, 2023
90 · 2023
Towards a Theoretical Framework of Out-of-Distribution Generalization
H Ye, C Xie, T Cai, R Li, Z Li, L Wang
NeurIPS 2021, arXiv preprint arXiv:2106.04496, 2021
89 · 2021
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
J Su, Y Chen, T Cai, T Wu, R Gao, L Wang, JD Lee
NeurIPS 2020, arXiv preprint arXiv:2009.11094, 2020
72 · 2020
Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
T Cai, R Gao, J Hou, S Chen, D Wang, D He, Z Zhang, L Wang
NeurIPS 2019 Beyond First Order Methods in ML Workshop, arXiv preprint arXiv …, 2019
62 · 2019
What Makes Convolutional Models Great on Long Sequence Modeling?
Y Li, T Cai, Y Zhang, D Chen, D Dey
ICLR 2023, arXiv preprint arXiv:2210.09298, 2022
59 · 2022
Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons
B Zhang, T Cai, Z Lu, D He, L Wang
ICML 2021, arXiv preprint arXiv:2102.05363, 12368-12379, 2021
58* · 2021
Locally differentially private (contextual) bandits learning
K Zheng, T Cai, W Huang, Z Li, L Wang
NeurIPS 2020, arXiv preprint arXiv:2006.00701, 2020
50 · 2020
A Theory of Label Propagation for Subpopulation Shift
T Cai, R Gao, JD Lee, Q Lei
ICML 2021, arXiv preprint arXiv:2102.11203, 2021
45 · 2021
Medusa: Simple LLM inference acceleration framework with multiple decoding heads
T Cai, Y Li, Z Geng, H Peng, JD Lee, D Chen, T Dao
arXiv preprint arXiv:2401.10774, 2024
44* · 2024
Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding
S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu
NeurIPS 2021, arXiv preprint arXiv:2106.12566, 2021
37 · 2021
REST: Retrieval-based speculative decoding
Z He, Z Zhong, T Cai, JD Lee, D He
NAACL 2024, arXiv preprint arXiv:2311.08252, 2023
20 · 2023
Defective Convolutional Networks
T Luo, T Cai, M Zhang, S Chen, D He, L Wang
arXiv preprint arXiv:1911.08432, 2019
20* · 2019
Reward collapse in aligning large language models
Z Song, T Cai, JD Lee, WJ Su
arXiv preprint arXiv:2305.17608, 2023
13 · 2023
DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
M Li, T Cai, J Cao, Q Zhang, H Cai, J Bai, Y Jia, MY Liu, K Li, S Han
CVPR 2024, arXiv preprint arXiv:2402.19481, 2024
2 · 2024
SnapKV: LLM Knows What You are Looking for Before Generation
Y Li, Y Huang, B Yang, B Venkitesh, A Locatelli, H Ye, T Cai, P Lewis, ...
arXiv preprint arXiv:2404.14469, 2024
1 · 2024
JetMoE: Reaching Llama2 Performance with 0.1M Dollars
Y Shen, Z Guo, T Cai, Z Qin
arXiv preprint arXiv:2404.07413, 2024
1 · 2024
Articles 1–20