Research

Research Interests

  • Conformal prediction
  • Causal inference
  • Multiple hypothesis testing
  • Distributional robustness, generalizability, and replicability


Papers by year

(* equal contribution or alphabetical order)

2024

Beyond reweighting: On the predictive role of covariate shift in effect generalization
Ying Jin, Naoki Egami, and Dominik Rothenhäusler, 2024. Arxiv | GitHub

Optimized Conformal Selection: Powerful selective inference after conformity score optimization
Tian Bai and Ying Jin, 2024. Arxiv | GitHub

Conformal alignment: Knowing when to trust foundation models with guarantees
Yu Gui*, Ying Jin*, and Zhimei Ren*.
Conference on Neural Information Processing Systems (NeurIPS), 2024. Arxiv | GitHub

Confidence on the focal: Conformal prediction with selection-conditional coverage
Ying Jin* and Zhimei Ren*, 2024. Arxiv | GitHub


2023

Diagnosing the role of observable distribution shift in scientific replications
Ying Jin*, Kevin Guo*, and Dominik Rothenhäusler, 2023.
Arxiv | awesome-replicability-data | R package | Shiny app

Model-free selective inference under covariate shift via weighted conformal p-values
Ying Jin and Emmanuel Candès, 2023.
Arxiv | Software and reproduction code

Uncertainty quantification over graph with conformalized graph neural networks
Kexin Huang, Ying Jin, Emmanuel Candès, and Jure Leskovec.
Conference on Neural Information Processing Systems (NeurIPS), 2023 (Spotlight). Arxiv | GitHub


2022

Policy learning “without” overlap: pessimism and generalized empirical Bernstein’s inequality
Ying Jin*, Zhimei Ren*, Zhuoran Yang, and Zhaoran Wang, 2022.
Arxiv | Fun read: an article on this work

Modular regression: improving linear models by incorporating auxiliary data
Ying Jin and Dominik Rothenhäusler.
Journal of Machine Learning Research (JMLR), 2023. Arxiv

Selection by prediction with conformal p-values
Ying Jin and Emmanuel Candès.
Journal of Machine Learning Research (JMLR), 2023. Arxiv | Reproduction code

Upper bounds on the Natarajan dimensions of some function classes
Ying Jin.
IEEE International Symposium on Information Theory (ISIT), 2023. Arxiv | Slides

Sensitivity analysis under the f-sensitivity models: a distributional robustness perspective
Ying Jin*, Zhimei Ren*, and Zhengyuan Zhou, 2022. Arxiv
Student Paper Award at the ICSA Applied Statistics Symposium, 2022.


2021

Sensitivity analysis of individual treatment effects: a robust conformal inference approach
Ying Jin*, Zhimei Ren*, and Emmanuel Candès.
Proceedings of the National Academy of Sciences (PNAS), 2023.
Arxiv | Software | Reproduction code | Website | OCIS talk | Commentary from Chernozhukov et al.
Runner-up for the Tom Ten Have Award at the American Causal Inference Conference (ACIC), 2022.

Towards optimal variance reduction in online controlled experiments
Ying Jin and Shan Ba. (2021 summer internship project at LinkedIn)
Technometrics, 2023. Arxiv
2024 Jack Youden Prize for the best expository paper in Technometrics.

Contemporary symbolic regression methods and their relative performance
William La Cava, P. Orzechowski, B. Burlacu, F. Olivetti de França, M. Virgolin, Ying Jin, M. Kommenda, and J. Moore.
Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2021. Arxiv

Tailored inference for finite populations: conditional validity and transfer across distributions
Ying Jin and Dominik Rothenhäusler.
Biometrika, 2023. Arxiv | Software


2020

Is pessimism provably efficient for offline RL?
Ying Jin, Zhuoran Yang, and Zhaoran Wang.
Mathematics of Operations Research, 2024+. Short version in ICML 2021. MathOR | Arxiv | RL seminar talk | Slides


Undergraduate work

Computational-statistical tradeoffs in inferring combinatorial structures of Ising model
Ying Jin, Zhaoran Wang, and Junwei Lu.
International Conference on Machine Learning (ICML), 2020. PMLR

Bayesian symbolic regression
Ying Jin, Weilin Fu, Jian Kang, Jiadong Guo, and Jian Guo. (undergraduate internship project)
Proceedings of the 9th International Workshop on Statistical Relational Artificial Intelligence, 2020. Arxiv