
Posts by Yonghan Jung

✈️ Heading to #CLeaR2026 in Cambridge next week (4/6-4/8).

I’ll be presenting:
“Information-Theoretic Causal Bounds under Unmeasured Confounding”

🗓️ Poster Session II
📍 Tue, Apr 7
🕓 4:00-6:00 PM ET

🤝 Let’s connect for coffee and chat about causal inference, AI, or research!

2 weeks ago
Post image

As shown, on the unbounded-outcome benchmark dataset (IHDP), our approach provides a valid "INDIVIDUALIZED" bound!

1 month ago
Preview
Information-Theoretic Causal Bounds under Unmeasured Confounding We develop a data-driven information-theoretic framework for sharp partial identification of causal effects under unmeasured confounding. Existing approaches often rely on restrictive assumptions, suc...

📄 Paper: arxiv.org/abs/2601.17160
💻 GitHub: github.com/yonghanjung/...
📦 Install: pip install itbound

1 month ago

Causal inference needs strong assumptions 😔

However, BOUNDING CAUSAL EFFECTS should not need strong assumptions. 😃

`itbound` gives data-driven causal bounds under assumption-lean settings: unmeasured confounding, unbounded outcomes, no sensitivity parameters, etc.

📦 Install: pip install itbound
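For a feel of what a causal bound is, here's a toy, self-contained sketch of the classical Manski worst-case bounds. This is NOT the `itbound` API (its interface isn't shown in this thread), and unlike the paper's information-theoretic bounds it assumes a bounded outcome; it's only the textbook baseline that such methods sharpen.

```python
# Classical Manski bounds on the ATE E[Y(1)] - E[Y(0)] when the outcome is
# known to lie in [y_lo, y_hi] and confounding is completely unrestricted.

def manski_ate_bounds(y, t, y_lo=0.0, y_hi=1.0):
    """Worst-case bounds on the ATE from observational data (y, t)."""
    n = len(y)
    n1 = sum(t)
    p1 = n1 / n                                              # P(T = 1)
    p0 = 1.0 - p1
    m1 = sum(yi for yi, ti in zip(y, t) if ti == 1) / n1     # E[Y | T = 1]
    m0 = sum(yi for yi, ti in zip(y, t) if ti == 0) / (n - n1)  # E[Y | T = 0]

    # E[Y(1)] is observed on the treated, unknown (but bounded) on the controls.
    ey1_lo, ey1_hi = m1 * p1 + y_lo * p0, m1 * p1 + y_hi * p0
    # E[Y(0)] is observed on the controls, unknown on the treated.
    ey0_lo, ey0_hi = m0 * p0 + y_lo * p1, m0 * p0 + y_hi * p1

    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

# Binary-outcome example; the interval width is always y_hi - y_lo.
y = [1, 0, 1, 1, 0, 0, 1, 0]
t = [1, 1, 1, 0, 0, 0, 1, 0]
lo, hi = manski_ate_bounds(y, t)
print(f"ATE in [{lo:.3f}, {hi:.3f}]")  # → ATE in [-0.250, 0.750]
```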

1 month ago
Post image

Specifically, our results show that the proposed CATE estimators for the front-door setting (FD-DR, FD-R) outperform the plug-in front-door estimator, providing empirical evidence of sample efficiency and robust behavior against nuisance bias.

1 month ago

Front-door (FD) enables causal effect identification through an observed mediator even when treatment-outcome confounding is unobserved.

Our work provides estimators that achieve sample efficiency and allow personalized (heterogeneous) treatment effect estimation in this setting.
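As a self-contained illustration of FD identification (a toy discrete example, not our estimator), the front-door formula P(y | do(x)) = Σ_m P(m | x) Σ_x' P(y | x', m) P(x') recovers the interventional distribution from observational quantities alone, even with a hidden confounder U. All numbers below are made up for the sketch:

```python
from itertools import product

# Structural model with hidden confounder U: U -> X, U -> Y, X -> M, M -> Y.
pU = {0: 0.5, 1: 0.5}
pX_given_U = {0: 0.2, 1: 0.8}          # P(X=1 | U=u)
pM_given_X = {0: 0.1, 1: 0.9}          # P(M=1 | X=x)

def pY_given_MU(m, u):                 # P(Y=1 | M=m, U=u)
    return 0.2 + 0.5 * m + 0.2 * u

# Observational joint P(u, x, m, y) by enumeration.
joint = {}
for u, x, m, y in product([0, 1], repeat=4):
    px = pX_given_U[u] if x == 1 else 1 - pX_given_U[u]
    pm = pM_given_X[x] if m == 1 else 1 - pM_given_X[x]
    py = pY_given_MU(m, u) if y == 1 else 1 - pY_given_MU(m, u)
    joint[(u, x, m, y)] = pU[u] * px * pm * py

def p(**fixed):
    """Marginal probability of an assignment, e.g. p(x=1, m=0)."""
    keys = ("u", "x", "m", "y")
    return sum(v for k, v in joint.items()
               if all(k[keys.index(name)] == val for name, val in fixed.items()))

def frontdoor(x_do):
    """P(Y=1 | do(X=x_do)) via the front-door formula (observables only)."""
    total = 0.0
    for m in (0, 1):
        p_m_given_x = p(m=m, x=x_do) / p(x=x_do)
        inner = sum(p(y=1, x=xp, m=m) / p(x=xp, m=m) * p(x=xp) for xp in (0, 1))
        total += p_m_given_x * inner
    return total

def truth(x_do):
    """Ground-truth interventional quantity, using the hidden U."""
    return sum(pU[u] * (pM_given_X[x_do] * pY_given_MU(1, u)
                        + (1 - pM_given_X[x_do]) * pY_given_MU(0, u))
               for u in (0, 1))

print(frontdoor(1), truth(1))  # the two agree: FD identifies the effect
```

The point of the sketch: `frontdoor` never touches U, yet matches `truth` exactly, which is what FD identification promises.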

1 month ago
Preview
Debiased Front-Door Learners for Heterogeneous Effects In observational settings where treatment and outcome share unmeasured confounders but an observed mediator remains unconfounded, the front-door (FD) adjustment identifies causal effects through the m...

Our paper “Debiased Front-Door Learners for Heterogeneous Effects” was accepted to ICLR 2026.

- Paper (arXiv): arxiv.org/abs/2509.22531
- Reproducible code: github.com/yonghanjung/...

Quick start:
pip install fd-cate
fdcate demo --outdir ./fdcate-demo
#ICLR2026 #CausalInference #MachineLearning

1 month ago

Yonghan Jung: Debiased Front-Door Learners for Heterogeneous Effects https://arxiv.org/abs/2509.22531 https://arxiv.org/pdf/2509.22531 https://arxiv.org/html/2509.22531

6 months ago
Preview
Debiased Front-Door Learners for Heterogeneous Effects In observational settings where treatment and outcome share unmeasured confounders but an observed mediator remains unconfounded, the front-door (FD) adjustment identifies causal effects through the m...

Thrilled to share our new paper!
📄 Paper: arxiv.org/abs/2509.22531
💻 Code: github.com/yonghanjung/...

We develop the first orthogonal ML estimators for heterogeneous treatment effects (HTE) under front-door adjustment, enabling HTE identification even with unmeasured confounders.

6 months ago

If you're interested in working with me, feel free to reach out at yhansjung@gmail.com.

10 months ago
Preview
Jung to join the faculty The iSchool is pleased to announce that Yonghan Jung will join the faculty as an assistant professor in August 2025, pending approval by the University of Illinois Board of Trustees.

I'm excited to share that I'll be joining the School of Information Sciences at UIUC as an Assistant Professor this Fall (ischool.illinois.edu/news-events/...). If you're interested in causal inference and its applications to trustworthy AI and healthcare, join me & let's work together!

10 months ago
Post image Post image Post image

PhDone 🎓 I’ve successfully defended my thesis!
Huge thanks to my amazing advisor Elias Bareinboim and committee—Jennifer Neville, Jin Tian, Yexiang Xue, and @idiaz.bsky.social.
Grateful to collaborators, colleagues, lab mates, friends, neighbors—and above all, my wife, kid, and family!

10 months ago
Preview
Spatiotemporal causal inference with arbitrary spillover and carryover effects Micro-level data with granular spatial and temporal information are becoming increasingly available to social scientists. Most researchers aggregate such data into a convenient panel data format and a...

New paper alert (hey, I can't doom scroll all the time): This one's on doing causal inference with "microlevel data" where we suspect that the treatment has spatial spillover & temporal carryover effects. We illustrate our new approach + package w/ application to US counterinsurgency efforts in Iraq

1 year ago

📌 An interesting use of the copula method for sensitivity analysis in causal inference.

1 year ago
Post image

Reinforcement learning has led to amazing breakthroughs in reasoning (e.g., R1), but can it discover truly new behaviors not already present in the base model?

A new paper with Zak Mhammedi and Dhruv Rohatgi:
The Computational Role of the Base Model in Exploration

arxiv.org/abs/2503.07453

1 year ago

It looks interesting!

1 year ago

I really enjoyed reading this paper. From the perspective of a causal inference researcher, I agree that ML's real-world impact relies on scientific theory, because understanding causal mechanisms requires domain knowledge or theoretical assumptions. ML without theory simply leads us nowhere.

1 year ago

link 📈🤖
Adaptive Experimentation When You Can't Experiment () arXiv:2406.10738v1 Announce Type: cross
Abstract: This paper introduces the "confounded pure exploration transductive linear bandit" (CPET-LB) problem. As a motivating example, often online services cannot directly assig…

1 year ago
Post image

👉 Join our #CIIG seminar next month for an Introduction to Mechanism Learning

👉 Mechanism learning proposes using front-door causal bootstrapping such that ML models learn causal rather than "associational" (or spurious) relationships

See abstract and register: turing-uk.zoom.us/meeting/regi...

1 year ago
Preview
Reinforcement Learning in Modern Biostatistics: Constructing Optimal Adaptive Interventions In recent years, reinforcement learning (RL) has acquired a prominent position in health-related sequential decision-making problems, gaining traction as a valuable tool for delivering adaptive inter....

@pedrosantanna.bsky.social onlinelibrary.wiley.com/doi/10.1111/... biostatistics literature will use PO notation to describe the relevant objects. Just treat RL as MDP with unknown transitions (it's true RL doesn't use PO notation - it gets cumbersome and many key objects relate to the Bellman eqn)

1 year ago
Pedro H. C. Sant’Anna

I've decided to collect my DiD materials in a single place.

psantanna.com/did-resources

There, you will find
- 14 lectures of my comprehensive DiD course
- Shorter lectures/talks I have given on DiD
- My DiD R/Stata/Python packages
- Some DiD checklists
- DiD materials from my friends

Enjoy!

1 year ago
Post image

Merry Christmas, friends and colleagues! Hope you all have wonderful days full of joy! 🎄

1 year ago

Looking ahead, my future direction will explore:
1️⃣ High-dimensional, online streaming datasets.
2️⃣ Multi-modal data (e.g., text, images).
3️⃣ Robust causal inference with uncertainty quantification.

1 year ago

My past work focuses on estimating causal effects identifiable from graphs, with applications in xAI and healthcare. This includes advancing methods to handle multi-domain experimental data, distributional treatment effects, and designing computationally efficient estimators.

1 year ago
Preview
CausalAI Aficionado Yonghan Jung

Excited to share that I’m on the academic job market! I’ve been fortunate to work with Elias Bareinboim on causal inference, developing causal effect estimators using modern ML methods. Published in ICML, NeurIPS, AAAI, & more. Details: www.yonghanjung.me

1 year ago

In sum, our work provides a computationally efficient and statistically robust estimator for various covariate adjustment estimands, including cases where no such estimators previously existed.

Come see our poster and let's chat more!

1 year ago

Next, we developed double machine learning (DML)-based estimators for the UCA class and provided finite-sample guarantees, showing that they achieve double robustness and scalability (i.e., computational efficiency).
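For intuition on the DML recipe (cross-fitting plus a doubly robust score), here is a minimal numpy sketch for the plain backdoor ATE, a much simpler estimand than the UCA class; all data and names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: discrete covariate W confounds binary treatment T and outcome Y.
n = 20000
w = rng.integers(0, 3, size=n)                     # W in {0, 1, 2}
e_true = np.array([0.2, 0.5, 0.8])[w]              # P(T=1 | W)
t = rng.binomial(1, e_true)
y = 1.0 * t + 0.5 * w + rng.normal(0, 1, size=n)   # true ATE = 1.0

def aipw_cross_fit(y, t, w, K=2):
    """Cross-fitted AIPW (doubly robust) estimate of the ATE.
    Nuisances are fit on the complementary fold, as in double ML."""
    n = len(y)
    folds = np.arange(n) % K
    psi = np.empty(n)
    for k in range(K):
        tr, te = folds != k, folds == k
        for lvl in np.unique(w):
            # Nuisances on the training fold, per covariate level:
            sel = tr & (w == lvl)
            e = t[sel].mean()                      # propensity P(T=1 | W=lvl)
            mu1 = y[sel & (t == 1)].mean()         # E[Y | T=1, W=lvl]
            mu0 = y[sel & (t == 0)].mean()         # E[Y | T=0, W=lvl]
            # Doubly robust score evaluated on the held-out fold:
            s = te & (w == lvl)
            psi[s] = (mu1 - mu0
                      + t[s] * (y[s] - mu1) / e
                      - (1 - t[s]) * (y[s] - mu0) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(n)

est, se = aipw_cross_fit(y, t, w)
print(f"ATE ≈ {est:.3f} ± {1.96 * se:.3f}")        # interval covers the true ATE = 1.0
```

The UCA estimators generalize this pattern to product-of-conditionals functionals, but the two ingredients (sample splitting and a bias-correcting score) are the same.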

1 year ago

The UCA class covers functionals in the form of a product of conditional probabilities. It includes the front-door adjustment, Verma's equation, S-admissibility, the effect of treatment on the treated, soft interventions, and many other practical causal estimands.

1 year ago

In this work,
1. We define a functional class called "Unified Covariate Adjustment (UCA)" that incorporates various covariate adjustments; and
2. We develop a double machine learning (DML)-based estimator for the UCA class and provide finite-sample learning guarantees.

1 year ago
Post image Post image

We will present our work "Unified Covariate Adjustment for Causal Inference” (joint work with Jin Tian & Elias Bareinboim) at #NeurIPS2024!
- Wed (12/11) from 11am - 2pm
- Poster Session 1 (East Hall A-C) #4901
- Link: openreview.net/pdf?id=aX9z2...
Come and see us!

1 year ago