
Posts by Hamsa Bastani

Effective Personalized AI Tutors via LLM-Guided Reinforcement Learning Generative AI (GenAI) is rapidly reshaping education by unlocking the potential for personalized tutoring. Yet, emerging platforms largely focus on GenAI

Paper: papers.ssrn.com/sol3/papers....

1 month ago

AI really can help education: a controlled experiment found that a GPT-4o-powered tutor that personalized problems for students raised final test scores by 0.15 SD, "equivalent to as much as six to nine months of additional schooling by some estimates—without increasing instruction time or teacher workload"

1 month ago

This work was led by Angel Chung, and is joint work with Botong Zhang, Ling-Chieh Kung, & @obastani.bsky.social. We are grateful for generous funding from the Wharton AI & Analytics Initiative, Mack Institute, Amazon Research, the Taipei City Government, and the American Institute in Taiwan.

1 month ago
Generative AI without guardrails can harm learning: Evidence from high school mathematics | PNAS Generative AI is poised to revolutionize how humans work, and has already demonstrated promise in significantly improving human productivity. A key...

Unlike our earlier work where AI assistance harmed learning by substituting for productive struggle (tinyurl.com/5fa9kvvb & tinyurl.com/mphttkuf), this system *created* productive struggle by matching students to problems of appropriate difficulty. Design matters!

1 month ago

Result: across 770 students, adaptive sequencing improved performance on an in-person final exam taken without AI assistance by 0.15 SD, with larger effects for beginners. Our evidence suggests the gains came from stronger engagement and more productive AI use.

1 month ago

We tested this in a 5-month randomized field experiment in a Python course across 10 high schools in Taipei. All students had the same course material/instruction and the same AI tutor. The only difference was adaptive vs. fixed problem sequencing.

1 month ago

🚨🚨 Excited to share our first positive results on AI in education! Most AI tutor work focuses on better chatbots. We suggest another lever: deciding what students practice next. We combine an LLM tutor with reinforcement learning to personalize problem sequencing using student-LLM interaction data.

1 month ago
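The "what to practice next" lever above can be caricatured as a bandit over difficulty levels. A minimal epsilon-greedy sketch, purely illustrative: the class name, levels, and reward signal are my inventions, not the paper's RL algorithm.

```python
import random

class DifficultySequencer:
    """Epsilon-greedy bandit over problem difficulty levels.

    The reward could be any signal of productive practice, e.g. whether the
    student eventually solved the problem without over-relying on the tutor.
    """
    def __init__(self, levels, epsilon=0.1):
        self.levels = list(levels)
        self.epsilon = epsilon
        self.counts = {d: 0 for d in self.levels}
        self.values = {d: 0.0 for d in self.levels}  # running mean reward

    def next_difficulty(self):
        if random.random() < self.epsilon:
            return random.choice(self.levels)         # explore
        return max(self.levels, key=self.values.get)  # exploit

    def update(self, difficulty, reward):
        # Incremental mean update for the chosen difficulty.
        self.counts[difficulty] += 1
        self.values[difficulty] += (reward - self.values[difficulty]) / self.counts[difficulty]
```

The real system learns from rich student-LLM interaction data rather than a scalar reward per problem; this only shows the explore/exploit skeleton.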


When we fit a sigmoid (S-curve) to the exact same dataset by @metr.org, we find it fits the data much better (in-sample) than their exponential, suggesting a compelling alternative. Importantly, the inflection point (June 2025) has already passed.

2 months ago
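Mechanically, that comparison is just fitting two functional forms and checking in-sample error. A sketch on synthetic S-shaped data (not the METR dataset; every parameter value here is made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, L, k, t0):
    # Logistic S-curve: capacity L, growth rate k, inflection point t0.
    return L / (1.0 + np.exp(-k * (t - t0)))

def exponential(t, a, b):
    return a * np.exp(b * t)

# Synthetic S-shaped data with true inflection at t0 = 6.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = sigmoid(t, 100.0, 1.2, 6.0) + rng.normal(0.0, 2.0, t.size)

p_sig, _ = curve_fit(sigmoid, t, y, p0=[y.max(), 1.0, np.median(t)])
p_exp, _ = curve_fit(exponential, t, y, p0=[1.0, 0.3],
                     bounds=([1e-6, 1e-6], [200.0, 2.0]))

sse_sig = float(np.sum((y - sigmoid(t, *p_sig)) ** 2))
sse_exp = float(np.sum((y - exponential(t, *p_exp)) ** 2))
print(f"sigmoid SSE = {sse_sig:.1f}, exponential SSE = {sse_exp:.1f}")
# p_sig[2] is the fitted inflection point.
```

On data that truly plateaus, the exponential's in-sample error blows up because it cannot flatten, which is exactly the shape distinction at issue.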

Great example! I agree that autonomous driving suffers from the exact same issue.

3 months ago

Interesting and definitely related, thank you for sharing!

3 months ago
The Human-AI Contracting Paradox The integration of state-of-the-art AI tools into professional workflows promises substantial productivity gains. Rather than replacing workers, there is hope t

Barriers to AI adoption aren't just about technological trust: addressing the economics of attention will be key to achieving human-AI collaboration.

Read the full paper here: papers.ssrn.com/sol3/papers....

3 months ago

This creates a perverse incentive where employers will:
🚫 Ban the AI entirely to avoid unmonitored risks,
👋 Fire the human, even when human oversight would have improved outcomes, or
📉 Adopt a less reliable AI tool, simply because it keeps the human engaged at a lower cost.

3 months ago

If the AI is right 99% of the time, the human-in-the-loop has almost zero incentive to inspect the outputs. To force them to stay vigilant against that rare mistake, the employer has to pay a massive wage premium—one that scales inversely with the error probability.

3 months ago
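The inverse scaling follows from simple incentive-compatibility arithmetic. A back-of-envelope sketch under a toy model of my own (not the paper's formal contract): the human pays cost c to inspect an output, errors occur with probability eps, and a bonus B is paid per caught error, so inspecting is worthwhile only when eps * B >= c, i.e. B >= c / eps.

```python
def min_bonus(inspection_cost: float, error_prob: float) -> float:
    """Smallest per-error bonus that makes inspection incentive-compatible:
    the expected bonus from inspecting (error_prob * bonus) must cover the
    inspection cost, so bonus >= inspection_cost / error_prob."""
    return inspection_cost / error_prob

# As the AI's error probability shrinks 100x, the required bonus grows 100x.
for eps in (0.10, 0.01, 0.001):
    print(f"error prob {eps:6.3f} -> min bonus = "
          f"{min_bonus(1.0, eps):8.1f} x inspection cost")
```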

We keep saying: "AI will handle the boring stuff, and humans will supervise." But the problem is that as AI reliability improves, it becomes really hard to motivate a human to conscientiously monitor it.

In a new WP with Gerard Cachon, we describe the "human-AI contracting paradox."

3 months ago

This is not to say AI can't improve education, but it requires careful measurement, experimentation, and evaluation - none of which is happening. The tech industry has put FAR more effort into A/B testing for ad optimization than into building for education.

5 months ago

Yet another example: adopting over-hyped AI/EdTech in classrooms without careful thought (especially replacing time with "trained, caring teachers") is likely to have many negative consequences for kids' learning, motivation, and well-being. https://wired.com/story/ai-teacher-inside-alpha-school/

5 months ago
Opinion | I Teach Creative Writing. This Is What A.I. Is Doing to Students.

Such a great article!! Well worth reading instead of using ChatGPT to summarize 😂
www.nytimes.com/2025/07/18/o...

9 months ago

Thanks!! P.s. just so you know, you're the only reason I post on bluesky 😂

9 months ago

Joint work w/ amazing team @obastani.bsky.social, Alp Süngü, Haosen Ge, Özge Kabakcı, & Rei Mariman.

Grateful for thoughtful feedback from Eric Bradlow, Angela Duckworth, Stefan Feuerriegel, Benjamin Lira Luttges, Lilach M., Ananya Sen, Christian Terwiesch, Lyle Ungar, & many others

9 months ago

Out in @pnas.org today!! We ran a field experiment with ~1000 high school students & found:

✅ GenAI tutoring boosts practice perf
⚠️ But hinders human learning, hurting perf when AI access is removed
🛡️ Safeguards like hint-based help can offset this
www.pnas.org/doi/10.1073/...

9 months ago
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed thr...

Cool paper using brain imaging to track cognitive load while writing essays! As expected, people under-use their brain when using LLM assistance, but interestingly, they also struggle more when switching back to writing on their own: AI assistance creates "cognitive debt"
arxiv.org/abs/2506.08872

10 months ago

Research increasingly shows that AI without appropriate guardrails can harm students' long-term learning and engagement. This is not something to be rushed into without careful experimentation and resources, especially when it comes to our young kids.

11 months ago
Opinion | A.I. Will Destroy Critical Thinking in K-12

"Putting more screens in our classrooms is not going to automatically lead to a smarter, healthier or better-employed population. And parents of all backgrounds need to stand up and shout it now."

www.nytimes.com/2025/05/14/o...

11 months ago
Stochastic Online Conformal Prediction with Semi-Bandit Feedback Conformal prediction has emerged as an effective strategy for uncertainty quantification by modifying a model to output sets of labels instead of a single label. These prediction sets come with the gu...

Conformal prediction sets are a useful way to capture uncertainty for LLMs & deep learning models. But they're data-hungry! We propose a semi-bandit algo to learn these sets online. Check out our @icmlconf.bsky.social paper: arxiv.org/abs/2405.13268

Work led by Haosen Ge, w/ @obastani.bsky.social

11 months ago
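For readers new to conformal prediction, a minimal sketch of the standard offline split-conformal recipe that online variants build on. The classifier and data here are toy inventions of mine; this is not the paper's semi-bandit algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(x):
    # Toy 3-class "classifier": softmax over hand-made logits.
    x = np.atleast_1d(x)
    logits = np.stack([x, 1.0 - x, 0.5 * np.ones_like(x)], axis=1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Calibration split: nonconformity score = 1 - probability of the true label.
n = 500
x_cal = rng.uniform(0.0, 1.0, n)
y_cal = rng.integers(0, 3, n)
scores = 1.0 - predict_proba(x_cal)[np.arange(n), y_cal]

# Conformal quantile for target 90% coverage (alpha = 0.1).
alpha = 0.1
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def prediction_set(x):
    # Keep every label whose nonconformity falls below the threshold.
    p = predict_proba(x)[0]
    return [k for k in range(3) if 1.0 - p[k] <= q]
```

On exchangeable data, sets built this way cover the true label with probability at least 1 - alpha; the data-hungriness mentioned in the post comes from needing that held-out calibration split, which the online semi-bandit approach aims to relax.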
How AI Vaporizes Long-Term Learning (YouTube video by Edutopia)

Great video summarizing our research! AI in education needs to be carefully designed to ensure we support critical thinking & learning

W/ @obastani.bsky.social Alp Haosen Ozge Rei

youtube.com/watch?v=n2W_...

11 months ago
Workshop on AI & Analytics for Social Good | Smith School Join us at the Workshop on AI & Analytics for Social Good, themed around 'Analytics for Doing Good.' Hosted by the Smith School of Business at the University of Maryland on May 2, 2025, this event gat...

Excited to once again co-organize the 4th Annual Workshop on AI & Analytics for Social Good at UMD on 5/2 with Margret Bjarnadottir, Jessica Clark, Jui Ramprasad, and John Silberholz! Early-career scholars, please submit your "AI for good" work by 2/14:
www.rhsmith.umd.edu/departments/...

1 year ago
The 2025 Economic Report of the President | CEA | The White House Today, the Council of Economic Advisers under the leadership of Chair Jared Bernstein released the 2025 Economic Report of the President, the 79th report since the establishment of CEA in 1946. The 20...

Excited to see our paper "Generative AI Can Harm Learning" cited in Ch 7 of the 2025 Economic Report of the President: whitehouse.gov/cea/written-...

Paper: papers.ssrn.com/sol3/papers...., co-authored with @obastani.bsky.social, Alp, Haosen, Ozge & Rei

1 year ago

Hi folks, I'm new here! 👋🏾

1 year ago