Thanks Matthew! Grateful for all your help in this journey.
Posts by Upol Ehsan | hiring PhD students Fall'26
Picture of the award
Congratulations to @upolehsan.bsky.social, who won the Georgia Tech College of Computing Doctoral Dissertation Award. The impact of his work on human-centered explainable AI (XAI) cannot be overstated.
The last chapter of Upol's dissertation also just won an Honorable Mention at CHI. #ProudAdvisor
As the daughter of a cancer survivor who went through radiation, the partner of a physician, an educator, and a sociotechnical researcher who loves long-term in-situ work, I really appreciated @upolehsan.bsky.social's presentation and can't wait to read the paper! #chi2026 dl.acm.org/doi/10.1145/...
Initial [AI-LLM] operational gains hid "intuition rust": the gradual dulling of expert judgment. These asymptomatic effects evolved into chronic harms, such as skill atrophy and identity commoditization.
Learn about the "AI-as-Amplifier Paradox" at #CHI2026. Skill amplification? Or skill erosion? Or both? (CHI Honorable Mention Paper)
Institutions represented: @northeasternu.bsky.social, Berkman Klein Center at Harvard University, @gtresearch.bsky.social, @nuglobalnews.bsky.social, University of Minnesota, Johns Hopkins Medicine, Microsoft AI, University of Illinois Urbana-Champaign
With my amazing co-authors (a great team of clinicians and CS researchers): Samir Passi, @kous2v.bsky.social, Todd McNutt, @markriedl.bsky.social, Sara Alcorn.
6/n
This is the first work to reveal not only how AI can deskill you but how it can do so in an *asymptomatic* manner. Come by to learn why no one else caught it until now, and why it's actually hard to catch.
5/n
Come by to learn why cancer doctors' sense of who they are was eroded by long-term AI use, what we can do about it, and why we need to talk about it.
4/n
What we found is something I wish we hadn't found: we knew AI can deskill you. What we didn't anticipate was *HOW* it would do it. What we also didn't anticipate were the barriers we faced in sharing it.
3/n
The paper is almost 4 years in the making; the study itself ran for 1 year. It is very rare for AI studies to run that long. We are not aware of anyone else who has done this with real doctors across this many sites, in such a high-stakes domain (radiation oncology), in the wild, in the way we did.
2/n
I travelled 27 hrs, fought endless visa issues to give this talk for our award-winning paper at #CHI2026.
Your turn to show up 📍 P1, Room 112, Apr 15 (Wed), 12 noon. But why?
I wish this didn't impact every knowledge worker. Sadly, it does, and we need to talk about it.
1/n
Is it even CHI if the wifi works?
The "premier" venue of human-computer interaction research currently offers workshop rooms that don't have tables or extension cords.
Each of us paid $1200+ to attend.
Many of us traveled 27+ hours in the middle of a war. Fought months of visa waits.
As Khoury College's faculty and student researchers descend on Barcelona this week, they're asking how technology, particularly AI, can fulfill users' needs, enhance their abilities, and amplify their humanity.
Read more: https://bit.ly/4miAO6t
Congrats to @upolehsan.bsky.social and team. Their CHI'26 paper won an Honorable Mention award!
"From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms for Dignified Human-AI Interaction" (arxiv.org/abs/2601.21920) dives into the AI-as-Amplifier paradox, explained in comic form.
Indeed, how'd we even do that? Especially since we don't have decades of research to lean on, and it's not like we literally developed the tech.
Lol... CS conferences that banned hybrid options are now scrambling to go hybrid given geopolitical events. Would have been nice to keep this accessible in the first place, right?
📢 Deadline (Feb 19) is just around the corner! Get those hot takes, studies, and provocations in and join the most fun workshop at #CHI2026!
w/ Amal Alabdulkarim, Justin Weisz, Andreas Riener, Min Kyung Lee, @kenholstein.bsky.social
#ExplainableAI #HCXAI #XAI #HCI #AI
We are actually working behind the scenes to explore this avenue! If you have tips and tricks to share (or better yet, want to join the effort), I'd love to chat.
Past participants say HCXAI has become "central to their research practice", not just for the content, but for the authentic community that "preserves connection even as attendance soars past 100."
w/ Justin Weisz, Andreas Riener, @kenholstein.bsky.social, Min Kyung Lee, Amal Alabdulkarim
8/8
n=8
We're calling for papers, prototypes, and provocations that:
🔥 Challenge assumptions about what constitutes explainability
🚨 Expose limits, failures, and unintended consequences
🌉 Bridge disciplines: HCI, AI, social science, law, design, domain expertise
7/n
4️⃣ Sociotechnical Evaluation & Futures: How do we move beyond technical metrics to measure real understanding and decision quality? What participatory approaches center affected communities? What should agentic XAI look like in 2030?
6/n
3️⃣ Trust, Accountability & Failure Modes: How do we support calibrated reliance: appropriate trust vs. dangerous over-reliance? What happens when explanations fail through dark patterns, manipulation, or cognitive overload? What's the difference between excusable AI and explainable AI?
5/n
1️⃣ Stakeholder Needs: What do users vs. developers actually need to know before, during, and after agent execution?
2️⃣ Explaining Agentic Behavior: Are chain-of-thought traces useful as explanations? How do we explain multi-step plans, tool invocations, and cascading effects?
4/n
Since 2021, our HCXAI workshops have built a community of 450+ researchers, practitioners, and policymakers from 21 countries. This year, we're reimagining explainability for agentic systems across the following areas:
3/n
🎯 The challenge: LLM-based agents are challenging existing XAI techniques. When AI systems plan multi-step strategies and invoke tools, what does explainability even mean?
⚡️ The urgency: Without explainability, there can be no accountability. And unaccountable AI leads to automated injustice.
2/n
🚨 [Pls repost!] Agentic AI is stress-testing Explainable AI. We need to fix it. That's why I'm thrilled to announce the 6th Human-Centered Explainable AI (HCXAI) workshop at CHI 2026 in Barcelona!
📄 2-5 page single-column papers (excluding refs)
🗓️ Deadline: Feb 19, 2026
🔗 hcxai.jimdosite.com
1/n
The person who knows the most about all of this is @upolehsan.bsky.social, who has spent half a decade thinking about the human factors side of explanations.
And if you want to know about mechanistic faithfulness of rationales and chain of thought then @sarah-nlp.bsky.social is your go-to.
OpenAI's big idea is to teach the LLM to post-hoc explain how it solved a problem. This is an extension of chain of thought.
It looks very similar in nature to "rationale generation", an explanation technique that has been around since 2018.
www.technologyreview.com/2025/12/03/1...