
Posts by Upol Ehsan | hiring PhD students Fall'26

Thanks Matthew! Grateful for all your help in this journey.

17 hours ago 1 0 0 0
Picture of the award

Congratulations to @upolehsan.bsky.social, who won the Georgia Tech College of Computing Doctoral Dissertation Award. The impact of his work on human-centered explainable AI (XAI) cannot be overstated.

The last chapter of Upol's dissertation also just won an Honorable Mention at CHI. #ProudAdvisor

1 day ago 20 2 1 0
From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms to Foster Dignified Human-AI Interaction | Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems

As the daughter of a cancer survivor who went through radiation, the partner of a physician, an educator, and a sociotechnical researcher who loves long-term in-situ work, I really appreciated @upolehsan.bsky.social's presentation and can't wait to read the paper! #chi2026 dl.acm.org/doi/10.1145/...

6 days ago 6 2 0 0

Initial [AI-LLM] operational gains hid "intuition rust": the gradual dulling of expert judgment. These asymptomatic effects evolved into chronic harms, such as skill atrophy and identity commoditization.

1 week ago 3 2 1 0

Learn about the "AI-as-Amplifier Paradox" at #CHI2026. Skill amplification? Or skill erosion? Or both? (CHI Honorable Mention Paper)

1 week ago 25 6 2 0
From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms for Dignified Human-AI Interaction In the future of work discourse, AI is touted as the ultimate productivity amplifier. Yet, beneath the efficiency gains lie subtle erosions of human expertise and agency. This paper shifts focus from ...

arxiv.org/abs/2601.21920

1 week ago 6 1 2 1

Institutions represented: @northeasternu.bsky.social, Berkman Klein Center at Harvard University, @gtresearch.bsky.social, @nuglobalnews.bsky.social, @University of Minnesota, Johns Hopkins Medicine, Microsoft AI, University of Illinois Urbana-Champaign

1 week ago 2 0 0 0

With my amazing co-authors (a great team of clinicians and CS researchers): Samir Passi, @kous2v.bsky.social, Todd McNutt, @markriedl.bsky.social, Sara Alcorn.

6/n

1 week ago 2 0 1 0

This is the first work to reveal not only that AI can deskill you but that it can do so in an *asymptomatic* manner. Come by to learn why no one else caught it until now, and why it's actually hard to catch.

5/n

1 week ago 3 1 1 0

Come by to learn why cancer doctors' sense of who they are eroded because of long-term AI use, what we can do about it, and why we need to talk about it.

4/n

1 week ago 2 0 1 0

What we found is something I wish we hadn't found: we knew AI can deskill you. What we didn't anticipate was *HOW* it would do it. What we also didn't anticipate were the barriers we faced in sharing it.

3/n

1 week ago 5 1 1 0

The paper is almost 4 years in the making; the study itself ran for 1 year. It's very rare for AI studies to run that long. We are not aware of anyone else who has done this with real doctors across this many sites, in such a high-stakes domain (radiation oncology), in the wild, the way we did.

2/n

1 week ago 5 1 1 0

I travelled 27 hrs, fought endless visa issues to give this talk for our award-winning paper at #CHI2026.

Your turn to show up 📍 P1, Room 112, Apr 15 (Wed), 12 noon. But why?

I wish this didn't impact every knowledge worker. Sadly, it does, and we need to talk about it.

1/n

1 week ago 48 15 4 3

Is it even CHI if the wifi works?

The "premier" venue of human-computer interaction research currently offers workshop rooms that don't have tables or extension cords.

Each of us paid $1200+ to attend.

Many of us traveled 27+ hours in the middle of a war. Fought months of visa waits.

1 week ago 1 0 0 0

As Khoury College's faculty and student researchers descend on Barcelona this week, they're asking how technology, particularly AI, can fulfill users' needs, enhance their abilities, and amplify their humanity.

Read more: https://bit.ly/4miAO6t

1 week ago 6 4 2 4

Congrats to @upolehsan.bsky.social and team. Their CHI'26 paper won an Honorable Mention award!

"From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms for Dignified Human-AI Interaction" (arxiv.org/abs/2601.21920) dives into the AI-as-Amplifier paradox, explained in comic form.

1 month ago 31 7 0 1

Indeed, how'd we even do that? Especially since we don't have decades of research behind this, and it's not as if we literally developed the tech ourselves.

1 month ago 1 0 0 0

Lol... CS conferences that banned hybrid options are now scrambling to go hybrid given geopolitical events. Would have been nice to keep this accessible in the first place, right?

1 month ago 9 0 3 0

📢 Deadline (Feb 19) is just around the corner! Get those hot takes, studies, and provocations in and join the most fun workshop at #CHI2026!

w/ Amal Alabdulkarim Justin Weisz Andreas Riener Min Kyung Lee @kenholstein.bsky.social

#ExplainableAI #HCXAI #XAI #HCI #AI

2 months ago 0 0 0 0

We are actually working behind the scenes to explore this avenue! If you have tips and tricks to share (or better yet, want to join the effort), I'd love to chat.

3 months ago 1 0 1 0

🌟 Past participants say HCXAI has become "central to their research practice", not just for the content, but for the authentic community that "preserves connection even as attendance soars past 100."

w/ Justin Weisz, Andreas Riener, @kenholstein.bsky.social, Min Kyung Lee, Amal Alabdulkarim

8/8
n=8

3 months ago 0 0 0 0
Home | HCXAI ACM CHI 2026 Workshop on Human-Centered Explainable AI (HCXAI). April 13-16, 2026 (Barcelona, Spain). This is the flagship workshop on HCXAI and one of the most well-attended and longest running worksh...

We're calling for papers, prototypes, and provocations that:
🔥 Challenge assumptions about what constitutes explainability
🚨 Expose limits, failures, and unintended consequences
🌍 Bridge disciplines: HCI, AI, social science, law, design, domain expertise

7/n

3 months ago 0 0 1 1

4️⃣ Sociotechnical Evaluation & Futures: How do we move beyond technical metrics to measure real understanding and decision quality? What participatory approaches center affected communities? What should agentic XAI look like in 2030?

6/n

3 months ago 1 0 1 0

3️⃣ Trust, Accountability & Failure Modes: How do we support calibrated reliance: appropriate trust vs. dangerous over-reliance? What happens when explanations fail through dark patterns, manipulation, or cognitive overload? What's the difference between excusable AI and explainable AI?

5/n

3 months ago 0 0 1 0

1️⃣ Stakeholder Needs: What do users vs. developers actually need to know before, during, and after agent execution?

2️⃣ Explaining Agentic Behavior: Are chain-of-thought traces useful as explanations? How do we explain multi-step plans, tool invocations, and cascading effects?

4/n

3 months ago 0 0 1 0

Since 2021, our HCXAI workshops have built a community of 450+ researchers, practitioners, and policymakers from 21 countries. This year, we're reimagining explainability for agentic systems across the following areas:

3/n

3 months ago 0 0 1 0

🎯 The challenge: LLM-based agents are challenging XAI techniques. When AI systems plan multi-step strategies and invoke tools, what does explainability even mean?

⚡️ The urgency: Without explainability, there can be no accountability. And unaccountable AI leads to automated injustice.

2/n

3 months ago 0 0 1 0

🚨 [Pls repost!] Agentic AI is stress-testing Explainable AI. We need to fix it. That's why I'm thrilled to announce the 6th Human-Centered Explainable AI (HCXAI) workshop at CHI 2026 in Barcelona! 🚀

πŸ“ 2-5 page single column papers (excluding refs)
πŸ—“οΈ Deadline: Feb 19, 2026
πŸ”— hcxai.jimdosite.com

1/n

3 months ago 5 5 2 2

The person who knows the most about all of this is @upolehsan.bsky.social, who has spent half a decade thinking about the human factors side of explanations.

And if you want to know about mechanistic faithfulness of rationales and chain of thought then @sarah-nlp.bsky.social is your go-to.

4 months ago 7 1 0 0
OpenAI has trained its LLM to confess to bad behavior Large language models often lie and cheat. We can't stop that, but we can make them own up.

OpenAI’s big idea is to teach the LLM to post-hoc explain how it solved a problem. This is an extension of chain of thought.

It looks very similar in nature to "rationale generation", an explanation technique that has been around since 2018.

www.technologyreview.com/2025/12/03/1...

4 months ago 13 2 1 0