Posts by Sauvik Das

CHI '26 logo letters filled with brightly colored mosaic tiles

Carnegie Mellon University authors from 12 different @cmu.edu depts contributed to 76 #CHI2026 papers:
🛠️ New tools & systems
🧠 New frameworks & taxonomies
☑️ New ways to audit tech
🆕 New advancements in Accessibility, Health, Design & so much more... Details here: hcii.cmu.edu/news/cmu-chi...

3 days ago

Excited to be heading to Barcelona for #CHI2026 to host our workshop PoliSim: LLM Agent Simulation for Policy!

This year, we’ve seen incredible interest from researchers across HCI, NLP, CSS, and Policy. We accepted 25 outstanding papers, with 5 selected as Best Paper nominees.

1 week ago

The overleaf git project for one of my #uist2026 submissions was assigned a uuid starting with "67". I believe this lets me mine the latest block on the gen alpha blockchain using the "Proof of Relevance" protocol. Who should I speak with about this

2 weeks ago

Will AI become a confirmation bias machine?

AI can be a powerful tool for truth-seeking. Yet, people might prefer to use AI to confirm their pre-existing beliefs, and features of AI systems (eg sycophancy) may make AI effective at justifying what people want to believe.
osf.io/preprints/ps...

1 month ago

Really excited that “Privy” received a CHI ’26 Honorable Mention Award 🏅

Privy is part of my PhD work on AI privacy: understanding AI privacy risks, studying why they’re hard to address in practice, and building tools that help product teams act on them.

1 month ago

I am working on a UX audit agent for my company (fuguux.com/).

I passed the output of our audit agent into Claude Code for my personal website and...voila, a refreshed and much better site. Surprised it worked!

Before -> after

sauvik.me

1 month ago

An exciting development: Privy was recognized with a best paper honorable mention at #chi2026!

Congrats to the whole team, including: @hankhplee.bsky.social @kyzyl.me @jodiforlizzi.bsky.social

1 month ago
CBP Tapped Into the Online Advertising Ecosystem To Track Peoples’ Movements
An internal DHS document obtained by 404 Media shows for the first time CBP used location data sourced from the online advertising industry to track phone locations. ICE has bought access to similar t...

SCOOP: An internal DHS document obtained by 404 Media shows for the first time CBP used location data sourced from the online advertising industry to track phone locations.

This surveillance can happen through all sorts of apps, such as video games, news apps, weather trackers, and dating apps.

1 month ago

This is functionally an end-run around judicial oversight and due process, transforming consumer data into a tool for tracking and enforcement while inevitably sweeping in non-targets, including U.S. citizens and lawful residents.

1 month ago

Data collected for advertising should not be repurposed for government surveillance or tracking. This secondary use of personal data violates contextual integrity and breaches basic data minimization by turning commercially gathered information into a general investigative resource.

1 month ago

In January, ICE/DHS put out an RFI on how companies with “commercial Big Data and Ad Tech” products can “directly support investigations activities.”

This effort would go against the best practice of minimizing data collection as a safeguard against misuse.

1 month ago

Honored to be named Privacy co-chair of ACM's U.S. Technology Policy Committee with @benwinters.bsky.social

On that note: please read USTPC's statement discouraging adtech vendors from sharing personal data with DHS/ICE.

1 month ago

Can LLMs really serve as "crash dummies" for security & privacy testing? We put this assumption to the test.

🚨New preprint 🚨: "How Well Can LLM Agents Simulate End-User Security and Privacy Attitudes and Behaviors?"

👇 THREAD 👇
[Link to paper: arxiv.org/abs/2602.184...
[1/n]

1 month ago

New statement from the ACM U.S. Technology Policy Committee on AdTech & DHS privacy/security: computing experts urge stronger safeguards around government use of advertising-tech data collection. #Privacy #Security #TechPolicy

acm.org/binaries/con...

1 month ago

🚨New paper🚨

We spent a year working with emergency preparedness policymakers to answer a simple question: can LLM agent simulations actually help real institutions make better decisions?
The answer is yes—but perhaps not how you'd expect.

👇 THREAD 👇
[Link to paper: arxiv.org/abs/2509.218...
[1/n]

2 months ago

In short: Quantifying privacy risks can help users make more informed decisions—but the UX needs to present risks in a manner that is interpretable and actionable to truly *empower* users, rather than scare them.

Thanks @NSF for supporting this work!

2 months ago

(1) Pair risk flags with actionable guidance (how to preserve intent, reduce risk)
(2) Explain plausible attacker exploits (not just “risk: high”)
(3) Communicate risk without pushing unnecessary self-censorship
(4) Use intuitive language/visuals; avoid jargon

2 months ago

Interestingly, no single UI for presenting PREs to users “won”.

Participants didn’t show a strong overall preference across the five designs (though “risk by disclosure” tended to be liked more; the meter less).

So what *should* PRE designs do? 4 design recommendations:

2 months ago

…but sometimes PREs encouraged self-censorship.

A meaningful chunk of reflections ended with deleting the post, not posting at all, or even leaving the platform.

2 months ago

Finding #2: PREs drove action (often good!).
In 66% of reflections, participants envisioned the user editing the post.

Most commonly: “evasive but still expressive” edits (change details, generalize, remove a pinpoint).

2 months ago

Finding #1: PREs often *shifted perspective*.
In ~74% of reflections, participants expected higher privacy awareness / risk concern.

…but awareness came with emotional costs.
Many participants anticipated anxiety, frustration, or feeling stuck about trade-offs.

2 months ago

The 5 concepts ranged from:

(1) raw k-anonymity score
(2) a re-identifiability “meter”
(3) low/med/high simplified risk
(4) threat-specific risk
(5) “risk by disclosure” (which details contribute most)

2 months ago

Method: speculative design + design fictions.

We storyboarded 5 PRE UI concepts using comic-boards (different ways to show risk + what’s driving it).

2 months ago

The core design question:

How should PREs be presented so they help people make better disclosure decisions… *without* nudging them into unnecessary self-censorship?

We don't want people to stop posting — we want them to make informed disclosure decisions that account for the risks.

2 months ago

This paper explores how to present “population risk estimates” (PREs): an AI-driven estimate of how uniquely identifiable you are based on your disclosures.

Smaller “k” means you're more identifiable (e.g., k=1 means only 1 person matches everything you have disclosed)
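A back-of-the-envelope way to see what that k means (a minimal illustrative sketch, not the paper's AI-driven estimator; the toy population and attribute names are made up):

```python
from typing import Dict, List

def k_for_disclosures(population: List[Dict[str, str]],
                      disclosed: Dict[str, str]) -> int:
    """Count people in the population who match every disclosed
    attribute; this count is the k in k-anonymity."""
    return sum(
        all(person.get(attr) == value for attr, value in disclosed.items())
        for person in population
    )

# Toy population of four people with three quasi-identifiers.
population = [
    {"city": "Pittsburgh", "age": "30s", "job": "professor"},
    {"city": "Pittsburgh", "age": "30s", "job": "nurse"},
    {"city": "Pittsburgh", "age": "20s", "job": "professor"},
    {"city": "Atlanta",    "age": "30s", "job": "professor"},
]

print(k_for_disclosures(population, {"city": "Pittsburgh"}))  # 3
print(k_for_disclosures(population, {"city": "Pittsburgh",
                                     "age": "30s",
                                     "job": "professor"}))    # 1
```

Disclosing just a city leaves k=3 here; stacking city + age + job drops k to 1, i.e., fully re-identifiable within this population.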

2 months ago

This paper is the latest in a productive collaboration between my lab, @cocoweixu, and @alan_ritter.

ACL'24 -> a SOTA self-disclosure detection model
CSCW'25 -> a human-AI collaboration study of disclosure risk mitigation
NeurIPS'25 -> a method to quantify self-disclosure risk

2 months ago

📣 New at #CHI2026
People share sensitive things “anonymously”… but anonymity is hard to reason about.

What if we could quantify re-identification risk with AI? How should we present those AI-estimated risks to users?

Led by my student Isadora Krsek

Paper: www.sauvik.me/papers/70/s...

2 months ago

Check out the paper! It's one of the coolest papers from my lab in that it includes both a fully working system *and* a very comprehensive mixed-methods evaluation. Still had a reviewer that wanted even more, but c'est la vie 😂

www.sauvik.me/papers/69/s...

Thanks for the support @NSF!

2 months ago

Thus, even though LLM assistance improved outputs, it also raised practitioner expectations of what the AI would handle for them and made the manual work they *did* have to do feel extra burdensome. A stark design tension for the future of AI-assisted work.

2 months ago

A surprising aside: we added a number of design frictions to Privy-LLM to encourage critical thinking. As a result, some practitioners rated Privy-LLM as being *less helpful* than those who used just the static template (where they had to do much more of the work manually).

2 months ago