
Posts by Byron Wallace


Patients ask LLMs medical questions — but how they phrase it matters more than it should.

Our new preprint explores how different phrasings of patient health questions can lead to inconsistent conclusions, even with the same evidence. [1/6]

Full Paper: arxiv.org/abs/2604.05051

1 week ago

Our ICML 2025 workshop on Actionable Interpretability drew massive interest. But the same questions kept coming up: What does "actionable" mean? Is it achievable? How?
We're ready to answer.
🧵

1 month ago

Can you solve this algebra puzzle? 🧩

cb=c, ac=b, ab=?

A small transformer can learn to solve problems like this!

And since the letters don't have inherent meaning, this lets us study how context alone imparts meaning. Here's what we found:🧵⬇️
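Before reading on, it's worth seeing why the puzzle has a unique answer under group axioms. The sketch below (my illustration, not the paper's method — the transformer sees only in-context examples, never these axioms) brute-forces every operation table on {a, b, c} that is associative and has an identity, keeps those consistent with cb=c and ac=b, and collects the possible values of ab:

```python
from itertools import product

ELEMS = "abc"

def possible_products(constraints):
    """Enumerate every associative operation table on {a, b, c} with a
    two-sided identity that satisfies the given equations; collect a*b."""
    answers = set()
    for values in product(ELEMS, repeat=9):
        # one candidate table: op[(x, y)] = x*y
        op = dict(zip(product(ELEMS, ELEMS), values))
        # associativity: (x*y)*z == x*(y*z) for all triples
        if any(op[op[x, y], z] != op[x, op[y, z]]
               for x, y, z in product(ELEMS, repeat=3)):
            continue
        # some element must act as a two-sided identity
        if not any(all(op[e, x] == x and op[x, e] == x for x in ELEMS)
                   for e in ELEMS):
            continue
        # the puzzle's equations
        if any(op[x, y] != v for (x, y), v in constraints):
            continue
        answers.add(op["a", "b"])
    return answers

# cb=c forces b to be the identity (b=e is the only consistent choice),
# and then ab = a*e = a.
print(possible_products([(("c", "b"), "c"), (("a", "c"), "b")]))  # {'a'}
```

The key step is that cb=c rules out a or c being the identity, so b must be; once b is the identity, ab=a follows immediately.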

2 months ago

Hello world 👋
My first paper at UT Austin!

We ask: what happens when medical “evidence” fed into an LLM is wrong? Should your AI stay faithful, or should it play it safe when the evidence is harmful?

We show that frontier LLMs accept counterfactual medical evidence at face value.🧵

2 months ago

Check out @hibaahsan.bsky.social's paper on spotting (problematic) racial biases in LLMs for healthcare applications 👇

5 months ago

3/ 🏥 A separate team at Northeastern located where certain signals live inside Olmo and made targeted edits that reduced biased clinical predictions. This kind of audit is only possible because Olmo exposes all its components.
→ buff.ly/HkChr4Q
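Generically, "targeted edits" of this kind intervene on hidden activations. A minimal sketch of one common intervention — projecting a learned "bias direction" out of an activation vector so it carries no signal along that axis; every name here is illustrative, not the paper's actual code:

```python
def project_out(activation, direction):
    """Remove the component of `activation` lying along `direction`.

    After the edit the activation is orthogonal to the (unit-normalized)
    target direction, i.e. a linear probe along that direction reads zero.
    """
    norm = sum(d * d for d in direction) ** 0.5
    unit = [d / norm for d in direction]
    coef = sum(a * u for a, u in zip(activation, unit))
    return [a - coef * u for a, u in zip(activation, unit)]

# Toy example: a hidden state with a component along a hypothetical
# "demographic" axis; the edit zeroes exactly that component.
h = [2.0, 3.0, 1.0]
bias_dir = [0.0, 1.0, 0.0]
print(project_out(h, bias_dir))  # [2.0, 0.0, 1.0]
```

In practice the direction is estimated from model activations (e.g. via a probe), and the projection is applied at a chosen layer during the forward pass — possible precisely because Olmo's internals are open.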

5 months ago

Chantal (and Vinith) find that you can jailbreak LLMs with syntax! Some examples: cshaib.github.io/syntax_domai...

5 months ago

Now to appear at #EMNLP2025 (Findings). We've added more models and experiments: arxiv.org/abs/2502.13319

5 months ago

Can we distill *circuits* from teacher models into smaller students? 👇

6 months ago

Who is going to be at #COLM2025?

I want to draw your attention to a COLM paper by my student @sfeucht.bsky.social that has totally changed the way I think and teach about LLM representations. The work is worth knowing.

And you can meet Sheridan at COLM, Oct 7!
bsky.app/profile/sfe...

6 months ago

Can we quantify what makes some text read like AI "slop"? We tried 👇

6 months ago

Our new paper asks: what is the goal of “natural language verbalization” interpretability approaches? If a verbalizer is supposed to tell us something about what’s in the target LM and NOT just what’s in the verbalizer LM, how do we actually evaluate that?

7 months ago

Wouldn’t it be great to have questions about LM internals answered in plain English? That’s the promise of verbalization interpretability. Unfortunately, our new paper shows that evaluating these methods is nuanced—and verbalizers might not tell us what we hope they do. 🧵👇1/8

7 months ago
As AI expands into medicine, Northeastern study finds AI models influenced by medical bias  - Khoury College of Computer Sciences Humans can be easily influenced by language that is one-sided, especially in complex fields like medicine. But a new Khoury-led study shows that large language models, too, can be tricked […]

Thrilled to share that our research on how LLMs can be influenced by bias from "spun" medical literature is now featured in Northeastern's Khoury news! It offers critical insights as AI enters healthcare.
The full paper can be found at arxiv.org/abs/2502.07963

7 months ago
New England Mechanistic Interpretability Workshop
About: The New England Mechanistic Interpretability (NEMI) workshop aims to bring together academic and industry researchers from the New England and surround...

This Friday, NEMI 2025 is at Northeastern in Boston: 8 talks, 24 roundtables, 90 posters, and 200+ attendees. Thanks to goodfire.ai for sponsoring! nemiconf.github.io/summer25/

If you can't make it in person, the livestream will be here:
www.youtube.com/live/4BJBis...

8 months ago

📢 How factual are LLMs in healthcare?
We’re excited to release FactEHR — a new benchmark to evaluate factuality in clinical notes. As generative AI enters the clinic, we need rigorous, source-grounded tools to measure what these models get right — and what they don’t. 🏥 🤖

8 months ago

Chatted with @byron.bsky.social at icml about my recent work, so look out for his upcoming "Tokenization is More Than More Than Compression".

9 months ago
An overview of our AI-in-the-loop expert study pipeline: given a claim from a subreddit, we extract the PIO elements and retrieve the evidence automatically. The evidence, its context, and the claim are then presented to a medical expert, who provides a judgment and a rationale for the factuality of the claim.

Are we fact-checking medical claims the right way? 🩺🤔

Probably not. In our study, even experts struggled to verify Reddit health claims using end-to-end systems.

We show why—and argue fact-checking should be a dialogue, with patients in the loop

arxiv.org/abs/2506.20876

🧵1/

9 months ago

[📄] Are LLMs mindless token-shifters, or do they build meaningful representations of language? We study how LLMs copy text in-context, and physically separate out two types of induction heads: token heads, which copy literal tokens, and concept heads, which copy word meanings.

1 year ago
Oxford Word of the Year 2024 - Oxford University Press The Oxford Word of the Year 2024 is 'brain rot'. Discover more about the winner, our shortlist, and 20 years of words that reflect the world.

I'm searching for some comp/ling experts to provide a precise definition of “slop” as it refers to text (see: corp.oup.com/word-of-the-...)

I put together a google form that should take no longer than 10 minutes to complete: forms.gle/oWxsCScW3dJU...
If you can help, I'd appreciate your input! 🙏

1 year ago

🌟Job ad🌟 We (@gregdnlp.bsky.social, @mattlease.bsky.social and I) are hiring a postdoc fellow within the CosmicAI Institute, to do galactic work with LLMs and generative AI! If you would like to push the frontiers of foundation models to help solve mysteries of the universe, please apply!

1 year ago

LLMs are known to perpetuate social biases in clinical tasks. Can we locate and intervene upon LLM activations that encode patient demographics like gender and race? 🧵

Work w/ @arnabsensharma.bsky.social, @silvioamir.bsky.social, @davidbau.bsky.social, @byron.bsky.social

arxiv.org/abs/2502.13319

1 year ago

🚨 Do LLMs fall for spin in medical literature? 🤔

In our new preprint, we find that LLMs are susceptible to biased reporting of clinical treatment benefits in abstracts—more so than human experts. 📄🔍 [1/7]

Full Paper: arxiv.org/abs/2502.07963

🧵👇

1 year ago
Who Taught You That? Tracing Teachers in Model Distillation Model distillation -- using outputs from a large teacher model to teach a small student model -- is a practical means of creating efficient models for a particular task. We ask: Can we identify a stud...

📢 Can we trace a small distilled model back to its teacher? 🤔New work (w/ @chantalsh.bsky.social, @silvioamir.bsky.social & @byron.bsky.social) finds some footprints left by LLMs in distillation! [1/6]

🔗 Full paper: arxiv.org/abs/2502.06659

1 year ago

DeepSeek R1 shows how important it is to study the internals of reasoning models. Here, @canrager.bsky.social shows a method for auditing AI bias by probing the internal monologue. Try our code:

dsthoughts.baulab.info

I'd be interested in your thoughts.

1 year ago

📣 🌍 We're hiring for 2 Machine Learning researchers to join SOLACE-AI @kingscollegelondon.bsky.social , funded by @wellcometrust.bsky.social . This is your chance to develop cutting-edge AI to directly impact global health responses to climate emergencies. jobs.ac.uk/job/DLM377

1 year ago

OLMo 2 is out 🥳 7B and 13B models trained on 5T tokens, and meticulously instruction tuned using the Tulu 3 recipe.

Simply the best fully open models yet.

Really proud of the work & the amazing team at
@ai2.bsky.social

1 year ago

And Sheridan Feucht investigates the "implicit vocabulary" of LLMs via token erasure: arxiv.org/abs/2406.20086 (w/David Atkinson and @davidbau.bsky.social)

1 year ago

Somin Wadhwa has some intriguing findings on distillation with "chain of thought" sequences (e.g., this works better when "reasoning" follows labels, and individual tokens seem to be sufficient): arxiv.org/abs/2406.14511 (w/@Silvio Amir)

1 year ago

Chantal Shaib reports on syntactic "templates" that LLMs like to repeat: arxiv.org/abs/2407.00211 (w/@yanai.bsky.social and @jessyjli.bsky.social)

1 year ago