
Posts by Karen Ullrich (s/h) ✈️ NeurIPS

Cross-Tokenizer Likelihood Scoring Algorithms for Language Model Distillation Computing next-token likelihood ratios between two language models (LMs) is a standard task in training paradigms such as knowledge distillation. Since this requires both models to share the same prob...

Read all the details in our ICLR 2026 paper
arxiv.org/abs/2512.14954

authored by Buu Phan, Ashish Khisti and myself

2 months ago

LLMs are largely memory bound!

Tokenizers can eat >30% of on‑device memory for small (<1B) models, because we reuse the huge vocabularies from big models.

Our method lets you distill large models into small ones with much smaller vocabulary, reducing overall memory footprint.
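A back-of-the-envelope illustration of why inherited vocabularies dominate small-model memory (the numbers below are my own illustrative picks, not figures from the paper):

```python
# Rough memory estimate for a small LM that reuses a large-model vocabulary.
# All numbers are illustrative, not taken from the paper.
vocab_size = 128_000      # vocabulary inherited from a large model
d_model = 1024            # hidden size of a sub-1B model
n_params_total = 500e6    # ~0.5B parameters overall

# Input embeddings plus an untied output head both scale with vocab_size.
embed_params = 2 * vocab_size * d_model
embed_fraction = embed_params / n_params_total
print(f"embedding share of parameters: {embed_fraction:.0%}")
```

With these toy numbers the vocabulary-dependent tables alone account for roughly half the parameters, which is why shrinking the vocabulary shrinks the on-device footprint so effectively.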

2 months ago
Starting with LLM Agents: Installing and Running OpenApps (YouTube video by Dr. Karen Ullrich, Machine Learning Research)

If “getting started with agents” feels like setup hell — same.
So we made a starter tutorial:
First agent running in <14 minutes, no Docker/AWS.
Laptop + API key only.
👇 www.youtube.com/watch?v=gzNW...

4 months ago

Want to teach AI agents to use apps like humans? Get started with digital agents research using OpenApps, our new Python-based environment.

4 months ago
Start with OpenApps

⭐️ facebookresearch.github.io/OpenApps/
📖 arxiv.org/abs/2511.20766

🛠️ Want to build new apps or tasks? Contributions welcome.

Let’s push UI agents forward — together. 🚀

4 months ago

This isn’t just a benchmark — it’s a generator. 🏭

With simple YAML configs, you can spawn thousands of variants of 6 fully functional apps.

Swap themes, scramble layouts, inject “challenging” fonts…

If your agent really understands UIs, this will show it.
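A hypothetical sketch of what such a YAML variant config might look like (the field names are invented for illustration and are not the actual OpenApps schema):

```yaml
# Hypothetical OpenApps-style variant config.
# Field names are illustrative, not the real schema.
app: calendar
variants:
  - theme: dark
    layout: shuffled
    font: challenging-serif
  - theme: light
    layout: default
    font: system
```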

4 months ago

Why OpenApps?

✅ Unlimited data for evaluation and training
✅ Lightweight — single CPU, no Docker, no OS emulators
✅ Reliable rewards from ground-truth app state
✅ Plug-and-play with any RL Gym or multimodal LLM agent
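To illustrate the plug-and-play point, here is a minimal Gym-style interaction loop against a stub environment; the stub class, its observations, and its reward logic are stand-ins, and the real OpenApps API may differ:

```python
# Minimal Gym-style loop. OpenAppsEnv here is a stand-in stub, not the
# actual OpenApps API; it only shows the reset/step interface an RL Gym
# wrapper or an LLM agent harness would plug into.
class OpenAppsEnv:
    """Toy stand-in exposing reset() and step(action)."""
    def reset(self):
        self._t = 0
        return {"screen": "home"}              # initial observation

    def step(self, action):
        self._t += 1
        done = self._t >= 3                    # toy 3-step episode
        reward = 1.0 if done else 0.0          # reward from app state
        return {"screen": f"page{self._t}"}, reward, done, {}

env = OpenAppsEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    action = "click"                           # an agent policy goes here
    obs, reward, done, info = env.step(action)
    total += reward
print(total)   # 1.0
```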

4 months ago

Release Day 🎉

Meet OpenApps — a pure-Python, open-source ecosystem for stress-testing UI agents at scale.

Runs on a single CPU. Generates thousands of unique UI variations. And it reveals just how fragile today’s SOTA agents are.

(Yes, even GPT-4 and Claude struggle.)

4 months ago

Excited to be attending NeurIPS 2025 — my 10-year anniversary since I first walked into NeurIPS back in 2015, just before starting my PhD in Amsterdam.

Grateful for a decade of learning, growing, and being part of this incredible community.

If you’re around this year, would love to meet.

4 months ago

One can manipulate LLM rankings to put any model in the lead, merely by modifying the single character separating demonstration examples. Learn more in our new paper: arxiv.org/abs/2510.05152
w/ Jingtong Su, Jianyu Zhang, @karen-ullrich.bsky.social, and Léon Bottou.
🧵
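A toy sketch of the setup being manipulated: few-shot prompts that differ only in the character joining demonstrations. The demos and helper below are invented for illustration, and the paper's actual scoring procedure is omitted:

```python
# Few-shot prompts that differ only in the demonstration separator.
# Per the paper, which separator a model is evaluated with can swing
# leaderboard rankings; the scoring step itself is omitted here.
demos = ["Q: 2+2? A: 4", "Q: 3+3? A: 6"]
query = "Q: 4+4? A:"

def build_prompt(separator):
    """Join demonstrations and the query with a single separator char."""
    return separator.join(demos + [query])

for sep in ["\n", " ", "\t"]:
    print(repr(build_prompt(sep)))
```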

6 months ago

Y’all, I am at #COLM this week, very excited to learn, and meet old and new friends. Please reach out on Whova!

6 months ago

Check out the full paper here: www.arxiv.org/pdf/2506.17052 🎓 Work by Jingtong Su, @kempelab.bsky.social, @nyudatascience.bsky.social, @aiatmeta.bsky.social

9 months ago

Plus, we generate importance maps showing where in the transformer the concept is encoded — providing interpretable insights into model internals.

9 months ago

SAMI: Diminishes or amplifies these modules to control the concept's influence

With SAMI, we can scale the importance of these modules — either amplifying or suppressing specific concepts.
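A toy sketch of that intervention idea, assuming we have per-head outputs we can rescale (the values and names are invented for illustration; this is not the paper's implementation):

```python
# Toy SAMI-style intervention: rescale the outputs of selected attention
# heads by a scalar. Heads and values are invented for illustration.
head_out = {                     # per-head output vectors (toy values)
    0: [0.5, -1.2],
    1: [2.0, 0.3],
    2: [-0.7, 0.9],
}
concept_heads = {1}              # heads flagged as encoding the concept
alpha = 0.0                      # 0 suppresses the concept, >1 amplifies it

scaled = {
    h: [alpha * x for x in v] if h in concept_heads else list(v)
    for h, v in head_out.items()
}
print(scaled[1])   # [0.0, 0.0]
```

Setting `alpha` to 0 zeroes out the flagged heads (suppression), while a value above 1 amplifies their contribution; all other heads pass through unchanged.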

9 months ago

SAMD: Finds the attention heads most correlated with a concept

Using SAMD, we find that only a few attention heads are crucial for a wide range of concepts—confirming the sparse, modular nature of knowledge in transformers.
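A toy sketch of correlation-based head selection in the spirit of SAMD (the data and scoring below are invented for illustration; this is not the paper's actual procedure):

```python
# Toy SAMD-style selection: score each attention head by how strongly its
# activation correlates with a concept label, then keep the top-k heads.
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

labels = [1, 1, 0, 0]                 # concept present/absent per input
head_acts = {                         # mean activation per head, per input
    "h0": [0.9, 0.8, 0.1, 0.2],       # tracks the concept
    "h1": [0.4, 0.5, 0.6, 0.4],       # mostly noise
    "h2": [0.7, 0.9, 0.2, 0.1],       # tracks the concept
}
scores = {h: abs(pearson(a, labels)) for h, a in head_acts.items()}
top_k = sorted(scores, key=scores.get, reverse=True)[:2]
print(sorted(top_k))   # ['h0', 'h2']
```

Only the two concept-tracking heads survive the cut, mirroring the finding that a sparse set of heads carries a given concept.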

9 months ago

How would you make an LLM "forget" the concept of dog — or any other arbitrary concept? 🐶❓

We introduce SAMD & SAMI — a novel, concept-agnostic approach to identify and manipulate attention modules in transformers.

9 months ago

Aligned Multi-Objective Optimization (A-🐮) has been accepted at #ICML2025! 🎉
We explore optimization scenarios where objectives align rather than conflict, introducing new scalable algorithms with theoretical guarantees. #MachineLearning #AI #Optimization

11 months ago
Screenshot of arxiv paper "EXACT BYTE-LEVEL PROBABILITIES FROM TOKENIZED LANGUAGE MODELS FOR FIM-TASKS AND MODEL ENSEMBLES."

🎉🎉 Our paper just got accepted to #ICLR2025! 🎉🎉

Byte-level LLMs without training and guaranteed performance? Curious how? Dive into our work! 📚✨

Paper: arxiv.org/abs/2410.09303
Github: github.com/facebookrese...

1 year ago
NeurIPS 2024 poster: Mission Impossible: A Statistical Perspective on Jailbreaking LLMs

Thursday is busy:
9-11am I will be at the Meta AI Booth
12.30-2pm
Mission Impossible: A Statistical Perspective on Jailbreaking LLMs (neurips.cc/virtual/2024...)
OR
End-To-End Causal Effect Estimation from Unstructured Natural Language Data (neurips.cc/virtual/2024...)

1 year ago

Starting with Fei-Fei Li’s talk at 2.30; after that I will mostly be meeting people and wandering the poster sessions.

1 year ago

Folks, I am posting my NeurIPS schedule daily in hopes of seeing people, thanks @tkipf.bsky.social for the idea ;)

11-12.30 WiML round tables
1.30-4 Beyond Decoding, Tutorial

1 year ago

I will be at #Neurips2024 next week to talk about these two papers and host a workshop on #NeuralCompression.

1 year ago

Next one on the list is Yury Polyanskiy's "Information Theory: From Coding to Learning", which will hopefully hit the shelves in February... can't wait

1 year ago

Pro-tip: Use massive Black Friday deals at scientific publishing houses to, for example, buy a copy of @jmtomczak.bsky.social's book on generative modeling (long overdue)

1 year ago

🫠

1 year ago

Me

1 year ago

Me

1 year ago

What do you think: do we need to sharpen our understanding of tokenization? Or will we soon be rid of it by developing models such as "MegaByte" by Yu et al.?
And add more papers to the thread!

1 year ago

Phan et al. found a method to mitigate some of the tokenization problems Karpathy mentioned by projecting tokens into byte space. The key to their method is a map between statistically equivalent token-level and byte-level models.
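A toy version of the token-to-byte idea (my sketch under a unigram token model, not Phan et al.'s actual algorithm): the probability of a byte string is the total probability of all token sequences that spell it out.

```python
# Toy token-to-byte mapping: P(byte string) is the sum over every
# tokenization that spells it. The unigram "model" below is invented
# for illustration, not Phan et al.'s method.
vocab_probs = {"a": 0.3, "b": 0.2, "ab": 0.5}   # unigram token model

def byte_prob(target, prefix_p=1.0, spelled=""):
    """Sum probability over all token sequences spelling `target`."""
    if spelled == target:
        return prefix_p
    total = 0.0
    for tok, p in vocab_probs.items():
        cand = spelled + tok
        if target.startswith(cand):
            total += byte_prob(target, prefix_p * p, cand)
    return total

# "ab" can be spelled as ["ab"] or ["a", "b"]:
print(byte_prob("ab"))   # 0.5 + 0.3*0.2 = 0.56
```

The byte-level distribution therefore marginalizes over tokenizations, which is what makes two models with different tokenizers comparable at the byte level.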

1 year ago

In "The Foundations of Tokenization: Statistical and Computational Concerns", Gastaldi et al. take first steps toward defining what a tokenizer should be and the properties it ought to have.

1 year ago