Thanks! Would love to hear any thoughts!
We’ve got an exciting new thing to share! We have causal evidence (using TMR) that memory reactivation during sleep promotes abstract understanding of underlying structure, enabling transfer learning in a new domain that has zero superficial feature overlap with the learned one.
Super excited to share this preprint! How do we disentangle underlying structure from the particular features of a learning episode to benefit future learning? We find that memory reactivation during sleep promotes this structure abstraction process.
www.biorxiv.org/content/10.6...
BREAKING: Hungarian PM Viktor Orbán has called Péter Magyar to concede defeat in today's election.
With 53.45% of votes counted, Magyar's TISZA looks set to secure a supermajority.
A defeat for Putin, a defeat for Trump, a victory for Europe, and above all, a victory for the Hungarian people.
Check out @smonsays.bsky.social's work on detecting LLMs in experiments based on their lack of human memory constraints
Can we really measure replay in humans using MEG with current methods? In our most recent paper we simulated replay under realistic conditions via a novel hybrid approach with astonishing results.
we're delighted that it has now been published @elife.bsky.social!
elifesciences.org/articles/108...
Congrats!!!! Celebration breakfast tomorrow!! 😀
I really enjoyed listening to this! Highly recommend!
It’s okay for the administration to demand lists of Jews because they’re doing it to FIGHT antisemitism, by which they mean conflating Israel with Jewish identity and using government power to suppress criticism of Israel. So no worries!
While the situation is grim at NIH, it's closer to catastrophic at NSF. They're just not able to move any money out the door. It appears OMB has them on lockdown. www.science.org/content/arti...
With @imarinescu.bsky.social I argue that the economy is bottlenecked by the physical, rendering anything resembling a singularity unlikely: www.transformernews.ai/p/the-key-de...
How do the brain’s event representations change as we gain familiarity with an experience?
Brain regions’ representations can become coarser or finer as events become familiar. Slow-timescale structure predicts memory.
Excited to share this work w/ Narjes Al-Zahli & @chrisbaldassano.bsky.social!
Really powerful letter with lots of great passages, including this eye-popping one.
Proud to see so many great folks at UCLA Law among the signatories.
Wondering why NIH and NSF aren't making new grants? We explain what's happening at OMB. www.science.org/content/arti...
super excited about this, wish I could be there to chat with you!!
excited to share some recent work!
neural networks trained on multi-view sensory data are the first to match human-level 3D shape perception
we predict human accuracy, error patterns, and reaction time—all zero-shot, no training on experimental data
arxiv.org/abs/2602.17650
1/🧠
Thanks @natmesanash.bsky.social for covering our new work, in @thetransmitter.bsky.social!
Congrats!!!
How do we balance external attention to the outside world and internal attention to our thoughts & memories?
We review evidence that external and internal attention can compete, unfold concurrently, or cooperate!
Loved working on this with @samversc.bsky.social & @tobiasegner.bsky.social!
(Perceptual) space and time are warped by the gravity of objects and events in their vicinity. There's been a flurry of work recently documenting examples of this gravity, all resulting in some really neat illusions.
@brynnsherman.bsky.social and I discuss all of those, here:
rdcu.be/e5SWo
Congress rejected massive cuts to US science budgets for 2026, but much of the money still isn’t flowing to researchers.
The culprit? The White House Office of Management and Budget (OMB) is quietly slow-walking the release of funds. 🧵👇
Well, the DOJ has done it: they have filed a lawsuit against the University of California over antisemitism.
The complaint contains some falsehoods. But as someone who teaches and writes about Title VII, I'm equally struck by what the complaint doesn't say.
A few thoughts— 🧵
Cool, and yes we've been reading your recent papers with great interest! I *think* this is all consistent with my worldview, which is that novel domains where you need to learn quickly are the true challenge for CL, and necessitate separate memory systems plus replay
So my intuition then is that modern systems are avoiding interference through a form of orthogonalization induced by scale, but then don't benefit from the forms of generalization you are referring to. Does this sound right?
My sense of the class of solutions to continual learning has always been that you EITHER get useful generalization across time and have to deal with the very real retroactive interference, OR you avoid interference through some kind of orthogonalization but fail to benefit from the productive overlap.
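To make that tradeoff concrete, here is a minimal toy sketch (not from the thread; names like `train` and `run` are purely illustrative): a linear readout trained with plain gradient descent on task A and then task B, with no rehearsal. When the two tasks' inputs overlap, fitting B's unrelated targets overwrites the weights that solved A (retroactive interference); when the inputs are orthogonalized into disjoint subspaces, A survives learning B, but nothing learned on A can transfer to B.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 5  # input dimensionality, patterns per task

def train(W, X, Y, lr=0.1, steps=2000):
    """Plain gradient descent on squared error for a linear readout."""
    for _ in range(steps):
        W = W - lr * X.T @ (X @ W - Y) / n
    return W

def mse(W, X, Y):
    return float(np.mean((X @ W - Y) ** 2))

def run(XA, XB, label):
    YA, YB = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
    W = np.zeros((d, 1))
    W = train(W, XA, YA)    # learn task A first
    before = mse(W, XA, YA)
    W = train(W, XB, YB)    # then task B, with no rehearsal of A
    after = mse(W, XA, YA)
    print(f"{label}: task-A error {before:.4f} -> {after:.4f} after learning B")

# Overlapping inputs: the two tasks share features, so fitting B's
# unrelated targets overwrites the weights that solved A.
shared = rng.normal(size=(n, d))
run(shared + 0.1 * rng.normal(size=(n, d)),
    shared + 0.1 * rng.normal(size=(n, d)),
    "overlapping")

# Orthogonalized inputs: the tasks occupy disjoint input subspaces, so
# B's gradients never touch the weights serving A. Interference is gone,
# but so is any chance of A helping B (no productive overlap).
XA = np.zeros((n, d)); XA[:, : d // 2] = rng.normal(size=(n, d // 2))
XB = np.zeros((n, d)); XB[:, d // 2:] = rng.normal(size=(n, d // 2))
run(XA, XB, "orthogonal")
```

Running this, the overlapping case shows task-A error jumping once B is learned, while the orthogonal case leaves it essentially untouched, at the cost of zero transfer between the tasks.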
Seconding the request for a CL post!! I feel like I still see industry people talking about continual learning all the time. What kinds of problems are they referring to?
Awesome, that is super helpful for intuition building, thanks so much
Do you have an intuition for how it is possible to sometimes get perfect memorization of paragraphs of text (putting aside any RAG-like sidecars)? I just can't wrap my head around how systems with such distributed representations can do that, unless those paragraphs appear many times in training?
Really enjoyed this one. (All of them have been awesome, actually, I recommend subscribing!)
Excited to launch Principia, a nonprofit research organisation at the intersection of deep learning theory and AI safety.
Our goal is to develop theory for modern machine learning systems that can help us understand complex network behaviors, including those critical for AI safety and alignment.