Ha, true, not very arousing for seasoned parents and educators :)
Posts by Nico Schuck
I wish my inbox summary would stop insisting that "some children in your son's daycare have lice" is my most important email today
sounds exciting, looking forward to reading it!
We’ve got an exciting new thing to share! We have causal evidence (using TMR) that memory reactivation during sleep promotes abstract understanding of underlying structure, allowing transfer learning in a new domain with zero superficial feature overlap with the learned one.
I agree, for me reasoning and coding (or looking at code) are related. But a new generation might find different ways to learn the kind of reasoning process you describe without coding. I am not sure I know how, but I do want to be open to the idea it’s possible
yes, it’s difficult to assess these now; the looming question is whether these competencies are still “needed”, and what will replace them.
important work from @skjerns.de !
The Causality in Cognition Lab -- a supportive, bluesky-colored team -- is looking for a predoc to join us! Here is info about the lab (cicl.stanford.edu) and the position (careersearch.stanford.edu/jobs/iriss-p...). The application deadline is May 1st.
Please share, thank you 🙏
So for me these were entangled: I would have an idea but by the time I start coding I am forced to a higher level of precision in my thinking, and also get insights into predictions of my own idea that I didn’t anticipate. I always found that quite useful. Q is how we adapt that to the new realities
Some reports say over 500 schools, 55 libraries, & 25 universities hit.
You can debate the numbers, but hitting Sharif University & Beheshti is like hitting MIT & Stanford. I keep wondering: How would the scientific community respond differently if it were those universities? What’s the difference?
I agree that there is no shortcut. The problem is that, driven by metrics and competition, many will use it as one, and in the short run that will pay off career-wise.
How to teach that is what we have to figure out. It’s an issue even now, as we can never be sure that there isn’t a bug somewhere in our code. But I am somewhat optimistic that we can figure this out, using e.g. the unit tests Russ Poldrack promotes.
fully agree with Stefano. I spent years of my training learning to translate scientific thinking into models and code. if that’s no longer a bottleneck, what was the point — and what should replace it?
my gut instinct is that implementation knowledge and “which and why” knowledge are separable. I see plenty of ways they’re entangled in my pre-AI trained brain but does that have to be the case? also, there are good (deeper understanding of ground level) and bad (limiting my imagination) connections
The Memory Disorders Research Society (www.memorydisorders.org) is now seeking nominations for new members! Self-nominations are welcome. Application is open until April 15 @ 11:59pm PT.
Reach out if you have questions about the society or its (amazing) annual meeting! forms.gle/Qn7mchoPpaqL...
📢 Join our team in Hamburg!
The Trustworthy AI lab 🤝 is looking for a Research Associate for a novel DFG-funded project on ethical multi-agent systems of LLMs. 🤖 Full-time | EGr. 13 TV-L | Apply by 8 April 2026
🔗 www.uni-hamburg.de/en/stellenan...
#TrustworthyAI #LLM #AcademicJobs #Hamburg
New paper from the lab! Luianta’s fabulous project shows that trait anxiety relates to value-based aversive generalization. The data also highlight the diversity of generalization forms that occur in a random sample—from Gaussian generalization and linear extrapolation to perceptual confusion.
True. And provide free universal mental health care.
But 4000 homeless people in NYC doesn’t sound quite correct to me :)
Universität Hamburg remains excellent! 🙌 This was announced by the Wissenschaftsrat (German Science and Humanities Council) on Wednesday at a press conference broadcast directly into the university’s Audimax. In attendance were 250 researchers, staff, students, and guests of Universität Hamburg. Full report:
A new Department of Cognitive Science is being created at Bocconi University in Milan, Italy.
Here is the call for a cluster hire (for around 10 faculty) in all areas of cognitive science, at both junior and senior levels:
www.unibocconi.it/en/faculty-a...
Deadline: May 4th, 2026
New review from our group out in Nature Reviews Psychology:
Determinants of individual navigation ability
with my excellent co-author: @emre-yavuz-21.bsky.social
I think one of the biggest challenges for science education is to make sure people understand, and are transparent about, what they know vs don't know. I see this line crossed often & my fear is it'll get worse. We need to celebrate declarations of not knowing!
Nir gives excellent talks and will present his exciting work on how rewards morph space next week in London
Call for Applications - Max Planck Postdoc Program, apply now!
The Max Planck #PostdocProgram offers top emerging talents a range of attractive measures and opportunities. The current call for applications is open from March 1st until April 13th, 2026. Please share or consider applying! 🙏
www.mpg.de/en/max-planc... #careerinscience #sciencecareer
Seconded!
Hmm, interesting, I wasn’t aware of the scale effect on CI and am curious now to read up on it. If you wrote a blog post about it I’d definitely read it! I am still wondering why, then, humans have such bad episodic memory, at least in terms of detail/precision. Most of it is gist, rather than pixels.
New preprint out 🎉
What happens to the hippocampal “place code” when an animal is actively engaged in a task?
The answer surprised us (and might surprise you too!).
Let's dive in ⬇️
Link:
"Hippocampal trace coding dominates and disrupts place coding" www.biorxiv.org/content/10.6...
The cost might not be there during learning, but isn’t there one for protecting memorized items from forgetting when learning new events?
Humans indeed appear to do both, but perhaps because they have (semi-)separate systems and staged learning processes, which makes it easier. And they still forget a lot.
Thank you for writing this, Andrew! Makes me wonder whether the balance between memorization and generalization can be learned by the model, or whether there’s instead a constant “leverage threshold” beyond which memorization sets in?
Super useful and interesting read on generalization, memory and overfitting, highly recommended!