Posts by Solim LeGris
LLM agents are a serious problem for online experiments.
It is very easy to use them and very hard to spot them. What can researchers do?
With @brendenlake.bsky.social, we suggest detecting LLMs based on their lack of human cognitive constraints in our #CogSci2026 paper: arxiv.org/abs/2604.00016
🚨 New preprint, and our results are rather concerning.
We find the "boiling frog" equivalent of AI use. Using large-scale RCTs, we provide *causal* evidence that AI assistance reduces persistence and hurts independent performance.
And these effects emerge after just 10–15 minutes of AI use!
1/
The issue with LLM writing is that it breaks standard heuristics for deciding whether something is worth spending time on
If asked to draw a person's portrait, many people would say “I cannot draw.” Why is observational drawing so hard? In a new paper, @judithfan.bsky.social and I answer: it's the limitations of human vision. Learning to draw is learning skills to overcome them.
aaronhertzmann.com/2026/03/23/d... 1/
They say the definition of insanity is doing the same thing while expecting a different result. In the last 50 years, how many times have the United States actually managed to liberate a country with bombs?
Every sentence in this post is a masterpiece of anthropomorphic absurdity:
substack.com/home/post/p-...
AI may be reshaping not just the economy, but the political assumptions built around labor. In this blog post, I explore what this could mean for the future landscape of political economy: mengyeren.substack.com/p/politics-a...
lol at ‘many left ideas seem to reflect academic views’ as a bad thing
The idea that someone who gets around with a $2000 piece of equipment is "elitist" while someone who uses a $30,000+ piece of equipment that costs hundreds of dollars per month to operate is "normal" is quite the brainwashing.
Writing is thinking
"On the value of human-generated scientific writing in the age of large-language models."
www.nature.com/articles/s44...
once again being driven insane by ML conference submissions
Why isn’t modern AI built around principles from cognitive science or neuroscience? I'm starting a substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question, as part of a first series of posts on my current view of the relation between these fields. 1/3
AGI is just astrology for smart computer boys
Today in Nature Machine Intelligence, Kazuki Irie & I discuss 4 classic challenges for neural nets — systematic generalization, catastrophic forgetting, few-shot learning, & reasoning. We argue there is a unifying fix: the right incentives & practice. rdcu.be/eLRmg
🚨 New preprint: "Decision rule inference limits social escape from learning traps" (with Rheza Budiono and Cate Hartley of the @hartleylabnyu.bsky.social ✨). Read here: osf.io/preprints/ps.... This is more work on a very curious phenomenon!
Today we open-sourced a new project for developing behavioral experiments online. It is called Smile. Announcement of v0.1.0: todd.gureckislab.org/2025/07/22/s... Smile has been used internally in my lab for several years and has substantially increased our productivity.
I don't know what world Hassabis is living in, but the reality is the reverse.
AI is creating a world where there's less trust (by making it difficult to differentiate real content from AI-generated content), an ever wider inequity gap, and ever more intrusive surveillance
Fantastic new work by @johnchen6.bsky.social (with @brendenlake.bsky.social and me trying not to cause too much trouble).
We study systematic generalization in a safety setting and find LLMs struggle to consistently respond safely when we vary how we ask naive questions. More analyses in the paper!
New preprint alert! We often prompt ICL tasks using either demonstrations or instructions. How much does the form of the prompt matter to the task representation formed by a language model? Stick around to find out 1/N
my god
Out today in Nature Machine Intelligence!
From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N
The part of George Orwell’s 1984 that everyone forgets is how the music and publishing industries have been replaced by a machine that spits out songs and bad novels “without any human intervention.” The goal is to keep you from ever having to think.
I wrote about the concept of agency (both human and artificial) in the year 2025. gracewlindsay.com/2025/01/24/2...
Our paper on whether you can incentivize rule induction in humans with money is finally out (answer: it appears to be a very weak/0-ish effect, in contrast to the huge effect of financial incentives on rote, repetitive tasks). Credit to pamop, ben newell & dan bartels psycnet.apa.org/fulltext/202...
the rapid transition of academics off x (despite temporarily reducing reach/followers) makes you wonder what’s stopping us from ending the for-profit, closed access publishing industry. it’s, like…. we can just do it? or if not, interesting to consider what the inertial differences are.