
Posts by Solim LeGris

Congratulations!

3 days ago 1 0 0 0

LLM agents are a serious problem for online experiments.
It is very easy to use them and very hard to spot them. What can researchers do?

With @brendenlake.bsky.social, we suggest detecting LLMs based on their lack of human cognitive constraints in our #CogSci2026 paper: arxiv.org/abs/2604.00016

1 week ago 24 4 0 1

🚨 New preprint, and our results are rather concerning...

We find the "boiling frog" equivalent of AI use. Using large-scale RCTs, we provide *causal* evidence that AI assistance reduces persistence and hurts independent performance.

And these effects emerge after just 10–15 minutes of AI use!

1/

2 weeks ago 1513 679 27 74

The issue with LLM writing is that it breaks standard heuristics for deciding whether something is worth spending time on

2 weeks ago 114 4 5 6
Why Drawing is Hard: Visual Limitations and the Skills to Overcome Them If asked to draw a picture of a tree or a person in front of them, many people would say “I cannot draw.” Thirty years ago, two psychologists pointed out that this should be surprising. They reasoned ...

If asked to draw a person's portrait, many people would say “I cannot draw.” Why is observational drawing so hard? In a new paper, @judithfan.bsky.social and I answer: it's the limitations of human vision. Learning to draw is learning skills to overcome them.
aaronhertzmann.com/2026/03/23/d... 1/

3 weeks ago 106 37 3 6
Lab AI Policy | Todd Gureckis Clear expectations for how every member of our lab should use generative AI tools responsibly, transparently, and in a way that upholds rigorous, reproducible, open science.

draft lab ai policy, feel free to use, modify, or discuss! todd.gureckislab.org/2026/03/06/g...

1 month ago 51 12 1 1

They say the definition of insanity is doing the same thing while expecting a different result. In the last 50 years, how many times have the United States managed to liberate a country with bombs, again?

1 month ago 44 4 9 2
Introducing Claude's Corner Why Anthropic is giving Claude Opus 3 its own Substack.

Every sentence in this post is a masterpiece of anthropomorphic absurdity:

substack.com/home/post/p-...

1 month ago 54 9 5 5
Politics After AI From Capital vs. Labor to Acceleration vs. Preservation

AI may be reshaping not just the economy, but the political assumptions built around labor. In this blog post, I explore what this could mean for the future landscape of political economy: mengyeren.substack.com/p/politics-a...

2 months ago 4 1 0 0

lol at ‘many left ideas seem to reflect academic views’ as a bad thing

2 months ago 16 2 1 0

The idea that someone who gets around with a $2000 piece of equipment is "elitist" while someone who uses a $30,000+ piece of equipment that costs hundreds of dollars per month to operate is "normal" is quite the brainwashing.

2 months ago 2075 259 74 9

Writing is thinking

"On the value of human-generated scientific writing in the age of large-language models."

www.nature.com/articles/s44...

3 months ago 180 60 4 5

once again being driven insane by ML conference submissions

3 months ago 112 19 3 0
Why isn’t modern AI built around principles from cognitive science? First post in a series on cognitive science and AI

Why isn’t modern AI built around principles from cognitive science or neuroscience? Starting a Substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question, as part of a first series of posts on the relation between these fields. 1/3

4 months ago 118 35 4 5

AGI is just astrology for smart computer boys

5 months ago 28 7 3 0

Today in Nature Machine Intelligence, Kazuki Irie & I discuss 4 classic challenges for neural nets — systematic generalization, catastrophic forgetting, few-shot learning, & reasoning. We argue there is a unifying fix: the right incentives & practice. rdcu.be/eLRmg

6 months ago 44 8 0 0
This form is for students and scientists interested in doing research with us. Before applying, please read this page to learn more about our lab and opportunities. Keep in mind that we have a limited number of spots and many applicants. We review applications submitted through this form and can usually respond within two weeks. Due to limited time, Prof. Gureckis does not reply to emails about research opportunities that are not submitted through this form.

Interested in research in my lab? intake.gureckislab.org/interest/

6 months ago 6 4 0 0

🚨 New preprint: "Decision rule inference limits social escape from learning traps" (with Rheza Budiono and Cate Hartley of the @hartleylabnyu.bsky.social ✨). Read here: osf.io/preprints/ps.... This is more work on a very curious phenomenon!

6 months ago 18 5 1 0

Today we open-sourced a new project for developing behavioral experiments online. It is called Smile. Announcement of v0.1.0: todd.gureckislab.org/2025/07/22/s... Smile has been used internally in my lab for several years and has substantially increased our productivity.

9 months ago 28 7 2 0

I don't know what world Hassabis is living in, but the reality is the reverse.
AI is creating a world with less trust (by making it difficult to differentiate the real from the AI-generated), an ever-wider inequity gap, and ever more intrusive surveillance

10 months ago 273 68 8 3

Fantastic new work by @johnchen6.bsky.social (with @brendenlake.bsky.social and me trying not to cause too much trouble).

We study systematic generalization in a safety setting and find LLMs struggle to consistently respond safely when we vary how we ask naive questions. More analyses in the paper!

10 months ago 10 3 0 0

New preprint alert! We often prompt ICL tasks using either demonstrations or instructions. How much does the form of the prompt matter to the task representation formed by a language model? Stick around to find out 1/N

10 months ago 46 7 1 2

my god

1 year ago 5943 2556 121 341

Out today in Nature Machine Intelligence!

From childhood on, people can create novel, playful, and creative goals. Models have yet to capture this ability. We propose a new way to represent goals and report a model that can generate human-like goals in a playful setting... 1/N

1 year ago 135 41 5 4

The part of George Orwell’s 1984 that everyone forgets is how the music and publishing industries have been replaced by a machine that spits out songs and bad novels “without any human intervention.” The goal is to keep you from ever having to think.

1 year ago 9488 2518 146 121
2025: Agency gained and lost If you’ve had even a passing glance at tech journalism over the past few months, you know the top buzzword for AI in 2025 is agentic. Agentic AI (according to many think pieces and press releases a…

I wrote about the concept of agency (both human and artificial) in the year 2025. gracewlindsay.com/2025/01/24/2...

1 year ago 124 46 8 22

Our paper on whether you can incentivize rule induction in humans with money is finally out (answer: it appears to be a very weak, near-zero effect, in contrast to the huge effect of financial incentives on rote, repetitive tasks). credit to pamop, ben newell & dan bartels psycnet.apa.org/fulltext/202...

1 year ago 16 7 0 2

The rapid transition of academics off X (despite temporarily reducing reach/followers) makes you wonder what's stopping us from ending the for-profit, closed-access publishing industry. It's, like... we can just do it? Or if not, it's interesting to consider what the inertial differences are.

1 year ago 185 40 14 9