New paper out in Cognition with @arikahn.bsky.social, @nathanieldaw.bsky.social, Cate Hartley, and @katenuss.bsky.social!!
We show that children 👶 use predictive representations (e.g., the successor representation, SR) to guide their choices, providing an account of how they make flexible choices in a changing world
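The SR mentioned above is typically learned with a temporal-difference rule. Below is a minimal sketch of that idea in NumPy; it is not the model from the paper, and the toy chain, learning rate, and discount factor are all illustrative:

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update of the successor representation.

    M[s, s'] estimates the discounted expected future occupancy of s'
    starting from s. Rows of M can then be combined with a reward vector
    to re-evaluate states quickly when rewards change.
    """
    n = M.shape[0]
    one_hot = np.eye(n)[s]                          # immediate occupancy of s
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    return M

# Toy 3-state chain 0 -> 1 -> 2, traversed repeatedly (state 2 absorbing)
M = np.zeros((3, 3))
for _ in range(200):
    M = sr_td_update(M, 0, 1)
    M = sr_td_update(M, 1, 2)
    M = sr_td_update(M, 2, 2)

# Flexibility: values under any reward vector r come from one matrix product,
# so a reward change does not require relearning the world's dynamics.
r = np.array([0.0, 0.0, 1.0])
values = M @ r   # increases along the chain toward the rewarded state
```

The last two lines are the point of the predictive-representation account: when rewards change, only `r` changes, and `M @ r` immediately yields updated values.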
Posts by Qihong (Q) Lu
New Annual Review with @nathanieldaw.bsky.social: “Planning in the Brain: It's Not What You Think It Is.” We argue that the brain's 'planning' machinery is mostly used for learning from simulated experience, and that thinking prospectively at decision time is just one special case of this process.
We're happy to release NeuralSet: a simple, fast, scalable package for Neuro-AI
Supports:
🧠 fMRI, EEG, MEG, iEEG, spikes… preprocessing
💬 text 🔊 audio ▶️ video 🏞️ image… embeddings
📦 pip install neuralset
🔍 facebookresearch.github.io/neuroai/neur...
📄 kingjr.github.io/files/neural...
🧵 Details👇
This is finally out as Version of Record 🎉
Read to find out how and when humans strategically switch between approaching and avoiding uncertainty
with Michael Shadlen and Daphna Shohamy
elifesciences.org/articles/94231
🧵:
another great paper from @mh-christiansen.bsky.social, showing that non-constituents* can be primed
It's more evidence that traditional linguists were mistaken in believing memory was in short supply:
Human memory is compressed, clustered, implicit and vast
Final paper of my PhD 🤗
www.nature.com/articles/s44...
There is growing interest in how cognitive control may improve value-based decision making.
However, we find that a recent paper overestimated the role of control in its task, leading to erroneous interpretations of dACC recordings.
We are hiring a research specialist to start this summer! This position would be a great fit for individuals looking to get more experience in computational and cognitive neuroscience research before applying to graduate school. #neurojobs Apply here: research-princeton.icims.com/jobs/21503/r...
Does memory fade slowly, or in drops and bursts? We analyzed 728k tests from 210k people. Key finding: “stability” isn’t a trait you either have or don’t have; it’s often a time-limited state at different points in aging. Preprint "Punctuated Memory Change": 👇 www.biorxiv.org/content/10.6...
Congrats Jonathan! Excited to see these amazing results get published officially!
Our experiences have countless details, and it can be hard to know which matter.
How can we behave effectively in the future when, right now, we don't know what we'll need?
Out today in @nathumbehav.nature.com, @marcelomattar.bsky.social and I find that people solve this by using episodic memory.
Fantastic thread and a must-read for anyone working on spatial cognition.
Excited to announce a new book telling the story of mathematical approaches to studying the mind, from the origins of cognitive science to modern AI! The Laws of Thought will be published in February and is available for pre-order now.
What a privilege and a delight to work with @coltoncasto.bsky.social, @ev_fedorenko, and @neuranna on this new speculative piece on what it means to understand language, nicely summarized in this tweeprint from @coltoncasto.bsky.social: arxiv.org/abs/2511.19757
I am really proud that eLife has published this paper. It is a very nice paper, but you also need to read the reviews to understand why! 1/n
I'm going to present our latest memory model that learns causal inference during narrative comprehension! Stop by the poster on Monday to chat about causality, memory, brain🧠, and AI🤖!
#sfn2025 #sfn25
An RNN with episodic memory, trained on free recall, learned the memory palace strategy: the network developed an abstract item-index code that lets it “walk along” the same trajectory in hidden-state space to encode and retrieve item sequences!
Feedback appreciated!
Foraging in conceptual spaces: hippocampal oscillatory dynamics underlying searching for concepts in memory
www.biorxiv.org/content/10.1...
I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!
www.biorxiv.org/content/10.1...
Why does AI sometimes fail to generalize, and what might help? In a new paper (arxiv.org/abs/2509.16189), we highlight the latent learning gap — which unifies findings from language modeling to agent navigation — and suggest that episodic memory complements parametric learning to bridge it. Thread:
We present our new preprint, "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation". We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks. For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations. We then collect 13 million LLM annotations across plausible LLM configurations, which feed into 1.4 million regressions testing the hypotheses. For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions: some match the ground truth, while others constitute LLM hacking, i.e., incorrect conclusions due to annotation errors. Across all experiments, LLM hacking occurs in 31–50% of cases even with highly capable models. Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.
Paper: arxiv.org/pdf/2509.08825
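The flipped-conclusion failure mode described in the thread can be illustrated with a self-contained toy example. This is synthetic data with a hypothetical pair of annotation "configurations", not the paper's actual pipeline: when one configuration's annotation errors happen to correlate with the covariate of interest, a true null effect looks significant.

```python
import math
import numpy as np

def corr_pvalue(x, y):
    """Two-sided p-value for a zero-slope test in simple linear regression,
    via a normal approximation to the t distribution (adequate for large n)."""
    r = float(np.corrcoef(x, y)[0, 1])
    t = r * math.sqrt((len(x) - 2) / (1.0 - r * r))
    return math.erfc(abs(t) / math.sqrt(2.0))

rng = np.random.default_rng(0)
n = 200
covariate = rng.normal(size=n)          # e.g., some document-level predictor
covariate -= covariate.mean()

# Ground-truth outcome constructed to be exactly uncorrelated with the
# covariate, so the true effect is null.
outcome = rng.normal(size=n)
outcome -= outcome.mean()
outcome -= covariate * (covariate @ outcome) / (covariate @ covariate)

# "Config A": annotation errors unrelated to the covariate.
annotations_a = outcome

# "Config B": annotation errors that correlate with the covariate,
# which manufactures a spurious effect.
annotations_b = outcome + 0.5 * covariate

p_a = corr_pvalue(covariate, annotations_a)   # correctly non-significant
p_b = corr_pvalue(covariate, annotations_b)   # incorrectly significant
```

Same texts, same hypothesis, same regression; only the annotation step differs, yet the two configurations reach opposite conclusions about the null.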
Our new lab for Human & Machine Intelligence is officially open at Princeton University!
Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)
Cognitive scientists and AI researchers make a forceful call to reject “uncritical adoption” of AI in academia
www.bloodinthemachine.com/p/cognitive-...
I'm excited to share that my new postdoctoral position is going so well that I submitted a new paper at the end of my first week! www.biorxiv.org/content/10.1... A thread below
A key-value memory network can learn to represent event memories by their causal relations to support event cognition!
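For readers unfamiliar with this architecture family, here is a generic sketch of the soft key-value read operation such networks build on. It is not the model from the paper; the dimensions, stored items, and temperature are all illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kv_read(query, keys, values, temperature=1.0):
    """Soft key-value memory read: match the query against stored keys,
    then return the attention-weighted mixture of the stored values."""
    scores = keys @ query / temperature   # similarity of query to each key
    weights = softmax(scores)             # attention over memory slots
    return weights @ values, weights

# Three stored memories with 4-d keys and 2-d values (toy numbers)
keys = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, 0.5]])

# A query close to the first key mostly retrieves the first value
query = np.array([0.9, 0.1, 0.0, 0.0])
readout, weights = kv_read(query, keys, values, temperature=0.1)
```

The separation of keys (what a memory is addressed by) from values (what it stores) is what lets such models index events by relational structure, e.g., causal links, rather than by surface content.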
Congrats to @hayoungsong.bsky.social on this exciting paper! So fun to be involved!
Our new study (titled “Memory Loves Company”) asks whether working memory holds more when objects belong together.
And yes: when everyday objects are paired meaningfully (Bow-Arrow), people remember them better than when they’re unrelated (Glass-Arrow). (mini thread)
Now out in print at @jephpp.bsky.social ! doi.org/10.1037/xhp0...
Yu, X., Thakurdesai, S. P., & Xie, W. (2025). Associating everything with everything else, all at once: Semantic associations facilitate visual working memory formation for real-world objects. JEP:HPP.