I’m hiring a PhD student!
The candidate will work alongside @zefreeman.bsky.social, who is joining our research group as a postdoc.
jobs.unibe.ch/job-vacancie...
Posts by Francisco Garre-Frutos
The number of submissions will only increase, and I believe their solution is a poor fix, but my friends and I disagree about better solutions. Making resubmission intervals longer for people who receive a low rating acts as a penalty for low ratings. If those ratings reliably evaluated quality, that would be fine.
making all the p-hackers look like rookies
I'm proud to be part of the Scientific Committee for the new $5M Digital Brain Project, to accelerate development of open source models of the human brain. Apply by May 15th for funding at digitalbrainproject.org
We're hiring! Looking for a postdoc to work at UNSW Sydney, studying impacts of reward and information on attention, using eye-tracking, EEG, and modelling - with Kelly Garner, Daniel Pearson and me. Application link below, please spread the word!
external-careers.jobs.unsw.edu.au/cw/en/job/53...
New post on The 100% CI: Science needs downvotes.
www.the100.ci/2026/04/13/s...
In which I make the case that grant funders should add funding lines that include a module for bug bounties.
New discovery! Spoiler alert: Neural dynamics are key.
Evidence for predictive computations in a brain hierarchy during a visual search task
doi.org/10.64898/202...
Work led by @pinotsislab.bsky.social
#neuroscience
AI seems to be the topic of the year — nearly every conversation I have in my role as academic lead for good research practice touches on it in some way. I’d like to lay out my developing thoughts for conversation and critique. (1/7)
🚨IMPORTANT:
Do you want to conduct research on misinformation? At @cimcyc.bsky.social we’re putting together a summer school for PhD students / early postdocs and, not to brag, but it’s looking fantastic. 😬
📍 Granada
🗓️ 15-18 September 2026
ℹ️ Info: sites.google.com/view/misinfo...
More details about the Bayesian Workflow book and case studies now available on the book web site avehtari.github.io/Bayesian-Wor... (but you still need to wait a bit for the book)
cover of the book "Bayesian Workflow" by Gelman, Vehtari, et al. Coming out later this year, in the summer probably.
I would have preferred to have the "draw the rest of the owl" meme on the cover, but this will do. Seems like it is on schedule, and we'll leave some typos so you know we didn't write it with AI.
Appreciated this history of citation indices, though of course the alternative history where citation indices were never introduced does not seem non-ruinous.
open.substack.com/pub/davidoks...
I don't know how hard or costly it is to fit some of these models to real-world data, but perhaps this could also be useful for amortized Bayesian inference workflows! arxiv.org/abs/2602.07098
Simulator of the following (attentional) models:
🔎 Rescorla-Wagner
🔎 Pearce-Kaye-Hall
🔎 Mackintosh Extended
🔎 Le Pelley’s Hybrid
🔎 Rescorla-Wagner with a unified variable learning rate (integrating Mackintosh’s and Pearce and Hall’s quasi-opposing conceptualisations).
github.com/cal-r/PALMS-...
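For readers unfamiliar with these models: the simplest of them, Rescorla-Wagner, updates an associative value by a fixed fraction of the prediction error. A minimal sketch (the `alpha` value and constant-reward setup are illustrative, not taken from the PALMS simulator):

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Track associative strength V with the Rescorla-Wagner rule:
    V <- V + alpha * (reward - V), a fixed-rate prediction-error update."""
    v = v0
    trajectory = []
    for r in rewards:
        v += alpha * (r - v)   # prediction error (r - v) scaled by learning rate
        trajectory.append(v)
    return trajectory

# With a constant reward of 1, V rises toward an asymptote of 1.
values = rescorla_wagner([1.0] * 50)
```

The attentional variants in the list above differ mainly in letting the learning rate itself change with experience rather than staying fixed.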
At @psicologicajournal.bsky.social, the amazing editorial team is putting a lot of work into this issue, with an extra round of reviews just to check these issues before papers are sent out for peer review.
Perhaps AI can automate some of this burden, or perhaps not. I’m curious to see where this goes in the future!
Either researchers are incentivized, as in registered reports, to make their results as reproducible as possible, or journals and other institutions start paying people or implementing new infrastructures to do serious reviews that assess different stages of the scientific process.
And sometimes the problem is not just effort. A person may be asked to review a paper without having the expertise or the resources to really evaluate whether a result is reproducible. In some areas, like neuroscience, reproducing a result may require months of computation, not just expertise.
On the one hand, journals can pressure reviewers and editors to check these issues more carefully, and they can push authors to improve the reproducibility of their results. But peer review is already a huge amount of work, usually with almost no compensation for the extra burden.
This project is amazing. So much coordination among so many incredible researchers to assess relevant and timely questions. But when reproducibility falls short, where does the responsibility lie? And who has to make an effort to make things better?
🧵 I gave Claude two things: a short paper (doi.org/10.1073/pnas...) and a raw behavioural dataset with 3 lines of variable descriptions.
Then I asked it to fit three computational RL models described only by equations in the manuscript. No code, no toolbox, no guidance on the fitting procedure. 1/3
SCORE, a collaboration of 865 researchers, is now released as three papers in Nature, six preprints, and a lot of data (cos.io/score/). SCORE examined repeatability of findings from the social-behavioral sciences and tested whether human and automated methods could predict replicability.
📣 Exciting news:
As we've been unable to decide which is the best open-source package for psychology (PsychoPy, Open Sesame or jsPsych) @peircej.bsky.social, @cogsci.nl and @joshdeleeuw.bsky.social have agreed to resolve this once and for all, in a live-streamed, three-way.....arm wrestle!
Part 2 of my shrinkage estimator series is out! Part 1 covered the univariate case, but now we dive into multivariate shrinkage 🤓
We cover Spearman's classic correlation disattenuation formula, multivariate James-Stein estimators, and hierarchical methods too
haines-lab.com/post/how-to-...
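As a toy illustration of the kind of estimator the series covers, here is the textbook positive-part James-Stein estimator for p independent normal means with known variance, shrinking toward zero (the posts themselves go well beyond this, into hierarchical and correlation-aware variants):

```python
def james_stein(x, sigma2):
    """Positive-part James-Stein: shrink p >= 3 observed normal means toward zero.
    x: list of observed means; sigma2: known sampling variance."""
    p = len(x)
    ss = sum(v * v for v in x)
    # Shrinkage factor, clipped at 0 so the sign of an estimate never flips.
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / ss)
    return [shrink * v for v in x]
```

Means far from zero are barely touched; noisy means near zero get pulled strongly toward it, which is where the estimator beats the raw maximum-likelihood estimates.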
New preprint with Chris Nolan and Kelly Garner in which we develop a new metric - transition entropy - that can be used to measure the extent to which behaviour in cognitive tasks is based on a routine.
www.biorxiv.org/cgi/content/...
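I have not seen the preprint's formal definition, but one natural reading of a transition-entropy metric is the Shannon entropy of the empirical distribution over consecutive state pairs: near zero when behaviour follows a fixed routine, higher when transitions are varied. A speculative sketch under that assumption (the paper's actual metric may differ):

```python
from collections import Counter
from math import log2

def transition_entropy(seq):
    """Shannon entropy (bits) of the empirical distribution of
    consecutive transitions (s_t, s_{t+1}) in a discrete sequence."""
    pairs = Counter(zip(seq, seq[1:]))   # count each observed transition
    n = sum(pairs.values())
    return -sum((c / n) * log2(c / n) for c in pairs.values())
```

A fully routine sequence like "AAAA" scores 0 bits, while sequences that mix many different transitions score higher.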
Spain joins the Open Research Europe platform to boost open-access scientific publishing www.ciencia.gob.es/Noticias/202...
Figure 1. Illustration of why AI systems cannot realistically scale to human cognition within the foreseeable future: (b) Human cognitive capacities (such as reasoning, communication, problem solving, learning, concept formation, planning, etc.) can handle unbounded situations across many domains, ranging from simple to complex. (a) Engineers create AI systems using machine learning from human data. (d) In an attempt to approximate human cognition, a lot of data is consumed. (c) Making AI systems that approximate human cognition is intractable (van Rooij, Guest, et al., 2024), i.e., the required resources (e.g., time, data) grow prohibitively fast as input domains get more complex, leading to diminishing returns. (a) Any existing AI system is created in limited time (hours, months, or years, not millennia or eons). Therefore, existing AI systems cannot realistically have the domain-general cognitive capacities that humans have. [Made with elements from freepik.com.]
✨ Updated preprint ✨
Iris van Rooij & Olivia Guest (2026). Combining Psychology with Artificial Intelligence: What Could Possibly Go Wrong? PsyArXiv osf.io/preprints/psyarxiv/aue4m_v2 @olivia.science
Our aim is to make these ideas accessible to psychology students, among others. Hope we succeeded 🙂
Human Gaze Behaviors Track Abstract Stimulus Categories
doi.org/10.1162/JOCN...
#neuroscience
Check out this new cross-journal special Collection on Visual Imagery at Nature Communications, Communications Psychology and Scientific Reports: www.nature.com/collections/...! Get submitting!
@natcomms.nature.com, @commspsychol.nature.com
🚨 2 PhD positions at the University of Granada.
Work on AI + brain + language (NeurSpeechXAI):
PhD1: Explainable AI for EEG/sEEG speech decoding
PhD2: Multimodal neuroimaging (EEG/fMRI) & experimental design
3.5y contracts, interdisciplinary & international
👉 investigacion.ugr.es/recursos-hum...