However, about 50% of the reviewed studies did not report effect size in terms of model frequency (they used a fixed-effects analysis instead), which makes such post hoc power analysis virtually impossible without a complete re-analysis.
Posts by Payam Piray
The shared Python library can also be configured with a user-input effect size. I agree that the best “post hoc” practice is to use the observed effect size, which can be easily used with the proposed method for power analysis, and this is highlighted in the discussion.
Sounds interesting, thank you for sharing. There is a secondary analysis in the paper with different assumptions about the effect size, which is a bit more in line with your conclusion (Ext fig 1). That analysis corresponds to a medium effect size per Cohen.
We review studies showing that when brain areas face similar computational demands in social and non-social contexts, they perform the same computations. We argue that exaptation (the repurposing of traits for new functions) played a key role in brain evolution.
Our experiences have countless details, and it can be hard to know which matter.
How can we behave effectively in the future when, right now, we don't know what we'll need?
Out today in @nathumbehav.nature.com, @marcelomattar.bsky.social and I find that people solve this by using episodic memory.
Thrilled that my paper is out in @nature.com. We explored how the brain builds complex tasks by compositionally combining simpler sub-task representations. The brain flexibly performs multiple tasks by dynamically reusing neural subspaces for sensory inputs and motor actions.
rdcu.be/eRVUk
Huge congrats Sina!
Thank you! Yeah, the results are based on xp (and not protected xp). I’ve also introduced a new method for determining critical values of xp by controlling false positives; the commonly assumed 0.95 critical value for either xp or pxp can be misleading in my view (usually too conservative).
Estimated power for 52 reviewed studies based on their sample sizes and model space sizes. Among these, 41 studies fell below the standard 0.8 power threshold. About half of these studies used a fixed-effects model selection approach.
I argue that we need to account for the size of the model space when determining sample size, as larger model spaces reduce power. I also show that the commonly used “fixed effects” model selection approach is statistically unreliable. An analysis of the literature suggests shortcomings in both.
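The flavor of such a power analysis can be sketched with a toy Monte Carlo simulation. This is not the paper's actual method: the count-based Dirichlet posterior, the `simulate_power` name, and the default effect size (true model frequency 0.6) are all simplifying assumptions for illustration.

```python
import numpy as np

def simulate_power(n_subjects, n_models, true_freq=0.6,
                   xp_threshold=0.95, n_studies=2000,
                   n_post=2000, seed=0):
    """Toy Monte Carlo power estimate for group-level model selection.

    Assumes a 'true' model used by a fraction `true_freq` of the
    population, with the rest split evenly over the alternatives.
    Each subject contributes a count for their winning model; the
    group posterior over model frequencies is approximated as
    Dirichlet(1 + counts), and the exceedance probability (xp) is
    the posterior probability that the true model is the most
    frequent. A study 'detects' the effect if xp > xp_threshold.
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_models, (1.0 - true_freq) / (n_models - 1))
    p[0] = true_freq  # model 0 is the true model
    detected = 0
    for _ in range(n_studies):
        counts = rng.multinomial(n_subjects, p)
        r = rng.dirichlet(1.0 + counts, size=n_post)
        xp = np.mean(r[:, 0] > r[:, 1:].max(axis=1))
        detected += xp > xp_threshold
    return detected / n_studies

# Hypothetical study: 20 subjects, 8 candidate models
# simulate_power(20, 8)
```

Varying `n_subjects`, `n_models`, and `true_freq` then maps out how power moves with sample size, model space size, and effect size; the paper's own library does this properly, including calibrated critical values for xp.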
Happy to share my new paper published in @nathumbehav.nature.com: A critical look at statistical power in computational modeling studies, particularly those based on model selection.
www.nature.com/articles/s41...
🎇 Excited to finally share JL Romero Sosa’s publication! Results are from single-cell imaging in different subregions of rat frontal cortex during ✨de novo learning. Spoiler: everything is not everywhere all at once www.nature.com/articles/s41...
More generally, we link MEC coding to planning-ready compositional representations, with invariant and modular responses in ubiquitous MEC object vector cells. These cells provide the building blocks of compositionality in the model.
Neurally, influential work proposed grid cells encode eigenvectors of the successor map. Nice idea, but it struggles when barriers or goals change. Our model ties grid code to the compositional map, keeping them useful even as the world changes, consistent with empirical findings on local remapping.
Computationally, the model builds a successor map piece by piece, by putting together representations related to barriers and goals. We propose translation/rotation-invariant code for representation of task components (objects/goals) that plans near-optimally in complicated navigation tasks.
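As a toy illustration of the successor-map idea (a simplified sketch, not the paper's model: the `successor_map` helper, the 1-D track, and the plain re-inversion after a barrier are all assumptions for illustration):

```python
import numpy as np

def successor_map(T, gamma=0.9):
    # Successor representation: M = (I - gamma * T)^{-1},
    # the discounted expected future occupancy of every state
    # from every starting state.
    return np.linalg.inv(np.eye(T.shape[0]) - gamma * T)

# Random walk on a 5-state linear track.
n = 5
T = np.zeros((n, n))
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    T[i, nbrs] = 1.0 / len(nbrs)
M = successor_map(T)

# A barrier between states 1 and 2 removes that edge; the adjacent
# states bounce back instead. A compositional scheme would update M
# from a cached barrier component rather than re-invert from scratch.
T_bar = T.copy()
T_bar[1] = 0.0; T_bar[1, 0] = 1.0   # state 1 can only go back to 0
T_bar[2] = 0.0; T_bar[2, 3] = 1.0   # state 2 can only go on to 3
M_bar = successor_map(T_bar)
# States beyond the barrier become unreachable from state 0,
# so their expected occupancy drops to zero.
```

The point of the compositional construction is exactly to avoid the full re-inversion here: barrier and goal components are represented separately and combined on the fly as the world changes.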
New paper with @nathanieldaw.bsky.social in Nature Communications: an RL model that builds a successor map compositionally. The new model plans as well as the best models, and it links components of the map used for planning to neural codes in the medial entorhinal cortex.
rdcu.be/eAofi
It looks like all NSF/NIH grants to UCLA (including mine and all fundamental neuroscience grants) have been suspended.
www.science.org/content/arti...
Check out Zaid's open "Podcast" ECoG dataset for natural language comprehension (w/ Hasson Lab). The paper is now out at Scientific Data (nature.com/articles/s41...) and the data are available on OpenNeuro (openneuro.org/datasets/ds0...).
Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s415...
Great great work, congrats!
new paper from a collaborative endeavor! (@co0p3r.bsky.social) we find & replicate food-reward biases in a reinforcement learning task (where food stimuli are incidental)
people with eating disorder symptoms show a low-calorie food bias while those without show a high-calorie food bias... (1/3)
So happy for you, congratulations!
Re-posting is appreciated: We have a fully funded PhD position in CMC lab @cmc-lab.bsky.social (at @tudresden_de). You can use forms.gle/qiAv5NZ871kv... to send your application and find more information. Deadline is April 30. Find more about CMC lab: cmclab.org and email me if you have questions.
So so sorry for your loss. That’s a beautiful piece!
Very cool work on the intersection of interpretability and multi-lingual LLMs, led by @elnaz-rahmati.bsky.social
Our new paper explores how to align LLMs with System 1 (intuitive) and System 2 (analytical) thinking styles. This work challenges the idea that step-by-step reasoning (CoT) is always best and highlights the need for adapting reasoning strategies based on the task
arxiv.org/abs/2502.12470
hello world. we have an opening for a strong theory postdoc to work in my lab on an exciting collaboration with the Josh Berke and Loren Frank labs modeling and analyzing data on rat hipp-pfc-bg-da involvement in spatial maze foraging, replay, value etc. apply here: www.princeton.edu/acad-positio...
Poster submissions for the Computational Psychiatry Conference 2025 in Tübingen are now open. Deadline is 7th February. Symposium submissions are open until 15th January. www.cpconf.org. Please RB.
(1/4) Our new JEP:G paper dives into how moral values and misinformation spread on social media: media.mola-lab.org/file/1737039...
The detailed job description can also be found here: https://cldlab.org/join/
I'm hiring a full-time lab manager / research tech for my new psychology lab at Boston University, to start this summer (July 2025)!
The lab's research focuses on understanding developmental changes in learning, memory, and exploration.
More details here: cldlab.org/join/
🧠💻 #psychscisky