Our work on building world models from episodic memories is accepted to #ICLR26!
Led by (then undergraduate, now master's student) @herbiehe.bsky.social
Check out the tweeprint!
Posts by Pouya Bashivan 🇮🇷🇨🇦
congrats!
Super proud of this work co-led by graduate students Ali Saheb Pasand and @johanoosterman.bsky.social, and a fantastic collaboration with Aaron Courville and @pcastr.bsky.social
Very proud of this work by @motaharehpr.bsky.social ! Glad to see it accepted as a talk at #Cosyne2026
tl;dr: training a neural net to search for objects in natural scenes made it not only behave like humans but also converge on similar computations and representations as primates!
Today we’re releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks, and safety measures to date. 🧵
(1/19)
Soon hiring a lab manager! Looking for someone who is really interested in language neuroscience, who is organised, motivated, a great communicator, and who works well in a research team. Express interest by submitting this form: tinyurl.com/glysn-labman...
Reposts appreciated!
Submit your #NeuroAI papers to the full proceedings track of CCN! 8-page papers, with a fast, ML-conference-style review turnaround
Pretty cool!
My approach to writing a modeling grant is pretty much not to write it, and instead to write one that gets the data for it collected and analyzed.
Very much looking forward to this series of workshops on the computational ingredients of reasoning. We have an amazing lineup of speakers from diverse backgrounds, and there will be lots of opportunities for discussion. Please consider attending!
Interesting essay by Tim Dettmers. Although I don't fully agree with all the predictions, the contrast between China's and North America's approaches to practical AI could well be defining in the years to come
Great start at our cognitive benchmarking in large models session at #MontAIN2025 #MAIN2025
w/ @lune-bellec.bsky.social @audurand.bsky.social @shahbanu.kawaii.social @lucasmgomez.bsky.social @zhweng.bsky.social Declan Campbell (Princeton) Doris Voina (UdeM)
Attending the Montreal AI and Neuroscience (MAIN) Conference this week? #MontAIN2025
We have put together some exciting educational workshops on cognitive benchmarking of large models, RL and video games, and dynamical systems! More info and registration here: main-educational.github.io/program/
cool! Congrats!
Congratulations!
For those attending the #CogInterp workshop at NeurIPS, please check out our work on visual symbolic mechanisms led by @rassouel.bsky.social and @thisisadax.bsky.social. We find that visual feature binding in VLMs is supported by emergent symbolic mechanisms.
Great to see such efforts curating larger scale neuroimaging datasets! Really important for building foundation models for neuroscience!
Joint modelling of brain and behaviour dynamics with artificial intelligence
www.nature.com/articles/s41...
We are excited to announce that the Cognitive Computational Neuroscience meeting (CCN 2026) will be held at New York University from August 3–6, 2026.
2026.ccneuro.org
Afterthought… maybe it will become smart enough to find a way to build a smarter but aligned AI. Would have been nice if we could do this first though…
There’s a lot of debate about “superhuman AI” that might end it for us. All this made me wonder whether that AI, presumably conscious-like, would build an even better AI superseding itself. Especially if it has been trained on these debates. If it chooses not to, then it might remain the smartest AI ever (?)
PSA for academics involved in designing admissions systems:
Setting a specific time-of-day deadline is *ri*dic*ul*ous* and super annoying!!!!
Do you really care if I submit this letter at 6PM rather than 4PM?
Were you planning on reviewing my letter that evening?
Get real...
#academia
VLMs are truly caught in the middle! We found that they are great at describing what they see AND great at reasoning from text. But they fail to connect the two without an explicit text bridge. Come talk to us about this and more at #NeurIPS2025
I still see much focus on single-factor coding in studies of medial temporal cortex, although our results in primate hippocampus indicate a more mixed code
www.biorxiv.org/content/10.1...
Always!
I’m looking for interns to join our lab for a project on foundation models in neuroscience.
Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...).
Interested? See the details in the comments. (1/3)
🧠🤖
The future of Canadian research
Great Blueprint from @arnaghosh.bsky.social on our newest paper on representational geometry!
tl;dr: we find that during pretraining LLMs undergo consistent cycles of expansion/reduction in the dimensionality of their representations & these cycles correlate with the emergence of new capabilities.
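For anyone curious how representational dimensionality can be tracked during pretraining, a common metric is the participation ratio of the activation covariance eigenvalues. This is a minimal sketch of that general idea, not necessarily the exact metric used in the paper; the function name and toy data are illustrative.

```python
import numpy as np

def participation_ratio(acts: np.ndarray) -> float:
    """Effective dimensionality of a (samples x features) activation matrix.

    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the feature covariance. Ranges from ~1 (all variance
    along one direction) up to n_features (isotropic variance).
    """
    centered = acts - acts.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (acts.shape[0] - 1)
    eig = np.linalg.eigvalsh(cov)
    eig = np.clip(eig, 0.0, None)  # guard against tiny negative eigenvalues
    return float(eig.sum() ** 2 / (eig ** 2).sum())

rng = np.random.default_rng(0)
# Nearly one-dimensional data: a single latent direction plus small noise
low_d = rng.normal(size=(1000, 1)) @ rng.normal(size=(1, 16)) \
        + 0.01 * rng.normal(size=(1000, 16))
# Isotropic data: variance spread across all 16 feature dimensions
iso = rng.normal(size=(1000, 16))

print(participation_ratio(low_d) < participation_ratio(iso))  # True
```

Computed over hidden states at successive checkpoints, a curve of this quantity would rise and fall through the expansion/reduction cycles described above.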
This sounds like a foundation model for C. elegans. Great idea in principle!