We are hiring a research specialist to start this summer! This position would be a great fit for individuals looking to get more experience in computational and cognitive neuroscience research before applying to graduate school. #neurojobs Apply here: research-princeton.icims.com/jobs/21503/r...
I have EXCITING news:
I've started a company!
Introducing Sophont
We’re building open multimodal foundation models for the future of healthcare. We need a DeepSeek for medical AI, and @sophontai.bsky.social will be that company!
Check out our website & blog post for more info (link below)
NeuroAI papers that caught my interest last month:
brain-image alignment | brain scaling laws | brain-to-text decoding
Sharing quick takeaways on these papers alongside raw technical notes: paulscotti.substack.com/p/neuroai-im...
How can large-scale models + datasets revolutionize neuroscience 🧠🤖🌐? We are excited to announce our workshop: “Building a foundation model for the brain: datasets, theory, and models” at @cosynemeeting.bsky.social #COSYNE2025. Join us in Mont-Tremblant, Canada from March 31 – April 1!
I resigned from Stability AI after over a year working as Head of NeuroAI.
A personal retrospective on my journey with open science communities and how Stability AI played such a pivotal role in the “science-in-the-open” movement: paulscotti.com/blog/leaving...
I've personally never really given astrocytes much thought before, so this was an interesting read
Also fun to consider how advances in AI are potentially leading to breakthroughs in reconceptualizing how the brain works (and hopefully the reverse happens as well)
their Jupyter notebook shows how a neuron-astrocyte network is equivalent to a transformer (using random feature attention to approximate softmax attention)
github.com/kozleo/Build...
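For anyone curious about the mechanism, here's a minimal numpy sketch (my own, not code from their notebook) of how random feature attention approximates softmax attention, using the positive random features popularized by Performer; function names and sizes are just illustrative.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention with a row-wise softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def random_feature_attention(Q, K, V, num_features=4096, seed=0):
    """Approximate softmax attention with positive random features:
    exp(q.k) is estimated as E_w[phi_w(q) * phi_w(k)], phi_w(x) = exp(w.x - |x|^2 / 2)."""
    rng = np.random.default_rng(seed)
    d = Q.shape[-1]
    Qs, Ks = Q / d**0.25, K / d**0.25            # fold in the 1/sqrt(d) temperature
    W = rng.standard_normal((d, num_features))
    phi_q = np.exp(Qs @ W - 0.5 * (Qs**2).sum(-1, keepdims=True)) / np.sqrt(num_features)
    phi_k = np.exp(Ks @ W - 0.5 * (Ks**2).sum(-1, keepdims=True)) / np.sqrt(num_features)
    # Linear-time form: the full n x n attention matrix is never materialized.
    numerator = phi_q @ (phi_k.T @ V)
    denominator = phi_q @ phi_k.sum(axis=0, keepdims=True).T
    return numerator / denominator

# The two outputs should agree increasingly well as num_features grows.
rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
print(np.abs(softmax_attention(Q, K, V) - random_feature_attention(Q, K, V)).max())
```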
Transformers are usually considered biologically implausible
but maybe the brain actually does do self-attention + feed-forward operations
astrocytes could enable a biologically plausible implementation of transformer mechanisms in the brain
www.pnas.org/doi/pdf/10.1...
It’s really heartbreaking to realize just how much our short-horizon, outcome-focused funding cycles hurt science. We could do so much more if we could invest in people long term. Staff scientist positions and an ecosystem to support them are direly needed.
This is big: in collaboration with E11 Bio and @andrewcpayne.bsky.social, we are announcing today a new way to map brain circuits at scale. With improvements in AI and microscopy, I think whole-brain mapping of the mouse (and maybe even the human) brain will be feasible in ~5-10 years. 1/
An actionable, comprehensive paper on roadmaps for NeuroAI; highly recommend a read, especially if any of the following topics sound interesting:
Foundation brain models
Sensory and embodied digital twins
Brain data to improve AI models
Neuro-inspired mech. interpretability
This work was made possible by our amazing coauthors (inc. @iscienceluvr.bsky.social @ptoncompmemlab.bsky.social) and Stability AI 🙏
This project was developed using an open lab approach where we publicly worked with volunteers in the MedARC Discord. We are continuing to work on NeuroAI projects in the open; check out our lab website to learn more: medarc.ai/fmri
This work shows it is now practical for patients to undergo a single MRI scanning session and produce enough data to perform high-quality image reconstructions. This could enable novel clinical diagnosis and assessment approaches, including locked-in patient communication and BCIs.
If we use the full 40 hours instead of 1, we get SOTA performance for reconstruction and retrieval. We found that the 1-hour setting offered a good balance between scan duration and reconstruction performance, with notable improvements from first pre-training on other subjects.
Other innovations: 1. Mapping fMRI to OpenCLIP and reconstructing via a new fine-tuned Stable Diffusion XL unCLIP model. 2. Merging previously independent high- and low-level pipelines into one. 3. Predicting text captions for conditional guidance during a final refinement step.
How do you do shared-subject modeling when brains are differently shaped with different functional topography? We first do subject-specific ridge regression to a shared latent space, followed by subject-agnostic non-linear mapping, and train this single model end-to-end.
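To make that concrete, here is a minimal PyTorch sketch of the idea (my own illustration, not the released MindEye2 code); the class name, dimensions, and voxel counts are assumptions for the example.

```python
import torch
import torch.nn as nn

class SharedSubjectModel(nn.Module):
    """Illustrative only: per-subject linear maps into a shared latent space
    (acting like ridge regression when trained with weight decay), followed by
    a subject-agnostic non-linear mapping, trained end-to-end as one model."""
    def __init__(self, voxels_per_subject, shared_dim=4096, out_dim=1664):
        super().__init__()
        # One linear layer per subject, since every brain has a different voxel count.
        self.subject_linears = nn.ModuleDict({
            name: nn.Linear(n_vox, shared_dim)
            for name, n_vox in voxels_per_subject.items()
        })
        # Shared non-linear backbone applied identically to every subject.
        self.backbone = nn.Sequential(
            nn.LayerNorm(shared_dim),
            nn.Linear(shared_dim, shared_dim), nn.GELU(),
            nn.Linear(shared_dim, out_dim),
        )

    def forward(self, voxels, subject):
        return self.backbone(self.subject_linears[subject](voxels))

# Hypothetical usage: voxel counts differ across subjects; weight decay on the
# per-subject linear layers plays the role of the ridge penalty.
model = SharedSubjectModel({"subj02": 14278, "subj05": 13039})
opt = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-2)
emb = model(torch.randn(4, 14278), "subj02")  # fMRI patterns -> shared embedding
```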
Past work trained independent models per person, with each person needing dozens of hours of training data in the MRI machine for high-quality results. We show it’s now possible to get high-quality reconstructions from a single visit to the MRI facility.
🧠👁️ Our MindEye2 preprint is out!
We reconstruct seen images from fMRI activity using only 1 hour of training data.
This is possible by first pretraining a shared-subject model using other people's data, and then fine-tuning on a held-out subject with only 1 hr of data.
arxiv.org/abs/2403.11207
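A rough sketch of how that pretrain-then-finetune setup could look, reusing the hypothetical SharedSubjectModel from the earlier sketch (again, not the actual MindEye2 training code; subject names and voxel counts are made up):

```python
# Pretrain the shared backbone on other subjects' data (loop omitted), then
# register a fresh linear head for the held-out subject and fine-tune the
# whole model end-to-end on their single ~1-hour session.
pretrained = SharedSubjectModel({"subj02": 14278, "subj05": 13039})
# ... pretraining loop over subj02 / subj05 data would go here ...

pretrained.subject_linears["subj01"] = nn.Linear(15724, 4096)  # new subject's head
finetune_opt = torch.optim.AdamW(pretrained.parameters(), lr=3e-4, weight_decay=1e-2)
# ... fine-tuning loop over subj01's 1 hour of scans would go here ...
```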
I am happy to say I'm not even leaving Princeton since I can work remotely for Stability; I'll become a visiting research scientist in the Norman lab to continue our existing projects while establishing new collaborations between MedARC and Princeton.
Also a massive thank you to my postdoc advisor Ken Norman for taking me into his lab, allowing me the freedom to pursue these crazy projects, and for being such a caring, hardworking, and insightful mentor throughout our time together.
I am extremely grateful for this opportunity. Huge thank you to Tanishq Abraham, PhD (CEO of MedARC) for his leadership in creating MedARC, and to Stability AI for being willing to invest in open neuroscience research.
This approach to doing research redefines the traditional research model, following in the footsteps of initiatives like EleutherAI, LAION, OpenBioML, and ML Collective.
Happy to share that I am now Head of Neuroimaging at Stability AI!
My role is to lead the MedARC Neuroimaging & AI Lab: medarc.ai/fmri
The lab is remote, open-source, and open to the public to join. Tons of potential to rethink what a research lab is and benefit from crowd-sourced intelligence.
arxiv link was pasted wrong 😓 here's the correct link: arxiv.org/abs/2305.182...
Our MindEye fMRI-to-Image paper got accepted as a spotlight at #NeurIPS2023! See you in New Orleans; it will be my first time attending NeurIPS :)
Updated camera ready paper is also now live on arxiv: arxiv.org/abs/2305.182...
Includes new experiments, appendix figures, and more references to other work 🧠📈