We're happy to release NeuralSet: a simple, fast, scalable package for Neuro-AI
Supports:
🧠 fMRI, EEG, MEG, iEEG, spikes… preprocessing
💬 text 🔊 audio ▶️ video 🏞️ image… embeddings
📦 pip install neuralset
🔍 facebookresearch.github.io/neuroai/neur...
📄 kingjr.github.io/files/neural...
🧵 Details👇
Posts by Hakwan Lau
My review of "Confidence-accuracy dissociations in perceptual decision making" is now published. I think that this will be useful to both experts and newcomers to the field.
www.sciencedirect.com/science/arti...
why don't we likewise say that AI can only simulate intelligence, but doesn't make it genuinely present?
Monkey auditory neurons "did not show enhanced responses to unexpected stimulus repetitions, contrary to predictive-coding theory. However, they did show enhanced responses to unexpected stimulus omissions." www.biorxiv.org/content/10.6...
I think about this study a lot.
The world's a gymnasium... if you notice it.
figure 1 from the paper linked in OP - similar prevalence of self-reported imagery scores across auditory and visual domains
new preprint from PhD student Gage Quigley-Tump, reporting a survey of 200,000 ppl on themusiclab.org about auditory imagery or the lack thereof ('anauralia', the auditory version of aphantasia)
osf.io/cm85z
some findings:
(1) self-reported imagery ability similar across auditory & visual domains
A “fun” experience with BMC Neurology @bmc.springernature.com who sat on our submission for over 10 months without ever sending it for review, despite repeated reminders… we have now withdrawn it to submit elsewhere…
The supply of blood to brain tissue is thought to depend on the overall neural activity in that tissue, and this dependence is thought to differ across brain regions and across brain states. However, studies supporting these views have measured neural activity as a bulk quantity and related it to blood supply following disparate events in different regions. Here we measure fluctuations in neuronal activity and blood volume across the mouse brain, and find that their relationship is consistent across brain states and brain regions but differs in two opposing brainwide neural populations. Functional ultrasound imaging (fUSI) revealed that whisking, a marker of arousal, is associated with brainwide fluctuations in blood volume. Simultaneous fUSI and Neuropixels recordings showed that neurons that increase activity with whisking have distinct haemodynamic response functions compared with those that decrease activity. Their summed contributions predicted blood volume across states. Brainwide Neuropixels recordings revealed that these opposing populations coexist in the entire brain. Their differing contributions to blood volume largely explain the apparent differences in blood volume fluctuations across regions. The mouse brain thus contains two neural populations with opposite relations to brain state and distinct relationships to blood supply, which together account for brainwide fluctuations in blood volume.
How does blood flow relate to brain activity? We discovered that it reflects two neural populations affected oppositely by arousal. Together, they explain neurovascular coupling in all brain regions and brain states!
Out today in Nature: rdcu.be/fdC2A
@uclbrainscience.bsky.social
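A minimal toy sketch of the claim above (my own illustration, not the paper's code or data): model blood volume as the summed convolution of two opposing neural populations, each with its own haemodynamic response function (HRF). All signals, HRF shapes, and gains below are made-up illustrative values.

```python
# Toy illustration: blood volume = sum of two populations' activity,
# each convolved with a distinct (here hypothetical) HRF.

def convolve(signal, kernel):
    """Causal discrete convolution, truncated to len(signal)."""
    out = [0.0] * len(signal)
    for t in range(len(signal)):
        for k, w in enumerate(kernel):
            if t - k >= 0:
                out[t] += w * signal[t - k]
    return out

# Arousal (whisking) drives one population up and the other down.
arousal = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0]
pop_up = list(arousal)                    # increases activity with whisking
pop_down = [1 - a for a in arousal]       # decreases activity with whisking

# Hypothetical HRFs with different amplitude, sign, and latency.
hrf_up = [0.0, 0.6, 1.0, 0.4]
hrf_down = [0.0, -0.2, -0.5, -0.3]

# Summed contributions of both populations predict the blood-volume trace.
blood_volume = [u + d for u, d in zip(convolve(pop_up, hrf_up),
                                      convolve(pop_down, hrf_down))]
```

The point of the sketch is only that a single bulk activity signal with one HRF cannot reproduce such a trace; two populations with distinct HRFs can.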
"we show that [the lateral prefrontal areas] 8Av/45 encodes the color of a visual stimulus, regardless of its behavioral relevance."
www.nature.com/articles/s41...
We have over 100 abstracts submitted already and are expecting a big spike in these last couple of weeks!
EPC/APCV are very rarely in NZ (last was 15yrs ago). If you're kiwi, this is a great opportunity for a nearby international conference. If you're not, this is your chance to see beautiful NZ :)
www.nature.com/articles/s41...
so visual coding is sparse after all, even in dlPFC (of freely moving monkeys not doing any assigned tasks specifically)
Excited to share our new preprint: "Do Machines Fail Like Humans? A Human-Centered Out-of-Distribution Spectrum for Mapping Error Alignment" led by
@binxia.bsky.social w @ken-lxl.bsky.social & co-senior author Luke Dickens (UCL)
🤖🧵👇
Link: arxiv.org/abs/2603.07462
🧠📈#PsychSciSky #compneuro #mlsky /1
& the issue was solved by this method:
www.pnas.org/doi/10.1073/...
(replicated in monkeys www.jneurosci.org/content/28/4...)
so there is still hope that someone could get to the bottom of this, & fully rule out that the V1 lesion just reduced saliency / internal processing strength
/end
... w/ a different training curriculum. so why would an absolute match be as important as the overall pattern that logically supports blindsight (RO>blank>RC)?
so i still worry a bit that it's just saliency, which is not uninteresting
this parallels the old blindsight debate in humans & primates...
BUT perhaps it just means they dimmed too much? from their own fig 2H, it seems like at some intermediate level, creating the pattern of RO>blank>RC is possible
they could argue that at such a level, the % right choice for RC was not matched to the lesion group
BUT these were different cohorts of animals!
to rule that out they physically dimmed the right target. to make the mice choose center under RC (2 targets) at the same level as the left-V1 lesioned mice, they had to dim it by a lot. & by the time it happened, the mice just seemed blind, not blindsighted. i.e. blank = RO, for % right choices ...
BUT what if the mice just went where the target was the most salient, & a peripheral target was only considered salient enough if it was >= center? so a 'dimmed' right target may give way to center, but still be visible on its own. so maybe the V1 lesion didn't produce nonconscious vision, just weaker vision...
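the saliency alternative in this thread can be put as a toy decision rule (my sketch, not the paper's model; thresholds & salience values are made up): the mouse runs to the most salient visible target, and a peripheral target only beats the center if its salience is >= center's.

```python
# Toy saliency-based choice rule for the 3-arm maze conditions.
# Below 'threshold' a target is treated as invisible.

def choose(right_salience, center_salience, threshold=0.1):
    """Return 'right', 'center', or 'none'."""
    right_visible = right_salience >= threshold
    center_visible = center_salience >= threshold
    if right_visible and center_visible:
        # peripheral target wins only if at least as salient as center
        return 'right' if right_salience >= center_salience else 'center'
    if right_visible:
        return 'right'
    if center_visible:
        return 'center'
    return 'none'

# Intact animal: right target fully salient.
assert choose(1.0, 1.0) == 'right'    # RC: goes right
# After left-V1 lesion: right salience weakened, but above threshold.
assert choose(0.4, 1.0) == 'center'   # RC: center wins, looks 'blind'
assert choose(0.4, 0.0) == 'right'    # RO: alone, still visible, goes right
```

note how plain 'weaker vision' reproduces the RO>RC asymmetry here without any nonconscious processing, which is exactly the worry.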
importantly they were not just totally blind on the right. coz when there was 1 target on the right only (RO), the mice still went right, more often than when there was no target (blank)
so the pattern of results that would logically hint at blindsight is that for % right choice, RO>blank>RC...
so the mice had to do a 3-arm maze task & run towards where a target was. sometimes, there were 2 targets, e.g. one at right + one at center (condition RC), where right = correct choice
after lesion to left V1, mice often went center in RC, as if they didn't see the target on the right...
www.sciencedirect.com/science/arti...
very nice paper on a mouse model of blindsight
i'm not sure if i'm fully convinced by their saliency control, but maybe i'm just splitting hairs
let me explain: (a thread to follow)
what do researchers mean when they say they measure 'consciousness'? do they mean 1) subjective experiences, or 2) just general cognitive capacities?
to those who say 1, perhaps they think Alzheimer patients are zombies, or they are actually confusing 1 with 2
pubmed.ncbi.nlm.nih.gov/41574278/
i'm not sure the subjects with aphantasia would want to change the way they are~ being different is ok
"During functional MRI...we identified a breakdown in feedback along a cortical pathway ... in aphantasia. The IPS showed impaired imagery generation signals and reduced functional connectivity with EVC, whereas the EVC exhibited a general deficit in feedback independent of voluntary control."
New Perspective out in @natmentalhealth.nature.com! We consider why neuroimaging analyses struggle to predict adolescent mental health, discuss approaches for improving prediction, and provide open-source implementations and tutorials: www.nature.com/articles/s44...
New preprint from my lab! We study how reinforcement learning & selective attention interact. To do so, we built a set of models describing different ways that value & reward prediction error can modulate top-down attention. We compare model outcomes to monkey data from a color value learning task
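One way value & reward prediction error (RPE) could modulate top-down attention can be sketched as below. This is my own minimal illustration, not one of the preprint's actual models; the update rule, parameter names, and values (alpha, eta) are assumptions for the example.

```python
# Toy interaction between value learning and top-down attention:
# attention gates the learning rate, and unsigned RPE draws attention.

def update(values, attention, chosen, reward, alpha=0.3, eta=0.2):
    """One trial of attention-gated value learning. Returns the RPE."""
    rpe = reward - values[chosen]
    values[chosen] += alpha * attention[chosen] * rpe  # attention-gated update
    attention[chosen] += eta * abs(rpe)                # surprise attracts attention
    total = sum(attention.values())                    # renormalize to sum to 1
    for k in attention:
        attention[k] /= total
    return rpe

# One trial of a two-color value-learning task.
values = {'red': 0.0, 'green': 0.0}
attention = {'red': 0.5, 'green': 0.5}
rpe = update(values, attention, 'red', reward=1.0)
```

Other model variants in this family would, e.g., use signed rather than unsigned RPE, or let value itself (not surprise) attract attention; comparing such variants against choice data is the kind of exercise the preprint describes.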
🧪🧠📈 Happy #SciFri folks! Wanted to share out some work my team has been working on.
Introducing TRIBE v2: a foundation model of the brain's responses to sight, sound & language.
great work led by @sdascoli.bsky.social! Check out his post for more details! 🤩
#cogsci #neuro #compneuro #neuroai
www.science.org/doi/10.1126/...
"mental imagery reactivates the same sensory codes used during visual stimuli, suggesting the existence of a generative model capable of synthesizing detailed sensory contents from an abstract, semantic representation."
Really excited about our new work on aphasia! Even in fairly profound aphasia, we can recover semantic maps through visual stimuli and use them to decode language. This is a big step! Language BCIs in aphasia might be possible!
🚨New preprint🚨
Very excited about the latest work led by Dugué Lab PhD student Yue Kong on TMS-induced #traveling_waves.
@erc.europa.eu
@upcite.bsky.social
www.biorxiv.org/content/10.6...