A popular account of the so-called deep learning revolution is that all natural understanding of the problem can be abdicated; we just throw all the data into the NN and gradient descent goes brr. But that's not right. We still need the right structural prior, and an NN architecture to express it.
Posts by Sami Beaumont
Wanna do neuroscience in Paris but can't find an interesting lab?
Want to come do a sabbatical but don't know who to collaborate with?
Check out this webpage aggregating ~all the neuroscience labs (200+) in Paris.
⚠️ only the information from 'verified' profiles is reliable ⚠️
Please retweet 🙏
parisneuro.fr
First repost is a bot, give me a break
Also I'm a bit worried that the feeling of urgency to use LLMs comes from the generalized FOMO in academia. To me the most rational position is that we still don't know exactly how LLMs should be used (as in, "not using them for X is bad practice"), so the fear of not being relevant does not seem appropriate
I agree there are best practices we should follow, and reinventing the wheel is highly inefficient. But as best practices also evolve, I think it's still a good exercise (for students) to know how to do what they ask Claude to do (and to trust that actual people validated existing packages...)
Anyway, I think doing science from multiple standpoints is, in the end, more efficient than producing multiple instances of the same averaged perspective. For me it's hard to tell when I'm trading too much standardization for a bit of efficiency
Giving up a particular stance (yours) to the apparent purity and generality of an LLM is precisely what makes the outcome less relevant. And I think the line between what is integral to your standpoint vs what is contingent is finer than it seems
I think that this idea of a pure statistical tool capable of summarizing the whole literature, generating research questions, and producing code to answer them is misleading and self-defeating
And they are fast for many repetitive tasks (e.g. coding), so not using them seems like a waste of resources
The fact that they produce standardized answers based on basically everything available online makes them sound neutral and balanced, like a perfect starting point for scientific inquiry
Random thoughts about LLMs in academia. I think the riskiest misconception in this space is not the appeal (or fear) of artificial general intelligence, but rather of "pure" intelligence, i.e. the idea that LLMs are somehow "clean", efficient, verbose; they look like a way to improve practice
I don't think we ever learned implementation for its own sake, but knowing how to do things is the kind of expertise you need to use LLMs safely. I still believe students should know what a model-fitting pipeline should look like before using Claude to build it
Unsurprisingly, we don't read similar opinion pieces about "the right" while the US government unplugs vaccine research.
(critically, as embedded in larger background knowledge and experience, not dismissively). "Ne soyez pas le secrétaire des malades" ("Don't be the patients' secretary").
Which could be modernized as "don't be a statistical parrot"
He laid down principles of clinical examination that, to me, highlight all the complexity and non-reducibility of clinical practice. Notably the tension between two imperatives: 1) be aware of the uniqueness of each patient ("individualité maladive", morbid individuality) and 2) assess critically what they report
This is a critical point. One could say that inferring which questions are relevant in a clinical examination is just another instance of pattern detection. I believe this misses the core of what clinical skills actually are. In psychiatry, the teaching of JP Falret (1874) is still highly relevant
Screenshot of an article in +972 magazine entitled 'The legal fight to open Gaza to foreign press has failed. It’s time to change course', Feb. 6, 2026.
It is happening live before our eyes. We will not be able to say that we did not know.
www.972mag.com/gaza-foreign...
They also spread the message to kids that smoking is an adult's choice, which is both a subtle incitement and a careful sidestepping of the public health issues.
Similarly, the message that only specialists should use LLMs bypasses the existential question of why my Google search should result in a word salad.
Postdoc position in Paris: come help develop new-generation human brain-computer interfaces ⚡🧠💻
Interested? Contact me if you have experience with machine learning (e.g. simulation-based inference, RL, generative/diffusion models) or dynamical systems.
See below for more details, and retweet 🙏
In the brain, causal interventions on a specific region or circuit do not imply that this circuit is a module (= an encapsulated system dedicated to this function, whose output would be independent from other modules)
Lesion studies can demonstrate specificity of function, but the limits get fuzzier and fuzzier as the scope increases. A general theory of human physiology would require transversal approaches (e.g. ontogeny) in addition to the characterisation of individual systems
I like the analogy with organs, but it can be misleading, as looking for modules as physical entities would be akin to phrenology. I agree that describing something as a module (e.g. the immune system) does not imply an explicit design. However, I think the question remains about the limits of modularity
Fodor was very critical of this theory, as for him modularity was a relevant description only for low-level processes. I think the current debate in neuroscience is about exactly that: can we find arguments for low-level functions that are completely encapsulated from other modules?
In my understanding, the defining characteristic of modularity is functional encapsulation. Some have argued that the mind is massively modular, i.e. no transversal processes, and that really sounds unnatural, more like software engineering
I'd say the term "model" is better suited than "tool". I see why they can be framed as bad models of language from an explanatory cognitive perspective, but I think they are not even bad tools
🚨 !!! New preprint !!! 🚨
MULTIMODHAL (phase 3, multicenter double-blind RCT): fMRI “symptom-capture”–guided neuronavigation significantly improves 1-Hz rTMS outcomes for drug-resistant auditory verbal hallucinations vs standard T3P3 targeting. 🧠🟦
#rTMS #Psychiatry #Schizophrenia #fMRI
But companies continuously claim its inevitability and that safeguards can prevent such use cases. I'm saying that taking full accountability is the only logical consequence. Otherwise they have to admit they cannot control shit and remove it entirely.
Yes, that's why we shouldn't have this on the website in the first place
I think complying with the law, especially regarding child porn, is a pretty low standard already. But whether it is possible is the question indeed. I believe it's not, and we don't need to pollute already toxic social networks with garbage-generating bots.