This is a great article from @juangallego.bsky.social. I still feel less excited about "neural foundation models" than Juan, but I have to say he makes his case very convincingly, both for the advantages and disadvantages. Well worth a read!
Posts by Dan Goodman
Ah the classic "I'm applying for a grant / job and I think one of my reviewers might be in the audience" talk. The worst.
Tree in bloom. White flowers glow luminously against a dark grey cloud background.
Second joyful nature photo for today to celebrate election result in Hungary. #photography
Didn't even have to wait until tomorrow! 🎉
Hopefully wake up tomorrow and realise this was a post about Hungary.
Rainbow peeking through behind trees lit up golden in sunset light, dark clouds behind.
Yeah I agree that's wrong. But maybe it doesn't need to be as rigid as current code in terms of syntax.
Yeah but it could be just mathematical notation for example.
I agree with Nico's point that a new generation might not need code to learn this. It's basically just mathematics anyway.
None of this is specific to any particular programming language, and I'd agree learning the syntax of a particular language isn't a very important skill. But that's also what programmers have been saying forever; nothing new there.
The danger is that LLMs will produce something even if you give them an underspecified prompt, but it may not be what you want. And since you haven't looked at the code or learned to think through the logic of what you actually want, you won't recognise the failure.
What I mean is that this part is not separable from the high level design goal. So of course it can be automated but only if the high level part is automated and then you've just handed the whole thing over to AI.
Yeah, we can automate remembering how the API wants to be called which is tedious and largely meaningless. But the hard part of programming is thinking through the logic of what it is EXACTLY you want the program to do. There's no way to automate that part without introducing mistakes I think.
A neurodevelopment-inspired warm-up strategy to address uncertainty calibration: networks are briefly trained on random noise and labels before exposure to real data, leading to well-calibrated confidence and strong detection of unknown inputs.
Cool results!
#NeuroAI
www.nature.com/articles/s42...
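The warm-up idea above can be illustrated with a toy sketch. This is my own NumPy toy, not the paper's code or architecture: a linear softmax classifier is briefly fit on pure noise with random labels (pushing it toward uniform, low-confidence outputs), then trained on synthetic "real" data, and confidence is compared in- and out-of-distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, X, y, epochs, lr=0.05):
    # one-layer softmax classifier, full-batch cross-entropy gradient descent
    Y = np.eye(W.shape[1])[y]
    for _ in range(epochs):
        P = softmax(X @ W)
        W = W - lr * X.T @ (P - Y) / len(X)
    return W

d, k = 20, 4
W = rng.normal(size=(d, k))  # deliberately overconfident random init

# warm-up phase (the idea from the paper, heavily simplified): brief training
# on pure noise inputs paired with uniformly random labels
X_noise = rng.normal(size=(500, d))
y_noise = rng.integers(0, k, size=500)
W = train(W, X_noise, y_noise, epochs=100)

# then train on toy "real" data: one well-separated Gaussian cluster per class
means = rng.normal(size=(k, d))
y_real = rng.integers(0, k, size=1000)
X_real = means[y_real] + 0.3 * rng.normal(size=(1000, d))
W = train(W, X_real, y_real, epochs=300)

# in-distribution confidence should exceed confidence on fresh, unseen noise
conf_real = softmax(X_real @ W).max(axis=1).mean()
conf_noise = softmax(rng.normal(size=(500, d)) @ W).max(axis=1).mean()
acc_real = (softmax(X_real @ W).argmax(axis=1) == y_real).mean()
```

Obviously a linear model on Gaussian blobs says nothing about the paper's deep-network results; it just shows the mechanics of "noise first, data second" and how you'd measure the calibration gap.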
✍️ "This is a tragedy for individuals, but also means our economy is missing out on a huge amount of potential talent."
The gap in attainment between richer and poorer pupils has lifelong implications - for both individuals and society.
It must be tackled ⤵️
New preprint with @sevberg.bsky.social! We map Hopfield-like binary networks onto spiking networks with dendrites … and it works! Same memory capacity, bigger basins of attraction, plus selective recall through dendritic gating, and more. How? Dendrites! See below.
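For readers who haven't met them, the classical binary Hopfield setup the preprint starts from looks like this. A generic textbook sketch, not the preprint's spiking/dendritic construction: store patterns with Hebbian outer products, then recall a stored pattern from a corrupted probe.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5  # neurons, stored patterns (load p/n well below capacity ~0.138)

patterns = rng.choice([-1, 1], size=(p, n))

# Hebbian storage: sum of outer products of the patterns, zero diagonal
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    # synchronous binary updates until a fixed point (or step budget)
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# corrupt a stored pattern by flipping 10% of its units, then recall it
probe = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
probe[flip] *= -1
out = recall(probe)
overlap = (out * patterns[0]).mean()  # 1.0 means perfect recovery
```

At this low memory load the corrupted probe falls back into the stored pattern's basin of attraction — the preprint's claim is that the spiking/dendritic version keeps this capacity while enlarging those basins.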
They always do in papers like this. 😮💨
They introduce a new measure of novelty: whether or not a paper introduces a new word or phrase in its title/abstract that is subsequently reused in at least one later paper. No analysis of whether this is a good measure, or whether it might correlate with features of AI-assisted research. Nah.
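For what it's worth, here's a toy operationalisation of that measure as I read it — single words rather than phrases, invented paper names, and none of the paper's actual pipeline:

```python
def novel_papers(papers):
    """papers: list of (id, year, text), sorted by year.
    Under the measure described above, a paper counts as 'novel' if it is
    the first to use some word that at least one later paper reuses."""
    first_use = {}   # word -> id of the first paper using it
    reused = set()   # words seen again in a later paper
    for pid, year, text in papers:
        for w in set(text.lower().split()):
            if w in first_use:
                reused.add(w)
            else:
                first_use[w] = pid
    return {first_use[w] for w in reused}

# hypothetical example corpus
papers = [
    ("A", 2020, "spiking networks with dendrites"),
    ("B", 2021, "calibration of spiking networks"),   # reuses A's terms
    ("C", 2022, "dendritic gating in memory models"), # terms never reused
]
novel = novel_papers(papers)  # only A introduced terms that got picked up
```

Even this toy version makes the worry concrete: "novelty" here rewards coining catchy vocabulary that others echo, which is exactly the kind of thing one would want to validate before using it to score research.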
What do #neuromodulators do in the #brain? Two recent papers give new insights:
@nishantjoshi.bsky.social shows they not only reshape individual cellular properties but also the architecture linking them, thereby expanding the computational repertoire.
www.biorxiv.org/content/10.6...
Woodland with rich brown floor covered in leaves and sticks, and sun shining bright green through foliage above. On the floor there are trails of smoothed out mud lined with sticks forming paths.
The kids had fun making paths on the ground in the local wood. #photography
Is it cynicism or just weary observation?
I don't think it would be a bad thing if done well, but I'm not convinced it would achieve much proportionate to the cost. At least in my experience, the problems with science papers (and there are many) wouldn't be caught by even the best version of this.
Sounds like a very expensive way to slow down publication and add very little value. I'm sure the journals will snap the idea up, then contract the fact-checking out to a company that passes it through ChatGPT with a prompt asking it to highlight any factual errors.
“Recognizing the scope and impact of heterogeneity in basic neuroscience is essential if we want to understand complex conditions and the brain in health and disease,” write @lindadouw.bsky.social, Klaus Eyer and Lara Keuck.
#neuroskyence
www.thetransmitter.org/science-and-...
Woodpecker on the side of a tree with blue sky behind and some out of focus green leaves in front.
Not normally into bird #photography but quite pleased that I managed to get a shot of a woodpecker! Even managed to get a video of it pecking.
Send me an email and I'll reply next week when I'm back from holiday. 🙂
Interview with @braininspired.bsky.social for my book "The Brain, In Theory":
www.youtube.com/watch?v=T3zE...
Cool! Let me know if you want to come and join our lab meeting one day. 😀
"The Brain, In Theory" is out today!
A short excerpt in The Transmitter @thetransmitter.bsky.social
www.thetransmitter.org/theoretical-...
Did it work?
Second this. Stop doing some stuff. Some of the stuff that you have to do, you can just do badly. It's ok. Not everything has to be perfect.