
Posts by Dan Goodman

This is a great article from @juangallego.bsky.social. I still feel less excited about "neural foundation models" than Juan, but I have to say he makes his case very convincingly, both for the advantages and disadvantages. Well worth a read!

20 hours ago 15 1 1 0

Ah the classic "I'm applying for a grant / job and I think one of my reviewers might be in the audience" talk. The worst.

22 hours ago 1 0 0 0
Tree in bloom. White flowers glow luminously against a dark grey cloud background.

Second joyful nature photo for today to celebrate election result in Hungary. #photography

1 day ago 7 0 0 0

Didn't even have to wait until tomorrow! 🎉

1 day ago 0 0 0 0

Hopefully wake up tomorrow and realise this was a post about Hungary.

1 day ago 2 0 1 0
Rainbow peeking through behind trees lit up golden in sunset light, dark clouds behind.

#SilentSunday #photography

1 day ago 7 0 1 0

Yeah I agree that's wrong. But maybe it doesn't need to be as rigid as current code in terms of syntax.

1 day ago 0 0 1 0

Yeah but it could be just mathematical notation for example.

1 day ago 2 0 1 0

I agree with Nico's point that a new generation might not need code to learn this. It's basically just mathematics anyway.

1 day ago 1 0 1 0

None of this is specific to any particular programming language, and I'd agree that learning the syntax of a particular language isn't a very important skill. But that's also what programmers have been saying forever; nothing new there.

1 day ago 1 0 1 0

The danger is that an LLM will produce something if you give it an underspecified prompt, but it may not be what you want. And since you haven't looked at the code or learned to think through the logic of what you actually want, you won't recognise this failure.

1 day ago 6 0 1 0

What I mean is that this part is not separable from the high-level design goal. So of course it can be automated, but only if the high-level part is automated too, and then you've just handed the whole thing over to AI.

1 day ago 2 0 1 0

Yeah, we can automate remembering how an API wants to be called, which is tedious and largely meaningless. But the hard part of programming is thinking through the logic of what EXACTLY you want the program to do. I don't think there's any way to automate that part without introducing mistakes.

1 day ago 4 0 1 0
Brain-inspired warm-up training with random noise for uncertainty calibration - Nature Machine Intelligence

Cheon and Paik show that overconfidence in deep neural networks arises from standard initialization practices, and that brief warm-up training with random noise improves uncertainty calibration and meta-cognitive recognition of unknown inputs.

A neurodevelopment-inspired warm-up strategy to address uncertainty calibration: networks are briefly trained on random noise inputs with random labels before exposure to real data, leading to well-calibrated confidence and strong detection of unknown inputs.

Cool results!

#NeuroAI
www.nature.com/articles/s42...

3 days ago 30 7 0 1
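The warm-up idea described above can be sketched in a few lines (a minimal illustration only, assuming a single softmax layer trained by plain gradient descent on noise; the paper's actual architectures and training details are not reproduced here, and all names are hypothetical):

```python
import numpy as np

def noise_warmup(w, b, n_classes, n_steps=500, batch=64, dim=20, lr=0.2, rng=None):
    """Briefly train a softmax layer on pure-noise inputs with random labels,
    which pushes its outputs towards uniform (i.e. calibrated ignorance)."""
    rng = rng or np.random.default_rng(0)
    for _ in range(n_steps):
        x = rng.normal(size=(batch, dim))            # noise inputs
        y = rng.integers(0, n_classes, size=batch)   # random labels
        logits = x @ w + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = p.copy()
        grad[np.arange(batch), y] -= 1.0             # d(cross-entropy)/d(logits)
        w -= lr * (x.T @ grad) / batch
        b -= lr * grad.mean(axis=0)
    return w, b

n_classes, dim = 10, 20
rng = np.random.default_rng(1)
w = rng.normal(size=(dim, n_classes))   # a deliberately over-confident init
b = np.zeros(n_classes)
w, b = noise_warmup(w, b, n_classes, rng=rng)

# After warm-up the layer should be near-maximally uncertain on novel noise:
x = rng.normal(size=(5, dim))
logits = x @ w + b
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
print(p.max(axis=1))   # each value should be close to 1/n_classes
```

Because the labels are independent of the inputs, the loss is minimised by uniform outputs, so the warm-up shrinks the weights and leaves the network agnostic on inputs it has no reason to recognise.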
The Disadvantage Gap explained - The Sutton Trust

We have long called for action to close the attainment gap, but what is it?

✍️ "This is a tragedy for individuals, but also means our economy is missing out on a huge amount of potential talent."

The gap in attainment between richer and poorer pupils has lifelong implications - for both individuals and society.

It must be tackled ⤵️

3 days ago 19 15 3 0

New preprint with @sevberg.bsky.social! We map Hopfield-like binary networks onto spiking networks with dendrites … and it works! Same memory capacity, bigger basins of attraction, plus selective recall through dendritic gating, and more. How? Dendrites! See below.

4 days ago 28 12 1 2
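For readers unfamiliar with the starting point of the mapping, this is the classic binary Hopfield model with Hebbian storage (a minimal sketch of the standard model only; the preprint's spiking networks with dendritic gating are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 100, 5                    # 100 binary units, 5 stored memories
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian storage: W = (1/n) * sum_mu xi^mu (xi^mu)^T, with zero self-connections
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def recall(x, sweeps=5):
    """Asynchronous sign updates; the energy is non-increasing, so this settles."""
    x = x.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Corrupt a stored pattern by flipping 10% of its bits, then let it relax
probe = patterns[0].copy()
probe[rng.choice(n, size=10, replace=False)] *= -1
recovered = recall(probe)
overlap = recovered @ patterns[0] / n     # 1.0 means perfect recall
print(overlap)
```

At this low memory load the corrupted probe sits well inside the stored pattern's basin of attraction, which is exactly the property the preprint reports the spiking version preserves and enlarges.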

They always do in papers like this. 😮‍💨

3 days ago 0 0 0 0

They introduce a new measure of novelty: whether or not a paper introduces a new word or phrase in its title/abstract that is subsequently re-used in at least one other paper. No analysis of whether this is a good measure or whether it might correlate with features of AI-assisted research. Nah.

3 days ago 3 0 2 0

What do #neuromodulators do in the #brain? Two recent papers give new insights:

@nishantjoshi.bsky.social shows they not only reshape individual cellular properties but also the architecture linking them, thereby expanding the computational repertoire.

www.biorxiv.org/content/10.6...

4 days ago 33 10 1 0
Woodland with rich brown floor covered in leaves and sticks, and sun shining bright green through foliage above. On the floor there are trails of smoothed out mud lined with sticks forming paths.

The kids had fun making paths on the ground in the local wood. #photography

4 days ago 435 32 6 2

Is it cynicism or just weary observation?

I don't think it would be a bad thing if done well, but I'm not convinced it would achieve much proportionate to the cost. At least in my experience, the problems with science papers (and there are many) wouldn't be caught by even the best version of this.

4 days ago 0 0 0 0

Sounds like a very expensive way to slow down publication and add very little value. I'm sure the journals will snap the idea up, and then contract the fact-checking out to a company that passes it through ChatGPT with a prompt asking it to highlight any factual errors.

4 days ago 0 0 1 0

“Recognizing the scope and impact of heterogeneity in basic neuroscience is essential if we want to understand complex conditions and the brain in health and disease,” write @lindadouw.bsky.social, Klaus Eyer and Lara Keuck.

#neuroskyence

www.thetransmitter.org/science-and-...

4 days ago 17 8 1 0
Woodpecker on the side of a tree with blue sky behind and some out of focus green leaves in front.

Not normally into bird #photography but quite pleased that I managed to get a shot of a woodpecker! Even managed to get a video of it pecking.

5 days ago 15 2 1 0

Send me an email and I'll reply next week when I'm back from holiday. 🙂

5 days ago 1 0 0 0
BI 235 Romain Brette: The Brain, in Theory (YouTube video by Brain Inspired)

Interview with @braininspired.bsky.social for my book "The Brain, In Theory":

www.youtube.com/watch?v=T3zE...

6 days ago 25 12 0 0

Cool! Let me know if you want to come and join our lab meeting one day. 😀

5 days ago 3 0 1 0
‘The Brain, In Theory,’ an excerpt

In his new book, Brette pushes back against theories that describe the brain as a “biological computer.” In this excerpt from Chapter 4, he challenges equating brain evolution with programming…

"The Brain, In Theory" is out today!

A short excerpt in The Transmitter @thetransmitter.bsky.social

www.thetransmitter.org/theoretical-...

1 week ago 69 26 3 0

Did it work?

1 week ago 0 0 0 0

Second this. Stop doing some stuff. Some of the stuff that you have to do, you can just do badly. It's ok. Not everything has to be perfect.

1 week ago 3 0 1 0