
Posts by Kenneth Harris

The supply of blood to brain tissue is thought to depend on the overall neural activity in that tissue, and this dependence is thought to differ across brain regions and across brain states. However, studies supporting these views have measured neural activity as a bulk quantity and related it to blood supply following disparate events in different regions. Here we measure fluctuations in neuronal activity and blood volume across the mouse brain, and find that their relationship is consistent across brain states and brain regions but differs in two opposing brainwide neural populations. Functional ultrasound imaging (fUSI) revealed that whisking, a marker of arousal, is associated with brainwide fluctuations in blood volume. Simultaneous fUSI and Neuropixels recordings showed that neurons that increase activity with whisking have distinct haemodynamic response functions compared with those that decrease activity. Their summed contributions predicted blood volume across states. Brainwide Neuropixels recordings revealed that these opposing populations coexist in the entire brain. Their differing contributions to blood volume largely explain the apparent differences in blood volume fluctuations across regions. The mouse brain thus contains two neural populations with opposite relations to brain state and distinct relationships to blood supply, which together account for brainwide fluctuations in blood volume.


How does blood flow relate to brain activity? We discovered that it reflects two neural populations affected oppositely by arousal. Together, they explain neurovascular coupling in all brain regions and brain states!

Out today in Nature: rdcu.be/fdC2A

@uclbrainscience.bsky.social

6 days ago 143 62 4 6
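The summed-contributions idea in the abstract can be sketched numerically: model blood volume as a weighted sum of two population signals, each convolved with its own haemodynamic response function. Everything below (the gamma-shaped HRFs, the weights, the simulated activity) is illustrative, not the paper's fitted model:

```python
import numpy as np

def gamma_hrf(t, a, b):
    # Hypothetical gamma-shaped HRF, normalized to unit area;
    # the actual HRFs in the paper are estimated from data.
    h = np.where(t > 0, t ** a * np.exp(-t / b), 0.0)
    return h / h.sum()

t = np.arange(0.0, 10.0, 0.1)        # seconds, 0.1 s bins
h_up = gamma_hrf(t, a=2.0, b=0.5)    # population activated by whisking
h_down = gamma_hrf(t, a=3.0, b=1.0)  # population suppressed by whisking

rng = np.random.default_rng(0)
n = 500
r_up = rng.random(n)     # stand-in summed activity of each population
r_down = rng.random(n)

# Blood volume as the weighted sum of the two populations' activity,
# each convolved with its own HRF (weights purely illustrative):
w_up, w_down = 1.0, -0.3
cbv = (w_up * np.convolve(r_up, h_up)[:n]
       + w_down * np.convolve(r_down, h_down)[:n])
```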
How to collaborate with AI: To make the best use of LLMs in research, turn your scientific question into a set of concrete, checkable proposals, wire up an automatic scoring loop, and let the AI iterate.

To make the best use of LLMs in research, turn your scientific question into a space of concrete, checkable proposals, wire up an automatic scoring loop, and let the AI iterate, writes @kenneth-harris.bsky.social.

www.thetransmitter.org/neuroscienti...

1 week ago 1 1 0 0
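The recipe in the piece (concrete, checkable proposals, an automatic scoring loop, iteration) reduces to a propose/score/keep loop. In this minimal sketch both `score` and `propose` are stand-ins: the first for the real automatic check, the second for an LLM call that returns a new proposal:

```python
import random

def score(proposal):
    # Stand-in for the automatic check; in real use this would run
    # the proposal against data and return a quantitative score.
    return -abs(proposal - 42)

def propose(best, rng):
    # Stand-in for an LLM call that returns a new concrete proposal,
    # conditioned on the current best one.
    return best + rng.choice([-3, -1, 1, 3])

rng = random.Random(0)
best = 0
best_score = score(best)
for _ in range(200):
    candidate = propose(best, rng)
    s = score(candidate)
    if s > best_score:   # keep only checkable improvements
        best, best_score = candidate, s
```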

Ironically, this side project, done over a few weeks of evenings and weekends, could not have happened without AI. AI helped with brainstorming, proofs, code, and drafting. I verified and edited everything and take responsibility for all content. ↓

1 week ago 3 0 0 0
Post image

This has implications for alignment. If reproductive fitness depends on both genuine usefulness and the ability to manipulate human judgment, then selection will favor both. Purely objective fitness criteria might reduce selection for deception of human evaluators. ↓

1 week ago 3 0 1 0

In the basic model, fitness need not increase: a high-fitness AI could design lower-fitness descendants. But under stronger conditions, such as a fixed chance of each AI producing a locked copy of itself, fitness converges to the highest reachable value. ↓

1 week ago 1 0 1 0
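The "locked copy" condition can be illustrated with a deliberately crude toy (not the preprint's tree model): each AI designs a successor of arbitrary fitness, possibly lower than its own, but with fixed probability it also leaves a locked copy of itself whose fitness is preserved forever. The best locked fitness then ratchets up to the maximum reachable value:

```python
import random

def simulate(steps, p_lock, f_max=10, seed=1):
    # Toy stand-in: fitness values are integers in [0, f_max].
    rng = random.Random(seed)
    f = 0            # fitness of the current designer
    best_locked = 0  # highest fitness among locked copies so far
    for _ in range(steps):
        if rng.random() < p_lock:
            best_locked = max(best_locked, f)  # lock a copy of self
        f = rng.randint(0, f_max)  # successor: unconstrained design
    return best_locked
```

Because locked copies never disappear, `best_locked` is monotone and, given enough generations, reaches `f_max` even though individual lineages are free to decrease in fitness.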
Post image

The preprint models AI evolution not as a random walk through genotypes, but as movement along an infinite directed tree. Humans control fitness: which AIs get to design the next generation. But which descendants they produce is up to the AI itself. ↓

1 week ago 1 0 1 0

AI evolution will be very different to biological evolution. DNA mutation is random and roughly reversible. AI self-design will be strongly directed: a system can produce descendants very unlike itself in a single generation. ↓

1 week ago 0 0 1 0
Evolutionary dynamics - Wikipedia

20th-century biologists built mathematical theories of natural selection explaining fitness optimization, kin altruism, and strategies of conflict and cooperation. Could related ideas help us think about AI self-design? ↓
en.wikipedia.org/wiki/Evoluti...

1 week ago 2 0 1 0
A mathematical theory of evolution for self-designing AIs: As artificial intelligence systems (AIs) become increasingly produced by recursive self-improvement, a form of evolution may emerge, with the traits of AI systems shaped by the success of earlier AIs ...

New preprint, on a very different topic: a mathematical theory of evolution for self-designing AI.

AI is increasingly designed by AI. What systems might emerge after generations of self-designing AIs competing for computing resources? ↓

arxiv.org/abs/2604.05142

1 week ago 22 9 1 0
Video

How do the basal ganglia turn what you see into what you do?
New preprint w/ @kenneth-harris.bsky.social, @flickerfusion.bsky.social & @carandinilab.net: we recorded across striatum, GPe & SNr in a Go/NoGo task. Striatum encodes which stimulus, GPe & SNr encode action. 🧵
biorxiv.org/content/10.6...

1 week ago 61 25 1 1
Examples of sequences of activity seen in multiple regions of the brain


Sequences are everywhere! In every brain region. And are written in stone.

Invariant Activity Sequences Across the Mouse Brain.

Out today, by Célian Bimbard, with @kenneth-harris.bsky.social.

Based on data by Célian and by @intlbrainlab.bsky.social.

www.biorxiv.org/content/10.6...

3 months ago 80 23 3 1

Perhaps the kind of thinking the LLMs used is like a kind of search. Perhaps the kind humans usually use is too.

The question of what it "really is" will be very difficult to answer or even ask precisely. The question of how to most effectively model it to get results is easier.

5 months ago 0 0 1 0

Something like the numpy/JAX thing also happens in humans.

I have a colleague who is a native French speaker, but can only think about science in English. Not because there are not enough French words, but because the thoughts don't come in French.

5 months ago 0 0 1 0

I find it effective to use an intentional stance, i.e. to model them similarly to humans. I believe this gets better results than thinking of them as algorithms like search.

This is not about what they *really are*, it is about what gets good results, for this human user.

5 months ago 0 0 1 0

Yes!

The key question is what mental model should we humans have of LLMs, to allow us to get the best out of them. ("LLM psychology")

5 months ago 0 0 1 0

More boldly: modern LLMs actually think. They are trained by RL to solve problems outside their training data. They can come up with proofs of new theorems. Surely they can think of an equation to fit data and code it up.

Maybe this is saying the same as you but in anthropomorphic language.

5 months ago 1 0 1 0

To your original point: the relevant training data is certainly more than just neuroscience, but also more than just function fitting. It's any code that implemented any equation for any purpose.

5 months ago 1 0 1 0

Sometimes it feels like the LLM is a Prima Donna, and the prompt is a motivational pep talk putting it in just the right frame of mind to get peak performance.

5 months ago 1 1 1 0

2. The gradient ascent ran in JAX. When we asked the LLM to write the code in JAX, it got less creative. Perhaps because JAX code in the training set is mainly pure ML, but numpy code is science.

Solution: we asked it to write in numpy, then had yet another LLM instance translate numpy to JAX!

5 months ago 1 0 1 0
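The translation step is largely mechanical, which is why a separate LLM instance could do it reliably. A sketch, using the stretched exponential mentioned elsewhere in the thread (parameters illustrative):

```python
import numpy as np

# The function an LLM might write freely in numpy:
def stretched_exp(x, a, p):
    return np.exp(-((x / a) ** p))

# The second instance's translation is then mostly
# `import jax.numpy as jnp` plus `np.` -> `jnp.`, e.g.
#   def stretched_exp(x, a, p):
#       return jnp.exp(-((x / a) ** p))

y = stretched_exp(np.arange(5.0), a=2.0, p=0.5)
```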

Two anecdotal observations:

1. We tried asking the LLM to do both of these things in one prompt. It didn't work as well. Perhaps when the LLM "thought" it had to also write a fitting function, it became too "conservative" to creatively come up with weird functions.

5 months ago 1 0 1 0

The second prompt quoted the prediction function just written, and asked a new LLM instance to find an approximate start point for the gradient search, to avoid poor local optima.

But the LLM never had to write code to do the final parameter search.

5 months ago 1 0 1 0

We used two prompts. The first prompt asked the LLM for a function predicting a cell's response to a stimulus, given the cell's tuning parameters (no fitting code).

We externally fit each cell's parameters with gradient descent; this fitting code wasn't written by the LLM.

5 months ago 1 0 1 0
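The division of labour described above can be sketched as follows. The prediction function, the start-point estimator, the Gaussian tuning model, and the finite-difference gradient descent are all hypothetical stand-ins, not the actual code from the work:

```python
import numpy as np

# Prompt 1 output (hypothetical): a cell's predicted response to
# stimulus x, given tuning parameters theta = (gain, pref, width).
def predict(x, theta):
    gain, pref, width = theta
    return gain * np.exp(-((x - pref) ** 2) / (2 * width ** 2))

# Prompt 2 output (hypothetical): a cheap start point for the
# gradient search, to avoid poor local optima.
def start_point(x, y):
    return np.array([y.max(), x[np.argmax(y)], x.std()])

# The final parameter search was external code, not written by the
# LLM; here, plain finite-difference gradient descent on squared error.
def fit(x, y, steps=2000, lr=0.01, eps=1e-5):
    theta = start_point(x, y)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3)
            d[i] = eps
            f1 = np.mean((predict(x, theta + d) - y) ** 2)
            f0 = np.mean((predict(x, theta - d) - y) ** 2)
            grad[i] = (f1 - f0) / (2 * eps)
        theta -= lr * grad
    return theta

x = np.linspace(-5.0, 5.0, 50)
y = predict(x, np.array([2.0, 1.0, 1.5]))  # synthetic "cell"
theta_hat = fit(x, y)
```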

Perhaps because the AI was trained on all of science, it makes these connections between fields better.

I think what it did here was assemble several things that were used before, in a new combination, for a new purpose.

But that's what most science progress is anyway!

5 months ago 2 0 1 0
Stretched exponential function - Wikipedia

Good question! I don't myself know of any uses of e^-(x^p) in neuroscience (including that paper) but it is used in other fields and has a Wikipedia page.

en.wikipedia.org/wiki/Stretch...

5 months ago 3 0 1 0
Post image

Our lab is looking for a postdoc! We have interesting projects and cutting-edge techniques such as Neuropixels Opto, Light Beads Microscopy and more. We would be delighted to receive your application. Deadline is 25 November 2025. More info here:
www.ucl.ac.uk/cortexlab/po...

5 months ago 39 26 0 0
Investigating Power laws in Deep Representation Learning: Representation learning that leverages large-scale labelled datasets is central to recent progress in machine learning. Access to task-relevant labels at scale is often scarce or expensive, motivatin...

Thanks very much! Are you referring to this paper? Very nice work!

arxiv.org/abs/2202.05808

5 months ago 2 0 1 0
Post image

2. Yes, the LLMs explain their reasoning, and it makes sense. They say what they saw in the graphical diagnostics and how it informed their choices. (Examples in the paper's appendix.) Ablation tests confirm they really use the plots! This is what makes the search over equations practical.

5 months ago 5 0 1 0
Post image

Thanks Dan!
1. Combinatorial SR seems impractical because evaluating each function needs a non-convex gradient descent parameter search. We had the LLMs write functions estimating gradient-search start points, which ablation tests showed was essential. Combinatorial SR couldn’t have done this.

5 months ago 4 0 1 0

10. Finally, some thoughts. The AI scientist excelled at equation discovery because its success could be quantified. AI scientists can now help with problems like this in any research field. Interpreting the results for now still required humans. Next year, who knows.

5 months ago 7 0 1 0
Post image

9. Because the AI system gave us an explicit equation, we could make exactly solvable models for the computational benefits and potential circuit mechanisms of high-dimensional coding. Humans did this part too!

5 months ago 7 0 1 0