
Posts by Daniel Siegle

Using AI to cure cancer or fight climate change makes as much sense as using neuroscience to figure out AI. It's all yak shaving. If your goal is to "solve X" just go solve X, don't try to solve Y then use Y to solve X.

2 weeks ago 24 2 5 0

Statistical Rethinking 2026 is done: 20 new lectures emphasizing logical and critical statistical workflow, from basics of probability theory to causal inference to reliable computation to sensitivity. It's all free, made just for you. Lecture list and links: github.com/rmcelreath/s...

1 month ago 599 194 11 11

What's being automated? Why are we automating that thing? What's the input/what's the output? Is it sensible to expect any system to give reliable outputs from just that input? Who's benefitting from automating this thing?

>>

1 month ago 261 33 1 3

@alexhanna.bsky.social and I recommend reframing "AI" as automation in our book, because it almost always makes the conversation clearer.

>>

1 month ago 186 17 2 1

The universe has experienced itself, and it wants a refund

1 month ago 78 7 5 0
Statistical Rethinking 2026 Lecture B05 - Social Networks II YouTube video by Richard McElreath

Witness the power of this fully operational Bayesian latent space social relations model - Lecture B05 of Statistical Rethinking 2026. Incremental model construction and testing workflow for dyadic and generalized exchange networks, posterior network simulation, one dank Insane Clown Posse meme.

2 months ago 46 9 0 0

We could have this for biology if we hadn't collectively decided to spend ~$10 million per year on BioRender instead 😢

2 months ago 4 2 1 0

This is the best synthesis of the protein design field ever

2 months ago 4 0 0 0
A roadmap for AI-driven protein design

I created this free course consisting of 10 lectures to introduce you to AI-driven protein design.

Sorry, I forgot to add the webpage of the course 😅

miangoaren.github.io/teaching/pro...

2 months ago 2 1 0 0

This one was quite the journey! The paper describing the #ChlamyDataset is finally out and on the cover of Mol Cell!

This beautiful rendering made by co-author @jessheebner.bsky.social and Holly Peterson shows an instance of mitochondrial fission found in the dataset 😍

[Maybe long thread ahead]

3 months ago 93 35 3 10
Frontiers | GIFT-AI: The Cringe Test: student evaluations of intelligence with LLMs in a Turing Test adapted for classroom use This article presents “The Cringe Test,” a classroom adaptation of the Turing Test (or imitation game) that stages dialogue with large language models (LLMs)...

Just published open access, a playful exercise I developed and ran with students: "The Cringe Test: student evaluations of intelligence with LLMs in a Turing Test adapted for classroom use" www.frontiersin.org/journals/edu...

3 months ago 7 4 0 0
Make the Tool You Wish Existed (with Your LLM) | Brian Gershon Build custom HTML tools that fit your exact workflow using LLMs. No extensive programming knowledge required. Turn workflow problems into practical tools in hours.

Thanks @simonwillison.net, always enjoy your content. I also really like HTML tools. Thank you for all the resources!

Was inspired to share them:

www.briangershon.com/blog/make-to...

3 months ago 42 6 0 0
Statistical Rethinking 2026 - Lecture B01 - Multilevel Models YouTube video by Richard McElreath

Statistical Rethinking 2026 Lecture B01 Multilevel Models is online. This is the first lecture of the "experienced" section, in which we start with multilevel models and venture into vast covariance spaces. Full lecture list still here: github.com/rmcelreath/s...

3 months ago 97 17 0 1
@mhdksafa for the record: cops aren't supposed to kill guilty people either (from 2023)

3 months ago 18111 4933 50 71

A reminder - before Trump, MAGA, Musk and Fox rewrote history - that this is what the entire country agreed had happened five years ago today:

3 months ago 7145 3440 112 134
The poster I presented at VIZBI 2025, showing a series of gorgeous protein visualisations with a range of materials, lighting and styles

A quick write-up about the poster I presented at last year's #VIZBI 2025, showing a series of protein renders from experiments learning to use #Blender and #MolecularNodes. Excited for #VIZBI 2026!!!

"Molecular Masterpieces - A Fashion Show of Protein Glow-ups"

e-nox.net/vizbi-molecu...

#SciArt

3 months ago 6 2 0 0
Get your war on 3: ok I hate to act like a fucking dumbass but are we at war? I mean, did we ever officially declare war? Declare war? Who's got time to declare war when there are so many bombs to drop

3 months ago 686 260 5 18

Less than a week to apply for our Cell Types Workshop!

Apply by Friday, January 9 to join us at our Seattle HQ for a hands-on workshop on how to describe your neurons like the Allen Institute.

🛫 Travel support available

Apply: alleninstitute.org/events/2026-cell-types-w...

🧠📈

3 months ago 7 1 0 0

Ha, I'd probably roll that back into the class. It's just so much more convenient than having to set up everything on the HPC for just a two-week exploration.

3 months ago 1 0 0 0

This looks very handy for a short course I'm teaching about protein structure prediction in January

3 months ago 1 0 1 0
GitHub - hgbrian/foldism: protein folding app running on modal

I semi-vibe coded a frontend for all the protein folding tools I've been running. It runs on modal and has 5 different open-source algorithms: Chai1, Boltz2, AF2, Protenix(-mini).

Not 100% polished, but hopefully of use to people in the field!
github.com/hgbrian/fold...

3 months ago 11 3 1 0
course schedule as a table. Available at the link in the post.

I'm teaching Statistical Rethinking again starting Jan 2026. This time with live lectures, divided into Beginner and Experienced sections. Will be a lot more work for me, but I hope much better for students.

I will record lectures & all will be found at this link: github.com/rmcelreath/s...

4 months ago 662 235 12 20

Try to find Redmon's resume from back then before he ran off to the circus. My Little Pony everywhere. It's beautiful

4 months ago 2 0 0 0
Beatriz Rodrigues Estevam is pictured smiling at the camera while working on her laptop, seated in a booth.

Have you heard of the Sanger Prize? It is a three-month undergrad placement at the Institute, and applications are open.

Hear from our current Sanger Prize holder Beatriz Rodrigues Estevam, here ⤵️
sangerinstitute.blog/2025/11/20/unravelling-t...

5 months ago 12 5 0 2

Goes immediately to the top of the "To Read" list ⬇️

4 months ago 21 2 1 0

But it IS the best way to get 13 million dollars to do whatever you want for two years...

4 months ago 3 0 0 0
Neural dynamics outside task-coding dimensions drive decision trajectories through transient amplification Most behaviors involve neural dynamics in high-dimensional activity spaces. A common approach is to extract dimensions that capture task-related variability, such as those separating stimuli or choice...

“Our findings challenge the conventional focus on low-dimensional coding subspaces as a sufficient framework for understanding neural computations, demonstrating that dimensions previously considered task-irrelevant and accounting for little variance can have a critical role in driving behavior.”

4 months ago 143 41 8 9
A comic-style infographic titled “THE AI CHEF’S ‘PROCEDURAL’ SECRET: AN ATTRIBUTION ANALOGY.” It uses a robot chef baking a soufflé to explain how attribution and gradient-based tracing in AI works. The diagram proceeds left to right in five labeled steps.

⸻

1. THE TASK (REASONING)

A friendly robot chef stands in a kitchen, holding up a perfectly baked soufflé. A math bubble shows x + 2y = 10 as an analogy for solving a problem.
Caption: AI Chef (LLM) solves a problem (bakes a soufflé).

⸻

2. THE “FINGERPRINT” (GRADIENT)

Close-up of the robot whisking batter. A glowing network of abstract swirls appears over the bowl.
Caption: We record the exact, unique actions & “effort” (Gradient) used for this specific soufflé.

⸻

3. THE “BRAIN MAP” (EK/FAC)

The robot stands before floating diagram bubbles labeled Whisking Techniques, Aeration Physics, Heat Transfer, Simplified Linkages.
Caption: We use a simplified map of how the chef connects concepts (Hessian/EK-FAC approximation).

⸻

4. THE LIBRARY MATCH (ATTRIBUTION)

The robot enters a vast library with floor-to-ceiling bookshelves. A giant glowing fingerprint projection shines onto one shelf as the robot scans for the best match.
Caption: We scan the entire “cookbook library” (pre-training data) to find which book’s instructions best match the fingerprint via the brain map.

⸻

5. THE RESULT: PROCEDURAL KNOWLEDGE

The robot chef proudly holds a glowing lightbulb while a book opens nearby with a concept diagram. A large reference book beside him is titled “THE PHYSICS OF FOAMS & AERATION (NOT a Soufflé Recipe Book!)”
Caption: We find the source was NOT a recipe, but a foundational PRINCIPLE (procedural knowledge) applied to a new task.

⸻

Overall, the image uses the story of baking a soufflé to explain how AI models trace reasoning: capturing gradients, mapping conceptual relations, searching training data, and revealing underlying procedural knowledge rather than direct memorization.


on #3, this paper uses a method where they can directly attribute specific documents from the pretraining dataset

they used it to show that LLMs do in fact learn procedures, not just autocomplete. But you could take this so much further with Olmo3

arxiv.org/abs/2411.12580

4 months ago 15 2 0 1
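The attribution pipeline the infographic describes (record a query gradient, approximate curvature, scan training data for the best match) can be sketched with a toy linear model. This is an illustrative sketch only, not the paper's implementation: the inverse Hessian that EK-FAC approximates is replaced here by the identity matrix, and the "training documents" are hypothetical (x, y) pairs rather than real pretraining text.

```python
import numpy as np

def loss_grad(w, x, y):
    # Gradient of the squared-error loss 0.5 * (w·x - y)^2 with respect to w.
    return (w @ x - y) * x

# Toy "trained" model weights and a query (test-time) example.
w = np.array([1.0, 1.0])
x_query, y_query = np.array([1.0, 1.0]), 3.0

# Candidate "training documents": hypothetical (x, y) pairs.
train = [
    (np.array([1.0, 1.0]), 3.0),    # shares the query's pattern
    (np.array([1.0, -1.0]), -1.0),  # unrelated pattern
]

# Step 2 of the infographic: the query's gradient "fingerprint".
g_query = loss_grad(w, x_query, y_query)

# Steps 3-4: influence score = grad(query)^T H^{-1} grad(train_i).
# The identity stands in for the EK-FAC-approximated inverse Hessian.
scores = [float(g_query @ loss_grad(w, xi, yi)) for xi, yi in train]

# The training example that shares the query's structure scores highest,
# which is how the method ranks candidate source documents.
best = int(np.argmax(scores))
```

Under these toy numbers the first training pair gets the larger score, mirroring how the real method surfaces the pretraining documents most influential on a given answer.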

Yes! I carried a beat up OM-1 around the world after high school. Didn't know how good I had it.

4 months ago 0 0 0 0
Post image

Not sure why @lpachter.bsky.social did not post this here. But it is brilliant. Single cell genomics finally makes it to the clinic.

5 months ago 53 13 1 1