While this now feels "obvious", the distinction between "differentiating through a fixed program" and "learning which program we generate" is one I've never seen acknowledged before
Posts by Bruno Gavranović
and it ended up morphing into a novel perspective on what it means to integrate dependent types into training.
I just published a new blog post!
Types and Neural Networks
( www.brunogavranovic.com/posts/2026-04-20-types-a... )
It's about what comes into view once dependent type systems become a part of neural network training.
New blog post coming up soon!
I'm very excited about some of the work we've been doing at GLAIVE on making the output space of frontier models *typed*.
Stay tuned :)
What is really exciting is that *type-safe einsum* is now fully within reach, and I have a pretty good idea how to implement it.
It directly uses the function above, which is merely a wrapper around dependent lenses.
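To make the idea concrete, here is a minimal runtime sketch of what a name-checked einsum could look like, in plain Python + NumPy. None of this is TensorType's actual API (`safe_einsum` and its signature are hypothetical); TensorType aims to enforce these checks statically with dependent types, while this sketch only checks at runtime:

```python
import numpy as np

def safe_einsum(a, a_names, b, b_names, out_names):
    """Hypothetical sketch: einsum where indices are axis *names*.

    Any name appearing in both inputs but not in the output is contracted;
    requesting an output name that appears in neither input is an error.
    """
    all_in = set(a_names) | set(b_names)
    bad = [n for n in out_names if n not in all_in]
    if bad:
        raise ValueError(f"output names {bad} not present in inputs")
    # Map each distinct axis name to a single-letter einsum index
    # (good enough for a sketch: supports up to 26 distinct names).
    letters = {n: chr(ord('a') + i) for i, n in enumerate(sorted(all_in))}
    spec = (''.join(letters[n] for n in a_names) + ',' +
            ''.join(letters[n] for n in b_names) + '->' +
            ''.join(letters[n] for n in out_names))
    return np.einsum(spec, a, b)

A = np.ones((2, 3))   # axes: batch, feature
B = np.ones((3, 4))   # axes: feature, hidden
C = safe_einsum(A, ("batch", "feature"), B, ("feature", "hidden"),
                ("batch", "hidden"))
print(C.shape)        # (2, 4): 'feature' was contracted away
```

Misspelling an axis name fails loudly instead of silently contracting the wrong dimension, which is the whole point of making einsum name-aware.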
On the technical side, this finally enabled tensor reshapes to fall fully out of the categorical machinery: they're extensions of morphisms of containers.
That is, tensor reshape defined here: github.com/bgavran/TensorType/blob/...
That is, compare the syntax for creating a cubical tensor (image attached to this post) vs the one for non-cubical ones (image in the original post)
This also solved a few technical issues behind the scenes, and eased the ergonomics of distinguishing between creating cubical tensors (which I want backward compatibility with) and non-cubical tensors (which this framework enables).
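As a caricature of the "reshape is a morphism of containers" view: a reshape leaves the stored values alone and acts contravariantly on positions, sending each position of the new shape back to a position of the old one. A plain NumPy sketch (the function name is illustrative, not TensorType code):

```python
import numpy as np

def reshape_via_positions(x, new_shape):
    """Reshape as a position remap: each multi-index of the new shape
    is sent (backwards) to a flat position of the old tensor."""
    assert np.prod(new_shape) == x.size, "reshape must preserve size"
    flat = x.reshape(-1)  # positions of the old container, linearized
    out = np.empty(new_shape, dtype=x.dtype)
    for pos in np.ndindex(*new_shape):
        # the backwards map on positions: new multi-index -> old flat index
        out[pos] = flat[np.ravel_multi_index(pos, new_shape)]
    return out

x = np.arange(6).reshape(2, 3)
print(reshape_via_positions(x, (3, 2)))  # same values as x.reshape(3, 2)
```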
b) cannot accidentally sum over, for instance, sequence length instead of "batch"
There's still a long way to go to get this fully integrated, but I'm quite excited
I never wrote about it here, but as of some time ago I figured out a basic implementation for named axes in TensorType:
https://github.com/bgavran/TensorType
This means that now you're:
a) forced to assign some meaning to all your axes
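As a rough illustration of these guarantees, here is a tiny plain-Python stand-in (TensorType's real API may differ; `NamedTensor` and its methods are hypothetical names for this sketch):

```python
import numpy as np

class NamedTensor:
    """Minimal illustration: a NumPy array whose axes all carry names."""
    def __init__(self, data, names):
        # (a) every axis must be given a meaning
        assert data.ndim == len(names), "every axis must be named"
        self.data, self.names = data, tuple(names)

    def sum(self, name):
        # (b) reductions refer to axes by name, never by bare position,
        # so summing over a misnamed axis fails instead of silently
        # reducing the wrong dimension.
        if name not in self.names:
            raise ValueError(f"no axis named {name!r}; have {self.names}")
        ax = self.names.index(name)
        rest = self.names[:ax] + self.names[ax + 1:]
        return NamedTensor(self.data.sum(axis=ax), rest)

x = NamedTensor(np.ones((2, 3)), names=("batch", "seq"))
y = x.sum("seq")      # fine: reduces the sequence axis
print(y.names)        # ('batch',)
# x.sum("time")       # would raise: no axis named 'time'
```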
Something strange and horrible and somehow fitting that we should face a very real threat of civilizational destruction by madmen at the very moment when we also have humans on the dark side of the moon taking pictures that show how small, precious, and beautiful our world and existence is
"Coalgebras for categorical deep learning:
Representability and universal approximation"
https://arxiv.org/abs/2603.03227
Joining these, engaging in the discussions, or following the people your follows follow is probably a good way to get started
2) Various Zulips: the CT Zulip (categorytheory.zulipchat.com), the Lean Zulip (leanprover.zulipchat.com), the Idris Zulip (idris-lang.zulipchat.com), ...
1) Mathstodon / BlueSky. In my experience it's more 'academic' than 'industrial', and compared to old Twitter there's considerably less discussion here about the underlying technology, and more about societal impacts
Thanks for reaching out! Unfortunately, after the exodus from Twitter the community became, in my experience, more disjoint than before. Nonetheless, there are still many hubs where researchers congregate. These are mostly:
Where are the nuanced left-wing takes on modern AI and LLMs?
So much of the discourse around this tech is centered on rejecting it because of who currently owns it. But like all tech, it can be used for both oppression and liberation.
Who is focusing on the latter?
Lots of exciting stuff has been happening at GLAIVE:
https://mathstodon.xyz/@julesh/116279930318085086
mathstodon.xyz/@Andrev@types.pl/1162557...
Discworld QOTD, from Monstrous Regiment
“Stopping a battle is much harder than starting it. Starting it only requires you to shout ‘Attack!’ but when you want to stop it, everyone is busy.”
Masterful sequence of quotes in Mike and Ike that I never noticed before
never, ever, ever, ever accept "how will you pay for it?" as an argument against social programs.
RE: https://mathstodon.xyz/@julesh/116103778140182700
Some more work I've been a part of:
New blog post!
Autodiff through function types: Categorical semantics, the ultimate backpropagator
julesh.com/posts/2026-02-20-categor...
The transition from “AI can’t do novel science” to “of course AI does novel science” will be like every other similar AI transition.
First the over-enthusiastic claims that are debunked, then smart people use AI to help them, then AI starts to do more of the work, then minor discoveries, & then…
There is a widespread belief among people with even a little technical and scientific savvy that alchemy was a bunch of hooey: that alchemists never published their experiments, that each had to discover on their own not to drink mercury, etc.
But this is an urban legend!
This is not what is happening at all.
The amount of misinformation about AI on BlueSky is insane, and much of it keeps insisting that AI is all hype and will go away soon.
A really dangerous position that cedes all AI policy and decisions about how it will be used to others.
Also, Futurism is clickbait
"it's impossible to reproduce ndarray in a type safe way." What part of it is impossible to reproduce?
This is a catch-22 problem: nobody is working on non-cubical tensors because they're slow, and they're slow because nobody is working on them.
One of the goals of TensorType is to try out machine learning with them. If something new works, that'll be a good incentive to work on making them fast
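For intuition on what "non-cubical" means, a tiny plain-Python sketch (illustrative only, not TensorType code): a cubical tensor has one fixed length per axis, while a non-cubical one lets inner lengths depend on outer indices, a ragged structure that standard ndarray libraries don't handle natively.

```python
# A "cubical" tensor: every row has the same length, so it fits
# into a rectangular shape (2, 3).
cubical = [[1, 2, 3], [4, 5, 6]]

# A "non-cubical" tensor: the length of row i depends on i,
# so there is no single rectangular shape describing it.
non_cubical = [[1], [2, 3], [4, 5, 6]]

# Axis-wise operations still make sense without a rectangular shape:
row_sums = [sum(row) for row in non_cubical]
print(row_sums)   # [1, 5, 15]
```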