
Posts by Bruno Gavranović

While this now feels "obvious", the distinction between "differentiating through a fixed program" and "learning which program we generate" is one I've never seen acknowledged before.

16 hours ago 2 0 0 0

and it ended up morphing into a novel perspective on what it means to integrate dependent types into training.

16 hours ago 2 0 1 0
Post image

I just published a new blog post!

Types and Neural Networks
( www.brunogavranovic.com/posts/2026-04-20-types-a... )

It's about what comes into view once dependent type systems become a part of neural network training.

17 hours ago 17 5 1 0

New blog post coming up soon!

I'm very excited about some of the work we've been doing at GLAIVE on making the output space of frontier models *typed*.

Stay tuned :)

5 days ago 5 0 0 0

What is really exciting is that *type-safe einsum* is now fully within reach, and I have a pretty good idea how to implement it.
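To make the motivation concrete: here is a small, hedged NumPy sketch of why an untyped einsum is unsafe, and what a name-checked reduction could look like. `named_reduce` is a hypothetical helper invented for this illustration; it is not the TensorType API (which is Idris 2).

```python
import numpy as np

# With plain einsum, axes are bare letters: nothing stops you from
# contracting the wrong axis when two axes happen to have the same size.
batch, seq = 4, 4
x = np.arange(16.0).reshape(batch, seq)

ok   = np.einsum('bs->b', x)   # sum over sequence, keep batch
oops = np.einsum('bs->s', x)   # sum over batch by mistake: same shape, no error
assert ok.shape == oops.shape  # the bug is silent

# A name-checked einsum-style reduction (hypothetical sketch): axes are
# identified by name, so a typo or wrong axis is rejected before any
# arithmetic happens.
def named_reduce(in_names, out_names, arr):
    unknown = set(out_names) - set(in_names)
    if unknown:
        raise ValueError(f"unknown output axes: {unknown}")
    # sum out every axis whose name is absent from the output spec
    drop = tuple(i for i, n in enumerate(in_names) if n not in out_names)
    summed = arr.sum(axis=drop)
    # permute the surviving axes into the requested output order
    remaining = [n for n in in_names if n in out_names]
    return summed.transpose([remaining.index(n) for n in out_names])

per_batch = named_reduce(('batch', 'seq'), ('batch',), x)
assert np.allclose(per_batch, x.sum(axis=1))
```

A dependently typed einsum would move the `ValueError` to compile time, but the shape of the check is the same.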

6 days ago 1 0 0 0

directly uses the function above, which is merely a wrapper around dependent lenses.

6 days ago 1 0 1 0
Preview
TensorType/src/Data/Tensor/Tensor.idr at main · bgavran/TensorType Framework for type-safe pure functional and non-cubical tensor processing, written in Idris 2 - bgavran/TensorType

On the technical side, this finally let tensor reshapes fall out of the categorical machinery entirely: they're extensions of morphisms of containers.

That is, tensor reshape defined here: github.com/bgavran/TensorType/blob/...
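The categorical construction above is in Idris; as a hedged illustration of the underlying idea only, a reshape never touches data, it is just a bijection between index sets factored through the shared flat (row-major) index. The `reindex` helper below is invented for this sketch:

```python
import numpy as np

# A reshape is a relabeling of indices: (2,6)-indices and (3,4)-indices
# are both in bijection with the flat index 0..11, so composing the two
# gives the index translation that reshape performs implicitly.
a = np.arange(12).reshape(3, 4)
b = a.reshape(2, 6)

def reindex(idx, src_shape, dst_shape):
    flat = np.ravel_multi_index(idx, src_shape)
    return np.unravel_index(flat, dst_shape)

# Every entry of b is the corresponding entry of a under the bijection.
for i in range(2):
    for j in range(6):
        assert b[i, j] == a[reindex((i, j), (2, 6), (3, 4))]
```

The container-morphism view generalizes this beyond rectangular shapes, which is where the non-cubical story comes in.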

6 days ago 1 0 1 0

That is, compare the syntax for creating a cubical tensor (image attached to this post) vs a non-cubical one (image in the original post)

6 days ago 1 0 1 0
Post image

This also solved a few technical issues behind the scenes, and eased the ergonomics of distinguishing between creating cubical tensors (which I want backward compatibility with) and non-cubical tensors (which this framework enables).

6 days ago 1 0 1 0

b) you can't accidentally sum over sequence length, say, instead of "batch"

There's still a long way to go to get this fully integrated, but I'm quite excited.
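The two guarantees in this thread can be sketched in plain Python. `Named` below is a minimal hypothetical wrapper invented for this illustration, not the TensorType API (which is Idris 2):

```python
import numpy as np

# Minimal named-axis wrapper (hypothetical sketch, not TensorType):
# a) every axis must be given a name at construction;
# b) reductions take axis *names*, so "sum over seq" can't silently
#    become "sum over batch" when the two sizes coincide.
class Named:
    def __init__(self, data, names):
        data = np.asarray(data)
        if len(names) != data.ndim:
            raise ValueError("every axis needs a name")   # guarantee a)
        self.data, self.names = data, tuple(names)

    def sum(self, name):
        if name not in self.names:
            raise ValueError(f"no axis named {name!r}")   # guarantee b)
        i = self.names.index(name)
        return Named(self.data.sum(axis=i),
                     self.names[:i] + self.names[i + 1:])

t = Named(np.ones((4, 4)), ("batch", "seq"))
per_example = t.sum("seq")        # unambiguous even though both sizes are 4
assert per_example.names == ("batch",)
```

In a dependently typed setting both checks become compile-time errors rather than runtime exceptions.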

6 days ago 2 0 1 0
Post image

I never wrote about it here, but some time ago I figured out a basic implementation of named axes in TensorType:

https://github.com/bgavran/TensorType

This means that now you're:
a) forced to assign some meaning to all your axes

6 days ago 4 2 1 0

something strange and horrible and somehow fitting that we should have a very real threat of madman civilizational destruction at the very moment when we also have humans on the dark side of the moon taking pictures that show how small, precious, and beautiful our world and existence is

1 week ago 2960 692 35 31
Post image

"Coalgebras for categorical deep learning:

Representability and universal approximation"

https://arxiv.org/abs/2603.03227

2 weeks ago 3 0 0 0

Joining these, engaging in the discussions, or following who your followers follow is probably a good way to get started

2 weeks ago 2 0 1 0
Preview
Public view of Category Theory | Zulip team chat Browse the publicly accessible channels in Category Theory without logging in.

2) Various Zulips: the CT Zulip (categorytheory.zulipchat.com), Lean Zulip (leanprover.zulipchat.com), Idris Zulip (idris-lang.zulipchat.com) ...

2 weeks ago 1 0 1 0

1) Mathstodon/BlueSky. In my experience it's more 'academic' than 'industrial', and compared to old Twitter there's considerably less discussion of the underlying technology here, and more about societal impacts

2 weeks ago 1 0 1 0

Thanks for reaching out! Unfortunately, after the exodus from Twitter the community became, in my experience, more disjoint than before. Nonetheless, there are still many hubs where researchers congregate. These are mostly:

2 weeks ago 2 0 1 0

Where are the nuanced left-wing takes on modern AI and LLMs?

So much of the discourse around this tech is centered on rejecting it because of who currently owns it. But like all tech, it can be used for both oppression and liberation.

Who is focusing on the latter?

3 weeks ago 17 2 4 0
julesh (@julesh@mathstodon.xyz) New blog post: Sequents for sequence II: Balancing the strangeness budget Ending with a teaser reveal https://julesh.com/posts/2026-03-23-sequents-sequence-ii.html

Lots of exciting stuff has been happening at GLAIVE:

https://mathstodon.xyz/@julesh/116279930318085086

mathstodon.xyz/@Andrev@types.pl/1162557...

3 weeks ago 0 0 0 0

Discworld QOTD, from Monstrous Regiment

“Stopping a battle is much harder than starting it. Starting it only requires you to shout ‘Attack!’ but when you want to stop it, everyone is busy.”

1 month ago 1151 316 9 11
Post image

Masterful sequence of quotes in Mike and Ike which I never noticed before

1 month ago 25 4 2 0

never, ever, ever, ever accept "how will you pay for it?" as an argument against social programs.

1 month ago 7202 2613 20 35
Preview
julesh (@julesh@mathstodon.xyz) Attached: 1 image New blog post! Autodiff through function types: Categorical semantics the ultimate backpropagator https://julesh.com/posts/2026-02-20-categorical-semantics-ultimate-backpropagator.html

RE: https://mathstodon.xyz/@julesh/116103778140182700

Some more work I've been a part of:

2 months ago 2 0 0 0
Post image

New blog post!

Autodiff through function types: Categorical semantics the ultimate backpropagator

julesh.com/posts/2026-02-20-categor...

2 months ago 13 3 1 0

The transition from “AI can’t do novel science” to “of course AI does novel science” will be like every other similar AI transition.

First the over-enthusiastic claims that are debunked, then smart people use AI to help them, then AI starts to do more of the work, then minor discoveries, & then…

2 months ago 103 12 3 2

there is a widespread belief among people with even a little technical and scientific savvy that alchemy was a bunch of hooey, that alchemists never published their experiments, that each had to discover on their own not to drink mercury, etc

but this is an urban legend!

2 months ago 89 31 1 5
Post image

This is not what is happening at all.

The amount of misinformation on BlueSky about AI is insane, and it keeps insisting that AI is all hype that will go away soon.

A really dangerous position that cedes all AI policy and decisions about how it will be used to others.

Also Futurism is clickbait

2 months ago 274 40 20 7
numpy.ndarray — NumPy v2.5.dev0 Manual

That's not what ndarrays are: they're homogeneous arrays, where every element has the same type and size: numpy.org/devdocs/refe...
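The homogeneity claim is directly observable in NumPy itself: an ndarray carries exactly one dtype, and mixed Python inputs are coerced to it.

```python
import numpy as np

# An ndarray has exactly one dtype: mixed inputs are upcast to a common
# type rather than stored heterogeneously.
a = np.array([1, 2.5, True])
assert a.dtype == np.float64      # int and bool both upcast to float
assert a[2] == 1.0                # the True became 1.0

# Even a "mixed" object array is homogeneous in NumPy's sense:
# one dtype (object) for every element, with the values boxed.
b = np.array([1, "x"], dtype=object)
assert b.dtype == object
```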

2 months ago 0 0 1 0

"it's impossible to reproduce ndarray in a type safe way." What part of it is impossible to reproduce?

2 months ago 0 0 1 0

This is a catch-22: nobody works on non-cubical tensors because they're slow, and they're slow because nobody works on them.

One of the goals of TensorType is to try out machine learning with them. If something new works, that'll be a good incentive to make them fast.

2 months ago 1 1 0 0