Really nice post! 💯
Posts by Adrián Javaloy
This great blog post summarizes my own experience: I once deleted code that Claude had written for me because I realized I understood nothing in that file.
🚨 Opportunity for #Neurosymbolic AI folks!
I’m looking for a PhD student or postdoc to join the 🇦🇹 FWF Cluster of Excellence Bilateral AI (think #NeSy++):
www.bilateral-ai.net
Feel free to reach out or share 🙌
hey, just had a quick look at the talk, pretty exciting stuff! well done! :)
This was a nice project in collaboration with @loreloc.bsky.social and @nolovedeeplearning.bsky.social
PS: As a bonus, I wrote a small summary of the paper in my blog: adrianjav.github.io/blog/2026/os...
See you in Rio! 🌴
More importantly, we show that this comes at no cost in performance, and we can even train non-structured-decomposable squared circuits*
(* That is, circuits which cannot be efficiently squared)
As a result, we can train really large squared circuits while saving both time and memory!
At 357M parameters, we (⟂) use:
- 12 GiB vs 18 GiB (33% reduction!)
- 0.29ms vs 0.52ms per iteration (44% faster!)
Yes, we can!
💡 We generalize both ideas and propose to use orthogonality constraints to parametrize *already normalized* squared circuits
That way, we completely avoid squaring them during training!
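A toy NumPy sketch of the idea (my own illustration, not the paper's actual parametrization; the discrete domain, feature matrix `F`, and weights `w` are all hypothetical): if the components are kept orthonormal and the weights unit-norm, the squared model sums to 1 by construction, with no normalizer to compute.

```python
import numpy as np

# Toy "squared model" p(x) = c(x)^2 with c(x) = sum_k w_k f_k(x)
# over a discrete domain of size D (all names illustrative).
rng = np.random.default_rng(0)
D, K = 64, 8

# Orthogonality sketch: QR gives orthonormal columns f_k, so the Gram
# matrix is the identity and Z = ||w||^2 = 1 automatically.
F, _ = np.linalg.qr(rng.normal(size=(D, K)))  # orthonormal features
w = rng.normal(size=K)
w /= np.linalg.norm(w)                        # unit-norm weights

p = (F @ w) ** 2   # already a valid distribution: p.sum() == 1 (up to float error)
```

Under this constraint, training can update `w` (and the orthonormal features) directly, without ever forming the squared, quadratically larger circuit.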
In the tensor network community, a similar issue can be avoided for specific cases using canonical forms
And in the circuit community, determinism (i.e. non-overlapping supports) makes the square tractable, although it is too restrictive...
🤔 Can we expand on these ideas?
One way of increasing the expressiveness of probabilistic circuits is to square them (multiply a circuit with itself).
😔 However, this imposes a quadratic cost in the circuit size, as we need to re-normalize it to ensure that it encodes a valid probability distribution.
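Where the quadratic cost comes from, in a toy NumPy sketch (my own illustration; `F`, `w`, and the discrete domain are hypothetical): normalizing the square of a weighted sum requires a Gram matrix over all pairs of components, and this quadratic form must be recomputed whenever the parameters change.

```python
import numpy as np

# Toy model c(x) = sum_k w_k f_k(x) over a discrete domain of size D.
# The normalizer of p(x) ∝ c(x)^2 is a quadratic form in the weights:
# Z = sum_x c(x)^2 = w^T G w, with G_kj = sum_x f_k(x) f_j(x).
rng = np.random.default_rng(0)
D, K = 64, 8
F = rng.normal(size=(D, K))   # arbitrary (non-orthogonal) features f_k
w = rng.normal(size=K)

G = F.T @ F                   # Gram matrix: all K^2 pairwise overlaps
Z = w @ G @ w                 # normalizer, recomputed after every update
p = (F @ w) ** 2 / Z          # valid distribution: p.sum() == 1
```

In an actual circuit, the analogue of `G` is the squared circuit itself, which is quadratically larger than the original.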
I am a bit late to the party, but I am happy to share that our latest work was accepted to #ICLR2026 🥳🥳
📜 How to Square Tensor Networks and Circuits Without Squaring Them
arxiv.org/abs/2512.17090
Want to use your favourite #NeSy model but afraid of the reasoning shortcuts?🫣
Fear not💪🏻 In our #NeurIPS2025 paper we show that you just need to equip your favourite NeSy model with prototypical networks, and reasoning shortcuts will be a thing of the past!
📈 Tenerife Norte breaks its November temperature record: 33 °C on the 4th.
➡️ That far exceeds the previous maximum of 31 °C. Tenerife Norte has an 85-year record of observations.
Convince me that Dagstuhl seminars are real and not AI generated 😒
To: Reviewer 2
My name is Inigo Montoya
You killed my paper
Prepare to die
Does a smaller latent space lead to worse generation in latent diffusion models? Not necessarily! We show that LDMs are extremely robust to a wide range of compression rates (10-1000x) in the context of physics emulation.
We got lost in latent space. Join us 👇
Friday afternoon! Finally time to look back at a busy week, and ask oneself — "wait, what did I do, again?"
We are excited to bring #EurIPS 2025 to Copenhagen in December.
Consider becoming a sponsor and support us in making this inaugural event a success! Sponsorship packages are available and can be further customized if necessary.
Reach out if you have any questions ❔
Info: eurips.cc/become-spons...
It's been a while, but I am happy to share that my PhD dissertation is finally available online! 🎉
Not only does it contain most of my work, but there is also plenty of brand-new content:
publikationen.sulb.uni-saarland.de/handle/20.50...
🧵1/4
PS: If anything, just check it out for the aesthetics 😋 (I will release the LaTeX template soon)
publikationen.sulb.uni-saarland.de/handle/20.50...
🧵4/4
Funnily enough, I later found my perspective on soft constraints to be quite similar to that of soft inductive biases by @andrewgwils.bsky.social in one of his latest works:
arxiv.org/abs/2503.02113
🧵3/4
Also, I put considerable effort into framing everything under a common question:
> What biases can we add to DL optimization so that the outcome of the model is what we expected from the beginning?
🧵2/4
What a nice experience! Thank you everyone who attended TPM!
Particularly those who engaged in the poster sessions; rarely have I had so much fun discussing my poster!
likely one of the best editions of #TPM ever!
big thanks to @poorvagarg.bsky.social @jsleland.bsky.social @javaloyml.bsky.social @zzhe.bsky.social @lennertds.bsky.social Lingyun Yao and Christoph Staudt for organizing it
and to everyone who attended it!
My maternity leave project is now somewhat out: I made a Jupyter Book on the basics of ML that I teach at TUE. You can check it out here:
sibylse.github.io/TUEML/intro....
The linear algebra part is not fully written out yet and there are other to-dos, but maybe it helps someone with their own course design 😅
Last talk of the day for TPM!
@auai.org #TPM2025
we are nearly at the end of the day (banquet incoming), closing with an extremely lively poster session!
the conference cannot officially start without a proper reception 🍹