Posts by Adrián Javaloy

Really nice post! 💯

2 weeks ago 2 0 0 0

This great blog post summarizes my own experience, where I deleted code that Claude had written for me because I realized that I understood nothing in the file.

2 weeks ago 5 1 0 0
Bilateral AI (Home): Bilateral AI is an Austrian Cluster of Excellence developing Broad AI by combining symbolic and sub-symbolic AI. Discover research, news and events.

🚨 Opportunity for #Neurosymbolic AI folks!

I’m looking for a PhD student or postdoc to join the 🇦🇹 FWF Cluster of Excellence Bilateral AI (think #NeSy++):
www.bilateral-ai.net

Feel free to reach out or share 🙌

3 weeks ago 4 3 0 0

hey, just had a quick look at the talk, pretty exciting stuff! well done! :)

3 weeks ago 2 0 1 0
Squaring tensor networks and circuits without squaring them | Adrián Javaloy: or how to effectively exploit orthogonality constraints

This was a nice project in collaboration with @loreloc.bsky.social and @nolovedeeplearning.bsky.social

PS: As a bonus, I wrote a small summary of the paper in my blog: adrianjav.github.io/blog/2026/os...

See you in Rio! 🌴

2 months ago 1 0 0 0

More importantly, we show that this comes at no cost in performance, and we can even train non-structured-decomposable squared circuits*

(* That is, circuits which cannot be efficiently squared)

2 months ago 2 0 1 0

As a result, we can train really large squared circuits while saving both time and memory!

At 357M parameters, we (⟂) use:
- 12 GiB vs 18 GiB (33% reduction!)
- 0.29ms vs 0.52ms per iteration (44% faster!)

2 months ago 1 0 1 0

Yes, we can!

💡 We generalize both ideas and propose to use orthogonality constraints to parametrize *already normalized* squared circuits

That way, we completely avoid squaring them during training!
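To give a flavor of why this works, here is a minimal toy sketch of my own (not the paper's actual construction; the names `Phi`, `c`, and the 8-state domain are all made up): if f(x) = Σᵢ cᵢ φᵢ(x) with orthonormal basis functions φᵢ and a unit-norm weight vector c, then Σₓ f(x)² = ‖c‖² = 1, so the squared model is normalized by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete "squared model" over 8 states: f(x) = sum_i c_i * phi_i(x),
# where the basis functions phi_i (columns of Phi) are orthonormal and the
# mixture weights c have unit norm -- a simple orthogonality constraint.
Phi, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # 8 states, 3 orthonormal basis functions
c = rng.standard_normal(3)
c /= np.linalg.norm(c)                              # unit-norm mixture weights

f = Phi @ c   # values of f on the 8 states
p = f ** 2    # squared model

# By construction sum_x f(x)^2 = c^T (Phi^T Phi) c = ||c||^2 = 1, so p is
# already a valid distribution: no explicit squaring/renormalizing needed.
print(p.sum())  # ≈ 1.0
```

The point of the sketch: the normalization constant never has to be computed, because the orthogonality constraints make it identically 1.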

2 months ago 1 0 1 0

In the tensor network community, a similar issue can be avoided for specific cases using canonical forms

And in the circuit community, determinism (i.e. non-overlapping supports) makes the square tractable, although it is too restrictive...

🤔 Can we expand on these ideas?

2 months ago 1 0 1 0

One way of increasing the expressiveness of probabilistic circuits is to square them (multiply a circuit with itself).

😔 However, this imposes a quadratic cost in the circuit size, as we need to re-normalize it to ensure that it encodes a valid probability.
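A toy illustration of that quadratic cost (my own simplified discrete example, not the paper's setup): the normalizer of a squared mixture of K components expands into K² pairwise cross terms.

```python
import numpy as np

rng = np.random.default_rng(1)

K, D = 4, 10                       # K components over a D-state domain
phi = rng.standard_normal((K, D))  # unnormalized component functions
c = rng.standard_normal(K)         # mixture weights

f = c @ phi                        # f(x) = sum_i c_i * phi_i(x)

# Renormalizing the squared model needs Z = sum_x f(x)^2, which expands into
# K^2 cross terms c_i * c_j * <phi_i, phi_j>: the quadratic cost of squaring.
G = phi @ phi.T                    # K x K Gram matrix of pairwise overlaps
Z = c @ G @ c
p = f ** 2 / Z                     # valid distribution over the D states
```

In a circuit, the analogue of this K × K Gram computation happens at every sum unit, which is where the quadratic blow-up in circuit size comes from.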

2 months ago 1 0 1 0
How to Square Tensor Networks and Circuits Without Squaring Them: Squared tensor networks (TNs) and their extension as computational graphs, squared circuits, have been used as expressive distribution estimators, while still supporting closed-form marginalization. However, ...

I am a bit late to the party, but I am happy to share that our latest work was accepted to #ICLR2026 🥳🥳

📜 How to Square Tensor Networks and Circuits Without Squaring Them

arxiv.org/abs/2512.17090

2 months ago 15 3 1 0

Want to use your favourite #NeSy model but afraid of the reasoning shortcuts?🫣

Fear not 💪🏻 In our #NeurIPS2025 paper, we show that you just need to equip your favourite NeSy model with prototypical networks, and reasoning shortcuts will be a thing of the past!

5 months ago 14 3 1 1

📈 Tenerife Norte breaks its November temperature record: 33 °C on the 4th.

➡️ It far exceeds the previous maximum of 31 °C. Tenerife Norte has an 85-year data series.

5 months ago 82 43 1 8

Convince me that Dagstuhl seminars are real and not AI generated 😒

5 months ago 1 0 0 0

To: Reviewer 2

My name is Inigo Montoya
You killed my paper
Prepare to die

7 months ago 54 6 1 0

Does a smaller latent space lead to worse generation in latent diffusion models? Not necessarily! We show that LDMs are extremely robust to a wide range of compression rates (10-1000x) in the context of physics emulation.

We got lost in latent space. Join us 👇

7 months ago 27 8 1 1

Friday afternoon! Finally time to look back at a busy week, and ask oneself — "wait, what did I do, again?"

7 months ago 12 2 1 0

We are excited to bring #EurIPS 2025 to Copenhagen in December.

Consider becoming a sponsor and support us in making this inaugural event a success! Sponsorship packages are available and can be further customized if necessary.

Reach out if you have any questions ❔
Info: eurips.cc/become-spons...

8 months ago 18 8 0 0
Publikationen der UdS: Meet my expectations: on the interplay of trustworthiness and deep learning optimization

It's been a while, but I am happy to share that my PhD dissertation is finally available online! 🎉

Not only does it contain most of my work, but there is also plenty of brand-new content:

publikationen.sulb.uni-saarland.de/handle/20.50...

🧵1/4

8 months ago 8 3 1 0
Publikationen der UdS: Meet my expectations: on the interplay of trustworthiness and deep learning optimization

PS: If anything, just check it out for the aesthetics 😋 (I will release the LaTeX template soon)

publikationen.sulb.uni-saarland.de/handle/20.50...

🧵4/4

8 months ago 1 0 0 0
Deep Learning is Not So Mysterious or Different: Deep neural networks are often seen as different from other model classes by defying conventional notions of generalization. Popular examples of anomalous generalization behaviour include benign overf...

Funnily enough, I later found my perspective on soft constraints to be quite similar to that of soft inductive biases by @andrewgwils.bsky.social in one of his latest works:

arxiv.org/abs/2503.02113

🧵3/4

8 months ago 2 0 1 0

Also, I put considerable effort into framing everything under a common question:

> What biases can we add to DL optimization so that the outcome of the model is what we expected from the beginning?

🧵2/4

8 months ago 0 0 1 0

What a nice experience! Thank you to everyone who attended TPM!

Particularly those who engaged in the poster sessions; I have rarely had so much fun discussing my poster!

8 months ago 8 0 0 0

likely one of the best editions of #TPM ever!

big thanks to @poorvagarg.bsky.social @jsleland.bsky.social @javaloyml.bsky.social @zzhe.bsky.social @lennertds.bsky.social Lingyun Yao and Christoph Staudt for organizing it

and to everyone who attended it!

8 months ago 13 2 0 0
Welcome to the Data Mining and Machine Learning Book — Data Mining and Machine Learning Jupyter Book

My maternity-leave project is now somewhat out: I made a Jupyter Book about the basics of ML that I teach at TUE. You can check it out here:
sibylse.github.io/TUEML/intro....
The linear algebra part is not yet fully written out and there are other todos, but maybe it helps someone with their own course design 😅

8 months ago 6 4 1 0

Last talk of the day for TPM!

@auai.org #TPM2025

8 months ago 12 2 0 0

we are almost ending the day (banquet incoming) with an extremely lively poster session!

8 months ago 16 4 0 0

the conference cannot officially start without a proper reception 🍹

9 months ago 13 4 0 0