We're glad to announce the NeSy 2025 Test of Time award for "Probabilistic Inference Modulo Theories"!
Rodrigo de Salvo Braz was here to accept the award.
This work laid the groundwork for recent NeSy approaches like DeepSeaProbLog and the probabilistic algebraic layer.
Posts by Lorenzo Loconte
EurIPS is coming! Mark your calendar for Dec. 2-7, 2025 in Copenhagen.
EurIPS is a community-organized conference where you can present accepted NeurIPS 2025 papers. It is endorsed by @neuripsconf.bsky.social and @nordicair.bsky.social, and co-developed by @ellis.eu
eurips.cc
We propose Neurosymbolic Diffusion Models! We find diffusion is especially compelling for neurosymbolic approaches, combining powerful multimodal understanding with symbolic reasoning.
Read more below.
This year I am co-organizing the 8th iteration of the Tractable Probabilistic Modeling #TPM workshop at #UAI2025, Rio de Janeiro edition!
lnkd.in/dDK8T5Au
Submission deadline: May 23rd AoE
Date: July 15th
Today we have @lennertds.bsky.social from KU Leuven teaching us how to adapt NeSy methods to deal with sequential problems.
Super interesting topic combining DL + NeSy + HMMs! Keep an eye on Lennert's future work!
Have you ever considered that, in computer memory, model weights are stored as discrete values anyway? So why not do probabilistic inference directly on the discrete (quantized) parameters? @trappmartin.bsky.social is presenting our work at #AABI2025 today. [1/3]
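A minimal sketch of the general idea (a hypothetical toy setup, not the paper's actual method): once parameters live on a finite quantization grid, the posterior over a small model can be computed by exact enumeration rather than approximation.

```python
import numpy as np

# Hypothetical toy model: a single quantized weight w on a 3-bit grid,
# observations y = w * x + Gaussian noise, uniform prior over the grid.
grid = np.linspace(-1.0, 1.0, 8)           # 8 = 2**3 quantization levels
x = np.array([0.5, 1.0, 1.5])
y = np.array([0.26, 0.49, 0.77])           # noisy observations of w ~ 0.5

def log_lik(w, sigma=0.1):
    return -0.5 * np.sum((y - w * x) ** 2) / sigma**2

# Exact discrete posterior by enumerating every quantized value.
logp = np.array([log_lik(w) for w in grid])
post = np.exp(logp - logp.max())
post /= post.sum()

print(grid[np.argmax(post)])  # posterior mode on the grid
```

With continuous weights this enumeration is impossible; quantization is what makes the sum finite.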
the #TPM Tractable Probabilistic Modeling Workshop is back at @auai.org #UAI2025!
Submit your works on:
- fast and #reliable inference
- #circuits and #tensor #networks
- normalizing #flows
- scaling #NeSy #AI
...& more!
deadline: 23/05/25
tractable-probabilistic-modeling.github.io/tpm2025/
great to have David Watson (dswatson.github.io) visiting us today and talking about #trustworthy #AI #ML for tabular data with #trees and #circuits
with connections to #generative modeling, #causality and #fast inference!
You can find the speakers' bios and the abstracts of the presentations here: april-tools.github.io/colorai/spea...
Check them out!
The last speaker of the workshop is Alexandros Georgiou, who is giving an introduction to polynomial networks and equivariant tensor network architectures, as well as how to implement them.
After lunch break, Andrew G. Wilson (@andrewgwils.bsky.social) is now giving his presentation on the importance of linear algebra structures in ML, as well as on how to navigate such structures in practice.
After Nadav, it is now the turn of Guillaume Rabusseau, who is joining us online.
Guillaume guides us through interesting expressiveness relationships between families of RNNs that are parameterized through tensor factorization techniques.
Live from the CoLoRAI workshop at AAAI
(april-tools.github.io/colorai/)
Nadav Cohen is now giving his talk on "What Makes Data Suitable for Deep Learning?"
Tools from quantum physics are shown to be useful in building more expressive deep learning models by changing the data distribution.
we're almost ready for the @realaaai.bsky.social #AAAI25 Workshop on Connecting Low-rank Representations in #AI (#CoLoRAI) tomorrow!
we also have video presentations for some of the accepted papers you can already check (π april-tools.github.io/colorai/acce...)!
www.youtube.com/watch?v=JlVd...
We all know backpropagation can calculate gradients, but it can do much more than that!
Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.
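As a toy illustration of the kind of thing backprop can do beyond gradients (a classic fact about probabilistic circuits, not the paper's specific construction): differentiating a network polynomial with respect to an input indicator yields a marginal probability. A minimal numpy sketch:

```python
import numpy as np

# Joint distribution over two binary variables X, Y (rows: x, cols: y).
p = np.array([[0.1, 0.2],
              [0.3, 0.4]])

def network_poly(lam_x, lam_y):
    """Network polynomial: sum over assignments of p(x,y) times indicator weights."""
    return sum(p[x, y] * lam_x[x] * lam_y[y] for x in range(2) for y in range(2))

# With all indicators set to 1, the polynomial evaluates to the total mass 1.
lam_x = np.ones(2); lam_y = np.ones(2)
assert abs(network_poly(lam_x, lam_y) - 1.0) < 1e-12

# The derivative w.r.t. the indicator of X=1 is the marginal P(X=1).
eps = 1e-6
lam_x_plus = np.array([1.0, 1.0 + eps])
grad = (network_poly(lam_x_plus, lam_y) - network_poly(lam_x, lam_y)) / eps
print(grad)  # ≈ P(X=1) = 0.3 + 0.4 = 0.7
```

The same derivative that a gradient pass computes for training here reads off an inference quantity for free.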
We are going to present our poster "Sum of Squares Circuits" at AAAI in Philadelphia today.
Hall E, 12:30pm-2:00pm, poster #840
We trace expressiveness connections between different types of additive and subtractive deep mixture models and tensor networks.
arxiv.org/abs/2408.11778
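A hypothetical 1D illustration of the "sum of squares" idea (not the paper's construction): a subtractive mixture with a negative weight can dip below zero, but squaring it yields a nonnegative function that renormalizes into a valid density.

```python
import numpy as np

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

xs = np.linspace(-8.0, 8.0, 4001)
dx = xs[1] - xs[0]
f = 1.0 * gauss(xs, -1.0, 1.0) - 0.6 * gauss(xs, 1.0, 1.0)  # subtractive mixture: goes negative
sq = f ** 2                     # squared "circuit": nonnegative everywhere
density = sq / (sq.sum() * dx)  # renormalize numerically

assert f.min() < 0 and density.min() >= 0
print(density.sum() * dx)  # 1.0 (up to floating point)
```

Negative weights buy extra expressiveness (e.g. carving a dip between modes) that purely additive mixtures need many more components to match.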
Are you at AAAI in Philadelphia and interested in #tensor factorizations, #circuits, or even both?
Then join us today at our tutorial: "From tensor factorizations to circuits (and back!)"
Details and materials here
april-tools.github.io/aaai25-tf-pc...
Time: 4:15pm-6:00pm, Room 117
I am at @realaaai.bsky.social #AAAI25 in sunny #Philadelphia!
reach out if you want to grab coffee and chat about #probabilistic #ML #AI #nesy #neurosymbolic #tensor #lowrank models!
check out our tutorial
april-tools.github.io/aaai25-tf-pc...
and workshop
april-tools.github.io/colorai/
Right, but what are Causal NFs again? In case you missed our NeurIPS 2023 Oral, Causal NFs are Deep Learning models that learn causal systems (SCMs) while having *theoretical guarantees*!
In short, you can accurately use them for causal inference tasks.
arxiv.org/abs/2306.05415
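A rough sketch of the core structural idea (a hypothetical linear SCM, not the paper's model or the library's API): the noise-to-data map of an SCM is triangular in the causal order, and fixing a variable while reusing the downstream mechanisms gives interventional samples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-variable linear SCM: x1 = u1, x2 = 0.8 * x1 + u2.
# The noise-to-data map is lower triangular, matching the causal order.
L = np.array([[1.0, 0.0],
              [0.8, 1.0]])           # triangular transformation of the noise
u = rng.standard_normal((100_000, 2))
x = u @ L.T                          # observational samples

# Intervention do(x1 = 1.0): fix x1, push it through the same mechanism for x2.
x1_do = np.full(len(u), 1.0)
x2_do = 0.8 * x1_do + u[:, 1]

print(x[:, 1].mean(), x2_do.mean())  # E[x2] ≈ 0, E[x2 | do(x1=1)] ≈ 0.8
```

Causal NFs generalize this picture to learned nonlinear triangular maps, which is where the theoretical guarantees come in.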
Have you ever been curious to try Causal Normalizing Flows for your project but found them intimidating? Say no more.
I just released a small library to easily implement and use causal-flows:
github.com/adrianjav/ca...
Happy to see our work at TMLR!
We systematically show the relationships between two apparently different fields, tensor factorizations and circuits, and how bridging the two enables us to exchange results, research opportunities in ML, and practical implementation solutions.
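One direction of the bridge in a toy numpy sketch (my own illustrative example, not the paper's notation): a nonnegative rank-R CP factorization of a joint probability table is exactly a shallow mixture circuit, i.e. a sum unit over R product units.

```python
import numpy as np

rng = np.random.default_rng(0)

R, d = 4, 3                                  # CP rank; cardinality of each variable
w = rng.dirichlet(np.ones(R))                # mixture weights (the sum unit)
A = rng.dirichlet(np.ones(d), size=(3, R))   # per-variable factor distributions

# Circuit evaluation of p(x1, x2, x3): sum_r w_r * prod_i A[i, r, x_i]
def circuit(x):
    return sum(w[r] * np.prod([A[i, r, x[i]] for i in range(3)]) for r in range(R))

# Materializing the full tensor shows the two views agree and sum to 1.
T = np.einsum('r,ri,rj,rk->ijk', w, A[0], A[1], A[2])
print(abs(T[1, 2, 0] - circuit((1, 2, 0))), T.sum())
```

The circuit view evaluates any entry in O(R) products without ever building the exponentially large tensor, which is one of the "practical implementation solutions" the bridge transfers.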
Interested in estimating posterior predictives in Bayesian inference? Really want to know if your approximate inference "is working"?
Come to our poster at the NeurIPS BDU workshop on Saturday - see TL;DR below.
Does your model learn high-quality #concepts, or does it learn a #shortcut?
Test it with our #NeurIPS2024 dataset & benchmark track paper!
rsbench: A Neuro-Symbolic Benchmark Suite for Concept Quality and Reasoning Shortcuts
What's the deal with rsbench?
I wanted to make my first post about a project close to my heart. Linear algebra is an underappreciated foundation for machine learning. Our new framework CoLA (Compositional Linear Algebra) exploits algebraic structure arising from modelling assumptions for significant computational savings! 1/4
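One concrete flavor of the savings such frameworks exploit (a plain-numpy sketch of the Kronecker identity, not CoLA's actual API): a matrix-vector product with A ⊗ B never needs the dense Kronecker matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 40
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
x = rng.standard_normal(m * n)

# Structured matvec: (A kron B) x = reshape(A @ X @ B.T) with X = reshape(x),
# costing O(mn(m+n)) instead of O(m^2 n^2) for the materialized product.
X = x.reshape(m, n)                    # row-major vec convention
fast = (A @ X @ B.T).reshape(-1)

dense = np.kron(A, B) @ x              # the naive dense route, for comparison
print(np.allclose(fast, dense))  # True
```

Compositional frameworks dispatch on such structure (Kronecker, low-rank, diagonal, sums of these) automatically, so the modelling assumption directly becomes the compute saving.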
@ropeharz.bsky.social and his pet dinosaur are on bsky!
follow him for #probabilistic #ML content!
My amazing collaborators @sbadredd.bsky.social and @e-giunchiglia.bsky.social have landed!
We're working on a Python library for accessible Neurosymbolic Learning called ULLER, which we plan to release soon.
White paper: arxiv.org/abs/2405.00532