
Posts by Benjie Wang

Anji Liu, Incoming Assistant Professor at NUS, working on tractable deep generative models.

🎓 Looking for PhD students, postdocs & interns!
I’m recruiting for my new lab at NUS School of Computing, focusing on generative modeling, reasoning, and tractable inference.
💡 Interested? Learn more here: liuanji.github.io
🗓️ PhD application deadline: June 15, 2025

11 months ago 19 9 0 1

What happens if we tokenize cat as [ca, t] rather than [cat]?

LLMs are trained on just one tokenization per word, but they still understand alternative tokenizations. We show that this can be exploited to bypass safety filters without changing the text itself.
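A toy, self-contained sketch of the point (the vocabulary here is made up, not a real tokenizer): two different token-id sequences can decode to exactly the same surface text, yet a model trained on only one of them may treat them differently.

```python
# Hypothetical three-entry vocabulary for illustration only.
vocab = {0: "cat", 1: "ca", 2: "t"}

def detokenize(ids):
    """Concatenate the string piece for each token id."""
    return "".join(vocab[i] for i in ids)

canonical = [0]       # [cat]   -- the tokenization seen in training
alternative = [1, 2]  # [ca, t] -- same text, different token ids

assert detokenize(canonical) == detokenize(alternative) == "cat"
print(detokenize(alternative))  # -> cat
```

Safety filters keyed to the canonical token sequence can miss the alternative one, even though the decoded text is identical.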

#AI #LLMs #tokenization #alignment

1 year ago 48 15 6 3

Also check out the awesome paper "Sum of Squares Circuits" (arxiv.org/pdf/2408.11778) by @loreloc_, Stefan Mengel, and @tetraduzione, which concurrently showed the separation between monotone and squared circuits. Also at AAAI 2025 today poster #840!

1 year ago 3 0 0 0

Inception PCs subsume monotone and squared PCs and are strictly more expressive than both. We show this leads to improved downstream modeling performance when normalizing for FLOPs:

1 year ago 4 0 1 0

To overcome these limitations, we propose Inception PCs, a novel tractable probabilistic model representing a deep *sum-of-square-of-sums*.

Inception PCs explicitly introduce two types of latent variables into the circuit for the mixtures encoded at sum nodes.
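A rough numeric sketch of that structure (shapes and symbols are my own, not the paper's code): the outer mixture weights a_z are non-negative, the inner sums may use arbitrary real weights w_{z,i}, and squaring the inner sums keeps the output non-negative by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

f = rng.normal(size=5)         # leaf outputs f_i(x) at some input x
W = rng.normal(size=(3, 5))    # inner real-valued weights w_{z,i}
a = np.array([0.5, 0.3, 0.2])  # outer non-negative mixture weights a_z

inner = W @ f                  # inner sums, one per latent state z
out = np.dot(a, inner ** 2)    # sum of squares of sums

assert out >= 0                # non-negativity holds by construction
print(out)
```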

1 year ago 3 0 1 0

We show that the reverse also holds (!!) - some tractable distributions expressed as monotone circuits cannot be compactly expressed as a square.

1 year ago 3 0 1 0
Preview: Subtractive Mixture Models via Squaring: Representation and Learning. "Mixture models are traditionally represented and learned by adding several distributions as components. Allowing mixtures to subtract probability mass or density can drastically reduce the number of c..."

On the other hand, squared circuits (arxiv.org/abs/2310.00724) allow the use of arbitrary real parameters by *squaring* the circuit output. It was previously proven that squared circuits can be exponentially more expressive than monotone circuits!
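A quick numpy illustration of the squaring trick (toy Gaussian components; the weights and means are my own choices): a mixture with a negative weight can dip below zero, but its square cannot.

```python
import numpy as np

def mixture(x, w, means):
    # sum_i w_i * N(x; mu_i, 1); the weights may be negative
    return sum(wi * np.exp(-0.5 * (x - mi) ** 2) / np.sqrt(2 * np.pi)
               for wi, mi in zip(w, means))

w, means = [1.0, -0.6], [0.0, 0.5]
xs = np.linspace(-5.0, 5.0, 1001)

raw = mixture(xs, w, means)  # goes negative: not a valid density
squared = raw ** 2           # non-negative everywhere; renormalize to get a density

assert raw.min() < 0 and squared.min() >= 0
```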

1 year ago 3 0 1 0

Probabilistic circuits are deep *tractable* probabilistic models that allow efficient and exact computation of marginals.

Traditionally, monotone circuits enforce non-negativity by using non-negative weights.
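A tiny hand-built example of both points (the circuit and leaf tables are invented for illustration): with normalized leaf distributions, marginalizing a variable out of a monotone circuit amounts to replacing its leaves by 1, i.e. a single feed-forward pass.

```python
# p(x1, x2) = 0.4 * f1(x1) * g1(x2) + 0.6 * f2(x1) * g2(x2)
f1 = {0: 0.9, 1: 0.1}  # leaf distributions over X1
f2 = {0: 0.2, 1: 0.8}
g1 = {0: 0.5, 1: 0.5}  # leaf distributions over X2
g2 = {0: 0.3, 1: 0.7}

def joint(x1, x2):
    return 0.4 * f1[x1] * g1[x2] + 0.6 * f2[x1] * g2[x2]

def marginal_x1(x1):
    # each X2 leaf sums to 1 over its support, so just substitute 1
    return 0.4 * f1[x1] * 1.0 + 0.6 * f2[x1] * 1.0

# matches brute-force summation over x2
for x1 in (0, 1):
    assert abs(marginal_x1(x1) - sum(joint(x1, x2) for x2 in (0, 1))) < 1e-12
```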

Paper: arxiv.org/abs/2408.00876

1 year ago 2 0 1 0

Circuits are generative models that use sum-product computation graphs to model probability densities. But how do we ensure the non-negativity of the output?

Check out our poster "On the Relationship between Monotone and Squared Probabilistic Circuits" at AAAI 2025 **today**: 12:30pm-2:30pm, poster #841.

1 year ago 21 3 1 0

Want to turn your state-of-the-art diffusion models into ultra-fast few-step generators? 🚀
Learn how to optimize your time discretization strategy—in just ~10 minutes! ⏳✨
Check out how it's done in our Oral paper at ICLR 2025 👇

1 year ago 15 4 0 0

If you are interested in doing a #PhD with me at Imperial College London and qualify as a home student, please reach out (before end of 2024)! Potential topics: spatial statistics, applied deep generative models, probabilistic programming and more.

1 year ago 7 5 0 0

Thanks Devendra!

1 year ago 1 0 0 0

Thanks to my amazing co-authors Denis Mauá, @yjchoi1.bsky.social, @guyvdb.bsky.social. Hope to see you at the poster session!

1 year ago 2 0 1 0
Tractability results on case studies

Along the way we also show a bunch of other cool results, like:
- More efficient algorithms for causal inference on circuits
- New circuit properties
- Separation/hardness results

1 year ago 3 0 1 0
Table depicting the atlas of tractability conditions

Building upon the prior PC atlas (proceedings.neurips.cc/paper_files/... ), our algebraic atlas provides a comprehensive approach for deriving **efficient algorithms** and **tractability conditions** for arbitrary compositional queries.

Try our atlas the next time you come across a new query!

1 year ago 2 0 1 0
PASP query as a composition

Just as circuits serve as a unifying representation of models, we show how you can express many queries as compositions of just a few basic operations: aggregation (marginalization, max, etc.), product, and elementwise mappings.

1 year ago 2 0 1 0
Illustration of Probabilistic Circuit

Circuits are a unifying representation of probability distributions as a computation graph of sums and products. Here we consider the more general algebraic circuits, where sum/product is replaced with a semiring operation (think e.g. OR and AND for Boolean circuits).
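A minimal sketch of the semiring view (the circuit structure and leaf values are invented here): evaluating the same computation graph with (sum, product) versus (max, product) yields a marginal-style versus a MAP-style quantity.

```python
def eval_circuit(add, leaves):
    # fixed circuit structure: add(0.4 * l1 * l3, 0.6 * l2 * l4);
    # only the aggregation operator changes between semirings
    l1, l2, l3, l4 = leaves
    return add([0.4 * l1 * l3, 0.6 * l2 * l4])

leaves = [0.9, 0.2, 0.5, 0.3]
likelihood = eval_circuit(sum, leaves)  # sum-product semiring
map_value = eval_circuit(max, leaves)   # max-product semiring
```

Swapping in Boolean OR/AND over 0/1 leaves would similarly turn the same graph into a satisfiability-style Boolean circuit.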

1 year ago 2 0 1 0

You have some model/knowledge (e.g. Bayes Net, Probabilistic Circuit, Probabilistic/Logic Program, DB) and some query (e.g. MAP, Causal Adjustment) you want to ask. When can you compute this efficiently?

Find out @ NeurIPS today in Poster Session 6 East, #3801.

Paper: arxiv.org/abs/2412.05481

1 year ago 18 4 1 0

Hi! I work on probabilistic ML & tractable models.

1 year ago 0 0 0 0