
Posts by François Fleuret

The vocabulary corresponding to the logits

7 months ago

7 months ago

- Ring Attention: takes advantage of multi-node hardware to scale the computation according to the sequence length

- Speculative decoding: a cheaper model generates tokens, and a rejection process corrects this generation to match the full-model distribution.
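A minimal pure-Python sketch of the accept/reject rule that speculative decoding relies on (toy distributions over a small vocabulary; the function names are mine, not from any particular implementation). Accepting a draft token with probability min(1, p/q) and resampling rejections from the normalized residual max(0, p - q) is what makes the combined scheme match the full-model distribution exactly:

```python
import random

def speculative_accept(p, q, token, rng=random.random):
    """Accept a token drawn from the draft distribution q with
    probability min(1, p[token] / q[token]), where p is the full
    (target) model's distribution."""
    ratio = p[token] / q[token]
    return rng() < min(1.0, ratio)

def residual_distribution(p, q):
    """Normalized max(0, p - q): the distribution the full model
    resamples from when the draft token is rejected."""
    r = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
    s = sum(r)
    return [ri / s for ri in r]
```

On rejection, sampling from `residual_distribution(p, q)` (rather than from `p` directly) is what keeps the overall output distribution equal to `p`.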

11 months ago

- Multi-token prediction: sums the training over multiple future tokens, possibly with additional readout heads.

- FlashAttention: computes the attention on the fly, avoiding a memory footprint O(T^2) (+ optimizes very carefully for the GPU!)
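The O(T^2) memory saving comes from the online-softmax recurrence: scores are consumed one at a time (block-wise on GPU) while keeping only a running max, a running normalizer, and a running weighted sum. A pure-Python sketch for a single query (illustrative only; the real kernel is tiled and fused):

```python
import math

def online_softmax_attention(q, keys, values):
    """Streaming attention for one query vector: never materializes
    the full T-long score vector, only O(1) running statistics."""
    m = float("-inf")             # running max of scores (stability)
    denom = 0.0                   # running softmax normalizer
    acc = [0.0] * len(values[0])  # running weighted sum of values
    for k, v in zip(keys, values):
        s = sum(qi * ki for qi, ki in zip(q, k))  # dot-product score
        m_new = max(m, s)
        scale = math.exp(m - m_new)  # rescale old stats to new max
        w = math.exp(s - m_new)
        denom = denom * scale + w
        acc = [a * scale + w * vi for a, vi in zip(acc, v)]
        m = m_new
    return [a / denom for a in acc]
```

The result is identical (up to floating-point error) to computing the full softmax over all scores and then averaging the values.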

11 months ago

- Warmup: very short ramping-up of the learning rate, starting from 0

- Cosine schedule: the learning rate varies less at the beginning and end of the schedule

- AdamW: decouples weight decay from Adam
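The warmup and cosine schedule combine into one function of the step index. A sketch of the common recipe (exact constants and the min-LR floor vary between codebases):

```python
import math

def lr_at(step, total_steps, warmup_steps, max_lr, min_lr=0.0):
    """Linear warmup from 0 to max_lr over warmup_steps, then cosine
    decay from max_lr down to min_lr over the remaining steps."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

Note the cosine shape: the derivative of cos vanishes at both ends, which is why the learning rate "varies less at the beginning and end of the schedule".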

11 months ago

- RoPE (Rotary Positional Embedding): makes the attention depend only on the relative Q/K positions

- MoE (Mixture of Experts): The FFN block is implemented with multiple MLPs and a gating mechanism selects which ones process each token.
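RoPE's relative-position property can be checked directly: rotating consecutive feature pairs by position-dependent angles makes the Q·K dot product depend only on the position difference. A pure-Python sketch on flat lists (real implementations operate on batched tensors):

```python
import math

def rope(x, pos, base=10000.0):
    """Rotate each consecutive pair (x[i], x[i+1]) by an angle
    pos * base**(-i/len(x)); the angle frequency decreases with i."""
    out = []
    for i in range(0, len(x), 2):
        theta = pos * base ** (-i / len(x))
        c, s = math.cos(theta), math.sin(theta)
        out += [x[i] * c - x[i + 1] * s, x[i] * s + x[i + 1] * c]
    return out

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))
```

Because each pair is rotated by a pure rotation, `dot(rope(q, p), rope(k, m))` depends only on `p - m`, not on the absolute positions.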

11 months ago

- RMSNorm instead of LayerNorm: normalizes only the scale (no mean subtraction)

- MLA (Multi-head Latent Attention): stores a low-rank projection of the attention block input and computes the K/V from it

- SwiGLU: non-linearity for the FFN block with per-component gating
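RMSNorm and SwiGLU are both small per-component operations; a pure-Python sketch (the gate and up projections of SwiGLU, W_gate·x and W_up·x, are assumed computed upstream by linear layers):

```python
import math

def rmsnorm(x, g, eps=1e-6):
    """RMSNorm: rescale by the root-mean-square only -- unlike
    LayerNorm there is no mean subtraction and no bias, just a
    learned per-component gain g."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [gi * v / rms for gi, v in zip(g, x)]

def silu(v):
    """SiLU (swish): v * sigmoid(v)."""
    return v / (1.0 + math.exp(-v))

def swiglu(x_gate, x_up):
    """SwiGLU gating for the FFN block: per-component product of a
    SiLU-activated gate projection with a linear up projection."""
    return [silu(g) * u for g, u in zip(x_gate, x_up)]
```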

11 months ago

- Prenorm: normalization in the residual blocks before the attention operation and the FFN respectively

- GQA (Grouped-Query Attention): more Q heads than (K, V) heads
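The GQA head-sharing pattern reduces to a single index computation (a sketch; the function name is mine). Query heads are split into groups, and every query head in a group attends against the same K/V head, shrinking the KV cache by the ratio of head counts:

```python
def kv_head_for(q_head, n_q_heads, n_kv_heads):
    """Grouped-Query Attention head mapping: query head q_head
    shares the K/V head of its group.  The KV cache shrinks by a
    factor n_q_heads / n_kv_heads relative to full multi-head."""
    assert n_q_heads % n_kv_heads == 0
    group_size = n_q_heads // n_kv_heads
    return q_head // group_size
```

With `n_kv_heads == n_q_heads` this degenerates to standard multi-head attention; with `n_kv_heads == 1` it is multi-query attention.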

11 months ago

I asked "on the other platform" what were the most important improvements to the original 2017 transformer.

That was quite popular and here is a synthesis of the responses:

11 months ago

"You are in Paris, enjoy the city, stop obsessing with AI"

Paris:

1 year ago

Yes, it's awesome. The kind of work that opens up a whole new and important field.

1 year ago

If your task is not resolution-agnostic, do not use normalized p-e.

All this being said, putting both normalized and non-normalized cannot hurt methinks.

1 year ago

You cannot be better off without p-e.

1 year ago

Why not a normalized positional encoding?

1 year ago

After a long lecture, I recommend a coffee, a pain au chocolat, and leave-me-the-fuck-alone time.

1 year ago

Maybe the wall was the friends we made during that journey Ted.

1 year ago

Why is it spooky?

1 year ago
Relational Norms for Human-AI Cooperation How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student...

I asked this because, even though I am interested in the topic, I have not so far come across a "foundational" theory regarding the future of society with AI.

Someone linked this paper which is exactly the sort of thing I was looking for:

arxiv.org/abs/2502.12102

1 year ago

What is the true depth of an LLM?

Together with @danielepal.bsky.social, @matpagliardini.bsky.social, M. Jaggi and @francois.fleuret.org we show that LLMs have a smaller effective depth, which can be exploited to increase inference speed in multi-GPU settings!

arxiv.org/abs/2502.02790
(1/N)

1 year ago

We can't complain, can we?

1 year ago
19h30 - Play RTS — Play RTS lets you watch or listen to many TV and radio programmes, whenever and as often as you like.

I was the guest on the 19h30 news programme on @radiotelesuisse.bsky.social this evening, talking about Artificial Intelligence.

www.rts.ch/play/tv/19h3...

1 year ago

To do so, you concatenate all the sequences to make a batch of a single sequence, and carve the attention matrix into a block-diagonal one (possibly with causal structure in each block) so that sequences cannot look at each other.
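The carving described above amounts to a mask that allows position i to attend to position j iff both indices fall in the same original sequence and j <= i. A pure-Python sketch of that mask (FlexAttention builds the equivalent structure from a `mask_mod` predicate instead of materializing a boolean matrix):

```python
def block_diag_causal_mask(lengths):
    """Attention mask for variable-length sequences concatenated
    into a single batch-of-one sequence: block-diagonal, causal
    within each block, so sequences cannot look at each other."""
    seq_id = [s for s, n in enumerate(lengths) for _ in range(n)]
    T = len(seq_id)
    return [[seq_id[i] == seq_id[j] and j <= i for j in range(T)]
            for i in range(T)]
```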

Magic!

3/3

1 year ago

It does this by generating an optimized CUDA kernel on the fly.

So it's cool for causal masks, but it also allows an amazing trick to deal with batches of sequences of various lengths *without padding*!

2/3

1 year ago

It is hard to overstate how cool and powerful FlexAttention is. @chhillee.bsky.social

pytorch.org/blog/flexatten…

TL;DR: it is an implementation of the attention operator in PyTorch that, in particular, allows efficiently "carving" the attention matrix.

1/3

1 year ago

I have to admit I am more on the other platform.

1 year ago

I'm very happy then! Thanks for the feedback.

1 year ago

Does it match your expectations?

1 year ago

The big city...

1 year ago

Finally got this beautiful piece from @francois.fleuret.org

1 year ago

Happy new year you all!

2025 is certainly full of promise.

1 year ago