The vocabulary corresponding to the logits
Posts by François Fleuret
- Ring Attention: takes advantage of multi-node hardware to scale the computation according to the sequence length
- Speculative decoding: a cheaper model generates tokens, and a rejection process corrects this generation to match the full-model distribution.
- Multi-token prediction: sums the training loss over multiple future tokens, possibly with additional readout heads.
- FlashAttention: computes the attention on the fly, avoiding a memory footprint O(T^2) (+ optimizes very carefully for the GPU!)
- Warmup: very short ramping-up of the learning rate, starting from 0
- Cosine schedule: the learning rate varies less at the beginning and end of the schedule
- AdamW: decouples the weight decay from Adam
- RoPE (Rotary Positional Embedding): makes the attention depend only on the relative Q/K positions
- MoE (Mixture of Experts): The FFN block is implemented with multiple MLPs and a gating mechanism selects which ones process each token.
- RMSNorm instead of LayerNorm: normalizes only the scale, without centering
- MLA (Multi-head Latent Attention): stores a low-rank projection of the attention block input and computes the K and V from it
- SwiGLU: non-linearity for the FFN block with per-component gating
- Prenorm: normalization in the residual blocks before the attention operation and the FFN respectively
- GQA (Grouped-Query Attention): more Q heads than (K, V) heads
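Some of these are tiny to sketch in code. Here is a minimal pure-Python RMSNorm, a sketch under the assumption of a per-component learned gain g (names are illustrative, not from any particular library):

```python
import math

def rms_norm(x, g, eps=1e-6):
    # RMSNorm: rescale x by its root-mean-square.
    # Unlike LayerNorm, there is no mean subtraction: only the scale
    # is normalized, then each component is multiplied by its gain g_i.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g_i * v / rms for g_i, v in zip(g, x)]

y = rms_norm([1.0, -2.0, 3.0], [1.0, 1.0, 1.0])
```

With unit gains, the output has root-mean-square ~1, which is the whole point: cheaper than LayerNorm, and in practice just as effective for stabilizing training.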
I asked "on the other platform" what were the most important improvements to the original 2017 transformer.
That was quite popular and here is a synthesis of the responses:
"You are in Paris, enjoy the city, stop obsessing with AI"
Paris:
Yes, it's awesome. The kind of work that opens up a whole new and important field.
If your task is not resolution-agnostic, do not use normalized p-e.
All this being said, putting both normalized and non-normalized cannot hurt methinks.
You cannot be better off without p-e.
Why not a normalized positional encoding?
After a long lecture, I recommend a coffee, a pain au chocolat, and leave-me-the-fuck-alone time.
Maybe the wall was the friends we made during that journey, Ted.
Why is it spooky?
I asked this because even though I am interested in the topic, I have so far not come across "foundational" theory regarding the future of society with AI.
Someone linked this paper which is exactly the sort of thing I was looking for:
arxiv.org/abs/2502.12102
What is the true depth of an LLM?
Together with @danielepal.bsky.social , @matpagliardini.bsky.social, M. Jaggi and @francois.fleuret.org we show that LLMs have a smaller effective depth than their nominal one, which can be exploited to increase inference speed in multi-GPU settings!
arxiv.org/abs/2502.02790
(1/N)
We can't complain, can we?
I was the guest on the 7:30 pm news on @radiotelesuisse.bsky.social this evening to talk about Artificial Intelligence.
www.rts.ch/play/tv/19h3...
To do so, you concatenate all the sequences to make a batch of a single sequence, and carve the attention matrix into a block-diagonal one (possibly with causal structure in each block) so that sequences cannot look at each other.
Magic!
3/3
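A minimal pure-Python sketch of that mask logic, assuming we concatenate sequences of the given lengths into one long sequence (FlexAttention itself takes a boolean predicate over query/key indices and compiles it, rather than materializing the full matrix; the names here are illustrative):

```python
def doc_ids(lengths):
    # Assign each position in the concatenated sequence the id of
    # the original sequence ("document") it came from.
    ids = []
    for d, n in enumerate(lengths):
        ids.extend([d] * n)
    return ids

def block_causal_mask(lengths):
    # True where query q may attend to key k: same sequence,
    # and causal (k <= q) within it. This is exactly a
    # block-diagonal matrix with a causal structure in each block.
    ids = doc_ids(lengths)
    T = len(ids)
    return [[ids[q] == ids[k] and k <= q for k in range(T)]
            for q in range(T)]
```

For lengths [2, 3] this gives a 5x5 mask with a 2x2 and a 3x3 causal block on the diagonal, and zeros elsewhere, so the two sequences never attend to each other.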
It does this by generating an optimized CUDA kernel on the fly.
So it's cool for causal masks, but it also allows an amazing trick to deal with batches of sequences of various lengths *without padding*!
2/3
It is hard to overstate how cool and powerful flex attention is. @chhillee.bsky.social
pytorch.org/blog/flexatten…
TL;DR: it is an implementation of the attention operator in pytorch that, in particular, allows one to efficiently "carve" the attention matrix.
1/3
I have to admit I am more on the other platform.
I'm very happy then! Thanks for the feedback.
Does it match your expectations?
The big city...
Finally got this beautiful piece from @francois.fleuret.org
Happy new year you all!
2025 is certainly full of promise.