
Posts by Donatella Genovese


Please share it within your circles! edin.ac/3DDQK1o

1 year ago

Really cool paper by @kayoyin.bsky.social on the interpretability of in-context learning (ICL): they found that Function Vector (FV) heads are crucial for few-shot ICL.
www.arxiv.org/abs/2502.14010
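
A minimal sketch of the function-vector idea, under my own assumptions (stand-in tensors, not the paper's code): an FV is typically formed by averaging the outputs of a handful of attention heads over in-context prompts, and adding it to a hidden state steers a zero-shot prompt toward the few-shot behavior.

```python
import torch

# Hypothetical sizes: 100 cached ICL prompts, 4 "FV heads", 64-dim head outputs.
n_prompts, n_fv_heads, head_dim = 100, 4, 64

# Stand-in for cached outputs of the selected heads on in-context prompts.
head_outputs = torch.randn(n_prompts, n_fv_heads, head_dim)

# Average over prompts, sum over heads -> one function vector for the task.
fv = head_outputs.mean(dim=0).sum(dim=0)  # shape: (head_dim,)

# Adding the FV to a hidden state at inference steers zero-shot behavior.
hidden = torch.randn(head_dim)
steered = hidden + fv
```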

1 year ago

A really nice resource for understanding how to parallelize LLM training.

1 year ago

3/ Interleaving Concepts with Token Embeddings

🔹 Predicted concepts are compressed into a continuous vector 🎯
🔹 They are then inserted into hidden states alongside token embeddings 🧩
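
A minimal sketch of this mixing step, with hypothetical dimensions (the concept vector is simply prepended to the token hidden states here, purely for illustration):

```python
import torch
import torch.nn as nn

hidden_dim, n_concepts, seq_len = 768, 4096, 16  # hypothetical sizes

# Compress predicted concept probabilities into one continuous vector.
concept_probs = torch.softmax(torch.randn(n_concepts), dim=-1)
compress = nn.Linear(n_concepts, hidden_dim)
concept_vec = compress(concept_probs)  # shape: (hidden_dim,)

# Insert it alongside the token hidden states so later layers attend to it.
token_hidden = torch.randn(seq_len, hidden_dim)
mixed = torch.cat([concept_vec.unsqueeze(0), token_hidden], dim=0)
print(mixed.shape)  # torch.Size([17, 768])
```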

1 year ago

2/ Training the Model with Dual Objectives

🔹 Next-token prediction – the standard LLM training objective.
🔹 Concept prediction – the model learns to reproduce extracted concepts from its hidden state.
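
A minimal sketch of the combined objective, with hypothetical shapes and an assumed 0.1 weighting on the concept term:

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab, n_concepts = 2, 8, 1000, 4096  # hypothetical sizes

# Stand-ins for the model's two prediction heads and their targets.
token_logits = torch.randn(batch, seq_len, vocab)
token_targets = torch.randint(0, vocab, (batch, seq_len))
concept_logits = torch.randn(batch, seq_len, n_concepts)
concept_targets = torch.randint(0, n_concepts, (batch, seq_len))  # SAE-extracted

# Objective 1: standard next-token prediction.
lm_loss = F.cross_entropy(token_logits.reshape(-1, vocab),
                          token_targets.reshape(-1))
# Objective 2: reproduce the extracted concepts from the hidden state.
concept_loss = F.cross_entropy(concept_logits.reshape(-1, n_concepts),
                               concept_targets.reshape(-1))

loss = lm_loss + 0.1 * concept_loss  # the 0.1 weight is an assumption
```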

1 year ago

1/ Concept Extraction with SAE

🔹 A Sparse Autoencoder (SAE) extracts high-level concepts from the hidden states of a pretrained LLM.
🔹 Only the most important concepts are selected based on their attribution score (impact on model output).
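
A minimal sketch of this step, assuming hypothetical SAE sizes and using activation × gradient as the attribution score (my assumption for "impact on model output"):

```python
import torch
import torch.nn as nn

hidden_dim, n_concepts, top_k = 768, 4096, 32  # hypothetical sizes

# A linear SAE: encode a hidden state into sparse concept activations.
encoder = nn.Linear(hidden_dim, n_concepts)
decoder = nn.Linear(n_concepts, hidden_dim)

h = torch.randn(hidden_dim, requires_grad=True)  # pretrained LLM hidden state
acts = torch.relu(encoder(h))                    # sparse concept activations
recon = decoder(acts)                            # SAE reconstruction

# Attribution: each concept's contribution to a scalar model output
# (recon.sum() is a stand-in for the true output here).
grads = torch.autograd.grad(recon.sum(), acts)[0]
attribution = (acts * grads).abs()
top_concepts = attribution.topk(top_k).indices   # keep the most important
```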

1 year ago

🚀 Meta's new LLM pretraining framework predicts concepts and integrates them into its hidden state to enhance next-token prediction. 🚀

It achieves the same performance with 21.5% fewer tokens and better generalization! 🎯

πŸ“: arxiv.org/abs/2502.08524

1 year ago

A very interesting work that explores the possibility of a unified interpretation across multiple models.

1 year ago

*MoE Graph Transformers for Interpretable Particle Collision Detection*
by @alessiodevoto.bsky.social @sgiagu.bsky.social et al.

We propose a MoE graph transformer for particle collision analysis, with many nice interpretability insights (e.g., expert specialization).

arxiv.org/abs/2501.03432
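
Not the paper's code, but a minimal top-1 MoE routing sketch (hypothetical sizes) showing the mechanism: the router's per-input expert choices are what one inspects to find expert specialization.

```python
import torch
import torch.nn as nn

dim, n_experts, n_nodes = 64, 4, 10  # hypothetical sizes

experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
router = nn.Linear(dim, n_experts)

x = torch.randn(n_nodes, dim)             # e.g., per-particle node features
gate = torch.softmax(router(x), dim=-1)   # routing probabilities
choice = gate.argmax(dim=-1)              # top-1 expert per node

out = torch.stack([experts[e](x[i]) for i, e in enumerate(choice.tolist())])
# Tallying `choice` across many events reveals which experts specialize.
```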

1 year ago