We're probably at peak research science. Low level stuff is getting automated, coding is basically reviewing, plots are beautiful, everyone gets a brainstorming buddy. Work is mostly ideation/planning. Soon it might become mostly meetings/auditing. Hope I'm wrong, work is pretty fun right now!
The AI community is re-learning 20 years of cybersecurity. The hard way. www.404media.co/exposed-molt...
Humming in denial as Material 3 takes over all my screens
Atrapanubes is such a good Chilean beer. Great taste, great art, great name.
Gotta wait until he double-crosses Indiana Jones to steal the Holy Grail, I'm afraid.
I wrote a CLI script to run PDFs through the new Mistral OCR API model (with some help from Claude) - details on that and notes on the new model here: https://simonwillison.net/2025/Mar/7/mistral-ocr/
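Roughly the kind of call such a script boils down to (a sketch, not Simon's actual code; the endpoint path, payload shape and response fields below are assumptions, so check his post and Mistral's API docs for the real details):

```python
# Hypothetical sketch: OCR a PDF by URL via the Mistral API.
import os, sys, requests

resp = requests.post(
    "https://api.mistral.ai/v1/ocr",  # assumed endpoint path
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-ocr-latest",  # assumed model name
        "document": {"type": "document_url", "document_url": sys.argv[1]},
    },
)
resp.raise_for_status()
for page in resp.json().get("pages", []):  # assumed response shape
    print(page.get("markdown", ""))        # one markdown blob per page
```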
We've upgraded Le Chat and it's blazing fast right now!
Also available for Android and iOS as of today
mistral.ai/en/news/all-...
Mistral Small 3 is also available on many partner platforms:
- Ollama: ollama.com/library/mist...
- Kaggle: kaggle.com/models/mistr...
- Fireworks: fireworks.ai/models/firew...
- Together: together.ai/blog/mistral...
And many more soon!
Performance of Mistral Small 3 Instruct model
huggingface.co/mistralai/Mi...
Mistral Small 3 Base model
huggingface.co/mistralai/Mi...
Mistral Small 3 architecture is optimised for latency while preserving high quality
We're releasing Mistral Small 3!
- 24B params, 81% MMLU
- Latency optimized: 150 tokens/s
- Competitive with Llama-3.3 70B, Qwen-2.5 32B, GPT4o-mini
- Apache 2.0
mistral.ai/news/mistral...
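For anyone who wants to poke at the weights locally, a minimal sketch with transformers (the repo id is a guess since the links above are truncated, so check the Hugging Face page; a 24B model in bf16 wants roughly 48 GB of GPU memory):

```python
# Minimal sketch; the repo id below is a guess, not copied from the truncated link.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"  # hypothetical id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Give me one fun fact about llamas."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```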
What people are going to do with AGI
Screen cap from one of the Thor movies featuring a dark-haired, pale-skinned woman as Thor's sister Hela. She has her hand out, stopping Thor's hammer (Mjölnir) in mid-air. The hammer is labeled "It's basic biology". Hela is labeled "Advanced Biology"
I know, but it's just an application of one of my favorite memes:
agent swarm framework aces spatial reasoning test
Inventors of flow matching have released a comprehensive guide going over the math & code of flow matching!
Also covers variants like non-Euclidean & discrete flow matching.
A PyTorch library is also released with this guide!
This looks like a very good read! 🔥
arxiv: arxiv.org/abs/2412.06264
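For a feel of what the guide builds from, here is the vanilla objective in a few lines of PyTorch: regress a velocity field onto the straight-line path between noise and data. This is just a toy sketch of the basic idea, not the API of the released library.

```python
# Toy conditional flow matching on a 2D "dataset" (a shifted Gaussian).
import torch
import torch.nn as nn

dim = 2
v_theta = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))
opt = torch.optim.Adam(v_theta.parameters(), lr=1e-3)

for step in range(1000):
    x1 = torch.randn(256, dim) + 3.0      # data samples (stand-in dataset)
    x0 = torch.randn_like(x1)             # noise samples
    t = torch.rand(x1.shape[0], 1)        # t ~ U(0, 1)
    xt = (1 - t) * x0 + t * x1            # point on the straight-line path
    target = x1 - x0                      # velocity of that path
    loss = ((v_theta(torch.cat([xt, t], dim=-1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# Sampling then just integrates dx/dt = v_theta(x, t) from t=0 to t=1.
```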
Jane Street, a quant trading firm, has a very good YouTube channel. For comparison, DeepSeek is also a quant trading firm.
They recently published a video on "Building Machine Learning Systems for a Trillion Trillion Floating Point Operations".
Link: www.youtube.com/watch?v=139U...
AI Scientists: here is a technology that will automate your grunt work so you can spend more time with your kids
AI Ads: here is a technology that will automate spending time with your kids
A dataset of 1 million or 2 million Bluesky posts is completely irrelevant to training large language models.
The primary use case for the datasets that people are losing their shit over isn't ChatGPT, it's social science research and developing systems that improve Bluesky.
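Rough back-of-envelope to put the scale in perspective (the tokens-per-post figure is a guess; the ~15T token budget is roughly what Llama 3 reported):

```python
posts = 2_000_000
tokens_per_post = 40                      # guess for short social posts
dataset_tokens = posts * tokens_per_post  # ~80 million tokens
pretraining_tokens = 15_000_000_000_000   # ~15 trillion, Llama-3-scale
print(f"{dataset_tokens / pretraining_tokens:.4%} of a pretraining run")
# -> about 0.0005%
```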
Arxiv sharing reminder
pdf ❌
abs ✅
In fact, statistical malpractice is the main driver of progress in machine learning. At some point, we need to come to terms with this.
FSDP2 has a different policy for handling streams that is also worth a read
github.com/pytorch/pyto...
READ: “3,337 Parisians were equipped with GPS trackers to record their journeys…for journeys from the outskirts of Paris to the center, the number of cyclists now far exceeds the number of motorists, a huge change from just 5 years ago.”
Evidence of leadership.
www.forbes.com/sites/carlto...
Comparison table of various AI models across different benchmarks: MathVista, MMMU, ChartQA, DocVQA, VQAv2, AI2D, and MM MT-Bench. Models are categorized into Open Weights, Closed, and Unreleased. Key models include Pixtral Large, Llama-3.2 90B, Gemini-1.5 Pro, GPT-4o, Claude-3.5 Sonnet, Llama-3.1 405B, and Grok-2. The table shows measured and reported performance scores, highlighting differences in model capabilities across various tasks. Pixtral Large excels in MathVista, DocVQA, AI2D and MM MT-Bench benchmarks.
Pixtral Large:
- 123B decoder, 1B vision encoder, 128K sequence length
- Frontier multimodal model
- Maintains text performance of Mistral Large 2
HF weights: huggingface.co/mistralai/Pi...
Try it: chat.mistral.ai
Blog post: mistral.ai/news/pixtral...
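If you'd rather hit it from code than chat.mistral.ai, something like the sketch below should be close; the model name, SDK method and image-message format are from memory and may be off, so treat them as assumptions and check the docs.

```python
# Hypothetical sketch of a multimodal request with the mistralai SDK.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
resp = client.chat.complete(
    model="pixtral-large-latest",  # assumed API model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this chart show?"},
            {"type": "image_url", "image_url": "https://example.com/chart.png"},
        ],
    }],
)
print(resp.choices[0].message.content)
```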
Two announcement cards from the Mistral AI team, dated November 18, 2024. The first card announces 'Mistral has entered the chat' with a brief description: 'Search, vision, ideation, coding... all yours for free.' The second card announces 'Pixtral Large' with the description: 'Pixtral grows up.' Both cards feature an orange 'Read More' button.
We have 2 new big updates today at Mistral:
- New Le Chat: With canvas, web search, image understanding and generation & more - and free!
- Pixtral Large, our frontier 124B open-weight multimodal model that powers it.
Try it: chat.mistral.ai
Blog post: mistral.ai/news/mistral...
There seems to be some renewed interest in making this work in the ML/AI space, so I'm here as well 👋
Here's my latest blog post for good measure, about how diffusion models of images perform autoregression in frequency space: sander.ai/2024/09/02/s...
When I write more, I'll share here as well!
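If I'm reading the argument right, the gist is that natural images have power spectra falling off steeply with frequency while Gaussian noise has a flat spectrum, so each noise level hides everything above some cutoff frequency. A toy sketch of that intuition (the 1/f² spectrum is a textbook approximation, not a number from the post):

```python
# Assume signal power S(f) ~ A / f**2 and white noise power sigma**2;
# a frequency is still "visible" when S(f) >= sigma**2, i.e. f <= sqrt(A)/sigma.
import math

A = 1.0                               # spectrum scale, arbitrary units
for sigma in [1.0, 0.3, 0.1, 0.03]:   # decreasing noise, as during sampling
    f_cut = math.sqrt(A) / sigma
    print(f"sigma={sigma:4.2f} -> content up to f ~ {f_cut:5.1f} is above the noise")
# The cutoff grows as sigma shrinks: denoising resolves coarse structure first
# and fine detail last, i.e. roughly autoregression over frequency bands.
```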
Quick thread in response to a question on token packing practices when pretraining LLMs!
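For context, the baseline most of these discussions start from is the simplest scheme: tokenize documents, join them with an EOS token, and slice the stream into fixed-length sequences. A sketch of that baseline (not necessarily what the thread itself recommends):

```python
# Naive "concat and chunk" packing; doc_token_ids / eos_id stand in for a
# real tokenizer's output.
def pack(doc_token_ids, eos_id, seq_len):
    stream = []
    for doc in doc_token_ids:
        stream.extend(doc)
        stream.append(eos_id)               # separate documents with EOS
    n = (len(stream) // seq_len) * seq_len  # drop the ragged tail
    return [stream[i:i + seq_len] for i in range(0, n, seq_len)]

sequences = pack([[5, 6, 7], [8, 9], [10, 11, 12, 13]], eos_id=0, seq_len=4)
# -> [[5, 6, 7, 0], [8, 9, 0, 10], [11, 12, 13, 0]]
```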