
Posts by Matthias Plappert

📝 New blog post: You Have to Earn Your Calculator

On the temptation of LLMs, and why sometimes you still need to do the work by hand.

matthiasplappert.com/blog/2026/earn-your-calc...

3 weeks ago 1 0 0 0

📝 New blog post: Deepfakes for Code and the Asymmetric Internet

AI hasn't just given us a new kind of spam—it has broken the economics of noise. The filter exists, but only if you can afford it.

matthiasplappert.com/blog/2026/deepfakes-for-...

1 month ago 3 0 0 0

📝 New blog post: Let's Talk About the Humanoid Robot in the Room

General-purpose humanoid robots are like self-driving cars, but actually much harder.

matthiasplappert.com/blog/2026/humanoid-robot...

1 month ago 0 0 0 0
Post image

Is it just me or has Claude Code's UI really gone downhill recently? Just got this beauty...

1 month ago 0 0 1 0
Preview
Embrace your Laziness in the Age of AI – Matthias Plappert
AI agents remove the natural friction that kept us from overbuilding. Now you have to be your own regularizer.

I’ve decided to start blogging! The idea is to share short opinion pieces on topics in AI, machine learning and tech more broadly.

Here’s my first post 🎉

matthiasplappert.com/blog/2026/la...

2 months ago 1 0 0 0
Post image

o1 pro mode is pretty hilarious as well, especially step 2 (aka “optional wedge”)

1 year ago 1 0 0 0

RL always finds a way

1 year ago 1 0 0 0

How sure are we that the problem is only a software problem and not also a hardware limitation?

1 year ago 2 1 3 0

This man is asking the right questions. It's very much a hardware problem still, especially wrt reliability, power efficiency, and cost.

In that sense this humanoid robot hype is much worse than the self-driving cars one, because at least we knew how to build cars already.

1 year ago 2 0 0 0

I'm actually very confused by the GPT-4.5 release

1 year ago 0 0 0 0

We still don’t have broadly available self-driving cars but somehow humanoid robots for our homes are imminent. Yeah sure 🙄

1 year ago 3 0 1 0

Also, it has an unusual amount of non-fungible experiences to offer that I think will remain (and will probably increase) in value because they cannot be automated.

1 year ago 0 0 1 0

I’m actually quite bullish on Europe: I think it’s culturally well equipped to deal with the onset of increasing automation due to AI. We already have strong social security systems that can buffer the disruptions, and people’s lives are less focused on work.

1 year ago 1 0 1 0

This is cool, congrats! But the code benchmarks you use are non-standard and make it unnecessarily hard to compare these models to other ones. Why do you not report HumanEval, Codeforces, or SWE-bench performance?

1 year ago 0 0 0 0

What made you a fan? Have heard multiple people switching recently but I’m not sure why

1 year ago 1 0 1 0

So, even though Dario claims this to be expected, I think this changes the economics of the whole thing quite significantly, and not in favor of those who rely on massive outside investments (i.e. OpenAI and Anthropic). (6/6)

1 year ago 0 0 0 0

So how valuable really is the second part? Surely you have to discount this now as well.

(Side note: DeepSeek also intends to build AGI so the first point gets indirectly attacked as well because there’s more competition; Dario realizes this and wants more export control for this reason) (5/n)

1 year ago 0 0 1 0

The second angle is what DeepSeek very directly attacked: they give you a recipe for how to train a frontier model for $5.5M and the weights themselves for free.

They also became VERY popular with consumers VERY quickly: their app is still the most downloaded app on the App Store. (4/n)

1 year ago 0 0 1 0

So you have two possible angles: we’ll build AGI and this will be very valuable and/or what we’re building on the way is also very valuable.

The problem is that the first angle is very risky because you don’t know who will build AGI and when. So even though valuable, you need to discount it. (3/n)

1 year ago 0 0 1 0

(This is a key difference from Google / Amazon / Microsoft: they have massive revenue and sizable balance sheets and can finance their AI projects this way.)

But because they require massive outside investments, investors reasonably expect a return on investment. (2/n)

1 year ago 0 0 1 0
Preview
Dario Amodei — On DeepSeek and Export Controls

Essays like the one from Dario (darioamodei.com/on-deepseek-...) miss the point on why DeepSeek matters so much to investors. The key issue for OpenAI and Anthropic (and any other VC model company in this space) is that they absolutely require outside investments. (1/n)

1 year ago 1 0 1 0

Yeah sure, but the "foreign" phrasing is still misleading, as it makes this sound like some foreign adversary somehow undermined the US govt, which is clearly not the case. I think the issue is that you can purchase that much influence regardless of whether or not the purchaser was born in the US

1 year ago 1 0 3 0

I agree but the foreign part is strange; Musk is a US citizen and has been for a long time.

1 year ago 7 0 5 0
Preview
46% of Nvidia's Revenue Came From 4 Mystery Customers Last Quarter | The Motley Fool
Nvidia's incredible growth is increasingly reliant on just a handful of customers.

I don’t know, NVIDIA’s revenue is mostly driven by a very small number of big-spender customers, and if this cools the risk appetite of those companies (because someone can come along and do what you’ve been doing but with orders of magnitude less capex), that’s a problem for NVIDIA.

1 year ago 0 0 0 0
Preview
ggml : x2 speed for WASM by optimizing SIMD
PR by Xuan-Son Nguyen for `llama.cpp`: "This PR provides a big jump in speed for WASM by leveraging SIMD instructions for `qX_K_q8_K` and `qX_0_q8_0` dot product functions. …"

DeepSeek R1 appears to be a VERY strong model for coding - examples for both C and Python here: https://simonwillison.net/2025/Jan/27/llamacpp-pr/

1 year ago 40 14 2 0

OpenAI in particular still has an advantage of course because of its ChatGPT user base and brand.

These open-weight models are also still lagging behind on some features (no multi-modality or advanced voice mode, for example).

1 year ago 0 0 1 0

It also increasingly looks to me like these models will get commoditized very rapidly.

They also depreciate in value extremely quickly: arguably a frontier model is now worth only a few million dollars, yet it likely cost OpenAI et al. many times more than that only a few months ago.

1 year ago 0 0 1 0

I think these models likely caused a panic at places like OpenAI, Anthropic, Google and Meta: one of their main moats (having access to the biggest computers) seems weakened significantly.

It is also proof that much less well-capitalized players can be competitive. Others will notice and compete.

1 year ago 0 0 1 0

Finally got around to reading the DeepSeek-v3 and r1 papers in detail and I’m very impressed. They keep things simple and pragmatic but go deep and optimize when worthwhile.

I think access to compute remains key but they demonstrate how much more can be squeezed out when faced with constraints.

1 year ago 5 1 1 0
Preview
HuggingFaceFW/fineweb · Datasets at Hugging Face We’re on a journey to advance and democratize artificial intelligence through open source and open science.

huggingface.co/datasets/Hug...

1 year ago 2 0 0 0