The new ThoughtWorks Tech Radar reads like a warning letter. Cognitive debt. Broken productivity metrics. Terms nobody agrees on. If you've been reading my posts, none of it will surprise you. It's always nice to get evidence that I'm not completely crazy and making all of this up.
The IT consolidation was coming long before ChatGPT. AI just gave it a narrative. And the people who actually understood the systems? They're updating their LinkedIn.
"Are we building the right thing?" is a question as old as engineering itself. GenAI made it easier than ever to skip. Time to start asking again.
I have not seen many places over the last 26 years that invested in good requirements writing. A few, yes. And the better the requirements, the better the waterfall or Scrum or whatever. But since most people's crystal balls seem to be broken, the requirements suffered accordingly.
I have to say, the project mentioned in the post was not the worst waterfall. But it was also a project with a few hundred people overall and a lot of expertise and specialists.
Spec-Driven Development is the hot new thing in vibe coding. Write a perfect spec, let the agent work, sit back. Sound familiar? We called it Waterfall. We hated it. We sucked at it. Why do we think we're suddenly ready for it now?
I listened to a podcast about "assisted migration" for trees and plants. A fantastic topic for a systems thinker. I listened to an AI podcast just before that. And then a few uncomfortable synaptic connections clicked.
OpenAI valued at $852 billion without profit. Sora lost $1 million per day. Big Tech spends $700 billion yearly, and most of it vanishes into chips and electricity. IPOs are coming. Before you buy in, remember T-Online. That was considered a safe bet, too.
The S-to-P Jig is an advanced cognitive jig from DSRP. Scrum is a perfect example. And understanding it explains why good Scrum Masters adapt the framework instead of just following it.
"Where are your solutions?" Fair question. Honestly: I don't have any. What I have is tools for thinking. Systems thinking. Perspective shifts. I can't think for you. But I can show you ways to improve your thinking. I don't want you to share my thinking. That's my contribution. Small, but mine.
Heroes mask a system's weaknesses. When they're gone, things collapse. AI agents are the new heroes, flooding teams with output. But a healthy system doesn't need heroes. It needs balance.
Yesterday's garbage run by the river got me thinking. Biases aren't "just" glitches in our brains. They're stable systems with feedback loops. And they all have a social layer that makes them sticky. Find the loop, find the leverage point.
The simple answer wins. Always. Because we wanted it that way. We trained the algorithm, and now it trains us. Keep thinking. Leave the one-liners to the comedians.
I recently wrote another post on that topic, though not with a focus on tests. But you are absolutely right.
In a recent project I poked the AI to start writing unit test cases for me, setting up a scaffold. It chose to write 7 test cases for an enum with 4 values.
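Not the actual project code, but a minimal sketch of the pattern, with hypothetical names: a four-value enum where one exhaustive check does the job the agent spread across seven overlapping tests.

```python
from enum import Enum

class Status(Enum):  # hypothetical enum with 4 values
    NEW = 1
    ACTIVE = 2
    PAUSED = 3
    DONE = 4

# The agent wrote one test per value plus duplicates re-checking
# the same membership property. A single exhaustive test covers it:
def test_status_has_exactly_these_values():
    assert {s.name for s in Status} == {"NEW", "ACTIVE", "PAUSED", "DONE"}
    assert sorted(s.value for s in Status) == [1, 2, 3, 4]
```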
AI agents promise to write your tests for you. Sounds great until you realize: a test suite you didn't build is a test suite you don't understand. Don't let green checkmarks replace your own eyes.
AI agents are writing your test scripts now. So what exactly are you still doing here? If coding was the vehicle but you forgot the destination, you might be getting Pluto-ed. A post about dwarf planets, AI shepherds, and redefining your orbit.
AI coding agents produce the most probable code. Not the best. Not the most elegant. The most average. Without skill to guide them, you'll get a codebase that works, passes tests, and slowly becomes a maintenance nightmare. Your code deserves better.
A tester who doesn't think in systems is just poking at a surface. DSRP gave me a framework for what good testers do intuitively: draw boundaries, understand parts and wholes, stress relationships, switch perspectives. That's how you detect risks. That's the real job.
If your context works like that, then that's good. I have seen more than one context where test cases were an afterthought, even though they shouldn't have been.
Test cases describe a desired reality. But what does that mean through a systems thinking lens? I've taken a shot at explaining my train of thought.
12 o'clock in Germany. Time to raise the gas prices. Will we get a new all-time high? Here is a rule from systems thinking that helps make sense of situations like this: when the actual purpose of a system does not match its stated intention.
Bonus post today, because of reasons...
In woodworking, glue strength depends on surface area. In teams, those surfaces are the moments where people connect. When we automate them away, we don't just remove work. We remove the thing that held the structure together.
We love labeling things as good or bad. Especially when testing. But labeling while building your mental model contaminates your map. Observe first, evaluate later. A bonus to the Six Moves series.
Same reality, seen differently. The P-Circle lays out who is looking, from where, and what they see. The real power? Noticing whose perspective is missing. Part 6 of the Six Moves.
It would be nice to commit the agent's context along with the PR, so that you can ask questions of someone who knows what happened.
Nature hides its secrets in relationships. So do the systems we build. Part 5 of the Six Moves grabs the relationship arrows and asks: what is this made of?
AI used well makes you faster, not lazier. Feed it stack traces, let it draft scripts, use it to learn. But stay in the loop. The moment you stop looking at the output, you're a spectator, not a professional. Here are a few examples that work for me.
Microsoft, Google, Apple, Meta. And now OpenAI and Anthropic. In the age of AI, our dependency on Big Tech is accelerating by the day. Big Tech has found a new drug to make us even more dependent. And we... seem to accept it.
Often the beauty of a thing lies in its simplicity. AI lets people add more without much friction. But is this really useful?
Happy Friday rant from yours truly: Testers build trust by understanding thought processes. LLMs 'explain' themselves too. But is it real reasoning or post-hoc storytelling? And where are the observability folks when we need them most? Asking not only for a regulated industry.
Happy weekend!