
Posts by Patrick “Grumpy” Prill

The Tech Radar is Blinking Red

ThoughtWorks just dropped Volume 34 of their Tech Radar, and it reads less like a technology map and more like a warning letter. Several signals on the same screen, all pointing the same way. If you've been following my posts, none of them will surprise you. What's new is that one of the most respected consultancies in our industry is now saying it out loud.

The new ThoughtWorks Tech Radar reads like a warning letter. Cognitive debt. Broken productivity metrics. Terms nobody agrees on. If you've been reading my posts, none of it will surprise you. It's always nice to get evidence that I'm not completely crazy and making all of this up.

1 day ago

The Convenient Scapegoat?

When it takes a team two weeks to add a button, the problem isn't the button. And the solution isn't AI. I've been there. Literally. A simple button on a screen. Two calendar weeks. And if you're on the outside looking in, you think: what are these people doing all day? Are they incompetent? Lazy? You see a button. You think it's a button problem.

The IT consolidation was coming long before ChatGPT. AI just gave it a narrative. And the people who actually understood the systems? They're updating their LinkedIn.

2 days ago

Are We Building the Right Thing, or How GenAI Made Us Forget to Ask

I was listening to the A/B Testing podcast the other day, episode 229, where Alan Page had my friend Chris Armstrong on as a guest. At one point they were talking about Verification and Validation. Two concepts that have been around forever. Two simple questions, really. "Are we building the thing right?" That's verification. "Are we building the right thing?" That's validation.

"Are we building the right thing?" is a question as old as engineering itself. GenAI made it easier than ever to skip. Time to start asking again.

3 days ago

I have not seen many places over the last 26 years that invested in good requirements writing. A few, yes. And the better the requirements, the better the waterfall or scrum or whatever. But as most people's crystal ball seems to be broken, the requirements turned out accordingly.

3 days ago

I have to say the project mentioned in the post was not the worst waterfall. But it was also a project with a few hundred people overall and a lot of expertise and specialists.

3 days ago

The Waterfall Strikes Back

As I write this post in the first days of April 2026, the vibe coding community is buzzing about "Spec-Driven Development." Write a perfect specification, let the AI agents loose, sit back, and watch the magic happen. Revolutionary, right? For me, it feels like déjà vu from the 2000s. Back then, we called it Waterfall or V-Model. The idea was the same: define everything upfront, hand it over to the people who build it, and collect your finished product at the end.

Spec-Driven Development is the hot new thing in vibe coding. Write a perfect spec, let the agent work, sit back. Sound familiar? We called it Waterfall. We hated it. We sucked at it. Why do we think we're suddenly ready for it now?

5 days ago

Assisted Migration, or Why Forcing AI Into Your Ecosystem Is a Terrible Idea

I recently listened to a podcast about "assisted migration." Scientists are helping plants and trees migrate to new areas because climate change is happening too fast for natural evolution. A tree that thrives at a certain temperature profile can't just walk north when its habitat becomes inhospitable. And its seeds don't spread that quickly, either. Think acorns. So researchers search international catalogs, find similar species that already grow in warmer climates, and carefully introduce them to new regions.

I listened to a podcast about "assisted migration" for trees and plants. A fantastic topic for a systems thinker. I listened to an AI podcast just before that. And then a few uncomfortable synaptic connections clicked.

6 days ago

The AI Gold Rush, or It’s Time to Make Money

OpenAI just closed a $122 billion funding round, pushing their valuation to $852 billion. Let that sink in. A company that isn't profitable yet, valued higher than most countries' GDP. An IPO (Initial Public Offering, when a company sells shares to the public for the first time) is expected by the end of the year. Big Tech is planning to spend nearly $700 billion on AI this year alone.

OpenAI valued at $852 billion without profit. Sora lost $1 million per day. Big Tech spends $700 billion yearly, and most of it vanishes into chips and electricity. IPOs are coming. Before you buy in, remember T-Online. That was considered a safe bet, too.

1 week ago

Frameworks and Systems Thinking. How the S-to-P Jig Works.

Imagine you want to buy a new bike. Before you even walk into the shop, you already have a picture in your head. Where will you ride? City commute, gravel paths, weekend tours in the mountains? Will you carry luggage, and how much? How often? You build a little mental model of your situation. And then, almost without noticing it, you walk into the shop and use that model as a lens.

The S-to-P Jig is an advanced cognitive jig from DSRP. Scrum is a perfect example. And understanding it explains why good Scrum Masters adapt the framework instead of just following it.

1 week ago

Why I Rant Without Providing Solutions

"Why do you only point out problems? Where are your solutions?" It's a fair question. I ask it myself sometimes, scrolling through my own posts. Another rant about AI hype. Another frustrated observation about how we're losing craftsmanship. Another finger pointing at something that's broken. A bit more salt into some open wounds. And then... nothing. No neat five-step plan. No actionable takeaways.

"Where are your solutions?" Fair question. Honestly: I don't have any. What I have is tools for thinking. Systems thinking. Perspective shifts. I can't think for you. But I can show you ways to improve your thinking. I don't want you to share my thinking. That's my contribution. Small, but mine.

1 week ago

Are AI Coding Agents the New Heroes, and Why That’s Not a Compliment

Many teams have one. The hero. The person who stays late, works weekends, knows every corner of the codebase, and somehow holds everything together. Management loves them. Colleagues rely on them. But having a hero in your team is not a good sign. It's a symptom. Now, AI coding agents are stepping into that role. And nobody seems to notice the pattern repeating.

Heroes mask a system's weaknesses. When they're gone, things collapse. AI agents are the new heroes, flooding teams with output. But a healthy system doesn't need heroes. It needs balance.

1 week ago

Biases are Systems, or Why Knowing About Them Doesn’t Make Them Go Away

"In psychology and cognitive science, cognitive biases are systematic patterns of deviation from norm and/or rationality in judgment." - Wikipedia on List of cognitive biases

We love to treat biases like bugs in our brains. Little glitches in the wetware that we can patch once we know about them. "Oh, that's just confirmation bias," we say, as if naming the demon exorcises it.

Yesterday's garbage run by the river got me thinking. Biases aren't "just" glitches in our brains. They're stable systems with feedback loops. And they all have a social layer that makes them sticky. Find the loop, find the leverage point.

1 week ago

The Asymmetry of Explanation, or Why the Simple Lies Win

A populist makes a statement. Eight words. It fits on a bumper sticker. It goes viral. Millions nod along. You know it's wrong. You know it's an oversimplification. You want to counter it. But here's the thing. To explain why it's wrong, you need context. You need to draw out the system. You need several minutes, just to set up the picture before you can even get to the point.

The simple answer wins. Always. Because we wanted it that way. We trained the algorithm, and now it trains us. Keep thinking. Leave the one-liners to the comedians.

1 week ago

I wrote another post on that topic recently, but not with a focus on tests. But you are absolutely right.
In a recent project I poked the AI into writing unit test cases for me, setting up a scaffold. It chose to write 7 test cases for an enum with 4 values.
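To illustrate the kind of redundancy I mean, here is a minimal, hypothetical sketch in Python with pytest-style tests (the enum and test names are made up, not the project's actual code): a four-value enum needs only a couple of focused checks, yet a generated scaffold tends to pile on overlapping ones.

from enum import Enum

# Hypothetical stand-in for the four-value enum from the project.
class OrderStatus(Enum):
    NEW = "new"
    PAID = "paid"
    SHIPPED = "shipped"
    CANCELLED = "cancelled"

# Two focused tests already pin down the enum's contract:
def test_has_exactly_four_members():
    assert len(OrderStatus) == 4

def test_values_are_unique():
    values = [member.value for member in OrderStatus]
    assert len(values) == len(set(values))

# A generated scaffold, by contrast, tends to add near-duplicates such as
# test_new_is_not_paid or test_paid_is_not_shipped, each asserting a
# distinction the enum definition already guarantees.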

1 week ago

The Blind Confidence of AI-Generated Tests

There is a new promise floating around. Let an AI agent write your tests. Let it generate your automation. Let it take that burden off your shoulders so you can focus on the "important things." Sounds great, right? Except that it isn't. Not entirely, at least. Don't get me wrong, I'm not against using AI to support testing. I use it myself.

AI agents promise to write your tests for you. Sounds great until you realize: a test suite you didn't build is a test suite you don't understand. Don't let green checkmarks replace your own eyes.

1 week ago

Did Test Automation Engineers Just Get Pluto-ed?

Remember when Pluto got demoted? One day it was a planet, a proper member of the solar system. Kids learned it in school, it had its place in the lineup. Then in 2006, the International Astronomical Union decided that Pluto didn't quite meet the criteria anymore. Too small. Couldn't clear its orbit. Still there, still orbiting, but suddenly: dwarf planet. Thanks for your service.

AI agents are writing your test scripts now. So what exactly are you still doing here? If coding was the vehicle but you forgot the destination, you might be getting Pluto-ed. A post about dwarf planets, AI shepherds, and redefining your orbit.

2 weeks ago

The Most Average Codebase You’ve Ever Seen

Everyone is excited about AI coding agents. Claude Code, Cursor, Copilot, Codex, you name it. They write code, they refactor, they even debug. And yes, it's impressive to watch from the sideline. But let me ask you something: have you actually looked at the code they produce? Developers are knowledge workers. Not in the vague, corporate sense of the word. I mean literally.

AI coding agents produce the most probable code. Not the best. Not the most elegant. The most average. Without skill to guide them, you'll get a codebase that works, passes tests, and slowly becomes a maintenance nightmare. Your code deserves better.

2 weeks ago

Testing Is Systems Thinking

Testing is an important part of systems thinking. And systems thinking is an important part of the tester's job. So what does a (software) tester actually do when it comes to thinking in systems while testing? Not in the abstract, philosophical sense. Practically. What happens in your head when you sit down to test something? I've been chewing on this for a while, and DSRP from the Drs.

A tester who doesn't think in systems is just poking at a surface. DSRP gave me a framework for what good testers do intuitively: draw boundaries, understand parts and wholes, stress relationships, switch perspectives. That's how you detect risks. That's the real job.

2 weeks ago

If your context works like that, then this is good. I have seen more than one context where test cases were often an afterthought. Even though they shouldn't have been.

2 weeks ago

Systems Thinking and Test Cases

I want to try describing how I think systems thinking and test cases relate. This has been brewing in my head for a while, and it connects to the mental model concept I introduced in part 0 of the Six Moves series. According to Drs. Cabrera, a mental model (M) is the way we organize (O) information (I) about a system.

Test cases describe a desired reality. But what does that mean through a systems thinking lens? Here is my attempt at explaining my train of thought.

2 weeks ago

The Purpose of a System Is What It Does, or Why German Gas Prices Peak at Noon

Here in Germany, politicians wanted to help citizens cope with high gasoline prices. Their solution? A new regulation that allows gas stations to raise prices only once per day, at noon. The communicated intention was clear. Stop the wild price fluctuations, make prices more predictable, protect consumers. Sounds reasonable, right? Except the result is the opposite of what they said they intended.

12 o'clock in Germany. Time to raise the gas prices. Do we get a new all-time high? Here is a rule from systems thinking that helps us understand situations like this: when the purpose of a system does not match its intention.
Bonus post today, because of reasons...

2 weeks ago

AI Cannot Replace Glue Work

A colleague of mine left recently. She was one of the strongest connectors I've experienced in my career. The person who remembered what was discussed three meetings or five years ago. Who noticed when someone went quiet. Who made sure the right people talked to each other before things went sideways. She didn't have a fancy title for any of this.

In woodworking, glue strength depends on surface area. In teams, those surfaces are the moments where people connect. When we automate them away, we don't just remove work. We remove the thing that held the structure together.

2 weeks ago

Don’t Label While You Model – The Six Moves – Part 7

What? Part 7 of a six-part series? Here is a small bonus. There's a trap I keep falling into, so I want to share it; maybe it's useful for you, too. You're exploring a system, you're building your mental model, you're trying to understand what's going on. And then you stumble across something. Something unexpected. Something that feels wrong. And before you even finish processing what you just observed, your brain has already slapped a label on it.

We love labeling things as good or bad. Especially when testing. But labeling while building your mental model contaminates your map. Observe first, evaluate later. Bonus to The Six Moves series.

2 weeks ago

Looking From All Sides – P-Circle – The Six Moves – Part 6

This is Part 6 of a series where I apply six systems thinking moves to the AI landscape. In Part 5 we cracked open relationships with the RDS Barbell. Now we step back and ask a different question entirely. Not what, not how, but who. The sixth move from the DSRP framework is the P-Circle. You take a topic and you lay out all the perspectives around it.

Same reality, seen differently. The P-Circle lays out who is looking, from where, and what they see. The real power? Noticing whose perspective is missing. Part 6 of the Six Moves.

2 weeks ago

It would be nice to commit the context of the agent with the PR, so that you can ask questions to someone who knows what happened.

3 weeks ago

Crack Open the Arrow – RDS Barbell – The Six Moves – Part 5

This is Part 5 of a series where I apply six systems thinking moves to the AI landscape. In Part 4 we connected the parts and watched them interact. Now we pick up one of those connections and look inside. The fifth move from the DSRP framework is the RDS Barbell. Three letters, three steps. Start with a R…

Nature hides its secrets in relationships. So do the systems we build. Part 5 of the Six Moves grabs the relationship arrows and asks: what is this made of?

3 weeks ago

Responsible Use of AI to Gain Personal Efficiency

I've been warning about the dangers of AI a lot lately. And I stand by all of it. But I realized that I've been painting an incomplete picture. Because AI, used well, is genuinely useful. So let me try the other side for a change. The key question isn't whether to use AI. It's how. And the answer, as always: it depends!

AI used well makes you faster, not lazier. Feed it stack traces, let it draft scripts, use it to learn. But stay in the loop. The moment you stop looking at the output, you're a spectator, not a professional. Here are a few examples that work for me.

3 weeks ago

Is AI the New Drug, or How We Are Building Dependencies We Can’t Escape

I like AI. I use it every day. It helps me understand tricky things, write code, and capture ideas in new ways. And that's exactly what makes me nervous. Because there's a pattern here that I've seen before. Not with technology, but with any substance or habit that starts out making your life better and slowly, quietly, becomes something you can't function without.

Microsoft, Google, Apple, Meta. And now OpenAI and Anthropic. In the age of AI, our dependency on big tech is growing faster every day. Big tech has found a new drug to make us even more dependent. And we… seem to accept it.

3 weeks ago

More Is Not Better, or How AI Kills the Art of Leaving Things Out

There's a principle that every good craftsperson knows. Whether they work with wood, with code, or with words: the magic is in what you leave out. A well-turned wooden bowl isn't beautiful because you added more material or decoration. It's beautiful because you removed everything that wasn't the bowl. And you stopped adding when it was enough. Software used to work like that.

Often the beauty lies in the simplicity of a thing. AI allows people to add more without much friction. But is this really useful?

3 weeks ago

I Don’t Trust This Output Farther Than I Can Throw a Washing Machine

Over the last few months I had a weekly pair testing session with James Thomas. When I pair with James, we talk. A lot. Not about the weather or what we had for lunch, but about what we're doing and why. "I'm clicking here because I noticed this behaves differently when…" or "I want to see that, because there is a system coming later in the flow that…" The cursor moves, and the reasoning follows.

Happy Friday rant from yours truly: Testers build trust by understanding thought processes. LLMs 'explain' themselves too. But is it real reasoning or post-hoc storytelling? And where are the observability folks when we need them most? Asking not only for a regulated industry.
Happy weekend!

3 weeks ago