Test coverage metrics are one of the tools for evaluating the quality of tests without reading them. Which is kind of relevant in the age of coding agents. So here's a short, light intro to 4 different ones: line, branch, path and condition coverage. radanskoric.com/articles/how...
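As a minimal illustration of why these metrics differ, here's a sketch of line vs. branch coverage on a made-up function (the method and values are invented for this example):

```ruby
# A guard clause with no `else`: a single test that passes a non-zero
# divisor executes every line (100% line coverage), yet only exercises
# one of the two branches of the `if`.
def safe_ratio(a, b)
  return 0 if b.zero? # branch 1: b == 0; branch 2: b != 0
  a / b
end

safe_ratio(10, 2) # executes every line, but never takes the zero branch
safe_ratio(10, 0) # needed on top of the above to reach 100% branch coverage
```

Path coverage is stricter still: with several independent branches you'd need a test per combination of branch outcomes, not just per branch.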
Posts by Radan
So, I switched to Ghostty terminal a week ago. Today I was surprised there's no scroll-back search. Then I noticed that I have an update to apply. But what are the chances that the very first time I encounter an issue it will be already fixed?!? Pretty good, apparently! ghostty.org/docs/install...
I'll be honest: very mixed feelings about this one.
I finally lost my shit about so-called AIs, LLMs, enshittification & everything fucking evil that @officialgrammarly.bsky.social is doing. I'm fucking furious, and not just about what 1 company has done. An open letter to Grammarly & the rest of the LLM hype machine www.moryan.com/an-open-lett...
I hope you responded with: "Do you have time to talk about the true meaning of MINASWAN?"
Can someone recommend a really good introductory book on "process engineering"? If it's math heavy that's fine, actually even preferred.
If you're looking for beta testers, please ping me. Of course, I'd first have to evaluate whether it's safe to deploy to production, but I would be interested in trying out a tool like this.
Looking forward to seeing what you come up with. These are all questions that I could very much use an answer to on most apps. It would even be useful as "just" sampling rather than full tracing.
Because of the performance overhead? Just a thought: can you remove coverage tracking on the first hit? Once a line is hit, you know it's not dead code, so you no longer need to track it. Obviously, I don't know what you're doing, so I'm just guessing.
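A toy sketch of the "stop tracking after the first hit" idea, at method granularity (the tool being discussed is unknown to me, so `OnceCoverage`, `Calculator` and everything else here are invented for illustration):

```ruby
# On the first call, the wrapper records the hit and then restores the
# original method, so subsequent calls pay no tracking overhead at all.
module OnceCoverage
  HITS = {}

  def self.track(klass, name)
    original = klass.instance_method(name)
    klass.define_method(name) do |*args, &blk|
      OnceCoverage::HITS[[klass, name]] = true
      # Self-removing instrumentation: put the original method back.
      klass.define_method(name, original)
      original.bind(self).call(*args, &blk)
    end
  end
end

class Calculator
  def double(x)
    x * 2
  end
end

OnceCoverage.track(Calculator, :double)
calc = Calculator.new
calc.double(3)                            # first call: hit recorded, wrapper removed
OnceCoverage::HITS[[Calculator, :double]] # => true
```

Real line-level tooling would presumably patch at a lower level (bytecode or VM hooks), but the principle is the same: the instrumentation only needs to fire once per site.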
Yeah, ok, that's fair enough. :)
Yeah, I'm with you on that one. I've made one-off migration scripts where I never even glanced at thousands of lines of code. The important difference was that I could verify the final output independently, so I didn't care about the process. But I don't buy the "never read code" crowd's arguments.
I would argue that you could do that only because of many years of prior experience working with other humans. I'm not a unique snowflake; you would be subconsciously pattern-matching me to various other humans you've worked with over the many years of your career. And you would be right to do that.
100% On the other hand, I spent so much time in my career cleaning up old code because someone (human) who just cared about making it work made it near impossible to add the next feature I needed to add.
Yes, my experience as well. As long as you give it a verification loop, it can find a working solution. But my main problems are: the code is often hard to extend (i.e. low maintainability) and/or it charged forward long after it should have stopped and questioned the original instructions.
So you are reviewing all of the code? You haven't bought into the "if you're still reading the code, you're doing it wrong" hype? To be clear, I'm still reviewing almost all of the code, I just make a judgement call on how carefully I will review it, from glance to deep.
Here's a concrete example: with human code I used to be able to glance at the layout and structure and get a good sense of the quality. For a human it's very hard to learn good structure without also learning other qualities. LLMs have no problem imitating good structure while making huge mistakes.
How did you conclude that you have very good intuitions for it? I don't want to deny your claim but it's a bold claim. It takes years to develop the intuition for human developers and we've had nowhere close to that time with coding agents. I get surprised all the time by their failure modes. :)
On the internet, nobody knows that your dog is doing your job, right? :D
True, but also, our intuition about what kind of mistakes humans make is pretty good, on account of us being humans. So I don't think we can carry on with mostly old practices, just with LLMs as partners instead of other humans. Our intuitions no longer apply; we need to find other methods.
100%, one of the most overlooked values that an expert (of any profession) brings to the table is the ability to ask the right questions.
Look at Joel's response to me; he's actually closer to what you're saying. And Kelsey's initial question was about new tech debt. That should be avoided, and how to do it is still an open question! But Joel is right that, given pre-existing tech debt, it is easier to clean it up with agents as helper tools.
That's totally my experience as well. In fact, everything I've seen points to them doing the best in codebases where humans also do the best. Which is not that surprising considering they are stochastically emulating our work.
You had me worried there for a moment! I 100% agree with what you just wrote, but there are people literally saying: "Don't read the code, don't care about tech debt, the agents are getting better and the next generation will clean up the mess, just one-shot from scratch". Which makes 0 sense to me.
I think I read your post about it. That makes perfect sense! But it's also very repetitive, relatively simple work, perfect for agents. But how do you go from that to: "I'm going to accumulate tech debt in a complex codebase faster than ever and then clean it all up with next generation of agents"?
I keep seeing this, but I don't understand what it's based on. From my experience, from talking to people and from controlled studies, agents amplify the state of the codebase. They work better and generate cleaner code in a low tech debt codebase. What's the argument that this is anything more than pure hope?
Can you elaborate on that? That has never been true so far for technical debt. Quite the opposite.
This is THE question of 2026. Everything else (IDE or pure agents, to review or not to review, focus on harness or model, spec driven or guided ...) has this question looming over it like a large cloud on the horizon that you're not yet sure is an incoming storm ...
100% Some parts of hand coding I can't wait to get rid of, but there are parts that give me so much joy. I still think this kind of work will be valuable, but not in the same places as before. We'll have to change how we build software, and I've decided to try and find a way that preserves it. We'll see.