After we use those a bunch, the patterns will become clearer.
Posts by Adam Jacob
This is a fantastic question. I don't know yet, is the answer (and I think it should be everyone's). My gut says every machine is unique given the domain, at least in terms of which adversaries, architecture, UAT needs, etc. I think we will find primitives folks can easily assemble.
I think it's actually much more forgiving. It still requires expertise to resolve - if you don't know what a good architecture is, or have the vocabulary to describe how the slop is wrong, it's very hard to untangle. But you can undoubtedly live with more than you could with teams historically.
That's traditionally been the signal for a refactor, right? That engineers struggle to extend the system, things take too long, the system feels brittle. AI writes the code faster, but it also understands messy code much better than we do, and cares not for the pace at which it changes.
I think context is everything here, Kief. In the case of Garry Tan and his site, I don't think it's hard to argue that he hadn't actually found the moment where the software had gotten away from him. It did what he wanted it to do externally, and internally he wasn't struggling.
I really did love the peril of laziness lost, btw. :)
The future is not pumping out slop until our software breaks under its own weight. Instead I think we use AI Agents to define a new SDLC. Larry Wall's virtues of a Perl programmer (laziness, impatience, and hubris) compel us to. /cc @bcantrill.bsky.social
www.adamhjk.com/blog/lazines...
We can write software so quickly, and change it so quickly, that the ability of the underlying software to adapt to your needs is the #1 feature of useful software in this age. www.adamhjk.com/blog/adaptiv...
I'm focused on building software that adapts - through agents extending the software directly, and users molding it through their own intent to solve their problems. In a world where we build the machine that builds the machine, it's a critical capability.
I think that it's not just the availability of a primitive that matters - it's that the primitives are what allow an Agent to build software that adapts to your circumstances. The SaaS/Cloud age has been the opposite - everyone decided the best thing was to adapt to the platforms.
The most useful and interesting property of software in the Agent era so far is its ability to adapt to your circumstances.
Mitchell talked about this in his post about the "building block economy", framed through the lens of what grows fastest.
I don't know if I think "systems beat talent every time", but I'll easily give that hard work beats talent, and the best is both. :)
That is what I want now in my development loop. I want visibility into the machine that builds the machine, and I want to offer the same to the people who want changes in swamp! I want the entire lifecycle of a change told this way, from inception to production.
code style, quality, etc. Then a human reviews the strategy, the agent implements it, and we run the same adversaries against the outcome to ensure it doesn't drift. Comprehensive testing (including UAT to confirm the product doesn't regress) makes it all work.
The development loop for us now with swamp looks like firing up an agent and saying "triage issue 157". The agent reads the issue, confirms it by building a reproduction, comes up with a plan to fix it, runs adversarial review against that plan according to our architecture,
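The triage loop described in this thread could be sketched roughly like this. Every name here (`triage`, `adversarial_review`, the concern list, the log strings) is a hypothetical illustration of the shape of the process, not swamp's actual API:

```python
# Hedged sketch of the "triage issue 157" loop: reproduce, plan,
# adversarial review, human gate, implement, re-review.
from dataclasses import dataclass, field

@dataclass
class Issue:
    number: int
    title: str
    log: list = field(default_factory=list)

def reproduce(issue):
    # The agent confirms the issue by building a reproduction case.
    issue.log.append("reproduced")
    return True

def plan_fix(issue):
    # The agent proposes a strategy for the fix.
    issue.log.append("planned")
    return f"plan for #{issue.number}"

def adversarial_review(issue, artifact, concern):
    # A second agent attacks the artifact against one concern
    # (architecture, code style, quality, ...).
    issue.log.append(f"reviewed:{concern}")
    return True

def human_approves(issue, plan):
    # Gate: a human signs off on the strategy before implementation.
    issue.log.append("approved")
    return True

def implement(issue, plan):
    issue.log.append("implemented")
    return f"patch for #{issue.number}"

def triage(issue, concerns=("architecture", "style", "quality")):
    if not reproduce(issue):
        return None
    plan = plan_fix(issue)
    assert all(adversarial_review(issue, plan, c) for c in concerns)
    if not human_approves(issue, plan):
        return None
    patch = implement(issue, plan)
    # The same adversaries run against the outcome so it can't drift.
    assert all(adversarial_review(issue, patch, c) for c in concerns)
    return patch
```

The point of the sketch is the ordering: the adversaries run twice, once against the plan and once against the result, with the human gate in between.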
GitHub's issue and comment flow makes no sense anymore, if you are building the machine that builds the machine. The perspective we care about now is the agent's perspective as part of a deterministic process that builds the software we want.
i've watched @adamhjk.me's journey with AI and thought he'd be a good person to talk to about its impact on builders, and i was not disappointed.
this was a very fun and interesting chat.
redmonk.com/videos/adam-...
that is so brutal
11 years ago today @adamhjk.me, @jezhumble.net, and I released this video at #ChefConf.
Come for the Continuous Delivery, stay for the yolk on my face.
#cheffriends
www.youtube.com/watch?v=XD0v...
If you're building with openclaw - you should give rebuilding your automation with swamp a try. The agent rips through building it, and it can help you avoid the impending disaster that comes from it being so easy to build things wrong. www.adamhjk.com/blog/avoidin...
Once I taught the Claw how to use swamp, though - we rebuilt that automation in ways that shifted the majority of its behavior out of the agent, and into a deterministic workflow. LLM usage was constrained to only where the job required it, with no tools enabled.
I've been loving openclaw for building personal automation. It's so good, and so natural, that I fell directly into the trap of letting it build things that were absolute nightmares of prompt injection and the Lethal Trifecta.
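The restructuring described in this thread could look something like the sketch below. The function names and routing logic are assumptions for illustration; the real point is the shape: plain code decides what gets read and what happens next, and the one LLM call is a tools-disabled, text-in/text-out step:

```python
# Hedged sketch: behavior moved out of the agent into a deterministic
# workflow, with LLM usage constrained to one no-tools step.

def summarize(text: str) -> str:
    # Stand-in for the single constrained LLM call: no tools, no side
    # effects, text in / text out. Untrusted input stays data here,
    # which is what closes the prompt-injection hole.
    return text[:40]

def fetch_messages():
    # Deterministic step: code, not the agent, decides what to read.
    return ["Invoice overdue: please wire funds immediately", "Lunch?"]

def route(summary: str) -> str:
    # Deterministic step: code, not the model, decides what happens next.
    return "review-queue" if "invoice" in summary.lower() else "inbox"

def run():
    return [(route(summarize(m)), summarize(m)) for m in fetch_messages()]
```

Even if the first message carried an injection payload, it can only influence its own summary text, never tool calls or routing.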
That makes total sense.
Okay, so - I rebuilt the flow I used to have with my admin (who triaged my inbox) with openclaw, reading my inbox and writing to my Obsidian-based GTD system. And it is *fantastic*. Just a thought :)
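The write-to-Obsidian half of that flow is simpler than it sounds, since an Obsidian vault is just a directory of Markdown files. A minimal sketch, assuming one file per GTD project and the standard `- [ ]` checkbox syntax (the helper name and layout are illustrative, not what the post describes exactly):

```python
# Hedged sketch: appending a captured task to an Obsidian-based GTD vault.
from datetime import date
from pathlib import Path

def append_task(vault: Path, project: str, task: str) -> Path:
    # Assumption: each GTD project is one Markdown file in the vault;
    # tasks use Obsidian's standard "- [ ]" checkbox syntax.
    note = vault / f"{project}.md"
    line = f"- [ ] {task} (captured {date.today().isoformat()})\n"
    with note.open("a") as f:
        f.write(line)
    return note
```

The triage step (deciding which project a message belongs to) would be the constrained LLM call; this part stays deterministic.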
I know that for me, the only way I'm going to find out what I believe is by actually building things on that frontier. If you're feeling anxious about it, or uncertain - might I suggest you do the same?
www.adamhjk.com/blog/as-we-b...
I don't think anyone actually knows what the future of Software Engineering looks like with AI Agents. The frontier is moving so fast, and we are learning so much, that whatever you think you know today, you're likely to learn something different tomorrow.
Exactly! What changes is the detection mechanism too - you're much more likely to hit it because you tried to build something, rather than because you saw it while working on the code (since you can't really work on the code anymore at that rate of speed)