Palantir are about six months away from ordering their employees to leave audio logs scattered around their offices
Posts by Josh
hieroglyphs code #coding #software #programmerhumour #tech #programming
> "Trust me bro, this new tech startup isn't a pyramid scheme"
> Looks at codebase:
Picture of a cockapoo
It was her sixth birthday yesterday, and she is still weak on crime.
I wrote a thing about how the industry's push for parallel agentic coding workflows is scientifically deluded. Some might call it a 'hot take'
joshtuddenham.dev/blog/stop-mu...
it is simple, if you don't want people to think you are using ai, forgo all capitalization
"Run five agents in parallel!" said the person who has never tried spinning plates.
Cursor 3, JetBrains Air - all these new IDE alternatives seem to be pushing one thing: parallelization. But the academic literature has been clear for decades. It makes you slower. Nobody is good at context switching. Shouldn't more people be talking about this?
Anthropic shipped three features in one week that each look small on their own. Together they replaced remote agent control platforms, event automation (n8n/Zapier territory), and workflow orchestration tools. It's clear they see the harness as the moat.
DLSS 5 OFF // DLSS 5 ON
With REST you have to anticipate every action upfront. Here you define the primitives, add confirmation gates on anything destructive, and hand them over. The complexity assembles itself. The first time it handles something you never planned for is the moment it feels like the future.
it is hard to imagine going back to hand rolling a rest api once you have experienced the emergent behaviours the above enables.
The key will be providing a deterministic abstraction layer to each sandboxed agent (for example, a collection of endpoints with human-in-the-loop confirmation of side effects)
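A minimal sketch of what such a gate could look like. Everything here is illustrative: `withConfirmation` and the example primitive are hypothetical names, not from any particular SDK.

```typescript
// Hypothetical sketch: wrap a primitive so destructive calls
// require human confirmation before the side effect runs.
type Primitive<A, R> = (args: A) => Promise<R>;

function withConfirmation<A, R>(
  name: string,
  run: Primitive<A, R>,
  confirm: (name: string, args: A) => Promise<boolean>,
): Primitive<A, R> {
  return async (args: A) => {
    // The gate: nothing destructive happens until a human says yes.
    const ok = await confirm(name, args);
    if (!ok) throw new Error(`${name}: rejected by human reviewer`);
    return run(args);
  };
}

// Usage: the sandboxed agent only ever sees the gated version.
const deleteRecord = withConfirmation(
  "deleteRecord",
  async ({ id }: { id: string }) => ({ deleted: id }),
  async (name, args) => {
    console.log(`Confirm ${name}?`, JSON.stringify(args));
    return true; // stand-in for a real human-in-the-loop prompt
  },
);
```

The non-destructive endpoints stay plain functions; only the ones with side effects pick up a gate.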
I have been doing a lot of work playing with agents that execute code in sandboxes at runtime, and given the right harness, this pretty much feels like the future of software. And I hate how much of a booster that makes me sound, but it is hard to imagine another outcome.
To expand on this slightly more, we are already seeing agent-first design in enterprise codebases. When agents in sandboxes are writing and executing throwaway code per-request, debugging and programming becomes primarily iterating on harnesses. It feels like a different game entirely.
I think we are in the middle of a seismic shift in how software will be written. When agents spit out thousands of tokens a second cheaply, deterministic programming as we know it becomes a niche concern. The craft quickly becomes a hobbyist pursuit. Do it for fun, but not for money.
I'm beginning to think the single best use of OpenClaw would be to convert all the WhatsApp voice notes people send me into normal text messages
Kurt Vonnegut stop being so applicable to all time periods of American life, you can’t do that Kurt Vonnegut, your insights are too evergreen Kurt Vonnegut
5/ The library is called PetriFlow. MIT licensed, works today.
Blog post with full benchmarks: joshtuddenham.dev/blog/agent-s...
GitHub: github.com/joshuaisaact...
Site: petriflow.joshtuddenham.dev
4/ This is a Petri net synchronization primitive. It's been in the formal methods literature since the 60s. I built it into an open-source safety layer for AI agents that works with the Vercel AI SDK.
3/ The fix is a join that blocks until every dispatched path resolves. Not "proceed when something arrives." "Proceed when everything arrives." Skip a tool? Fine. Place the done token directly. But the join won't fire until all tokens are present.
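Not PetriFlow's actual API, just a minimal sketch of the join semantics described above (class and method names are illustrative):

```typescript
// Sketch of a blocking join: it only fires once every dispatched
// branch has placed its token, never on partial arrival.
class Join {
  private tokens = new Set<string>();

  constructor(private expected: string[]) {}

  place(branch: string): void {
    if (!this.expected.includes(branch)) {
      throw new Error(`unknown branch: ${branch}`);
    }
    this.tokens.add(branch);
  }

  // "Proceed when everything arrives", not "when something arrives".
  ready(): boolean {
    return this.expected.every((b) => this.tokens.has(b));
  }
}

const join = new Join(["db", "search", "cache"]);
join.place("db");
join.place("search");
// join.ready() is still false: the cache token is missing.
join.place("cache"); // skipped the tool? place its done token directly
// Now join.ready() is true and the agent may proceed.
```

A branch that failed or was skipped must still place its token explicitly, which is what makes silent partial merges impossible.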
2/ This isn't hypothetical. It's the default behavior in n8n (append mode merge), LangGraph (implicit state), and every ReAct loop. If one branch fails quietly, nothing stops the agent from continuing with partial context.
1/ Your AI agent dispatches 3 tools in parallel. The database lookup times out silently. The merge node proceeds with whatever arrived. The agent generates a confident answer from 2/3 of the information it needed. Your logs look clean.
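A toy reproduction of that failure mode, with hypothetical names; any timeout-then-merge pipeline behaves the same way:

```typescript
// A lookup that times out resolves to null instead of throwing.
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T | null> {
  return Promise.race([
    p,
    new Promise<null>((resolve) => setTimeout(() => resolve(null), ms)),
  ]);
}

// The naive merge: proceed with whatever arrived, drop the rest.
async function naiveMerge(): Promise<string[]> {
  const [db, search, cache] = await Promise.all([
    withTimeout(new Promise<string>(() => {}), 10), // db lookup hangs
    Promise.resolve("search-results"),
    Promise.resolve("cache-hit"),
  ]);
  // db is null here, but nothing stops the pipeline from continuing
  // with 2/3 of the context. The logs look clean.
  return [db, search, cache].filter((x): x is string => x !== null);
}
```

No error is thrown anywhere, which is exactly why the agent's confident answer from partial context is so hard to catch downstream.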
Every agent framework guards your tools with if-statements. I replaced mine with a mathematical proof. www.joshtuddenham.dev/blog/agent-s...
Another good one is "what tradeoffs did you make when writing this?"
I have also prompted AI to write documentation for what it has built (pointing it at great examples of documentation elsewhere), and that has really helped me.
I wrote about this recently too, I think it is the biggest thing we will need to grapple with in agentic coding. I used a slightly different term (speed vertigo) but it's the same thing:
joshtuddenham.dev/blog/vertigo/
This is absolutely deranged. Imagine being that disconnected from the trials and tribulations of your employees.
In short, I think allowing users to create a 'personality' for their AI agent, and have it speak to and treat them in a certain way, could very easily lead to even more AI-induced psychosis than we are already seeing.
I have noticed this too. Measured takes seem to have nowhere near as much virality as either boosterism or doomerism.
Lots has been written about the security implications of Moltbot/Openclaw, but I think even more dangerous is soul.md. As stories like this show, adding memory to chatbots can have profound impacts on users. I don't think we have sufficient guardrails in place yet.
www.npr.org/2026/02/14/n...