
Posts by Josh

Palantir are about six months away from ordering their employees to leave audio logs scattered around their offices

1 day ago 8824 1998 85 55
hieroglyphs code
#coding #software #programmerhumour #tech #programming
> "Trust me bro, this new tech startup isn't a pyramid scheme"
> Looks at codebase:

5 days ago 2776 551 37 16
Picture of a cockapoo


It was her sixth birthday yesterday, and she is still weak on crime.

1 week ago 1 0 0 0
Stop Multitasking. Parallel Agent Workflows Are Making You Slower And Burning You Out | Josh Tuddenham I've tested every multi-agent orchestration setup. Conductor, Cursor, Claude Code Max. I was faster single-threaded. The cognitive science on this runs decades deep.

I wrote a thing about how the industry's push for parallel agentic coding workflows is scientifically deluded. Some might call it a 'hot take'

joshtuddenham.dev/blog/stop-mu...

1 week ago 0 0 1 0

it is simple, if you don't want people to think you are using ai, forgo all capitalization

2 weeks ago 1 0 0 0
Conductor's $22m Series A We've raised a $22M Series A from Spark and Matrix.

$22m to distribute your attention even thinner

2 weeks ago 1 0 0 0

"Run five agents in parallel!" said the person who has never tried spinning plates.

2 weeks ago 0 0 0 0

Cursor 3, JetBrains Air - all these new IDE alternatives seem to be pushing one thing: parallelization. But the academic literature has been clear for decades. It makes you slower. Nobody is good at context switching. Shouldn't more people be talking about this?

2 weeks ago 0 0 1 0
Anthropic Quietly Killed Three SaaS Categories This Week | Josh Tuddenham Most coverage focused on individual features. Nobody zoomed out to see what Anthropic did across the full week: absorb three categories of SaaS tooling into their own surface area.

Anthropic shipped three features in one week that each look small on their own. Together they replaced remote agent control platforms, event automation (n8n/Zapier territory), and workflow orchestration tools. It's clear they see the harness as the moat.

3 weeks ago 3 0 0 0
Post image Post image

DLSS 5 OFF // DLSS 5 ON

1 month ago 5897 1740 32 23

With REST you have to anticipate every action upfront. Here you define the primitives, add confirmation gates on anything destructive, and hand them over. The complexity assembles itself. The first time it handles something you never planned for is the moment it feels like the future.

1 month ago 0 0 0 0

it is hard to imagine going back to hand rolling a rest api once you have experienced the emergent behaviours the above enables.

1 month ago 0 0 1 0

The key will be providing a deterministic abstraction layer to each sandboxed agent (for example, a collection of endpoints with human-in-the-loop side-effect confirmation)

1 month ago 0 0 1 0
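A minimal sketch of the idea in the post above: a small set of primitives handed to a sandboxed agent, with a human-in-the-loop confirmation gate wrapped around anything destructive. All names here (`Primitive`, `confirmGate`, `deleteRow`) are hypothetical, not from any real library.

```typescript
// A primitive the agent can call: a name, a destructive flag, and a runner.
type Primitive = {
  name: string;
  destructive: boolean;
  run: (args: Record<string, unknown>) => Promise<string>;
};

// askHuman is whatever side channel you use: a CLI prompt, a Slack ping, a UI dialog.
type Confirm = (action: string) => Promise<boolean>;

// Wrap a primitive so destructive side effects require human confirmation.
// Non-destructive primitives pass through untouched.
function confirmGate(p: Primitive, askHuman: Confirm): Primitive {
  if (!p.destructive) return p;
  return {
    ...p,
    run: async (args) => {
      const ok = await askHuman(`${p.name}(${JSON.stringify(args)})`);
      if (!ok) return "REJECTED: human declined side effect";
      return p.run(args);
    },
  };
}
```

The agent only ever sees the gated surface, so the abstraction layer stays deterministic from its point of view: every destructive call either runs after approval or returns an explicit rejection it can reason about.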

I have been doing a lot of work playing with agents that execute code in sandboxes at runtime, and given the right harness, this pretty much feels like the future of software. And I hate how much of a booster that makes me sound, but it is hard to imagine another outcome.

1 month ago 0 0 1 0

To expand on this slightly more, we are already seeing agent-first design in enterprise codebases. When agents in sandboxes are writing and executing throwaway code per-request, debugging and programming becomes primarily iterating on harnesses. It feels like a different game entirely.

1 month ago 1 0 0 0

I think we are in the middle of a seismic shift in how software will be written. When agents spit out thousands of tokens a second cheaply, deterministic programming as we know it becomes a niche concern. The craft quickly becomes a hobbyist interest. Do it for fun but not money.

1 month ago 0 0 1 0

I'm beginning to think the single best use of OpenClaw would be to convert all the WhatsApp voice notes people send me into normal text messages

1 month ago 0 0 0 0

Kurt Vonnegut stop being so applicable to all time periods of American life, you can’t do that Kurt Vonnegut, your insights are too evergreen Kurt Vonnegut

1 month ago 10098 2933 143 57
Your Agent's Safety Net Is an If-Statement. Mine Is a Proof. | Josh Tuddenham Two weeks ago, security researchers found over 1,800 exposed OpenClaw instances. Every vulnerability maps to the same failure mode - a code path that didn't hit the check. Petri nets fix this.

5/ The library is called PetriFlow. MIT licensed, works today.
Blog post with full benchmarks: joshtuddenham.dev/blog/agent-s...
GitHub: github.com/joshuaisaact...
Site: petriflow.joshtuddenham.dev

2 months ago 0 0 0 0

4/ This is a Petri net synchronization primitive. It's been in the formal methods literature since the 60s. I built it into an open-source safety layer for AI agents that works with the Vercel AI SDK.

2 months ago 1 0 1 0

3/ The fix is a join that blocks until every dispatched path resolves. Not "proceed when something arrives." "Proceed when everything arrives." Skip a tool? Fine. Place the done token directly. But the join won't fire until all tokens are present.

2 months ago 0 0 1 0
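The blocking join described in 3/ can be sketched in a few lines. This is an illustrative implementation of the Petri-net-style semantics, not PetriFlow's actual API; `makeJoin`, `place`, and the branch names are all made up for the example.

```typescript
// A token placed by one dispatched branch.
type Token<T> = { branch: string; value: T };

// A join over a fixed set of branches. It fires exactly once, and only
// when every declared branch has placed a token -- never on partial arrival.
function makeJoin<T>(branches: string[]) {
  const tokens = new Map<string, T>();
  let fire!: (all: Map<string, T>) => void;
  const done = new Promise<Map<string, T>>((res) => (fire = res));
  return {
    // Place a token for one branch. A skipped tool places its "done"
    // token directly; the join still waits for every other branch.
    place(t: Token<T>) {
      if (!branches.includes(t.branch)) throw new Error(`unknown branch ${t.branch}`);
      tokens.set(t.branch, t.value);
      if (branches.every((b) => tokens.has(b))) fire(tokens);
    },
    done,
  };
}
```

The contrast with an append-mode merge is the invariant: `done` cannot resolve with a partial token set, so a silently failed branch stalls the join instead of letting the agent continue with missing context.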

2/ This isn't hypothetical. It's the default behavior in n8n (append mode merge), LangGraph (implicit state), and every ReAct loop. If one branch fails quietly, nothing stops the agent from continuing with partial context.

2 months ago 0 0 1 0

1/ Your AI agent dispatches 3 tools in parallel. The database lookup times out silently. The merge node proceeds with whatever arrived. The agent generates a confident answer from 2/3 of the information it needed. Your logs look clean.

2 months ago 1 0 1 0
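The failure mode in 1/ is easy to reproduce. This is a hypothetical sketch of the "proceed with whatever arrived" merge pattern, not code from any named framework; `withTimeout` and `mergeWhateverArrived` are illustrative names.

```typescript
// Race a tool call against a timeout; on timeout, resolve to null
// instead of rejecting -- this is what makes the failure silent.
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T | null> {
  return Promise.race([
    p,
    new Promise<null>((res) => setTimeout(() => res(null), ms)),
  ]);
}

// The append-mode merge: keep whatever resolved, quietly drop the rest.
// Nothing downstream can tell that a branch is missing.
async function mergeWhateverArrived(
  tools: Array<Promise<string | null>>
): Promise<string[]> {
  const results = await Promise.all(tools);
  return results.filter((r): r is string => r !== null);
}
```

Dispatch three tools where one (say, the database lookup) outlives its timeout, and the merged context silently shrinks from three results to two while every promise resolves cleanly.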
Your Agent's Safety Net Is an If-Statement. Mine Is a Proof. | Josh Tuddenham Two weeks ago, security researchers found over 1,800 exposed OpenClaw instances. Every vulnerability maps to the same failure mode - a code path that didn't hit the check. Petri nets fix this.

Every agent framework guards your tools with if-statements. I replaced mine with a mathematical proof. www.joshtuddenham.dev/blog/agent-s...

2 months ago 0 0 0 0

Another good one is "what tradeoffs did you make when writing this?"

I have also prompted AI to write documentation for what it has built (pointing it at great examples of documentation elsewhere) and that has really helped me.

2 months ago 1 0 0 0
Speed Vertigo: A New Kind of Engineering Debt | Josh Tuddenham It's not imposter syndrome. It's being over-leveraged in the code you shipped.

I wrote about this recently too, I think it is the biggest thing we will need to grapple with in agentic coding. I used a slightly different term (speed vertigo) but it's the same thing:

joshtuddenham.dev/blog/vertigo/

2 months ago 0 0 0 0

This is absolutely deranged. Imagine being that disconnected from the trials and tribulations of your employees.

2 months ago 2 0 0 0

In short, I think allowing users to create a 'personality' for their AI agent, and have it speak to them and treat them in a certain way, could very easily lead to even more AI-induced psychosis than we are already seeing.

2 months ago 0 0 0 0

I have noticed this too. Measured takes seem to have nowhere near as much virality as either boosterism or doomerism.

2 months ago 1 0 0 0
ChatGPT promised to help her find her soulmate. Then it betrayed her ChatGPT sent screenwriter Micky Small down a fantastical rabbit hole. Now, she's finding her way out.

Lots has been written about the security implications of Moltbot/OpenClaw, but I think even more dangerous is soul.md. As stories like this show, adding memory to chatbots can have profound impacts on users. I don't think we have sufficient guardrails in place yet.

www.npr.org/2026/02/14/n...

2 months ago 0 0 1 0