A behavioural specification language, #AI agent teams, and a Byzantine fault-tolerant distributed system built in 48 hours www.juxt.pro/blog/from-sp...
Test suites catch bugs in your implementation. Allium catches bugs in your intent.
It gives behaviour a structured form that humans and AI agents can interrogate and refine together.
GitHub source: github.com/juxt/allium
I've been using #Allium for a few weeks and it's become central to how I work with agentic coding tools. The spec and the code evolve together: I start with what I know, build something, and implementation surfaces questions that sharpen the spec.
Many of us use markdown to capture intent, but nothing checks prose for contradictions. You can write "users must be authenticated" and "guest checkout is supported" without anything noticing the tension.
A powerful model might resolve the ambiguity silently in ways you didn't anticipate.
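To make that concrete: once intent is structured data rather than prose, a checker can hunt for tensions mechanically. Here is a minimal sketch of that idea in Python; it is emphatically not Allium syntax, and the `Requirement` type and conflict table are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    action: str     # the behaviour being constrained
    condition: str  # the constraint placed on it

reqs = [
    Requirement("checkout", "user is authenticated"),
    Requirement("checkout", "guest (unauthenticated) users allowed"),
]

# Condition pairs known to be mutually exclusive (illustrative only).
conflicts = {
    frozenset({"user is authenticated", "guest (unauthenticated) users allowed"}),
}

# Flag any two requirements on the same action whose conditions conflict.
for i, a in enumerate(reqs):
    for b in reqs[i + 1:]:
        if a.action == b.action and frozenset({a.condition, b.condition}) in conflicts:
            print(f"Tension on '{a.action}': {a.condition!r} vs {b.condition!r}")
```

Prose gives a checker nothing to grip; the point of a behavioural specification language is to give humans, tools, and models something structured to interrogate.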
Vibe coding introduces "code distance": a gap between the developer and the implementation. The productivity gains can be phenomenal.
But code distance brings the risk of "design distance". If a model can only read your code, it has no way to distinguish what the system does from what it should do.
When LLMs generate code from natural language, the design thinking that engineers used to do during implementation no longer happens. Ambiguities go unexamined and edge cases emerge late when they're more expensive to fix, even with AI assistance.
We've open-sourced #Allium, an LLM-native language for expressing behavioural intent alongside your code.
juxt.github.io/allium/
#AgenticCoding #GenAI #AI
Don’t let the title fool you: this is one of the most personal talks I’ve given.
It goes beyond the technology: I think AI skeptics are missing genuine opportunities, while AI enthusiasts risk sacrificing the context required for mastery, satisfaction, and responsibility.
Please watch and share.
Find someone who looks at you the way Skoda designers look at Mary Poppins
New JUXTCast episode! I had a blast chatting with the team about our upcoming AI Radar. We explored practical insights beyond the hype, from inconsistent AI engineering studies to why classical ML still matters. Thoughtful perspectives to help navigate the AI ecosystem: www.juxt.pro/blog/juxtcas...
I did a podcast! Many thanks to @juxt.pro for inviting me to talk about a project I’ve been working on for the past few months: the JUXT AI Radar 📡
Links for all the usual podcast platforms here: juxt.pro/blog/juxtcas...
I spent months on this video about people worshiping ChatGPT, please watch and leave a comment and share if you can!! My YouTube channel is still so small that I lose money making these videos rn 😭. Every little bit helps!! www.youtube.com/watch?v=zKCy...
So for me, the question isn't whether AI can think like us. It's whether we can design it to help us think better.
It's not just about commercial success: in every regulatory framework—now and for the foreseeable future—humans are accountable. To exercise effective oversight we need context, not just conclusions.
The fascinating thing is that while the Science Goal dominates headlines and captures public imagination, the commercial AI successes so far have come from the innovation side, combining sophisticated models with human control and creativity.
Yet there's always been a parallel 'Innovation Goal' focused on extending human capabilities instead of replacing them. William Ross Ashby wrote about Intelligence Amplification (IA, not AI) back in 1956.
Ever since the earliest days of cybernetics we've been captivated by what Ben Shneiderman calls the 'Science Goal' of building systems that can replace human judgement entirely.
And this long-imagined future where autonomous agents and robot teammates take over our roles may finally be upon us.
There are two grand goals of #AI research, and we're fixated on the wrong one.
Another day shovelling thoughtcoal into the Claude-furnace
The Work of Software Engineering in the Age of Mechanical Reproducibility
Am I “ready to revamp my AI strategy for 2025?” I don’t know grandma I just wanted to drop in and give you these flowers
*nods*
Are you sure you know which ‘predictable and routine’ tasks can safely be delegated? Are you practicing reliable, safe, and trustworthy #AI use?
The flexible nature of current AI interfaces means we choose our relationship with it via our prompts: an autonomous assistant, a reasoning partner, and everything in between.
Yet AI has also scaffolded our own understanding of new frameworks with examples and visualisations, and facilitated debate about competing designs before we commit.
As leader of JUXT’s AI Chapter, I’ve observed for myself how easy it is for us to fall into ‘vibe coding’, accepting AI suggestions uncritically and gradually losing our situational awareness.
The sweet spot (except for predictable, routine tasks) combines high automation with high human control. Shneiderman calls these systems “reliable, safe and trustworthy”.
A chart showing “computer automation” on the x axis and “human control” on the y axis, adapted from Ben Shneiderman’s ‘Human-Centered AI’ framework. The top right, corresponding to high human control and high computer automation, is labelled “reliable, safe and trustworthy”.
Ben Shneiderman, author of the book 'Human-Centered AI', argues that the choice between human control and computer automation is a false dichotomy. They are actually orthogonal dimensions creating four distinct regions.
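For intuition, here's a toy rendering of those two axes in Python. This is my sketch, not anything from the book: the 0.5 thresholds and the off-quadrant labels are invented, and only the top-right label comes from the framework itself.

```python
def hcai_region(automation: float, control: float) -> str:
    """Place a system on Shneiderman's two orthogonal axes (each 0.0 to 1.0)."""
    high_automation = automation >= 0.5  # illustrative threshold
    high_control = control >= 0.5        # illustrative threshold
    if high_automation and high_control:
        return "reliable, safe and trustworthy"  # the top-right sweet spot
    if high_automation:
        return "high automation, low human control"
    if high_control:
        return "high human control, low automation"
    return "low automation, low control"

# Uncritical 'vibe coding' scores high on automation but low on control:
print(hcai_region(automation=0.9, control=0.2))
```

Because the axes are orthogonal, raising automation never has to mean surrendering control; the framework's argument is that we should aim for both.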
The difference in outcomes was striking. The second approach, where AI built on investors’ own ideas, led to better portfolio diversification, fewer but more strategic trades, and significantly higher satisfaction: 67% versus 43%.