
Posts by Henry Garner

JUXT Blog: From specification to stress test: a weekend with Claude. A behavioural specification language, AI agent teams, and a Byzantine fault-tolerant distributed system built in 48 hours.

A behavioural specification language, #AI agent teams, and a Byzantine fault-tolerant distributed system built in 48 hours www.juxt.pro/blog/from-sp...

2 months ago
GitHub - juxt/allium: A language for sharpening intent alongside implementation.

Test suites catch bugs in your implementation. Allium catches bugs in your intent.

It gives behaviour a structured form that humans and AI agents can interrogate and refine together.

GitHub source: github.com/juxt/allium

2 months ago

I've been using #Allium for a few weeks and it's become central to how I work with agentic coding tools. The spec and the code evolve together: I start with what I know, build something, and implementation surfaces questions that sharpen the spec.

2 months ago

Many use markdown to capture intent, but it requires work to identify contradictions. You can write "users must be authenticated" & "guest checkout is supported" without anything noticing the tension.

A powerful model might resolve the ambiguity silently in ways you didn't anticipate.
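To make the point concrete: Allium's own syntax isn't shown here, but the idea of giving intent a machine-checkable form can be sketched in plain Python. The `Rule` type and `find_conflicts` helper below are hypothetical illustrations, not part of Allium — they show how the authentication/guest-checkout tension becomes a detectable fact once requirements are structured rather than prose.

```python
# Hypothetical sketch (NOT Allium syntax): behavioural requirements as
# structured facts, so a contradiction can be found mechanically instead
# of hiding in free-form markdown.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str          # the behaviour being constrained, e.g. "checkout"
    requires_auth: bool  # whether that behaviour demands authentication

rules = [
    Rule(action="checkout", requires_auth=True),   # "users must be authenticated"
    Rule(action="checkout", requires_auth=False),  # "guest checkout is supported"
]

def find_conflicts(rules):
    """Flag pairs of rules that constrain the same action incompatibly."""
    conflicts = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if a.action == b.action and a.requires_auth != b.requires_auth:
                conflicts.append((a, b))
    return conflicts

print(find_conflicts(rules))  # the tension is now visible, not silently resolved
```

In prose, nothing forces the two statements to meet; in a structured form, they constrain the same field of the same action and the clash surfaces immediately.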

2 months ago

Vibe coding introduces "code distance": a gap between the developer & the implementation. The productivity gains can be phenomenal.

But code distance brings the risk of "design distance". If a model can only read your code, it has no way to distinguish what the system does from what it should do.

2 months ago

When LLMs generate code from natural language, the design thinking that engineers used to do during implementation no longer happens. Ambiguities go unexamined and edge cases emerge late when they're more expensive to fix, even with AI assistance.

2 months ago
Allium: An LLM-native language for sharpening intent alongside implementation. Velocity through clarity.

We've open-sourced #Allium, an LLM-native language for expressing behavioural intent alongside your code.

juxt.github.io/allium/

#AgenticCoding #GenAI #AI

2 months ago
JUXT Blog: From Unstructured to Actionable: AI-Powered Regulatory Intelligence. Watch Henry Garner's talk at XT25 Fintech Conference [video].

[Updated link] www.juxt.pro/blog/xt25-fi...

8 months ago
JUXT Blog: From Unstructured to Actionable: AI-Powered Regulatory Intelligence. Watch Henry Garner's talk at XT25 Fintech Conference [video].

Don’t let the title fool you, this is one of the most personal talks I’ve given.

It goes beyond the technology: I think AI skeptics are missing genuine opportunities, while AI enthusiasts risk sacrificing the context required for mastery, satisfaction, and responsibility.

Please watch and share.

8 months ago

Find someone who looks at you the way Skoda designers look at Mary Poppins

8 months ago
JUXT Blog: JUXT Cast: Mapping the AI Landscape. A behind-the-scenes look at the new JUXT AI Radar [video].

New JUXTCast episode! I had a blast chatting with the team about our upcoming AI Radar. We explored practical insights beyond the hype, from inconsistent AI engineering studies to why classical ML still matters. Thoughtful perspectives to help navigate the AI ecosystem: www.juxt.pro/blog/juxtcas...

8 months ago
JUXT Blog: JUXT Cast: Mapping the AI Landscape. A behind-the-scenes look at the new JUXT AI Radar [video].

I did a podcast! Many thanks to @juxt.pro for inviting me to talk about a project I’ve been working on for the past few months: the JUXT AI Radar 📡

Links for all the usual podcast platforms here: juxt.pro/blog/juxtcas...

9 months ago
ChatGPT Is Becoming A Religion. YouTube video by Taylor Lorenz.

I spent months on this video about people worshiping ChatGPT, please watch and leave a comment and share if you can!! My YouTube channel is still so small that I lose money making these videos rn 😭. Every little bit helps!! www.youtube.com/watch?v=zKCy...

9 months ago

So for me, the question isn't whether AI can think like us. It's whether we can design it to help us think better.

9 months ago

It's not just about commercial success: in every regulatory framework—now and for the foreseeable future—humans are accountable. To exercise effective oversight we need context, not just conclusions.

9 months ago

The fascinating thing is that while the science goal dominates headlines and captures public imagination, the commercial AI successes so far have come from the innovation side, combining sophisticated models with human control and creativity.

9 months ago

Yet there's always been a parallel 'Innovation Goal' focused on extending human capabilities instead of replacing them. William Ross Ashby wrote about Intelligence Amplification (IA, not AI) back in 1956.

9 months ago

Ever since the earliest days of cybernetics we've been captivated by what Ben Shneiderman calls the 'Science Goal' of building systems that can replace human judgement entirely.

And this long-imagined future where autonomous agents and robot teammates take over our roles may finally be upon us.

9 months ago

There are two grand goals of #AI research, and we're fixated on the wrong one.

9 months ago

Another day shovelling thoughtcoal into the Claude-furnace

9 months ago

The Work of Software Engineering in the Age of Mechanical Reproducibility

9 months ago

Am I “ready to revamp my AI strategy for 2025?” I don’t know grandma I just wanted to drop in and give you these flowers

9 months ago

*nods*

9 months ago

Are you sure you know which are the ‘predictable and routine’ tasks which can be safely delegated? Are you practicing reliable, safe, and trustworthy #AI use?

9 months ago

The flexible nature of current AI interfaces means we choose our relationship with it via our prompts: an autonomous assistant, a reasoning partner, or anything in between.

9 months ago

Yet AI has also scaffolded our own understanding of new frameworks with examples and visualisations, and facilitated debate about competing designs before we commit.

9 months ago

As leader of JUXT’s AI Chapter, I’ve observed for myself how easy it is for us to fall into ‘vibe coding’, accepting AI suggestions uncritically and gradually losing our situational awareness.

9 months ago

The sweet spot (except for predictable, routine tasks) combines high automation with high human control. He calls these systems "reliable, safe and trustworthy".

9 months ago
A chart showing "computer automation" on the x axis and "human control" on the y axis, adapted from Ben Shneiderman's 'Human-Centered AI' framework. The top right, corresponding to high human control and high computer automation, is labelled "reliable, safe and trustworthy".

Ben Shneiderman, author of the book 'Human-Centered AI', argues that the choice between human control and computer automation is a false dichotomy. They are actually orthogonal dimensions creating four distinct regions.

9 months ago

The difference in outcomes was striking. The second approach, where AI built on investors’ own ideas, led to better portfolio diversification, fewer but more strategic trades, and significantly higher satisfaction: 67% versus 43%.

9 months ago