
Posts by Alex Strick van Linschoten

GitHub - zenml-io/kitaru: Durable execution for AI agents, built on ZenML

Repo: github.com/zenml-io/ki...

1 week ago

But once we mapped out what versioning and provenance would look like across two systems, the seams started showing. Sarah Wooders' framing (which Harrison also quotes) captures why: managing memory is a core responsibility of the harness, not a peripheral one.

1 week ago

We considered integrating with Mem0, Letta, and the other dedicated memory providers. We learned a lot from reading their code and their philosophies; there's real diversity in how this space thinks about the problem.

1 week ago

3. Provenance is automatic. Because memory and artifacts share a backend, you don't have to stitch the audit trail back together across systems. (And we offer a full audit log in case you need that for your memories.)

1 week ago

2. Scopes match how agents actually work. Namespace for repo conventions, flow for per-agent learned state, execution for per-run progress. No cramming everything into one global blob.
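The three scopes can be pictured as independent key spaces. A minimal sketch of the idea (class and method names are illustrative assumptions, not Kitaru's actual API):

```python
# Illustrative sketch only: three memory scopes kept as separate key
# spaces, so repo-wide conventions, per-agent learned state, and
# per-run progress never collide in one global blob.
SCOPES = ("namespace", "flow", "execution")

class ScopedMemory:
    def __init__(self):
        self._store = {}

    def set(self, scope, key, value):
        if scope not in SCOPES:
            raise ValueError(f"unknown scope: {scope}")
        self._store[(scope, key)] = value

    def get(self, scope, key, default=None):
        # The same key in a different scope is a different entry.
        return self._store.get((scope, key), default)
```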

1 week ago

1. Versioning comes free. Every memory.set() creates a new version. Soft deletes leave tombstones. You can ask "which run taught the agent this?" and get an actual answer. (And since memory ships through our MCP server, you can ask Claude Code or Codex that question directly.)
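A minimal sketch of that versioning model, assuming an append-only log per key (everything beyond the memory.set() idea is an illustrative assumption, not Kitaru's implementation):

```python
# Sketch: every set() appends an immutable version tagged with the run
# that wrote it; delete() appends a tombstone instead of erasing
# history, so "which run taught the agent this?" stays answerable.
from dataclasses import dataclass, field

@dataclass
class Version:
    value: object
    run_id: str            # which run wrote this version
    deleted: bool = False  # tombstone marker for soft deletes

@dataclass
class Memory:
    _log: dict = field(default_factory=dict)  # key -> list[Version]

    def set(self, key, value, run_id):
        self._log.setdefault(key, []).append(Version(value, run_id))

    def delete(self, key, run_id):
        # Soft delete: append a tombstone rather than dropping versions.
        self._log.setdefault(key, []).append(Version(None, run_id, deleted=True))

    def get(self, key):
        versions = self._log.get(key, [])
        if not versions or versions[-1].deleted:
            return None
        return versions[-1].value

    def provenance(self, key):
        # run_id of the latest live version of this memory.
        for v in reversed(self._log.get(key, [])):
            if not v.deleted:
                return v.run_id
        return None
```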

1 week ago

Three things fell out of putting memory in the same substrate that already handles execution durability:

1 week ago
Kitaru agents now have memory: Durable, versioned memory for agents is now built into Kitaru — across Python, the typed client, the CLI, and MCP.

"Your Harness, Your Memory" by Harrison Chase argues that memory belongs inside your agent harness, not behind a third-party API. We've been building exactly that, and Kitaru 0.4.0 shipped it this morning.

kitaru.ai/blog/kitaru...

1 week ago
GitHub - zenml-io/skills: AI coding agent skills for ZenML MLOps workflows — quick wins, pipeline setup, and more

The skills are conservative: they flag what they're unsure about rather than guessing. Works with Claude Code, Cursor, Codex, or any coding agent.

Open-source and free. Feedback welcome!

github.com/zenml-io/sk...

2 weeks ago
Table outlines migration paths from various ML/data platforms to ZenML, detailing core translations and special notes for each source.


We just shipped migration skills that help you migrate to ZenML from 11 ML/data platforms: Airflow, Argo, AzureML, Dagster, Databricks, Flyte, Kedro, Metaflow, Prefect, SageMaker, Vertex AI.

Each has a hand-curated concept map baked in, showing what maps 1:1 and what needs redesign.

2 weeks ago

Try it out. It’s pretty transparent when there’s lossiness involved.

1 month ago
GitHub - strickvl/panlabel: Universal annotation converter

The kind of project I enjoy just steadily plodding away at — one format at a time.

github.com/strickvl/pa...

1 month ago

v0.5.0: split-aware YOLO reading + conversion explainability
v0.6.0: Five new adapters (LabelMe, CreateML, KITTI, VIA JSON, RetinaNet CSV)

13 supported formats now with full read, write, and auto-detection. Single binary, no Python deps.

1 month ago

I've been building panlabel — a fast Rust CLI that converts between dataset annotation formats — and I'm a few releases behind on sharing updates.

v0.3.0: Hugging Face ImageFolder support
v0.4.0: auto-detection UX overhaul + Docker

1 month ago
How I Rebuilt zenml.io in a Week with Claude Code - ZenML Blog: I rebuilt zenml.io — 2,224 pages, 20 CMS collections — from Webflow to Astro in a week using Claude Code and a multi-model AI workflow. Here's how.

Wrote up the whole process, including the parts that went wrong.
www.zenml.io/blog/how-i-...

1 month ago

Our designer said it best: "It feels much nicer and powerful to work on the website now, and also flexible to make new layouts and whatever ideas that come to our minds without the Webflow restrictions."

1 month ago

One of those reviews caught 7 schema issues that would've broken everything downstream.

Best part is what we can do now that we couldn't before — blog posts through git, a searchable LLMOps database with real filtering, preview URLs for every PR.

1 month ago

The thing that made it reliable: using different models for different parts of the project. ChatGPT Deep Research for the upfront architecture decisions, Claude Code for building, and RepoPrompt to get Codex to review Claude's work at phase boundaries.

1 month ago

Last month I migrated our ZenML website from Webflow to Astro in a week during a Claude Code / Cerebras hackathon. 2,224 pages, 20 CMS collections, 2,397 images. The site you see now is the result.

Didn't win the hackathon but got a production website out of it, so I'll take that trade.

1 month ago
GitHub - strickvl/panlabel: Universal annotation converter

github.com/strickvl/pa...

1 month ago

Full roadmap and install instructions in the repo. If you work with annotated datasets and have hit similar pain points, would be curious to hear what formats or features would be most useful.

1 month ago

Not going to change the world, but it might save someone a few hours of debugging coordinate transforms or prevent silent data corruption between tools.

1 month ago

There are a ridiculous number of object detection formats out there, and each one has its own quirks about how it handles bounding boxes, coordinates, or class mappings. I'm working through them slowly, format by format.
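One concrete example of those quirks: COCO stores absolute [x_min, y_min, width, height] in pixels, while YOLO stores normalized [x_center, y_center, width, height] in 0..1. A sketch of the round trip (panlabel itself is Rust; this Python version is just for illustration):

```python
# Illustrative only: converting one bounding box between two common
# coordinate conventions. Getting the center/corner and absolute/
# normalized distinctions wrong is a classic source of silent bugs.
def coco_to_yolo(box, img_w, img_h):
    # COCO: absolute (x_min, y_min, width, height) in pixels.
    x, y, w, h = box
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)

def yolo_to_coco(box, img_w, img_h):
    # YOLO: normalized (x_center, y_center, width, height) in 0..1.
    cx, cy, w, h = box
    return ((cx - w / 2) * img_w, (cy - h / 2) * img_h, w * img_w, h * img_h)
```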

1 month ago

→ Convert between annotation formats (focusing on object detection first, but segmentation and classification coming soon)
→ Validate your datasets
→ Generate statistics
→ Semantic diff between dataset versions
→ Create random or stratified subsets
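For instance, stratified subsetting boils down to sampling a fixed fraction per class so rare classes keep their share of the subset. A rough sketch in Python (panlabel itself is Rust; the function name and signature here are illustrative, not panlabel's API):

```python
# Illustrative sketch of stratified subsetting: group items by class
# label, then sample the same fraction from every group, keeping at
# least one item per class so rare classes are never dropped entirely.
import random
from collections import defaultdict

def stratified_subset(items, label_of, fraction, seed=0):
    rng = random.Random(seed)  # fixed seed makes subsets reproducible
    by_class = defaultdict(list)
    for item in items:
        by_class[label_of(item)].append(item)
    subset = []
    for group in by_class.values():
        k = max(1, round(len(group) * fraction))
        subset.extend(rng.sample(group, k))
    return subset
```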

1 month ago

The origin story is pretty mundane: I hit one too many bounding box bugs caused by format inconsistencies and decided someone should just build a Pandoc equivalent for annotation data.

What it does:

1 month ago
A README document for Panlabel, a CLI tool that converts dataset annotation formats, including installation instructions for various platforms.


panlabel 0.2 is out. It's a CLI tool (and Rust library) for converting between different dataset annotation formats. Now also available via Homebrew.

1 month ago

The reasoning isn't strong enough for gnarly bugs, but the speed makes it useful for a different class of task. Still early days figuring out where it fits.

Are you using Codex Spark? Has it carved out a specific role in your workflow, or is it just another option you reach for occasionally?

1 month ago

I'm developing a mental filter for it. Docs updates after a code change? Spark's fine. First pass at demo code? Sure. Scanning docs and suggesting rewrites based on a PR? Worth trying. Complex debugging? Not there yet.

1 month ago

When regular Codex disappears for 30 minutes on high reasoning mode, you learn to run multiple tasks in parallel and context-switch between them. Spark doesn't need that pattern. The speed drops the friction enough that I'm less precious about what I delegate.

1 month ago

My main tools are still Codex 5.3 on high reasoning or Opus 4.6 (usually through @RepoPrompt), but Spark is fast enough that it makes me rethink what's worth handing off.

1 month ago