When the score crosses your threshold — you auto-approve.
Not on day one. After it's proved it.
Trust is a score. Not a setting.
#JustTillIt #buildinpublic
Posts by Tillage
Most AI tools ask you to trust them on day one.
Tillage earns it.
Every tiller builds a score: clean merges push it up, rework and rejections pull it down.
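The score mechanics described above can be sketched in a few lines. Everything numeric here — the increments, the decay factors, the 0.8 threshold — is an illustrative assumption, not Tillage's actual weights:

```python
# Illustrative trust-score model: clean merges raise the score,
# rework and rejections lower it; past a threshold, merges auto-approve.
# All constants are made-up defaults, not Tillage's real tuning.

class TrustScore:
    def __init__(self, auto_approve_at: float = 0.8):
        self.score = 0.0
        self.auto_approve_at = auto_approve_at

    def clean_merge(self) -> None:
        # Move 10% of the remaining distance toward 1.0.
        self.score += 0.1 * (1.0 - self.score)

    def rework(self) -> None:
        self.score *= 0.8   # rework pulls the score down

    def rejection(self) -> None:
        self.score *= 0.5   # a rejection hurts more than rework

    def auto_approved(self) -> bool:
        return self.score >= self.auto_approve_at
```

Note the asymmetry: trust accrues slowly (each clean merge closes only part of the gap) but collapses quickly on a rejection — which matches "not on day one, after it's proved it."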
Tillage embeds everything into pgvector. At dispatch time, it retrieves the top chunks semantically similar to the task.
The tiller arrives informed — not overwhelmed.
#JustTillIt #buildinpublic
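In pgvector terms the retrieval above is roughly `ORDER BY embedding <=> :task_embedding LIMIT 8`. Here is the same ranking as an in-memory sketch — pure-Python cosine similarity standing in for the database, with made-up chunk data:

```python
import math

def cosine(a: list, b: list) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_chunks(task_embedding: list, chunks: list, k: int = 8) -> list:
    # chunks: list of (text, embedding) pairs.
    # In production this is a single pgvector query; here we rank in memory.
    ranked = sorted(chunks, key=lambda c: cosine(task_embedding, c[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

The tiller's prompt then carries only those k chunks, never the whole corpus.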
We don't give the AI 70 source documents.
We give it the 8 most relevant chunks.
Context windows are finite. Use them deliberately.
That's the decomposition chain doing its job.
Hand an AI "build the orchestrator" → noise.
Hand it one till → working state machine.
Till it right. #JustTillIt #buildinpublic
Tills 83 to 157. 75 tills executed in sequence.
Each one scoped to one thing. Context-injected. Acceptance criteria defined.
The tiller never saw the whole orchestrator. It saw one till at a time.
#JustTillIt
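"Scoped to one thing, context-injected, acceptance criteria defined" suggests a shape like the following — a hypothetical sketch of a till record, with field names inferred from the posts rather than taken from Tillage's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Till:
    title: str                      # one thing, never "build the orchestrator"
    context_chunks: list = field(default_factory=list)      # injected at dispatch
    acceptance_criteria: list = field(default_factory=list)

    def ready_to_dispatch(self) -> bool:
        # A till without context or criteria isn't a till yet.
        return bool(self.context_chunks) and bool(self.acceptance_criteria)
```

The point of the guard: the tiller sees one of these at a time, already complete, never the whole orchestrator.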
6 epics. One day. One tiller.
The orchestrator repo was blank.
By end of day: state machine, dispatch, heartbeat, callbacks, event emission, read model.
All tilled. #JustTillIt
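A minimal version of the state machine named above might look like this — the state names and transitions are guesses at the shape, not Tillage's actual ones:

```python
# Illustrative task lifecycle: queued -> dispatched -> running -> terminal,
# with failures requeued for rework. States are assumptions, not Tillage's.
TRANSITIONS = {
    "queued":     {"dispatched"},
    "dispatched": {"running", "failed"},
    "running":    {"succeeded", "failed"},
    "failed":     {"queued"},      # requeue for rework
    "succeeded":  set(),           # terminal
}

class TaskStateMachine:
    def __init__(self):
        self.state = "queued"

    def advance(self, to: str) -> None:
        if to not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {to}")
        self.state = to
```

Encoding legal transitions as data keeps dispatch, callbacks, and event emission honest: every hop either matches the table or raises.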
This is what "Tillage builds Tillage" actually means.
Not a demo. Not a proof of concept.
The schema that the next tiller runs against was shaped by the last one.
#JustTillIt #buildinpublic
Two commits. Author: Tillage Tiller.
It added a project_repos table, pulled repo columns out of projects, and wired target_repo_id into tasks.
It also added memory_path to the tillers table.
The schema changed. A human didn't change it.
#JustTillIt
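Reconstructing that migration from the commit summary, it plausibly looked something like this — only `project_repos`, `target_repo_id`, and `memory_path` come from the post; every other column, type, and table shape is an assumption, shown here against an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Pre-migration tables (shapes assumed for illustration).
CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tillers  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tasks    (id INTEGER PRIMARY KEY, title TEXT);

-- Repo columns pulled out of projects into their own table.
CREATE TABLE project_repos (
    id         INTEGER PRIMARY KEY,
    project_id INTEGER REFERENCES projects(id),
    url        TEXT
);

-- Tasks wired to a specific target repo.
ALTER TABLE tasks ADD COLUMN target_repo_id INTEGER
    REFERENCES project_repos(id);

-- Tillers gain a memory path.
ALTER TABLE tillers ADD COLUMN memory_path TEXT;
""")
```

The interesting part isn't the DDL — it's that a tiller authored it and the next tiller runs against it.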
Last week the first tiller ran.
This week a tiller rewrote the database schema.
I didn't touch it.
#JustTillIt

The decomposition is the work. Execution is almost a formality.
#JustTillIt #buildinpublic
A till is specific, scoped, context-injected, with clear acceptance criteria.
Tillage's decomposition chain exists to get you from one to the other.
Idea → Epic → Feature → Requirement → Till
Give an AI agent an epic → mediocre output.
Give it a till → good output.
The difference is everything that happens in between.
Building it with itself. Week 1 starts now.
#JustTillIt #buildinpublic
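The Idea → Epic → Feature → Requirement → Till chain reads naturally as a tree whose leaves are what agents actually execute. A hypothetical data model, not Tillage's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Till:
    title: str

@dataclass
class Requirement:
    text: str
    tills: list = field(default_factory=list)

@dataclass
class Feature:
    name: str
    requirements: list = field(default_factory=list)

@dataclass
class Epic:
    name: str
    features: list = field(default_factory=list)

@dataclass
class Idea:
    pitch: str
    epics: list = field(default_factory=list)

def all_tills(idea: Idea) -> list:
    # The leaves of the chain: what a tiller executes, one at a time.
    return [t for e in idea.epics for f in e.features
              for r in f.requirements for t in r.tills]
```

Handing an agent an `Epic` means handing it an interior node — too much scope. `all_tills` is the flattened queue of leaves that actually get dispatched.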
The V1 proof goal: take the Tillage project through its own pipeline.
Idea → Epic → Feature → Requirement → Till → AI execution.
If the chain produces tills that agents can execute and ship — the product has proven its core loop.
The first real project Tillage will manage is Tillage itself.
Not a demo. Not a toy project.
🧵
Most AI dev tools fail because the task is vague.
The model is fine.
Fix the task. That's Tillage.
#JustTillIt
Yesterday I posted about the first tiller run.
Today a tiller rewrote the database schema.
I didn't touch it.
#JustTillIt
Fully autonomous AI coding sounds good until it confidently implements the wrong thing for 3 hours.
Human gates aren't a limitation.
They're the product.
You set the policy. Tillage enforces it.
#JustTillIt #buildinpublic
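"You set the policy. Tillage enforces it" could be as small as a table plus one check. The gate names and the trust threshold below are illustrative assumptions:

```python
# A gate policy as data: None means the gate always needs a human;
# a number means the gate auto-approves once trust reaches it.
# Gate names and thresholds are made up for illustration.
POLICY = {
    "spec_approved":  None,   # never automated
    "merge_approved": 0.8,    # auto past this trust score
}

def needs_human(gate: str, trust_score: float) -> bool:
    threshold = POLICY[gate]
    if threshold is None:
        return True
    return trust_score < threshold
```

The gate isn't a hardcoded pause in the pipeline — it's configuration, which is what lets trust change the answer over time.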
Today that happened for the first time.
"accepted": true
"result": "success"
The GUI tills are queued. The Web Coding tiller is next. Tillage is about to build itself.
#JustTillIt #buildinpublic
One month ago this was a raw idea.
Since then: decomposition chain, gate policy, context injection, trust scores, knowledge store, heartbeat protocol.
All of it exists so a tiller can receive a task, know exactly what to do, and report back cleanly.
The first tiller just ran.
Dispatch accepted. Tiller executed. Callback returned. Heartbeat confirmed.
The loop closed. 🧵
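The closed loop — dispatch accepted, tiller executed, callback returned, heartbeat confirmed — in miniature. Every name here is illustrative, not Tillage's API:

```python
# One pass through the loop, recorded as an event log.
def run_loop(task: dict, tiller) -> list:
    events = []
    events.append(("dispatch", task["id"]))   # dispatch accepted
    result = tiller(task)                     # tiller executed
    events.append(("callback", result))       # callback returned
    events.append(("heartbeat", "alive"))     # heartbeat confirmed
    return events

def echo_tiller(task: dict) -> dict:
    # Stand-in tiller returning the callback shape from the earlier post.
    return {"accepted": True, "result": "success"}
```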
Hi James, the rework rate is dropping. As the pipeline progresses, the skill sets become more defined and smarter, so each change lands closer to the required output, faster.
I guess one interesting thing is that I don't feel like I am battling with the AI as much.
The pipeline gets better every run.
Every tiller reflects after completing a task. Rework rate is tracked. Context injection improves over time.
Day 1 it's cautious. Month 3 it's earning trust.
Still building. Building it with itself.
#JustTillIt #buildinpublic
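"Rework rate is tracked" could be as simple as a rolling window over recent outcomes — a sketch, with the window size an arbitrary choice:

```python
from collections import deque

class ReworkTracker:
    # Rolling rework rate over the last N task outcomes.
    def __init__(self, window: int = 20):
        self.outcomes = deque(maxlen=window)  # True = needed rework

    def record(self, needed_rework: bool) -> None:
        self.outcomes.append(needed_rework)

    def rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return sum(self.outcomes) / len(self.outcomes)
```

A bounded window is what makes "month 3 it's earning trust" possible: old failures age out instead of haunting the score forever.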
That's Tillage.
Not a coding assistant. A governed pipeline — from raw idea to merged code — where every stage produces a clean artifact and every tiller gets exactly the context it needs. Nothing more.
So I stopped trying to write better prompts.
I started building a pipeline that manages context end-to-end.
Structured conversations that produce clean specs. Selective context injection at execution time. Tillers that only see what they need.
I watched it happen over and over.
One session: brilliant. Next session: amnesia. Different library. Different approach. No memory of the decision we made last time.
The prompt didn't change. The context did.
The real issue isn't the prompt.
It's what the AI knows — and doesn't know — when it acts.
Give it a vague task with the wrong context and a great model still builds the wrong thing. Confidently.