Posts by Shawn Simister
AI is great at turning specs into code, but that makes writing good specs the bottleneck. Being able to quickly brainstorm and refine specs like this has been really helpful. I can’t imagine going back to re-typing everything manually
Revising a document with an agent should feel like collaborating with someone sitting next to you reading the same page. You highlight what needs to change, the agent suggests revisions, and you decide. That's what I'm building in Atelier
Adding larger models capable of even longer reasoning traces is only going to exacerbate this challenge of having to know how hard a problem is before you try to solve it bsky.app/profile/timk...
Five panelists seated on stage at the Resonant Computing NYC event. A projected slide with hand-drawn concentric circles is visible behind them.
Last night the Resonant Computing Lab launched in NYC. A fund for building software that leaves people nourished, not drained.
1,200+ people have signed the manifesto. Now there's funding to turn principles into practice.
resonantcomputinglab.com
The gap between Hack and NetHack taught me something about the future of CS. Worth thinking about.
Does Computer Science Still Exist?
davidbau.com/archives/20...
drop + merge mini apps on canvas. v0
I want my workflow to feel more like this. One big reconfigurable space for deep work
If you only have one screen a toggle is OK, but if you have a room full of screens you probably want to be able to turn them all off instead of toggling them all to different states
When the agent finishes a task I launch atelier-verify to automatically verify the acceptance criteria and move it to the done column if they're all met
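A minimal sketch of how that hand-off could be wired up using a Claude Code `Stop` hook in `.claude/settings.json` — the `atelier-verify` invocation is illustrative only, not Atelier's documented CLI:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "atelier-verify"
          }
        ]
      }
    ]
  }
}
```

The hook would fire each time the agent finishes responding, letting the verifier check acceptance criteria without a manual step.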
The deepen-plan skill from Compound Engineering is another nice pattern to add extra effort when you need it instead of having to define it up-front
github.com/EveryInc/com...
Yeah I did the same thing. Seems like they forgot to add the slash command?
Same with model pickers. It's great to have control over cost and speed but why is it something I need to control manually? If I have some sort of mental model for "these kinds of tasks work best with this model" then why can't I just tell Claude that instead?
Not a fan of this new effort selector in Claude Code. It's cool that the models can adapt to different tasks but it's annoying that you need to think about which setting to use before you send each prompt. Why can't I just define which tasks are high effort in CLAUDE.md?
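The kind of thing this post is wishing for might look like a section in CLAUDE.md — to be clear, this is a hypothetical sketch of the wished-for feature, not a setting Claude Code actually supports:

```markdown
## Effort policy (hypothetical)

- Refactors touching more than one module: high effort
- Anything changing a public API: high effort, ask before starting
- One-line bug fixes and doc tweaks: low effort
```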
Plans update to show how many tasks have been completed. Tasks show how many acceptance criteria are met.
I added realtime notifications so that when I'm running multiple agents in parallel I can scan the board and see which ones need my attention. Clicking on the yellow text automatically switches to the terminal for that agent
There is a difference between how we approach work where we’re given known constraints, known capabilities, vs work where we get to explore the possibilities of the new tech & define the constraints & capabilities. The latter is science led, and the process is Applied Research.
Any markdown file on the kanban will show action buttons in the editor so you can launch the agent straight from the editor. And the knowledge graph connections let you easily jump between related task files
The biggest change in Atelier.dev v0.2 is that spec files now open in a custom Markdown editor. So now every task gets its own "plan mode"
I just wrote up my thoughts on how I'm delegating tasks to multiple agents. Last year I was only using one at a time, this year I'm usually planning a couple tasks while others are being implemented
bsky.app/profile/narp...
I've been thinking about why verifying AI agent output feels so much harder than writing the spec that produced it. That question led me to rethink where my attention actually belongs in the process, and eventually to build atelier.dev
narphorium.com/blog/decisio...
an iPhone which transforms into a Kindle Oasis with Apple Pencil support would be 🔥
I wish they made this with an oled display on the outside and a foldable e-ink display on the inside
with all the deskilling and gell-manning (etc) we're gonna get from automated thinking machines, now is the time for the 'tools for thought' crowd to make tools that help you think things through for yourself.
tools to do *more* hard thinking.
it should feel effortful. cherish that feeling.
Also, when it says: "I would recommend the first approach so you don’t have to refactor as much code." 😂 I have some bad news for you buddy...
You’d be surprised at how human-like they drive now. But yeah they definitely act weird when they’re OOD
Congratulations Tom! Claude Code is a fantastic product
Values-as-Spec is the process of specifying what values the product is committed to protecting; how the product involves the user in its processes; and what the product explicitly will not automate or replace. What it does, won’t do, and can’t do.
@seanx.bsky.social
dearhermes.com/read/kfniw9y...