Posts by sajal.sh

I added a doorbell to Claude Code. Two lines of config and now my Mac plays a chime whenever Claude finishes thinking or a background task completes.

The setup takes about 30 seconds. You add a hooks section to your Claude Code settings.json with a Stop event (fires when Claude finishes a response) and a Notification event (fires when background work completes). Both just run `afplay` on a built-in macOS sound file.

The reason this is useful: Claude Code runs can take anywhere from 5 seconds to 5 minutes. Without the chime, I'm either staring at the terminal or I context-switch and forget to check back.

On Linux you'd swap in `paplay` or `aplay`. On Windows, PowerShell's `[System.Media.SystemSounds]` works.

With it, I can switch to Slack, read docs, review a PR, and the Glass sound pulls me back exactly when there's something to look at.

Hooks run deterministic code on lifecycle events. They can't be forgotten or hallucinated away like prompt instructions. If you want something to happen every single time, hooks are the way.

#ClaudeCode #DeveloperTools #AIEngineering #Productivity #DeveloperExperience
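Concretely, those two hook entries might look something like this in `~/.claude/settings.json` (check the current Claude Code hooks docs for the exact schema; the path points at the stock macOS Glass chime):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "afplay /System/Library/Sounds/Glass.aiff" }
        ]
      }
    ],
    "Notification": [
      {
        "hooks": [
          { "type": "command", "command": "afplay /System/Library/Sounds/Glass.aiff" }
        ]
      }
    ]
  }
}
```

On Linux, swap the command for `paplay` (or `aplay`) and a sound file from your theme; on Windows, a PowerShell one-liner around `[System.Media.SystemSounds]` does the same job.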
Karpathy recently posted about using LLMs to build personal knowledge bases, and it resonated because I've been running something like this for a few months now. My setup is an Obsidian vault managed almost entirely through Claude Code.

The vault is PARA-organized with daily notes, weekly reviews, project notes, evergreen notes, and half a dozen tracker systems (books, games, LeetCode problems, fitness). Claude Code knows the conventions, the templates, the folder structure.

When I want to log a workout or add a book or solve a leetcode problem, I describe it conversationally and Claude writes it in the right place with the right frontmatter.

What makes it work beyond basic prompt-to-file writes is the integration layer. Claude Code connects to Hevy (workout tracker) via MCP to pull training data, and Things (task manager) for todos, syncing them back into Obsidian. It uses a CLI integration for Gmail and Google Calendar.

It has persistent memory across sessions. And it follows a CLAUDE.md in the vault root with vault-specific rules, which means I only had to teach it my conventions once.

I could probably consolidate everything into Obsidian itself, but I keep Hevy and Things around because I genuinely like their UIs, and my brain works differently when I'm browsing data versus writing notes.

The same vault is also shared with my OpenClaw agent. I work on my MacBook, OpenClaw runs on a Mac Mini, and iCloud syncs the vault between them. Both agents read from and write to the same knowledge base.

The agents handle the sync so I don't have to choose between a good UI and a unified knowledge base.

It's a low-friction way to give an agent persistent, structured context without standing up a separate database.

The part Karpathy's framing captures well: LLMs are genuinely good at manipulating structured text. Obsidian markdown with frontmatter properties is basically a flat-file database, and Claude is a surprisingly capable query/update layer on top of it.

The shared vault pattern extends that: the knowledge base becomes a coordination layer between agents and the human.

#Obsidian #PKM #ClaudeCode #AIEngineering #OpenClaw
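To make the frontmatter idea above concrete, a tracker entry might look like this (the fields here are hypothetical, not necessarily this vault's actual template):

```markdown
---
type: book
title: The Pragmatic Programmer
status: reading
started: 2026-01-12
rating:
---

# The Pragmatic Programmer

Reading notes go here; the frontmatter block above is what makes
the file behave like a database record.
```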
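Because frontmatter properties are mostly flat key-value pairs, the "query layer" over a vault can be tiny. A minimal sketch of the idea (a toy parser for flat frontmatter only; a real setup would lean on Claude itself, Dataview, or a proper YAML library):

```python
from pathlib import Path

def read_frontmatter(path):
    """Parse the frontmatter block at the top of a note.

    Handles only flat `key: value` pairs, which is enough for simple
    tracker properties like `type` or `status`.
    """
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no frontmatter block
    props = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        if ":" in line:
            key, _, value = line.partition(":")
            props[key.strip()] = value.strip()
    return props

def query(vault_dir, **filters):
    """Return paths of notes whose frontmatter matches every filter."""
    hits = [
        note
        for note in Path(vault_dir).rglob("*.md")
        if all(read_frontmatter(note).get(k) == v for k, v in filters.items())
    ]
    return sorted(hits)
```

With this, `query(vault, type="book", status="reading")` treats the vault like the flat-file database described above.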
I run a personal AI agent on OpenClaw. When I first set it up, it was running on a Claude subscription. Then Anthropic changed their policy, so I moved to the API. That worked fine, but the bill was running around $100/month for a personal assistant.

So I switched to an OpenAI subscription. $20/month, running the 5.3-codex model, and the quality has been fairly good so far. For my use case (personal assistant, not production workloads), the subscription tier seems to cover what I need without hitting limits.

I'll keep monitoring over the next few weeks. Model quality, latency, and whether the subscription ceiling becomes a problem as usage patterns shift.

If you're running a personal agent and paying API rates, it's worth checking whether a subscription tier covers your actual usage. You might be surprised.
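The check is simple arithmetic: estimate your agent's daily traffic, multiply by per-token prices, and compare with the flat fee. A back-of-the-envelope sketch (all volumes and prices below are made-up placeholders, not any provider's actual rates):

```python
def monthly_api_cost(requests_per_day, in_tokens, out_tokens,
                     in_price_per_mtok, out_price_per_mtok, days=30):
    """Estimated pay-per-token spend for a month of agent traffic.

    Prices are dollars per million tokens, as most APIs quote them.
    """
    per_request = (in_tokens * in_price_per_mtok
                   + out_tokens * out_price_per_mtok) / 1_000_000
    return requests_per_day * per_request * days

# Placeholder numbers: 200 requests/day, 3k input + 500 output tokens each.
api = monthly_api_cost(200, 3_000, 500, 3.00, 15.00)
subscription = 20.00
print(f"API: ${api:.2f}/mo vs subscription: ${subscription:.2f}/mo")
```

Plug in your own logs' averages; the answer flips quickly as request volume or context size grows.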
I keep coming back to this mental model for AI agents: a widening river.

We started with chat UIs around 2022. ChatGPT, Claude, Gemini. You type something, it responds. That was the whole interaction surface.

Then around 2025, these 'chatbots' started to operate inside a sandboxed environment. Claude Code, Codex CLI, Gemini CLI could read your files, run shell commands, chain together multi-step workflows. The conversation could finally reach outside itself.

Now in 2026, we're seeing integrated agents like OpenClaw and NanoClaw that work across multiple channels, maintain long-term memory, and run proactively on heartbeat loops even when nobody's prompting them.

What makes the river metaphor click for me is that each phase absorbs the previous ones. Phase 2 agents still chat. Phase 3 agents still use tools. The river gets wider; it doesn't change course.

If you're building agents right now, this framing helps. You're stacking capabilities, and the interesting design questions are really about how the layers interact with each other.

#agents #aiengineering #openclaw
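Of the capabilities above, the heartbeat loop is the newest, and it is mechanically simple: wake on a fixed interval and check for work even when nobody has sent a message. A minimal sketch (hypothetical helper names, not OpenClaw's actual implementation):

```python
import time

def run_heartbeat(check_for_work, act, interval_s=60.0, max_ticks=None):
    """Wake on a fixed interval; act only when a check finds something.

    check_for_work() returns a task (an overdue reminder, new mail, a
    calendar conflict) or None; act(task) is whatever the agent does
    about it. max_ticks bounds the loop for testing.
    """
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        task = check_for_work()
        if task is not None:
            act(task)
        ticks += 1
        time.sleep(interval_s)
    return ticks
```

The interesting design choices all live inside `check_for_work`: how much context the agent loads per tick, and how it avoids nagging, e.g. by remembering what it has already surfaced.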