Posts by Trigger.dev
3. Plus 10 Claude Code tips most people miss.
--fork-session for context pre-warming. --from-pr to resume the agent that wrote your PR. ! for inline shell output. Ctrl+G to compose prompts in your editor.
trigger.dev/blog/10-cla...
2. Deep dive on how TRQL works: the SQL-like language behind Query.
Users write familiar SQL, but the ANTLR grammar physically can't express INSERT or DELETE. Every query has its tenant filter injected at compile time.
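A sketch of what that compile-time injection looks like in practice. The column names and exact syntax here are illustrative, not the real TRQL grammar:

```sql
-- What the user writes (no tenant column in sight):
SELECT task, quantile(0.95)(duration_ms) AS p95
FROM runs
WHERE created_at > now() - INTERVAL 7 DAY
GROUP BY task;

-- What the compiler emits before it reaches ClickHouse
-- (tenant filter injected; INSERT/DELETE simply don't exist
-- in the grammar, so there's nothing to escape to):
SELECT task, quantile(0.95)(duration_ms) AS p95
FROM runs
WHERE project_id = '<current-project>'
  AND created_at > now() - INTERVAL 7 DAY
GROUP BY task;
```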
trigger.dev/blog/how-tr...
New on the blog:
1. We replaced Node.js with Bun in one of our most latency-sensitive services.
2,099 → 10,700 req/s. 5x throughput. Also found a Bun memory leak along the way (which was then swiftly patched by the Bun team).
trigger.dev/blog/firebun
→ v4.4.2 – v4.4.4:
• batch trigger concurrency bumps
• syncEnvVars pulls your Supabase connection strings in so you stop copy-pasting
• Test page generates example payloads with AI
• Dev CLI auto-cancels in-flight runs on exit
• 11 new MCP tools
→ Dashboards
Every project ships with a pre-built one: run volume, success rates, failures, costs, versions. Build custom ones on top with big numbers, charts, and tables. Filters apply across every widget at once.
→ Query.
Ask "what are my p95 durations for the chat task?" and the AI writes the TRQL, runs it on ClickHouse, renders the chart. Or write the SQL yourself.
→ Our @vercel integration is live.
Push code, tasks deploy automatically. Env vars sync both ways. Atomic deployments gate Vercel's promotion until your tasks are ready, so your app never goes live with a mismatched task version.
No trigger.dev deploy, no CI/CD workflow.
In case you missed it, here are some of the things we shipped in March:
→ Native @vercel integration
→ Query & Dashboards (ask in English, get SQL)
→ 5x throughput from a Bun rewrite
→ New MCP tools, TTL defaults, syncEnvVars
Full recap →
Hookdeck wrote the full walkthrough: both routing patterns, deployment scripts, the Hookdeck transform code, everything.
Fork it and have AI-powered GitHub automation running in ~15 minutes: hookdeck.com/webhooks/pl...
2 routing architectures:
1 Hookdeck connection → router task dispatches child tasks via tasks.trigger() with idempotencyKey: ${deliveryId}-${eventId}. Simple.
Or: 3 connections filtering on the X-GitHub-Event header → tasks directly. Per-event retry policies, independent pause/replay, separate delivery.
The idempotency pattern is the interesting part.
A Hookdeck transform extracts GitHub's X-GitHub-Delivery header into the payload. That becomes the idempotency key. Hookdeck retries the delivery? Same key. No duplicate task runs. No duplicate PR comments. Zero application code for dedup.
3 tasks, each in its own container:
review-pr: fetches the diff, calls Claude, posts a single review comment (HTML anchor pr-review-summary prevents duplicates on re-runs)
triage-issue: Claude classifies, parses JSON labels, applies via GitHub API
notify-push: Claude summarizes commits in Slack
GitHub → Hookdeck (HMAC verification, payload reshape, retries) → Trigger REST API → durable task container → Claude + GitHub API + Slack
No custom webhook server. No Express route parsing X-Hub-Signature-256. No retry logic.
GitHub sends a webhook. 60 seconds later, Claude has reviewed your PR diff, labeled your issue, and posted a deploy summary to Slack.
Built with Hookdeck + Trigger.dev + Claude →
We're heading to AI Engineer Europe in London!
April 8-10 | QEII Centre, London UK | Booth G6
Come and chat with us if you're building AI agents, workflow automation, or background jobs. We'll be doing live demos, sharing what we're building, and handing out swag. See you there!
The gap between "I use Claude Code" and "I orchestrate Claude Code" is wide and getting wider.
These aren't novelty tricks. They're the building blocks for treating AI as a managed fleet of specialised workers, not a chat partner.
Bookmark this. You'll come back to it.
tgr.dev/2TACXLp
10/ Headless CI/CD with hard budget caps
Putting an autonomous agent in a pipeline without boundaries is terrifying.
-p for non-interactive mode
--max-turns to prevent infinite loops
--max-budget-usd as a financial circuit breaker
9/ Dynamic multi-agent orchestration with --agents
You don't need to hardcode subagent definitions in markdown files.
Pass a JSON definition to --agents for session-scoped specialists.
8/ Context compaction with Esc+Esc
Long debugging sessions fill the context window with dead weight. Every failed attempt adds tokens that degrade performance.
Double-tap Esc, select a message midway through, choose "Summarise from here."
Everything before that point is preserved perfectly.
7/ Structured JSON output with --json-schema
When you're building automation pipelines, conversational output is useless.
Combine -p, --output-format json, and --json-schema to turn Claude into a strictly typed function.
You define the shape. The model is constrained to produce exactly that.
6/ Parallel agents with --worktree
This one is genuinely transformative.
Multiple Claude sessions against the same repo = race conditions, trampled file edits, chaos.
--worktree uses native git worktree to give each agent a completely isolated working directory.
5/ Effort levels for cost control
Not every task needs deep reasoning. Boilerplate generation doesn't deserve the same compute as debugging a race condition.
The /model command now has 4 effort tiers: Low, Medium, High, Max
For CI/CD, set it programmatically:
export CLAUDE_CODE_EFFORT_LEVEL=low
4/ Inline shell execution with !
Prefix any input with ! to bypass the LLM entirely and run it in your shell.
The key detail: stdout and stderr get automatically injected into the context window.
Run ! npm test, see the failure appear, type "fix it", and Claude already has the full error context.
3/ Escape the REPL with Ctrl+G
The single-line input is actively hostile to serious prompt engineering.
Ctrl+G opens your $EDITOR (Vim, VS Code, whatever). Suddenly you have macros, syntax highlighting, multiple cursors, and proper multi-line editing.
2/ Code review loops with --from-pr
You wrote the code three days ago. Reviewer left comments this morning. Now you need to mentally reconstruct everything before responding.
Instead:
claude --from-pr 447
The agent comes back with full awareness of its original implementation decisions.
1/ Session forking with --fork-session
Stop rebuilding context from scratch every time.
Create a "master session" loaded with 40k+ tokens of architecture docs and project goals. Then fork it for each new feature.
It's git branch, but for your LLM context window.
Most people use Claude Code like a fancy chatbot in their terminal.
Meanwhile, power users are running parallel agents across git worktrees, forking LLM sessions like git branches, and capping autonomous CI/CD runs with hard budget limits.
10 advanced patterns that change everything π§΅