Posts by Jonathan Norris

Had a great time at #KubeCon last week with the @openfeature.dev folks. We had a bunch of great in-person discussions on the future of the project. Outlined the outcomes from the week here: openfeature.dev/blog/kubecon...
Made it to Atlanta for #kubecon and the OpenFeature Summit. Excited to hang out with the great OpenFeature community and learn some new things.
Giving a talk today about how AI is breaking release processes and how feature flags (and maybe LEGO) can help manage the chaos.
Baseball is cruel
DON'T NEED KNEE CARTILAGE TO HIT DINGERS!!!!!!
Get Hyped the @BlueJays play in the World Series today!!! youtu.be/ali2Ssvh7eg?...
Gausman the win, Bassitt the hold, Hoffman strikes out the side for the save.
And Springer with one of the biggest homers imaginable.
Unreal.
I can’t believe it! WORLD SERIES!
Man, that was a stressful Jays game; not sure I can handle this the whole way. This Jays team has a special vibe to it.
Hot take: `npx -y some-local-mcp` is the new `curl | sh`. We’re teaching devs to auto-install + execute random npm packages with local file system access. Remote MCP servers over HTTP/SSE are just structurally safer.
Sure, if you are trying to one-shot a feature, overall intelligence is everything. But I think most developers working in larger codebases pair-program with their LLM, and iteration speed is the most important factor for productivity.
Also, I still need to experiment more with `grok-4-fast-reasoning` in Cursor, but it's so crazy fast. I'm coming to believe that very fast + medium intelligence is way more important than slow + smart intelligence for working with these models as a day-to-day pair programmer.
Okay, I'm pretty impressed with `gpt-5-codex` in Cursor so far. It seems to mostly have the intelligence of `gpt-5` while being way faster to iterate with. I found `gpt-5` was hurting my productivity with how slow it was; the iteration time just got too long.
Fellow Transit nerds out there, this is a good watch: www.youtube.com/watch?v=XlHq...
Full write-up is here → blog.devcycle.com/devcycle-mcp...
5/ The real magic of MCP is when it deeply integrates into your actual coding flow. You can now ask your AI something like ‘wrap this in a feature flag’: it writes the code, auto-creates or fetches the flag, and self-targets you into the flag for testing.
4/ Cloudflare Workers AI + Durable Objects made auth easy and handled all of the OAuth state for us. Remote MCP servers seem like the much easier install path; getting everyone to "npx -y" install some random script with full computer access is a security nightmare in waiting.
3/ Giving the agent too many tools is a problem. Exposing every single API was easy, but all it did was overload the context. Merging related calls into fewer, more powerful tools gave the agent just enough without eating all the AI’s context window.
2/ Descriptive errors make up for a lot of other issues. At first, our errors were vague and unhelpful, which made the agent hallucinate. Once we made errors specific, the AI agent + MCP could actually recover. This took the MCP from “meh, it works” to “oh wow, this matters.”
1/ Good input schemas are critical. The input schemas (and descriptions) are your AI agent's primary context when deciding which tool to call. In certain places, .describe() statements on your schema parameters will help push the AI agent in the right direction.
Building @devcyclehq.bsky.social MCP server started off as one of my favourite hackathon projects we've ever done. But it took a lot of iteration to go from “it works” hackathon code to production-ready code. Here’s what we learned along the way (thread):
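The schema advice in 1/ above can be sketched in plain TypeScript. This uses raw JSON Schema `description` fields (what Zod's `.describe()` compiles down to); the tool name, parameters, and wording here are hypothetical illustrations, not DevCycle's actual MCP API.

```typescript
// Minimal shape for the subset of JSON Schema used below.
interface JsonSchema {
  type: string;
  description?: string;
  enum?: string[];
  properties?: Record<string, JsonSchema>;
  required?: string[];
}

// Hypothetical MCP tool definition. The descriptions are the point:
// they are the agent's primary context when it decides which tool to
// call and how to fill in the arguments.
const createFlagTool = {
  name: "create_feature_flag",
  description:
    "Create a feature flag in the current project. Use when the user wants to gate new code behind a flag.",
  inputSchema: {
    type: "object",
    properties: {
      key: {
        type: "string",
        description: "Unique kebab-case flag key, e.g. 'new-checkout-flow'.",
      },
      flagType: {
        type: "string",
        enum: ["release", "experiment", "ops"],
        description:
          "How the flag will be used; pick 'release' when gating a rollout.",
      },
    },
    required: ["key"],
  } satisfies JsonSchema,
};
```

A bare `key: string` parameter forces the agent to guess at format; the described version nudges it toward a valid kebab-case key on the first try.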
Not impressed with GPT-5 so far, totally whiffed on helping me fix a bug with our MCP server schemas and just created a mess. Claude Opus figured out the issue in one prompt (it still made a bit of a mess creating tests, but it was easy to get working). Back to Claude for now...
Time for an upgrade; the M4 runs GPT-5 faster, right?
There is growing hope that less resource-intensive models will soon match the performance of state-of-the-art models. But users now expect to use the leading models all day for next to nothing; breaking that habit will be hard...
Gotta love the land-grab era of cheap LLM token usage in these AI coding tools. I still remember the era when you could get an Uber across SF for cheaper than MUNI, or get a $0.25 fruit rollup delivered for free to your door...
The signup flow needs some work: I literally didn’t know there were paid plans because they’re hidden behind a horizontal scroll. This should be a vertical scroll, or something more compact that makes it obvious there are multiple plans.
Couldn't agree more with this post. Coding agents finally made using LLMs feel like a step change in productivity. When the LLM can use tools to run tests and linting, interact with git, and search the web, it doesn't have to be 100% perfect as long as it can check its work.
fly.io/blog/youre-a...
Random thought: I wonder if the OpenAI device is AirPods where the case can listen to conversations from your pocket (Sam hinted at this). It’s already the “third device” most people carry around.
When you are cleaning up and you find your old Sega Game Gear. 🙀
There is a lot of hard work ahead for Canada, but I'm excited for @mark-carney.bsky.social and parliament to get to work. Most proud of our healthy democracy and electoral process.