Posts by Ahrav

K8s is cool and it has its place. But sometimes a plain ASG with a handful of big boxes is enough.
Feels like a lot of teams go distributed way too early, lighting performance on fire for architecture points and the privilege of managing more instances, more moving parts, and more overhead.
Then you look at the actual utilization and every instance is sitting at 2% CPU, 5% memory, and 5% bandwidth. So sad.
ECS Managed Instances make me very very sad.
TIL: S3 Files needs sunrpc tuning for decent throughput, especially on a single machine. Without it, you won’t come close to the throughput most boxes can deliver.
But S3 Files doesn’t support the ECS EC2 launch type. That leaves ECS Managed Instances, where you can’t tune kernel params. So the one fix it needs is blocked by the only setup it supports.
So either there’s a trick I’m missing, or the only supported setup blocks the tuning S3 Files needs to perform well.
In either case, it’s rather frustrating :/
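A minimal sketch of what that tuning usually looks like, assuming it’s the classic NFS-client slot-table bump (the specific knobs, values, and file name here are my guess, not anything from the S3 Files docs):

    # Hypothetical /etc/sysctl.d/99-sunrpc.conf
    # Allow more concurrent in-flight RPC requests per transport, so a
    # single box can keep the pipe full. The sunrpc module must already
    # be loaded; apply with `sysctl --system` before mounting.
    sunrpc.tcp_slot_table_entries = 128
    sunrpc.tcp_max_slot_table_entries = 128

On a self-managed EC2 host that’s one file; on ECS Managed Instances there’s no supported way to set host-level sysctls like these, which is the whole complaint.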
@zed.dev this is a lot to take in all at once. 😅 so many changes!
I’ve tried a bunch of code review agents over the past few months, mostly on Rust.
Overall (features, UX, extras):
@coderabbitai.bsky.social > @kilocode.ai > @greptile.bsky.social > Codex > Gemini Code Assist > Claude Code > Cursor Bugbot
There’s a clear gap between #1 and #2 here.
Code-only in thread
Code-only (just finding issues in code, ignoring extras):
KiloCode > Coderabbit > Codex > Greptile > Gemini Code Assist > Claude Code > Cursor Bugbot
Here the gap between #1 and #2 is much smaller. They feel closer in raw code review quality.
Coderabbit feels more focused on the core review experience and has found a nice balance.
KiloCode is still very strong, but it’s also doing more across triage, security, and other workflows. That wider scope might be why the difference shows up more in the overall ranking than in code-only.
Surprisingly little lifetime or borrow checker noise. Most feedback is about real correctness or edge cases. Even when it’s wrong, I can usually see the reasoning. 6–8 months ago it felt random and frustrating. Big improvement imo.
Example: github.com/ahrav/Gossip...
This is all based on Rust-heavy use. Bugbot hasn’t worked well for me so far, but I haven’t figured out why yet.
[Screenshot of an AI message saying “oops that wasn’t a dry-run build”]
What… no, it’s not fine! 🤦🏼‍♂️ Dry-run exists for a reason. Welp, I guess we’ll see what breaks in 20 minutes…
Umm… what? I’m not sure what this means.
TIL there is a thing called causal profiling 🤯 Amaze-wow!
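For anyone else discovering it: causal profiling (Coz-style) estimates how much speeding up one line would improve end-to-end throughput by virtually slowing everything else down. A hedged Rust sketch with the coz crate — the "items" label and the work function are just my illustration, assuming coz = "1" in Cargo.toml and a release build run under the coz CLI:

    // Sketch of a Coz throughput progress point (assumed dep: coz = "1").
    // Build with --release and run under the profiler, e.g.
    //   coz run --- ./target/release/demo
    fn work(i: u64) -> u64 {
        // Stand-in for real per-item work.
        (0..200).fold(i, |acc, j| acc.wrapping_mul(31).wrapping_add(j))
    }

    fn main() {
        let mut acc = 0u64;
        for i in 0..5_000_000u64 {
            acc = acc.wrapping_add(work(i));
            // Coz reports how much virtually speeding up any other line
            // would raise the rate at which this progress point fires.
            coz::progress!("items");
        }
        println!("{acc}"); // keep the result observable so `work` isn't optimized away
    }

Coz then writes out a profile you inspect in its viewer to see which lines actually move throughput.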
Oh this is fun. My LA brain is not fully comprehending what this means.
Hey! I’m in that blog.
I’d rather write yaml than deal with CDK nonsense. 😖
Thanks @coderabbitai.bsky.social. Just the right touch of silliness to bring a smile to my face :)
"🐰 I hopped through slabs and frames today,
Tokens tucked away, gone astray —
Split-points whisper, quiet and neat,
No more pages, just ranges to meet.
The rabbit cheered: simpler beats complete! 🥕"
Codex is incredible at writing code, but soo bad at almost everything else.
It writes some code.
Me: “Err… I don’t get it. Can you explain?”
Codex: “Sure.”
Then it proceeds to explain the code… with more code.
Me: “No worries, Codex. I’ll ask Claude instead. Thanks for the hieroglyphics tho” 😂
Go slow, to go fast. I feel like I have to repeat this to myself multiple times a day.
Solid Friday afternoon!
“I’m honestly not sure how plan mode handles this. The concern is valid — if the planning phase and execution phase don’t share the same context, the executor might not know to update beads status.” - Claude (Opus 4.6)
“I’m honestly not sure…”. I’ve never been this happy to get a rejection. 😍
[Tiny dumpster card with words of motivation]
Thanks Tiny dumpster ❤️