Project Hail Mary was really great, definitely recommend it!
Posts by Carter Bryden 🇨🇦
I have a theory that a lot of the problems we see with agents are the same problems we’ve always had working with and managing actual devs, too.
When we compare agents with people, we tend to imagine some ideal dev in comparison. But even the best devs I’ve known also occasionally do things like:
- Go a little too far chasing an idea or implementation
- Get ahead of themselves and skip steps
- Misunderstand context/needs/the state of things
- Make simple mistakes or a bad judgement call
- Be a little too opinionated, or not opinionated enough, at the wrong time
When I see people adding a bunch of process, guard rails, workflows, etc. around agents, it sounds an awful lot like what you do within a human org to keep things from falling apart as it all scales up. With a very similar list of pros and cons, too.
I’d never expect a freshly hired dev to push a perfect PR with a single prompt and nothing else. I’d expect bad results, or something in the wrong direction, if there wasn’t more involvement (pairing/cowriting/reviewing). I wouldn’t expect that just writing more context would reliably fix it either.
Approximated.app runs thousands of servers, and this is true.
Similarly, each 9 added to uptime is exponentially harder.
Downtime allowed by uptime percentage:
90%: 36.5 days/yr
99%: 3.65 days/yr
99.9%: 8.76 hrs/yr
99.99%: 52.6 min/yr
99.999%: 5.26 min/yr
99.9999%: 31.5 sec/yr (!!)
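The table above is just linear arithmetic on the downtime fraction; here's a quick sketch of it in Python, assuming a 365-day year (the same assumption the figures above use):

```python
# Allowed downtime per year for a given uptime percentage.
# Assumes a 365-day year (31,536,000 seconds).

SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_per_year(uptime_pct: float) -> float:
    """Seconds of allowed downtime per year at the given uptime %."""
    return (1 - uptime_pct / 100) * SECONDS_PER_YEAR

for nines in [90, 99, 99.9, 99.99, 99.999, 99.9999]:
    secs = downtime_per_year(nines)
    if secs >= 86400:
        print(f"{nines}%: {secs / 86400:.2f} days/yr")
    elif secs >= 3600:
        print(f"{nines}%: {secs / 3600:.2f} hrs/yr")
    elif secs >= 60:
        print(f"{nines}%: {secs / 60:.2f} min/yr")
    else:
        print(f"{nines}%: {secs:.1f} sec/yr")
```

Each extra 9 divides the budget by ten, which is why the jump from "a weekend of maintenance windows" to "barely time to reboot" happens so fast.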
Theories:
- More and more of those people have kids now, so no energy or time
- Blogging/posting feels less valuable since AI (key word: feels)
- Feels tone deaf squeezing tech posts in between major news events that happen every 45 minutes
- Everybody is more stressed, which kills spontaneity
Picked up a Mac Studio recently because my daily driver Linux desktop hardware was showing its age at about 10 years old (some upgrades along the way).
This is the first Mac I’ve personally owned. I had a lot of criticisms in the pre-M chip era; it just felt to me like you paid twice as much for branding and kind of weak hardware. But it’s hard to deny how much of a jump the M chips are now, and competitor prices have crept up.
Most impressive to me is that it seems to generate no heat or noise, EVER, despite maxing out every core pretty often with elixir concurrency work.
Local LLMs, a zillion concurrent processes with elixir, doesn’t seem to matter what it is. It’s just cool as a cucumber at all times. Coming from non-M chip hardware, that’s absolutely nuts to me. My old rig would literally be on fire if I did half of this.
There’s still a bit to go to make this happen. Rebar is an important piece of using erlang, not just for erlang itself but just as much for elixir, gleam, … Consider backing this effort.
Razing awareness is the new zeitgeist
When liveview first came out, I thought it was really brilliant that it can be nested. Even when live components were released, I preferred to just nest liveviews for a while.
Today I use components a lot, and like them, but I still think nesting LVs has more use cases than people realize.
. @dockyard.com is seeking @elixir-lang.org contract devs, senior-level experience with distribution and scaling necessary. dockyard.com/careers
#ElixirLang
I think part of the reason why is:
If no one’s name is on it, then no one is responsible and so how could any failure be anyone’s fault?
Tempting for corps, infuriating for customers. It leads to employees systemically caring less about quality and the customer experience, and to a general sense of disconnection.
You can fit so many spinners in this bad boy 🚗
Wearing a white shirt when you have young kids is the height of hubris. I’ve flown too close to the sun once again.
Halfway through this podcast episode with @bcardarella.bsky.social about elixir adoption, AI, etc. and it’s worth a listen.
pca.st/episode/3f3a...
A lesson learned by all who have tried to make Reese’s Pieces by just mixing chocolate and peanut butter
Bold of you to assume I have anything to do on any night
Thanks, and exactly!
I have the dumbest trick ever when I can't figure out how to implement something, but it works. I ask "If a person had to manually do this, how would they do it?"
That's it. So many things feel complicated when I think only in coding terms, but dead simple if some guy named Steve had to do it.
A nice side effect is that it tends to result in features that people are more likely to intuitively understand. And in the code base, something other developers can understand more easily.
That's a good question! I just assumed they are because that would make the most sense, but I haven't actually used this method to grab events themselves before so I'm not sure.
For your use case where you want the events themselves, I'd call the events API and grab those instead, triggered by receiving a webhook.
A frustrating but (I've found) more reliable option is to trigger an API call to get the latest data when webhooks come in. Depending on frequency and the situation you can debounce/deduplicate so you're not slamming API calls more than some rate, too.
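The "webhook triggers a fresh API pull, with a debounce" idea above can be sketched in a few lines. This is a minimal illustration, not any particular provider's API: `fetch_latest` is a hypothetical stand-in for your API call, and the 30-second window is an arbitrary choice.

```python
# Treat webhooks as "something changed" signals only: re-fetch current state
# from the API, and skip re-fetching if we synced this resource very recently,
# so a burst of webhooks doesn't slam the API.
import time

DEBOUNCE_SECONDS = 30
_last_sync: dict[str, float] = {}  # resource id -> timestamp of last sync


def fetch_latest(resource_id: str) -> dict:
    # Hypothetical placeholder: call your provider's API for current state here.
    return {"id": resource_id, "synced_at": time.time()}


def on_webhook(resource_id: str):
    """Ignore the webhook payload; re-fetch state, at most once per window."""
    now = time.time()
    if now - _last_sync.get(resource_id, 0) < DEBOUNCE_SECONDS:
        return None  # synced moments ago; skip this burst
    _last_sync[resource_id] = now
    return fetch_latest(resource_id)
```

Note this skips on the leading edge, so the last webhook in a burst is dropped; a production version would also schedule a trailing re-fetch after the window closes so no final state change is missed.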
That's probably true, but no, I don't think I'd feel right doing that. At the end of the day it's a US corp, with all of the unfortunate things that entails, and I wouldn't want to mislead people about that even by accident.
Being Canadian while having a US corporation has been a uniquely tense experience lately, I'll say that.
This is in addition to the inbound load balancing we already have, which balances based on region, connection count, etc.
That inbound handles which part of your Approximated cluster a request should go through, and the upstream load balancing decides which upstream to send it to.
This is extra handy if you want to have your app running on multiple hosting/infra providers and load balance between them for redundancy, costs, etc.
Or if you just don't want to manage a global load balancer on your own infrastructure (or your provider makes it hard to do).
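As a toy sketch of the second tier described above (the upstream selection step, after the inbound balancer has already picked a cluster node): a weighted choice across upstreams on different providers. All names, URLs, and the weighted-random policy here are illustrative assumptions, not Approximated's actual implementation.

```python
# Weighted random selection of an upstream across hosting providers.
# Weights are illustrative: send ~3/4 of traffic to provider A,
# ~1/4 to provider B for redundancy/cost balancing.
import random

UPSTREAMS = [
    {"url": "https://app.provider-a.example", "weight": 3},
    {"url": "https://app.provider-b.example", "weight": 1},
]


def pick_upstream(upstreams: list) -> str:
    """Pick an upstream URL with probability proportional to its weight."""
    total = sum(u["weight"] for u in upstreams)
    r = random.uniform(0, total)
    for u in upstreams:
        r -= u["weight"]
        if r <= 0:
            return u["url"]
    return upstreams[-1]["url"]  # guard against float rounding
```

A real balancer would layer health checks and connection counts on top of this, but the core decision, "which provider gets this request," is this small.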