There are (sort of) two kinds of coders: those who see it as just a well-paying, stable job, and those who do it on nights and weekends because they love it, and it’s part of their identity. *Both* are under attack from bosses trying to put them out of work... www.anildash.com/2026/03/13/c... ...
Posts by Bee Klimt
I also played around with Haskell’s Map and Set types. Their immutability works just like the internal map and set classes in the Firestore SDKs, which I appreciate.
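A minimal sketch (not from the original thread, with made-up example values) of the immutability being described: `Data.Map.insert` hands back a new map and leaves the original untouched.

```haskell
import qualified Data.Map as Map

-- insert returns a brand-new map; the original is never mutated
original :: Map.Map String Int
original = Map.fromList [("a", 1)]

updated :: Map.Map String Int
updated = Map.insert "b" 2 original

main :: IO ()
main = do
  print (Map.toList original)  -- still [("a",1)]
  print (Map.toList updated)   -- [("a",1),("b",2)]
```

Under the hood these are persistent tree structures, so `updated` shares most of its nodes with `original` rather than copying everything.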
Day 5 of doing Advent of Code 2024 in Haskell for no particular reason — I spent most of the day learning how to set up Docker devcontainers in VSCode, so that I could work on my Mac without installing everything. Worked great! Seems generally useful.
Alright, so, Day 4 was a grid search. I considered using vectors, but stuck with nested lists, and they ended up being fast enough.
I also played with Records. I don’t love the way field accessors get dumped into the module’s namespace as top-level functions!
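A minimal sketch (type and field names made up) of the namespace issue being described: each record field becomes a top-level accessor function, so two records in the same module can’t share a field name without an extension like `DuplicateRecordFields`.

```haskell
-- Declaring this record also declares a top-level function
-- `name :: User -> String` in the module.
data User = User { name :: String }

-- In the same module, this would be a compile error, because `name`
-- is already taken by the accessor above:
-- data Pet = Pet { name :: String }

main :: IO ()
main = putStrLn (name (User "someone"))  -- someone
```

This is why so much Haskell code uses prefixed field names like `userName` and `petName`.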
For some reason, I decided to teach myself Haskell by doing Advent of Code 2024.
My impressions of Haskell after 3 days:
1. The reliance on linked lists + recursion feels very Lisp
2. Seems like Rust’s pattern matching was inspired by Haskell
I wonder how it’ll hold up to graph traversal problems
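A minimal sketch (function name made up) of the list-recursion-plus-pattern-matching style the post describes; the Rust equivalent, a `match` over an enum or slice, reads almost identically.

```haskell
-- Pattern matching on the (head : tail) structure of a linked list,
-- with recursion instead of a loop -- the very Lisp-feeling style
-- described above.
sumList :: [Int] -> Int
sumList []       = 0
sumList (x : xs) = x + sumList xs

main :: IO ()
main = print (sumList [1, 2, 3, 4])  -- 10
```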
i’m not a big steak person, but the best steak i ever had was in an amtrak dining car, on the Coast Starlight line
I can’t speak for everyone, but my mind tends to treat writing an article, making a video, writing a song, cooking a meal, drawing an image, and, apparently, designing software the same way. It’s not a matter of just “generating” something perfect from my head, but exploring the tension that exists between what I’m imagining and the limitations of my stupid meat body. That’s actually the exciting part. It also lets me figure out if something has turned out wrong or just resulted in a happy accident. Vibe coding, like every new trend coming out of Silicon Valley, turns this process — the entire act of creativity, itself — into a slot machine. One more pull on the AI and maybe it will figure it out for you. You won’t understand how any of it works, of course, or feel particularly proud of what you’ve done, but maybe you’ll have something. Just a few more dollars for some more tokens. C’mon, just pay a bit more.
www.garbageday.email/p/am-i-too-s...
I just published my first Rust crate. It’s a library for serializing and deserializing protocol buffers. I don’t really expect other people to use it, but I use it in enough of my own projects that I wanted an easy way to share it.
crates.io/crates/broto...
comic strip with call-and-response text: “Who are we?” “CEOs!” “What do we want?” “AI!” “AI to do what?” “We don’t know!” “When do we want it?” “Right now!”
every company in 2025
3. AI adoption is easier said than done. Even as executives pressure workers to use AI, getting people to do that throughout an organization is easier said than done. Rukmini Reddy, an engineering executive at incident management software maker PagerDuty, does so by making AI usage a part of her employees’ annual performance reviews. This strategy seems to be working, as she said that 98% of her engineers use coding tools like Anthropic’s Claude Code or Microsoft’s GitHub Copilot on a day-to-day basis now.
It seems like the only way tech companies are able to compel AI usage is by coercion in performance review processes?
(via The Information "AI Agenda" newsletter)
"But when the team looked at the employees’ actual work output, they found that the developers had completed tasks 20% slower when using AI than when working without it. Researchers were stunned. “No one expected that outcome. We didn’t even really consider a slowdown as a possibility.”"
🎁link
Okay, for the folks who asked: here's the majority AI view, writing up the reasonable, thoughtful view on AI that the vast majority of people in tech hold, that gets overshadowed by the bluster and hype of the tycoons trying to shill their nonsense. anildash.com/2025/10/17/t... Please share!
“In a new report, management consultants Bain & Company found that despite being ‘one of the first areas to deploy generative AI,’ the ‘savings have been unremarkable’ in programming.”
“When it comes to AI adoption, many companies aren’t guided by strategy but by ‘Fomo’,” said Haritha Khandabattu, senior director analyst at consultancy Gartner.
www.ft.com/content/e93e...
Two trials will be repeated endlessly despite always producing the same results:
1) Does UBI work? (yes)
2) Does AI improve productivity? (no)
Some therapists are using AI during therapy sessions. They’re risking their clients’ trust and privacy in the process.
LinkedIn is the opposite of punk rock.
we learned this already! there was about a one-week period where everyone thought Google Glass was neat, and then the realization kicked in and people were like, wait a minute, this is a privacy nightmare, and the whole thing flopped
An M.I.T. study found that 95% of companies that had invested in A.I. tools were seeing zero return. It jibes with the emerging idea that generative A.I., “in its current incarnation, simply isn’t all it’s been cracked up to be,” johncassidysays.bsky.social writes.
to me, it’s not about the number of iterations, but the “fitness function”. in nature, “survival of the fittest” is who can reproduce. machine learning on problems where there are right and wrong answers works well. but these “AGI” LLMs are just trained to bullshit confidently
“watch this 3 minute movie i spent 6 months making”
“oh wow, even if it’s bad, i’ll understand you better”
“watch this 3 minute movie i spent 5 seconds making by writing a prompt”
“why are you wasting my time? just tell me the prompt”
when i watch movies or listen to music, i pay attention to the details. i ask myself, “why did the artist make that choice?”
with ai-generated art, there’s no intentionality to the details. it’s just filler
it’s a cold chicken mcnugget without the breading
the thought of ai-generated music hurts my soul. same with art really. i’ll take a stick figure drawing or a three-chord punk song over an ai thing any day. adding in predictions from statistical models just dilutes the message. i’d rather someone send me the prompt than its output
the thing is, when someone’s switching jobs, they’re already changing teams, so the cost is baked in
if you say, “well, people keep leaving for new jobs that are in-person, so we’re gonna RTO”, then you aren’t properly accounting for that cost
in other words, once you’ve formed teams that span the continent, you can’t really “RTO” without destroying those teams. maybe the destruction pays off somewhere else, but you’ve gotta at least acknowledge it
the thing that frustrates me is how often people conflate “RTO” with “working in-person”
if i had the option of RTO with a 20 minute commute and my whole current team in person, i’d take it
but my last job tried to make me “RTO” to take remote meetings from a corporate office, and i quit instead
User taps → API stalls → UI hangs → User bails. Sound familiar?
🙌 Honeycomb Frontend Observability now supports mobile. Fix prod mobile issues with deep, end-to-end visibility
✅ See user journeys from tap to backend
✅ OTel-based SDKs — no lock-in
✅ No seat/custom metric fees
tinyurl.com/2ccpc5nm
Later on I saw an episode of Voyager where Torres does the same thing. In both cases, the CO accepted the hard estimate.
(I put on 90s scifi tv shows in the background while I’m coding.)
Lots of folks know about Scotty always padding his time estimates.
But today I saw the episode of Stargate SG-1 where Siler says it’ll take 24 hours to fix the gate. Hammond says “You have 12.” And then Siler says “No, sir, that’s not how it works. It’ll take 24.”
I respect that.