Yes, Green Chartreuse.
A Chicago-based distiller does Green Key which is the closest I’ve tasted. Though a Binny’s employee recently assured me that Brovo (which has a green and a yellow) is even closer than Green Key.
Posts by 🎱 Josh Branchaud ✨
If I could get a certain green herbal liqueur made by French monks for a reasonable price, I would. I can’t though, so I’m happy to buy and try the various taste-alikes that keep getting closer and closer (Del Santo, Green Key, Brovo, etc.).
I go to the mountains for a long weekend and Opus 4.7 drops while I'm gone 👀
The cover of The Repeat Room by Jesse Ball. It is light pink and has a sketch drawing of a red-filled IV bag with a line that snakes across the book to an anatomical heart hovering near a black chair.
My next fiction read is Jesse Ball’s The Repeat Room.
It’s another recommendation by a friend who recommends so many of the good books I read.
Mid-April and only four books in on the year 😬
I’ve got some reading to do.
4/
Remember You Will Die (2024)
Eden Robins
bsky.app/profile/jbra...
Sometimes a little bug lands in your old fashioned and you’ve got to sip around it.
I was only doing follow and stop shots.
This is what it looks like when a pro player does the Mighty X drill www.instagram.com/reel/DXK9Oze...
Listening to Miles Davis’ Kind of Blue on vinyl while putting in some focused pool practice with the Mighty X drill — had me deep in the zone 🙇
Same with using `ctrl-g` in Claude Code to open vim for a full editor experience while crafting a prompt. You can abandon the edits you've made in vim with :cq.
Postgres' `psql` also benefits from this -- abort a query you're writing from `\edit` by quitting with :cq
e.g. you're writing a git commit message and realize you need to bail, make a few adjustments, and then retry the commit. While :wq will proceed with the commit, :cq will cause the commit to fail and abort.
If you thought :wq was a cool way to quit vim, wait until you hear about :cq
This quits with an error code which is very useful when vim has been called by another program like git, claude code, a REPL, etc. It prevents the calling program from "submitting" the vim buffer's contents.
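The mechanism above can be sketched in a few lines: the calling program (git, psql's \edit, Claude Code) launches an editor and branches on its exit status. This is a minimal, runnable illustration, with a `python -c` subprocess standing in for a real vim session so no editor is needed; the `edit_and_submit` helper is hypothetical, not any tool's actual API.

```python
# Sketch of why :cq matters to a calling program: exit status 0 (:wq)
# means "use the buffer", non-zero (:cq) means "abort".
# A python subprocess stands in for the editor here so this runs anywhere.
import subprocess
import sys

def edit_and_submit(editor_cmd):
    result = subprocess.run(editor_cmd)
    if result.returncode == 0:   # editor quit with :wq -> proceed
        return "submitted"
    return "aborted"             # editor quit with :cq -> bail out

print(edit_and_submit([sys.executable, "-c", "raise SystemExit(0)"]))  # like :wq
print(edit_and_submit([sys.executable, "-c", "raise SystemExit(1)"]))  # like :cq
```

Same pattern whether the caller is git deciding to make the commit or psql deciding to run the buffer as a query.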
on a better note: construction just started on this other logan square church near me to turn the building into 22 units of affordable housing chicago.suntimes.com/real-estate/...
used to live down the block from this place and felt so depressed every time I walked past it -- incredible example of sacrilege
*A Knight of the Seven Kingdoms
*open up article I'd like to read in browser*
*start to read article*
*see how small the scroll bar is*
*scroll to bottom of article to see if it is really that long; it is*
*close browser tab*
I consumed a lot of media over the past week:
- Knights of the Seven Kingdoms (HBO)
- The Traitors S1 (Peacock) — later seasons are *much* better
- Project Hail Mary (Alamo Drafthouse)
- Demon Slayer: Entertainment District Arc (CrunchyRoll)
My main mode of operating is to imagine that "the situation resolves itself somehow."
And let me tell you, it almost never does.
A screenshot of a Substack subscribe form where I've entered "my-email+latent-space@gmail.com" and submitted the form and it is displaying an error of "Too many accounts with this email".
Substack is doing something with their email subscription validation checks that prevents "plus addressing" your email when signing up for a newsletter 🤔
Hidden Knowledge Unlocked: Wrigley Field is bikeable from Logan Square in under 20 minutes.
Much faster than the blue line to Addison and then the 30 to 50-minute game day slog of a bus ride down Addison.
Seems instructive to both find the uneven edges of what LLMs do well and to experience how important domain expertise is in sifting through the bs.
The intuitions behind using LLMs well are not obvious or free — it takes ongoing experimentation ime
I wondered when these were going to start popping up.
I like how this blog post talks about abstractions:
"Abstractions are ideas that change the way you think about part of your codebase... Finding a good abstraction is hard because the code already has many abstractions but they’re ones that don’t work well anymore."
codeandcake.dev/posts/2025-1...
How would you define an "abstraction" in software engineering? Not with examples, but like as a definition
Screenshot from the linked blog post, with the following paragraph highlighted: "The internet today is unrecognizable compared to the place I started writing two decades ago. In 2006, YouTube was less than a year old, Facebook was still limited to college students, Netflix sent DVDs in the mail, and Instagram, Twitter and TikTok didn’t exist yet."
I am "Netflix sent DVDs in the mail" years old 🥴
www.scotthyoung.com/blog/2026/04...
I can already tell that "mailing lists were large group emails in old typewriter font" is a phrase that will live rent free in my mind forever
This is from the introduction of Aphyr’s recent series called “The Future of Everything is Lies, I Guess”.
aphyr.com/posts/411-th...
What is AI, really? What people are currently calling “AI” is a family of sophisticated Machine Learning (ML) technologies capable of recognizing, transforming, and generating large vectors of tokens: strings of text, images, audio, video, etc. A model is a giant pile of linear algebra which acts on these vectors. Large Language Models, or LLMs, operate on natural language: they work by predicting statistically likely completions of an input string, much like a phone autocomplete. Other models are devoted to processing audio, video, or still images, or link multiple kinds of models together. Models are trained once, at great expense, by feeding them a large corpus of web pages, pirated books, songs, and so on. Once trained, a model can be run again and again cheaply. This is called inference. Models do not (broadly speaking) learn over time. They can be tuned by their operators, or periodically rebuilt with new inputs or feedback from users and experts. Models also do not remember things intrinsically: when a chatbot references something you said an hour ago, it is because the entire chat history is fed to the model at every turn. Longer-term “memory” is achieved by asking the chatbot to summarize a conversation, and dumping that shorter summary into the input of every run.
This is one of the best (at this level of concision) layman's explanations and demystifications of LLMs that I’ve seen.
A frequent misconception I hear is that LLMs remember your conversations and train on your conversations. Sorta true, but probably not in the way they are thinking.
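The "no intrinsic memory" point from the quote can be made concrete: the client resends the entire transcript on every turn, and that's the only reason a chatbot can reference what you said earlier. A minimal sketch, where `call_model` is a hypothetical stand-in for a real inference API, not any vendor's actual interface:

```python
# Sketch: chatbot "memory" is just the client resending the full history.
# call_model is a fake stand-in for an inference API; it only reports how
# many messages it was handed.
def call_model(messages):
    return f"(reply to {len(messages)} messages)"

history = []

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)   # the ENTIRE transcript goes in every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("hello")
chat("what did I say a minute ago?")  # "remembered" only because it was resent
```

If you dropped `history` between calls, the model would have no idea what came before; nothing about the conversation is stored in the model itself.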
The kindle cover of Keigo Higashino’s Invisible Helix which shows the silhouette of a man on a spiral staircase with a city off in the distance.
I’m reading yet another one of Keigo Higashino’s Detective Galileo murder mysteries — this one is called Invisible Helix.
5/ In general, work that shortens feedback loops is the most valuable "metaproject" work that can be done on a project. Doesn't matter what feedback loop: testing, compiling, deploying, getting customer feedback, analyzing data, whatever