Posts by 🎱 Josh Branchaud ✨

Yes, Green Chartreuse.

A Chicago-based distiller does Green Key which is the closest I’ve tasted. Though a Binny’s employee recently assured me that Brovo (which has a green and a yellow) is even closer than Green Key.

14 hours ago 2 0 0 0

If I could get a certain green herbal liqueur made by French monks for a reasonable price, I would. I can’t though, so I’m happy to buy and try the various taste-alikes that keep getting closer and closer (Del Santo, Green Key, Brovo, etc.).

16 hours ago 1 0 2 0

I go to the mountains for a long weekend and Opus 4.7 drops while I'm gone 👀

1 day ago 1 0 1 0
The cover of The Repeat Room by Jesse Ball. It is light pink and has a sketch drawing of a red-filled IV bag with a line that snakes across the book to an anatomical heart hovering near a black chair.

My next fiction read is Jesse Ball’s The Repeat Room.

It’s another recommendation by a friend who recommends so many of the good books I read.

1 day ago 1 0 0 0

Mid-April and only four books in on the year 😬

I’ve got some reading to do.

3 days ago 2 0 0 0

4/
Remember You Will Die (2024)
Eden Robins
bsky.app/profile/jbra...

3 days ago 0 0 0 1

Sometimes a little bug lands in your old fashioned and you’ve got to sip around it.

4 days ago 0 0 0 0
Fedor Gorst on Instagram: "Smooth mighty X drill in POV view" 403 likes, 5 comments - gorstyanich on April 15, 2026: "Smooth mighty X drill in POV view".

I was only doing follow and stop shots.

This is what it looks like when a pro player does the Mighty X drill www.instagram.com/reel/DXK9Oze...

6 days ago 0 0 0 0

Listening to Miles Davis’ Kind of Blue on vinyl while putting in some focused pool practice with the Mighty X drill — had me deep in the zone 🙇

6 days ago 0 0 1 0

Same with using `ctrl-g` in Claude Code to open vim for a full editor experience while crafting a prompt. You can abandon the edits you've made in vim with :cq.

Postgres' `psql` also benefits from this -- abort a query you're writing from `\edit` by quitting with :cq

6 days ago 0 0 0 0

e.g. you're writing a git commit message and realize you need to bail, make a few adjustments, and then proceed with the commit. While :wq will proceed with the commit, :cq will cause the commit to fail and abort.

6 days ago 0 0 1 0

If you thought :wq was a cool way to quit vim, wait until you hear about :cq

This quits with an error code which is very useful when vim has been called by another program like git, claude code, a REPL, etc. It prevents the calling program from "submitting" the vim buffer's contents.

6 days ago 9 1 2 0
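The mechanism behind `:cq` can be sketched in a few lines: the calling program (git, Claude Code, psql's `\edit`) runs the editor, then checks its exit status before using the buffer. This is a minimal illustration of that pattern, not git's actual implementation — the "editors" here are stand-in subprocesses so the sketch runs without vim installed.

```python
import subprocess
import sys
import tempfile

def edit_and_submit(editor_cmd, initial_text):
    """Write initial_text to a temp file, run an editor on it, and only
    'submit' the buffer if the editor exits cleanly. This mirrors how a
    caller treats :wq (exit status 0) vs :cq (nonzero exit status)."""
    with tempfile.NamedTemporaryFile("w+", suffix=".txt", delete=False) as f:
        f.write(initial_text)
        path = f.name
    result = subprocess.run(editor_cmd + [path])
    if result.returncode != 0:
        return None  # editor bailed with :cq -- discard the buffer
    with open(path) as f:
        return f.read()  # editor exited with :wq -- proceed with contents

# Stand-ins for a real editor so the sketch is runnable anywhere:
wq_like = [sys.executable, "-c", "import sys; sys.exit(0)"]  # like :wq
cq_like = [sys.executable, "-c", "import sys; sys.exit(1)"]  # like :cq

print(edit_and_submit(wq_like, "commit message"))  # commit message
print(edit_and_submit(cq_like, "commit message"))  # None -- aborted
```

With a real editor, `editor_cmd` would be something like `["vim"]`, and quitting with `:cq` is what produces the nonzero return code that makes git abort the commit.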
Logan Square church to become Chicago's first all-affordable church redevelopment The project, called La Herencia Apartments, will create 22 units of affordable housing in a neighborhood that's seen housing costs spike in recent years.

on a better note: construction just started on this other logan square church near me to turn the building into 22 units of affordable housing chicago.suntimes.com/real-estate/...

6 days ago 2 0 0 0

used to live down the block from this place and felt so depressed every time I walked past it -- incredible example of sacrilege

6 days ago 0 0 1 0

*A Knight of the Seven Kingdoms

6 days ago 0 0 0 0

*open up article I'd like to read in browser*

*start to read article*

*see how small the scroll bar is*

*scroll to bottom of article to see if it is really that long; it is*

*close browser tab*

6 days ago 0 0 0 0

I consumed a lot of media over the past week:
- Knights of the Seven Kingdoms (HBO)
- The Traitors S1 (Peacock) — later seasons are *much* better
- Project Hail Mary (Alamo Drafthouse)
- Demon Slayer: Entertainment District Arc (CrunchyRoll)

1 week ago 0 0 1 0

My main mode of operating is to imagine that "the situation resolves itself somehow."

And let me tell you, it almost never does.

1 week ago 1 0 0 0
A screenshot of a Substack subscribe form where I've entered "my-email+latent-space@gmail.com" and submitted the form and it is displaying an error of "Too many accounts with this email".

Substack is doing something with their email subscription validation checks that prevents "plus addressing" your email when signing up for a newsletter 🤔

1 week ago 0 0 1 0
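Substack's actual validation logic isn't public, so this is only a guess at the mechanism: a service that normalizes away plus tags before checking for existing accounts would see every `+tag` variant as the same address, which could trigger a "too many accounts with this email" style rejection. A hypothetical sketch of that normalization:

```python
def normalize_plus_address(email):
    """Collapse 'user+tag@domain' to 'user@domain'. A service that
    de-duplicates accounts on the normalized form treats every
    plus-addressed signup as the same underlying address."""
    local, _, domain = email.partition("@")
    local = local.split("+", 1)[0]  # drop the '+tag' suffix, if any
    return f"{local}@{domain}"

print(normalize_plus_address("my-email+latent-space@gmail.com"))
# my-email@gmail.com
```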

Hidden Knowledge Unlocked: Wrigley Field is bikeable from Logan Square in under 20 minutes.

Much faster than the blue line to Addison and then the 30 to 50-minute game day slog of a bus ride down Addison.

1 week ago 1 0 0 0

Seems instructive to both find the uneven edges of what LLMs do well and to experience how important domain expertise is in sifting through the bs.

The intuitions behind using LLMs well are not obvious or free — it takes ongoing experimentation ime

1 week ago 1 0 0 0

I wondered when these were going to start popping up.

1 week ago 2 0 0 0
Code and Cake - Your job isn't programming The greatest limitation in writing software is our ability to understand the systems we are creating.

I like how this blog post talks about abstractions:

"Abstractions are ideas that change the way you think about part of your codebase... Finding a good abstraction is hard because the code already has many abstractions but they’re ones that don’t work well anymore."

codeandcake.dev/posts/2025-1...

1 week ago 3 0 0 0

How would you define an "abstraction" in software engineering? Not with examples, but like as a definition

1 week ago 8 1 17 0
Screenshot from the linked blog post, with the following paragraph highlighted: "The internet today is unrecognizable compared to the place I started writing two decades ago. In 2006, YouTube was less than a year old, Facebook was still limited to college students, Netflix sent DVDs in the mail, and Instagram, Twitter and TikTok didn’t exist yet."

I am "Netflix sent DVDs in the mail" years old 🥴

www.scotthyoung.com/blog/2026/04...

1 week ago 0 0 0 0

I can already tell that "mailing lists were large group emails in old typewriter font" is a phrase that will live rent free in my mind forever

1 week ago 167 22 7 0
The Future of Everything is Lies, I Guess This is a long article, so I'm breaking it up into a series of posts which will be released over the next few days. You can also read the full work as a PDF or EPUB; these files will be updated as each section is released.

This is from the introduction of Aphyr’s recent series called “The Future of Everything is Lies, I Guess”.

aphyr.com/posts/411-th...

1 week ago 0 0 0 0
What is AI, really?

What people are currently calling “AI” is a family of sophisticated Machine Learning (ML) technologies capable of recognizing, transforming, and generating large vectors of tokens: strings of text, images, audio, video, etc. A model is a giant pile of linear algebra which acts on these vectors. Large Language Models, or LLMs, operate on natural language: they work by predicting statistically likely completions of an input string, much like a phone autocomplete. Other models are devoted to processing audio, video, or still images, or link multiple kinds of models together.

Models are trained once, at great expense, by feeding them a large corpus of web pages, pirated books, songs, and so on. Once trained, a model can be run again and again cheaply. This is called inference.

Models do not (broadly speaking) learn over time. They can be tuned by their operators, or periodically rebuilt with new inputs or feedback from users and experts. Models also do not remember things intrinsically: when a chatbot references something you said an hour ago, it is because the entire chat history is fed to the model at every turn. Longer-term “memory” is achieved by asking the chatbot to summarize a conversation, and dumping that shorter summary into the input of every run.

This is one of the best (at this level of concision) layman's explanations and demystifications of LLMs that I’ve seen.

A frequent misconception I hear is that LLMs remember your conversations and train on your conversations. Sorta true, but probably not in the way they are thinking.

1 week ago 4 0 1 0
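The "the entire chat history is fed to the model at every turn" point from the quoted passage is easy to show in code. This is a toy sketch under stated assumptions — `fake_model` is a stand-in for a real LLM, and the summarization is a hard-coded placeholder — but the shape of the loop is the actual mechanism:

```python
def fake_model(prompt):
    """Stand-in for an LLM: a stateless function of its input string.
    It retains nothing between calls."""
    return f"(reply to a {len(prompt)}-char prompt)"

class Chat:
    """The 'memory' lives entirely in the transcript the caller replays."""

    def __init__(self):
        self.history = []

    def send(self, user_msg):
        self.history.append(f"User: {user_msg}")
        prompt = "\n".join(self.history)  # full transcript, every turn
        reply = fake_model(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply

    def summarize(self):
        """Longer-term 'memory': replace the transcript with a short
        summary that gets prepended to all future prompts."""
        summary = f"Summary of {len(self.history)} prior messages."
        self.history = [summary]

chat = Chat()
chat.send("hello")
chat.send("what did I just say?")  # 'remembers' only via the replayed text
chat.summarize()
print(chat.history)  # ['Summary of 4 prior messages.']
```

When a chatbot "references something you said an hour ago," it is this replay doing the work, not anything stored inside the model's weights.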
The kindle cover of Keigo Higashino’s Invisible Helix which shows the silhouette of a man on a spiral staircase with a city off in the distance.

I’m reading yet another one of Keigo Higashino’s Detective Galileo murder mysteries — this one is called Invisible Helix.

1 week ago 2 0 0 0

5/ In general, work that shortens feedback loops is the most valuable "metaproject" work that can be done on a project. Doesn't matter what feedback loop: testing, compiling, deploying, getting customer feedback, analyzing data, whatever

2 weeks ago 47 11 1 1