I visit my own website because it makes me happy. AI wrote every line of code, but I directed everything. If a thoughtless prompt can make what you make, why would I come to you? I can prompt AI myself
This post shows the core skill for using AI: taste
isaacflath.com/writing/how-...
Posts by Isaac Flath
In the U.S., new glasses take a few days. In Japan, they took 45 minutes and $70.
Transit, check-in, doors, customs, toilets all just felt nicer.
I wrote about it.
isaacflath.com/writing/japa...
I spent months building MonsterUI. Now, when people ask if they should use it, I hesitate.
MonsterUI redefined what I love to build, yet I couldn't explain why I stopped using it. I felt uncomfortable admitting that I failed
Here's what happened and what I learned
isaacflath.com/writing/lear...
Yes, though it's a saying for a reason: the last mile / last 10% taking most of the time is not new with vibe coding. So getting that first 90% done in an hour is a massive win!
AI generates code faster than humans can read. When the machine outpaces the reviewer, the team loses understanding. We need to keep humans in control.
Jake Levirne of SpecStory shared how they adapt the review process to the risk of the task.
elite-ai-assisted-coding.dev/p/legible-ai...
I used an AI agent to build a Discord bot. I wanted it to save images from a channel to S3. The agent wrote the code, explained deployment, and debugged it when it went silent. It's a small tool I use daily.
Link: isaacflath.com/writing/disc...
I often ask: Why this way? What were the trade-offs? Was X considered? Why not Y?
Ex: "Why is the react editor in the main python repo and not its own module?"
AI logs and other context are a key part of answering that. Here's what I do:
elite-ai-assisted-coding.dev/p/how-i-use...
I quit my job ~6 months ago to focus on learning and having a bigger impact.
The AI coding course with Eleanor Berger is one (of several) projects that is better than I imagined on both fronts
Cohort 1 was a huge success, and Cohort 2 in January is gonna be even better
My favorite thing about reducing token usage for coding agents with better search from @mixedbreadai :
Multi-vector search means semantic search is back for coding agents...and it was always clear it would be
60% of tokens are spent searching and exploring the codebase.
Agents need better search.
mgrep helps explore complex documents and pdfs
For a detailed write-up, and the full recording of this talk, go here!
elite-ai-assisted-coding.dev/p/mgrep-wit...
Unlike traditional RAG, which embeds large chunks of text as single vectors, @mixedbreadai 's engine represents every single word as its own vector.
This provides much more granular and accurate results.
Join @aaxsh18 tomorrow for research details on how it works: maven.com/p/0c0eed/mo...
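For intuition, here's the standard multi-vector scoring rule (ColBERT-style MaxSim) in a few lines of numpy. Whether Mixedbread's engine uses exactly this formulation is my assumption; the data here is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def maxsim(query_vecs, doc_vecs):
    """ColBERT-style late interaction: every query token vector is
    matched to its best document token vector, then scores are summed."""
    sims = query_vecs @ doc_vecs.T  # (q_tokens, d_tokens) cosine similarities
    return sims.max(axis=1).sum()   # best match per query token

def embed(n_tokens, dim=16):
    """Stand-in embedder: one L2-normalized vector per token (not per chunk)."""
    v = rng.standard_normal((n_tokens, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

query = embed(4)
doc_a, doc_b = embed(50), embed(80)

# Rank documents by how well their tokens cover each query token
scores = {name: maxsim(query, d) for name, d in [("a", doc_a), ("b", doc_b)]}
print(max(scores, key=scores.get))
```

Because each query token picks its own best match, a document only needs to cover the query's concepts somewhere, not in one contiguous chunk. That's what makes word-level vectors more granular than chunk-level ones.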
mgrep is also multimodal. It can natively index and search images, diagrams, and PDFs in your repository.
An agent can find relevant information in visual assets that are completely invisible to text-only tools
Very useful for legal, e-commerce and many other domains
And cats
The results from their internal tests with Claude are significant. Using mgrep led to:
- 53% fewer tokens used
- 48% faster responses
- 3.2x better quality
By getting the right context immediately, agents stay on track. I saw similar results.
elite-ai-assisted-coding.dev/p/boosting-...
Agents use mgrep for broad, semantic exploration and grep for precise symbol lookups
Instead of grep commands that guess at keywords, the agent makes a semantic query:
mgrep "how is auth implemented?"
It then uses grep for precise function/class name searches.
No guessing
mgrep is a command-line tool that brings semantic search to your codebase, letting agents search by intent, not just keywords.
It's much faster than grep alone, and works much better than traditional semantic search.
AI coding agents burn tokens guessing keywords for grep and flood the context window with noise
There's a better way.
I hosted a talk by @ruithebaker, a founding engineer at @mixedbreadai, about their solution.
mgrep 🧵
elite-ai-assisted-coding.dev/p/mgrep-wit...
For all the code, charts, and a deeper dive into the mechanics, check out the full blog post
isaacflath.com/blog/2025-1...
This two-stage compression (PQ + residual quantization) means you get token-level understanding in a fraction of the space
The Mixedbread team has two free engineering and research talks coming up with more on the research and how to use it:
Talk 1: maven.com/p/9c51af/th...
Talk 2: maven.com/p/0c0eed/mo...
Step 5: Extreme Quantization
ColBERT goes one step further. After PQ, it calculates the "residual" error (the small difference between the original and the approximation). Then, it quantizes that error, often down to just 1 or 2 bits per value!
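A toy numpy sketch of that residual step, assuming PQ has already produced a coarse approximation. I quantize each residual value to a single sign bit plus one shared magnitude; real residual quantizers (including ColBERT's) use more careful per-bucket boundaries, so treat this as illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend PQ already gave us a coarse approximation of each embedding
original = rng.standard_normal((100, 8)).astype(np.float32)
pq_approx = original + rng.normal(0, 0.3, original.shape).astype(np.float32)

# Step 5: quantize the residual error down to 1 bit per value
residual = original - pq_approx
sign_bits = residual > 0             # 1 bit per value
magnitude = np.abs(residual).mean()  # one shared scale factor

# Reconstruct: coarse PQ approximation + 1-bit residual correction
residual_hat = np.where(sign_bits, magnitude, -magnitude)
better_approx = pq_approx + residual_hat

coarse_err = np.abs(original - pq_approx).mean()
fine_err = np.abs(original - better_approx).mean()
print(fine_err < coarse_err)  # the residual correction tightens the approximation
```

Even this crude 1-bit correction shrinks the average error, which is why stacking residual quantization on top of PQ buys accuracy back almost for free.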
Step 3: Store which cluster/centroid each piece belongs to
Step 4: Reconstruct by looking up centroids and combining
Step 2: Cluster each collection of sub-vectors separately to find the centroids
Step 1: Split each embedding in half (make sub-vectors).
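Steps 1-4 put together as a toy numpy sketch (random data, a tiny hand-rolled k-means, and a small k for the demo; not Mixedbread's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, k, iters=20):
    """Tiny k-means: returns (centroids, labels)."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

embeddings = rng.standard_normal((500, 8)).astype(np.float32)

# Step 1: split each embedding in half -> two sub-vectors of dim 4
halves = np.split(embeddings, 2, axis=1)

# Step 2: cluster each collection of sub-vectors separately
k = 16  # small for the demo; k=256 would make each code exactly 1 byte
codebooks, codes = [], []
for sub in halves:
    cents, labels = kmeans(sub, k)
    codebooks.append(cents)
    # Step 3: store only which centroid each sub-vector belongs to
    codes.append(labels.astype(np.uint8))

# Step 4: reconstruct by looking up centroids and combining the halves
reconstructed = np.hstack([cb[c] for cb, c in zip(codebooks, codes)])
print(reconstructed.shape)  # (500, 8)
```

Each 8-dim float32 vector (32 bytes) is now stored as two 1-byte centroid ids plus a shared codebook, which is where the big compression wins come from.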
However, AI embeddings aren't single numbers; they're vectors (long lists of numbers). This is where Product Quantization (PQ) comes in. It's specifically designed to compress these vectors.
It "refactors" similar embeddings to reduce duplication by using k-means clustering. Let's break it down.
In the simplest form, quantization is a bit like rounding numbers. You give up precision to save space.
With scalar quantization, instead of storing a full 64-bit number, you can store an 8-bit code representing its approximate value (8x compression ratio).
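That rounding idea in a few lines of numpy (toy data and a single global min/max scale, just to show the trade: fewer bytes, bounded error):

```python
import numpy as np

def scalar_quantize(x, bits=8):
    """Map float values to integer codes in [0, 2**bits - 1]."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2**bits - 1)
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def scalar_dequantize(codes, lo, scale):
    """Recover approximate values from the integer codes."""
    return codes.astype(np.float32) * scale + lo

x = np.random.randn(1024).astype(np.float32)
codes, lo, scale = scalar_quantize(x)
x_hat = scalar_dequantize(codes, lo, scale)

print(codes.nbytes, x.nbytes)           # 1024 vs 4096 bytes (4x smaller here)
print(np.abs(x - x_hat).max() < scale)  # error bounded by the step size
```

Starting from float32 this is 4x compression; from the 64-bit numbers in the example above it's the 8x ratio mentioned.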
mgrep (by Mixedbread) is how I've been using this. It works with any coding agent as a CLI tool.
More embeddings (token vs chunk) means more info, which makes sense. But how can that scale?
It comes down to the quantization
isaacflath.com/blog/2025-1...
Here are the core ideas:
Q: Why is semantic search for code coming back? (Cursor, mgrep, etc)
A: Multi-vector architecture
Q: Why do I care?
A: MUCH less token usage + better responses
Q: What makes it work?
A: Token (not chunk) level embeddings with extreme quantization
Here's what I learned 🧵