
Posts by Isaac Flath

How I Built My Website | Isaac Flath
Read "How I Built My Website" - Technical writing by Isaac Flath on software development, AI, and machine learning.

I visit my own website because it makes me happy. AI wrote every line of code, but I directed everything. If a thoughtless prompt can make what you make, why would I come to you? I can prompt AI myself

This post shows the core skill for using AI: taste

isaacflath.com/writing/how-...

1 month ago
The Future is Already Here (and it's in Japan) | Isaac Flath

In the U.S., new glasses take a few days. In Japan, they took 45 minutes and $70.

Transit, check-in, doors, customs, toilets all just felt nicer.

I wrote about it.

isaacflath.com/writing/japa...

1 month ago
Learning from Failure: MonsterUI | Isaac Flath

I spent months building MonsterUI. Now, when people ask if they should use it, I hesitate.

MonsterUI redefined what I love to build, yet I couldn't explain why I stopped using it. I felt uncomfortable admitting that I failed

Here's what happened and what I learned

isaacflath.com/writing/lear...

2 months ago

Yes, though there's a saying for this because the last mile (the last 10%) taking most of the time isn't new with vibe coding. So getting that first 90% in an hour is a massive win!

2 months ago

AI generates code faster than humans can read. When the machine outpaces the reviewer, the team loses understanding. We need to keep humans in control.

Jake Levirne of SpecStory shared how they adapt the review process to the task's risk.

elite-ai-assisted-coding.dev/p/legible-ai...

2 months ago

I used an AI agent to build a Discord bot. I wanted it to save images from a channel to S3. The agent wrote the code, explained deployment, and debugged it when it went silent. It's a small tool I use daily.

Link: isaacflath.com/writing/disc...

3 months ago
How I Use My AI Session History
Step-by-step example of the tool I use, how I use the UI, and how I use it agentically.

I often ask: Why this way? What were the trade-offs? Was X considered? Why not Y?

Ex: "Why is the react editor in the main python repo and not its own module?"

AI logs and other context are a key part of answering that. Here's what I do 👇

elite-ai-assisted-coding.dev/p/how-i-use...

3 months ago
Elite AI Assisted Coding by Eleanor Berger and Isaac Flath on Maven
Make it code like you do. Turn generic AI assistants into coding partners that actually get your style and have the right context.

Link to the course is here:
bit.ly/ai-coding-c...

4 months ago

I quit my job ~6 months ago to focus on learning and having a bigger impact.

The AI coding course with Eleanor Berger is one (of several) projects that is better than I imagined on both fronts

Cohort 1 was a huge success, and Cohort 2 in January is gonna be even better 😄

4 months ago

My favorite thing about reducing token usage for coding agents through better search, with @mixedbreadai

4 months ago

Multi-vector search means semantic search is back for coding agents...and it was always clear it would be

4 months ago

60% of tokens are spent searching and exploring the codebase.

Agents need better search.

4 months ago

mgrep helps explore complex documents and PDFs

4 months ago
mgrep with Founding Engineer Rui Huang
The Problem with grep for AI Agents

For a detailed write-up, and the full recording of this talk, go here!

elite-ai-assisted-coding.dev/p/mgrep-wit...

4 months ago
Modern Multi-Vector Code Search
Mixedbread showed in their launch how much faster and better semantic search could make Claude Code. Cursor also just announced semantic embedding support, and other agents are soon to follow. I got better quality with half the tokens, almost 2x faster, with Mixedbread search. Learn what changed from the leading researchers in the space.

Unlike traditional RAG that vectorizes large chunks of text, @mixedbreadai's engine represents every single word as its own vector.

This provides much more granular and accurate results.

Join @aaxsh18 tomorrow for research details on how it works: maven.com/p/0c0eed/mo...
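A minimal sketch of how per-word vectors get scored, assuming ColBERT-style "MaxSim" late interaction (Mixedbread's exact scoring may differ): each query token finds its best-matching document token, and those maxima are summed into the document score.

```python
import numpy as np

def maxsim(query_tokens, doc_tokens):
    # Late interaction: every query token picks its best doc token,
    # then the per-token maxima are summed into one document score.
    sims = query_tokens @ doc_tokens.T  # cosine sims, since rows are unit-norm
    return sims.max(axis=1).sum()

rng = np.random.default_rng(0)
unit = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)

query = unit(rng.normal(size=(4, 16)))                   # 4 query tokens, dim 16
doc_a = unit(query + 0.05 * rng.normal(size=(4, 16)))    # near-duplicate of query
doc_b = unit(rng.normal(size=(6, 16)))                   # unrelated tokens

print(maxsim(query, doc_a) > maxsim(query, doc_b))       # near-duplicate wins
```

Because matching happens word by word instead of chunk by chunk, one relevant token can surface a document even when the rest of it is noise.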

4 months ago

mgrep is also multimodal. It can natively index and search images, diagrams, and PDFs in your repository.

An agent can find relevant information in visual assets that are completely invisible to text-only tools

Very useful for legal, e-commerce and many other domains

And cats

4 months ago
Boosting Claude: Faster, Clearer Code Analysis with MGrep
I ran an experiment to see how a powerful search tool could improve an LLM's ability to understand a codebase.

The results from their internal tests with Claude are significant. Using mgrep led to:

🤌 53% fewer tokens used
🚀 48% faster response
💯 3.2x better quality

By getting the right context immediately, agents stay on track. I saw similar results.

elite-ai-assisted-coding.dev/p/boosting-...

4 months ago

Agents use mgrep for broad, semantic exploration and grep for precise symbol lookups

Instead of grep commands guessing at keywords, an agent makes a semantic query

mgrep "how is auth implemented?"

It then uses grep for precise function/class name searches.

No guessing 😁

4 months ago

mgrep is a command-line tool that brings semantic search to your codebase, letting agents search by intent, not just keywords.

It's much faster than grep alone, and works much better than traditional semantic search.

4 months ago
mgrep with Founding Engineer Rui Huang
The Problem with grep for AI Agents

AI coding agents burn tokens guessing keywords for grep and flood the context window with noise

There's a better way.

I hosted a talk by @ruithebaker, a founding engineer at @mixedbreadai, about their solution.

mgrep 🧵

elite-ai-assisted-coding.dev/p/mgrep-wit...

4 months ago
Quantization Fundamentals for Multi-Vector Retrieval - Blog
A thorough introduction to quantization for multi-vector search architectures.

For all the code, charts, and a deeper dive into the mechanics, check out the full blog post

isaacflath.com/blog/2025-1...

4 months ago
Modern Multi-Vector Code Search

This two-stage compression (PQ + residual quantization) means you get token-level understanding in a fraction of the space.

The Mixedbread team has two free engineering and research talks coming up, covering the research and how to use it.

Talk 1: maven.com/p/9c51af/th...

Talk 2: maven.com/p/0c0eed/mo...
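Some back-of-envelope numbers for the two-stage scheme (the dimension, sub-vector count, and bit widths here are illustrative assumptions, not Mixedbread's actual configuration):

```python
dim = 128                      # per-token embedding dimension (assumed)
fp32_bytes = dim * 4           # uncompressed float32 vector: 512 bytes

# Stage 1: PQ replaces the vector with 16 sub-vector centroid ids (1 byte each)
pq_bytes = 16
# Stage 2: the leftover residual is quantized to 2 bits per dimension
residual_bytes = dim * 2 // 8  # 32 bytes

compressed = pq_bytes + residual_bytes
print(fp32_bytes, compressed, fp32_bytes / compressed)  # roughly a 10x reduction
```

That order-of-magnitude shrink per token is what makes storing a vector for every word, rather than every chunk, tractable.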

4 months ago

Step 5: Extreme Quantization

ColBERT goes one step further. After PQ, it calculates the "residual" error (the small difference between the original and the approximation). Then, it quantizes that error, often down to just 1 or 2 bits per value!

4 months ago

Step 3: Store which cluster/centroid each piece belongs to

Step 4: reconstruct by looking up centroids and combining

4 months ago

Step 2: Cluster each collection of sub-vectors separately to find the centroids

4 months ago

Step 1: Split each embedding in half (make sub-vectors).

4 months ago

However, AI embeddings aren't single numbers; they're vectors (long lists of numbers). This is where Product Quantization (PQ) comes in. It's specifically designed to compress these vectors.

It "refactors" similar embeddings to reduce duplication by using k-means clustering. Let's break it down.
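A runnable sketch of steps 1-4 from this thread, using a toy k-means (real PQ libraries like FAISS differ in detail, and the sizes are illustrative):

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    # Tiny Lloyd's algorithm -- just enough for illustration
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((x[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(0)
    labels = ((x[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
    return centroids, labels

rng = np.random.default_rng(1)
emb = rng.normal(size=(500, 8)).astype(np.float32)  # 500 embeddings, dim 8

# Step 1: split each embedding in half -> two collections of sub-vectors
halves = np.split(emb, 2, axis=1)

# Step 2: cluster each collection separately (16 centroids per half)
books = [kmeans(h, 16) for h in halves]

# Step 3: store only a centroid id per half (2 bytes per embedding, not 32)
codes = np.stack([labels for _, labels in books], axis=1).astype(np.uint8)

# Step 4: reconstruct by looking up centroids and concatenating
recon = np.concatenate([books[i][0][codes[:, i]] for i in range(2)], axis=1)
print(emb.nbytes, codes.nbytes)  # 16000 vs 1000 bytes
```

The reconstruction isn't exact, which is exactly the "residual" error the next step quantizes away.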

4 months ago

In the simplest form, quantization is a bit like rounding numbers. You give up precision to save space.

With scalar quantization, instead of storing a full 64-bit number, you can store an 8-bit code representing its approximate value (8x compression ratio).
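A minimal sketch of that scalar quantization, assuming a simple linear mapping between the observed min and max:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)  # "full precision" values (float64, 8 bytes each)

# Map values onto 256 evenly spaced levels between min and max
lo, hi = x.min(), x.max()
codes = np.round((x - lo) / (hi - lo) * 255).astype(np.uint8)  # 1 byte each
x_hat = lo + codes / 255 * (hi - lo)                           # dequantize

print(x.itemsize / codes.itemsize)  # 8x compression
print(np.abs(x - x_hat).max())      # small, bounded reconstruction error
```

The worst-case error is half a quantization step, which is the precision you traded for the 8x space savings.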

4 months ago

mgrep (by MixedBread) is how I've been using this. It works with any coding agent as a CLI tool.

More embeddings (token vs chunk) means more info, which makes sense. But how can that scale?

It comes down to the quantization

isaacflath.com/blog/2025-1...

Here are the core ideas:

4 months ago

Q: Why is semantic search for code coming back? (Cursor, mgrep, etc)
A: Multi-vector architecture

Q: Why do I care?
A: MUCH less token usage + better responses

Q: What makes it work?
A: Token (not chunk) level embeddings with extreme quantization

Here's what I learned 🧵

4 months ago