Paid channel saturation doesn't announce itself. The signal isn't CPL - it's the ratio of new-to-repeat visitors in your converting cohort. When that flips, you're six weeks from CAC blowout. Most teams find out too late.
Posts by Matthew Mamet
The subsidy era was always a land grab. Get developers dependent on the tooling, then reprice once switching costs are real. The interesting question now is whether the productivity gains are large enough that teams absorb the price increase, or whether we find out the gains were overstated at free.
I am not like Claire Vo or Lenny Rachitsky. I'm a grey beard with a 2018 MacBook Pro I hadn't touched in a year, and none of their panache.
I'm sharing my detailed step-by-step setup guide to OpenClaw for regular people: buff.ly/aUEo5Jc
With apologies to Claire, Lenny, and the Coen brothers.
The tell is whether the headcount comes back when the next growth cycle starts. At companies using AI to genuinely expand capacity, it should. At companies using AI as a restructuring narrative, it won’t. That’s the data point worth watching in 18 months.
Running a consumer platform through a DDoS at this scale is a product credibility test as much as an infrastructure one. How you communicate during the outage matters as much as how fast you restore service. This thread is a good example of getting that right.
Duolingo announced AI-first this week. Most companies using that phrase mean AI-cheaper. The actual question worth asking is what decision speed does this unlock. If the answer is we save money on content contractors, that's a margin story, not a product one.
Notion's UX decision is the same mistake I've seen product teams make repeatedly - they conflate "AI usage" with "AI value." A metric that goes up because you broke something that worked isn't a win. The teams patting themselves on the back for usage data are measuring the wrong thing.
When the infrastructure is identical, the advantage shifts to the judgment layer. Same stack, very different outcomes. The teams pulling ahead are using those tools to accelerate decisions, not to avoid making them.
Six years of product reviews at TripAdvisor. Rigorous process, clear owners, clean write-ups. We shipped a lot of things and made them matter. A review structured to produce approval is not the same thing as a review structured to produce the right bet.
The management layer collapse is the real story. When you compress from 5 layers to 2, every person has to own decisions they used to escalate. Most orgs are not wired for that. Dorsey is betting structure can rewire incentives. History is not encouraging but the thesis is right.
The pattern I still see in the early days of many new engagements: everyone kinda knows what needs to be done. But that conviction to move fast can be false confidence. Teams slow down instead, out of self-protection, like a body rejecting an organ transplant.
Exactly this. The structure problem shows up fast in org design. You can rename your teams 'product squads' but if they still report to a project manager who answers to a PMO, you've changed the vocabulary and nothing else. The accountability model is the actual transition.
The insight that gets skipped: high turnover in a team is almost always a hiring problem dressed up as a performance problem. The firing conversation is the symptom. The real conversation is why those people were hired in the first place.
The other marker is how someone responds to being corrected in public. People with real curiosity treat it as new information. People without it treat it as an attack. You can see it in the first ten minutes of any exec review.
Agreed that perspective diversity matters, but I'd separate cognitive diversity from structural alignment. The failure mode isn't teams with too few perspectives. It's teams where the perspectives never resolve into a decision. Six people who disagree and still ship is the goal.
Fully agree on the direction. What I'm watching is whether organizations actually let that happen or just reload the admin layer with new coordination overhead around the AI tools. The forcing function is real. What it forces depends on the org.
That distinction between tools for thinking versus proposed solutions is exactly where the breakdown happens. Most teams don't make it explicit, so the prototype lands as a proposal and the strategy conversation never opens.
I have started at nine or ten companies now. The first-week pattern is always the same. The client shows you everything they're proud of. You make a mental list of what they didn't show you. Then you go find them. The entry mode that works isn't the one where you arrive with answers.
The framing is mostly right but it misses the harder constraint. At scale, the bottleneck for a senior PM was never writing specs. It was holding enough context across a full portfolio to know which outcome to curate. AI speeds up execution. It does not yet help you decide what is worth building.
Quoted in PhocusWire today on agentic commerce and the trust gap in AI booking.
Travel brands that lead with the technology story will miss the point. The trust problem in agentic transactions is not a payments problem. It is a consumer psychology problem.
buff.ly/0nb4vAa
Agentic search doesn't route traffic to comparison marketplaces. It replaces them.
Chegg: 956K searches/mo Apr 2022. Perplexity: 1. They crossed in March 2025.
Five-year data study across edtech, automotive, and solar: buff.ly/4W62yk4
Cost per lead is the most seductive vanity metric in growth. 300 leads at $15 CPL with 2% conversion costs $750/customer. 50 leads at $80 CPL with 18% conversion costs $444. The cheaper channel is 70% more expensive. Build from the originated customer back.
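The arithmetic behind that comparison is worth making explicit. A minimal sketch, using the illustrative numbers from the post (the function name and figures are mine, not a real model):

```python
def cost_per_customer(leads, cpl, conversion_rate):
    """Cost per originated customer: total spend / customers won."""
    spend = leads * cpl
    customers = leads * conversion_rate
    return spend / customers

# "Cheap" channel: 300 leads at $15 CPL, 2% convert
cheap = cost_per_customer(300, 15, 0.02)    # $750 per customer

# "Expensive" channel: 50 leads at $80 CPL, 18% convert
pricey = cost_per_customer(50, 80, 0.18)    # ~$444 per customer

print(f"cheap channel: ${cheap:.0f}, pricey channel: ${pricey:.0f}")
print(f"premium for the 'cheap' channel: {cheap / pricey - 1:.0%}")
```

Same total spend either way, roughly; the only number that survives contact with the P&L is cost per originated customer.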
In financial products, the most overlooked growth channel isn't a channel. It's sitting in a competitor's rejection file. At EverQuote, the customers carriers declined were still in market. They weren't lost. They were motivated buyers with nowhere else to go.
Meta cut 15,000 people and called it an AI-native transformation. The actual story: they built an org to win a bet that didn't land and held the headcount longer than the strategy required. AI is the narrative that makes the correction palatable.
Building a practice, so content-as-social-proof for me. The mistake I see is treating the four goals as compatible in the same channel. Content that generates direct income and content that generates advisory pipeline are different products, and running both simultaneously tends to dilute both.
Instacart just became Aldi's entire digital commerce platform. They stopped owning the customer and started selling infrastructure. 380+ retailers on Storefront Pro. Many marketplace teams debate this exact move for years. Instacart chose the layer most marketplaces miss.
The risk I've seen is that the prototypes look so polished so fast that teams skip the conversation about whether the concept is right. Speed to artifact is not the same as speed to clarity.
The most expensive word in product strategy is "also." A founder turned down a nine-figure vertical because it made the company feel small. That is not a strategy problem. That is a fear problem. Specialization is a weapon most founders refuse to use.
The number matters less than the decision surface. I've seen teams of six that operated like twenty because everyone had authority over a piece but no one could make the call. The constraint is how many people need to align before something ships, not headcount.
Every company I walk into has someone who costs too much, does too little, and reports to someone who will never fire them. The protected person is never the real problem. The signal their protection sends is.