
Posts by Chris Bee

I went to a competitor's website. Asked AI to read through their features and build a prototype copy. Two sentences and a link. 15 minutes later, it delivered a working version of what I wanted.

Not shocking if you're paying attention. Still wild to see it work.

1 week ago 0 0 0 0

The vibe shift is real. In the past few weeks I've talked to a number of leaders, academics, and old-school developers who openly admitted they never thought the models would be able to generate production-quality code the way they can right now. All are changing their mindsets.

4 weeks ago 0 0 0 0

79% of companies paying for OpenAI also pay for Anthropic.

Not choosing. Hedging.

The model isn't the bottleneck for most of these teams. Getting everyone aligned on what to build and how to spec it out is. No subscription fixes that.

1 month ago 2 0 0 0

Claude and ChatGPT are powerful. But they only do what we ask and see what we give them. Continuous, system-level context that feeds product decisions feels like a gap.

I’m curious how others are thinking about this or if anyone has wired up a solution in this arena that they like.

1 month ago 0 0 0 0

What we don’t really have yet is an always-on product agent that continuously ingests company context, tracks patterns over time, and proactively suggests high-leverage projects and priorities based on what’s actually happening across the business.
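A minimal sketch of what that loop could look like, with stubbed-in feeds standing in for real sources. Everything here — the source names, the keyword counting, the threshold — is a hypothetical illustration, not a real pipeline:

```python
import json
from collections import Counter
from datetime import datetime, timezone

# Hypothetical stand-ins for real feeds (Slack threads, support tickets, churn reports).
SOURCES = {
    "slack": ["login timeout again", "export to CSV broken"],
    "tickets": ["login timeout on SSO", "need CSV export", "login timeout"],
}

def ingest(sources):
    """Pull the latest items from every source into one context list."""
    items = []
    for name, feed in sources.items():
        items.extend({"source": name, "text": text} for text in feed)
    return items

def track_patterns(items):
    """Count recurring keywords across sources as a crude trend signal."""
    counts = Counter()
    for item in items:
        counts.update(item["text"].lower().split())
    return counts

def suggest(counts, threshold=3):
    """Surface themes that cross a frequency threshold as candidate projects."""
    return [word for word, n in counts.items() if n >= threshold]

context = ingest(SOURCES)
themes = suggest(track_patterns(context))
print(json.dumps({"as_of": datetime.now(timezone.utc).isoformat(), "themes": themes}))
```

The real version would swap the stubs for live connectors and an LLM for the keyword counter, and run on a schedule instead of once — but the shape (ingest, track, suggest) is the gap being described.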

1 month ago 3 0 3 0

They paste it into an LLM or pull from an MCP connection, ask for insights, and then repeat the process a week or a month later.

It works, but it’s episodic and dependent on one person holding everything together.

1 month ago 0 0 1 0

Huge step forward, but there’s still a structural gap.

Today, the synthesis layer is manual. A PM gathers context from Slack threads, sales calls, churn reports, dashboards, roadmap docs, and data warehouse queries.

1 month ago 0 0 2 0

LLMs have dramatically improved how product managers work.

You can synthesize ten customer interviews in seconds. You can analyze hundreds of support tickets and extract clear themes. Drafting PRDs, refining strategy, even pressure-testing ideas is faster than ever.
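For instance, the synthesis step can be as simple as batching the interviews into one prompt. The interview snippets and output format below are made-up illustrations, not a prescribed template:

```python
# Hypothetical interview notes — stand-ins for real transcripts.
interviews = [
    "Onboarding took too long; the SSO setup docs were confusing.",
    "Love the product, but exports time out on large workspaces.",
]

def build_synthesis_prompt(interviews):
    """Assemble one prompt asking an LLM to extract cross-interview themes."""
    numbered = "\n".join(f"{i + 1}. {text}" for i, text in enumerate(interviews))
    return (
        "You are a product analyst. Read the customer interviews below and "
        "return the top recurring themes, each with one supporting quote.\n\n"
        f"Interviews:\n{numbered}"
    )

prompt = build_synthesis_prompt(interviews)
# Send `prompt` to the LLM of your choice and review the themes it returns.
```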

1 month ago 1 0 1 0

Same has always been true. The difference is having a human dev's judgment in the loop.

1 month ago 0 0 0 0

Today's coding agent doesn't slow down to ask clarifying questions.

It ships something plausible.

Spec was right? Review and merge. Spec was vague? Two hours of debugging instead of 20 minutes of writing it down up front.

1 month ago 3 0 1 0

Adding people to a late project just makes it more late.

1 month ago 0 0 0 0

Planning cycles have become shorter.

1 month ago 0 0 0 0

There is no moat. Build anyway.

1 month ago 0 0 0 0

Exactly. In this new world, it’s all that matters.

2 months ago 0 0 1 0

A well-defined feature with 1 engineer will ship faster than a vague one with 3.

Product leaders: the best allocation strategy is better requirements. Not more people.

2 months ago 1 0 1 0

The most expensive allocation mistake is not putting the wrong person on a task.

It is putting the right person on a task that was never defined well.

Senior engineers are fast, but they're often stuck translating vague requests into actual work. Fix the input.

2 months ago 2 0 1 0

Beginners use AI to avoid writing complex code.
Experts use AI to avoid reading documentation.
Masters use AI to create entire production features.

The goal isn't to let the machine do the thinking. It is to let the machine do the lower level work that steals your time.

2 months ago 1 0 0 0

In startups, you either win or you learn with everything you do.

2 months ago 0 0 0 0

Had a blast at @AITinkerers Seattle last week. There’s nothing quite like being in a room full of people who are shipping and building innovative things.

Huge thanks to the community for the energy and the great conversations around Spec Driven Development. It’s an exciting time to be building!

2 months ago 1 0 0 0

Biggest drag with AI coding on a team is review. The agent can write code fast, but it can’t explain the decisions it made. If the spec and constraints aren’t written down, code review turns into detective work.

2 months ago 1 0 1 0

AI coding workflow:
Start session
Paste in context
Realize context is missing important info
Update context
Kick off coding
Realize you forgot a constraint
Update context again
Restart coding
Validate locally, realize you missed a use case
Update context again
Finally ship something
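One way to shortcut a few of those restarts is a pre-flight check on the context file before kicking off the agent. A hedged sketch, assuming a simple section-heading template — the section names here are my own invention, not a standard:

```python
# Hypothetical required headings for a coding-agent context file.
REQUIRED_SECTIONS = ["## Goal", "## Constraints", "## Out of scope", "## Validation"]

def missing_sections(context: str) -> list[str]:
    """Return the required headings the context file doesn't contain yet."""
    return [section for section in REQUIRED_SECTIONS if section not in context]

context = "## Goal\nAdd CSV export.\n\n## Constraints\nNo new dependencies."
gaps = missing_sections(context)
if gaps:
    print("Update context before coding:", ", ".join(gaps))
```

It won't catch a missing constraint you never wrote down, but it turns "realize context is missing important info" from a mid-run surprise into a pre-run check.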

2 months ago 0 0 0 0

Love how Boris Cherny runs 5 Claudes with a CLAUDE.md that learns from every mistake. He's dogfooding his own product and it shows. The real unlock isn't a better model, it's building the system around it.

2 months ago 0 0 0 0

Why does any PR from more than a week ago look like an ancient manuscript?

2 months ago 0 0 0 0

AI coding in 2026:
Step 1: Generate 400 lines in 60 seconds
Step 2: Spend 45 minutes reviewing, testing and reworking
Step 3: Write post about how “AI made engineering 10x faster”

3 months ago 0 0 0 0

How shipping product feels at times🥹

3 months ago 1 0 0 0

AI coding is rarely blocked by generation.
It’s blocked by clarity and review.
If the spec is vague, the agent still ships code, you just pay for it later.

3 months ago 1 0 0 0

Project managers when the “spec” is actually just a chat transcript.

3 months ago 0 0 0 0

Happy circle back day to all those who celebrate.

3 months ago 0 0 0 0

AI coding tools are creating more technical debt, not less.

Why? Junior engineers are jumping straight to solutions without proper specs.

Your AI is only as good as your requirements. Garbage in, garbage out.

3 months ago 1 0 0 0

Agile is dead and AI killed it.

Scrum was built for human handoffs and 2-week sprints. AI agents work in 2-hour cycles and don't need standups.

Your 'junior developer' can code for 48 hours straight, but only if you give it the right context and specs upfront.

3 months ago 1 0 0 0