The developers who know that (and build deep visibility into their systems so they can direct AI strategically) are the ones who become genuine multipliers.
Great read from @codingwithroby.bsky.social
codingwithroby.substack.com/p/how-ai-is-...
Posts by Steph Johnson
Raw output was never the real bottleneck. System design, infrastructure decisions, architectural judgment: that's what compounds or creates drag over time.
AI still struggles to get those right.
The 10x engineer has never been the person who coded the fastest. This is now more true than ever.
And it should reframe how you think about AI productivity.
Automatic capture and correlation of every piece of data from a user's session plus the corresponding system behavior.
The problem: AI generates code faster than teams can review (and debug) it.
The constraint: AI tools need complete visibility into runtime context, not sampled fragments, to more accurately generate code or assist with debugging.
The solution: 👇
This triage matrix assumes visibility. Without full-stack, auto-correlated, unsampled data, you're making expensive decisions based on incomplete information.
How often does that guess turn out wrong?
Oftentimes, you're making high-stakes decisions (what issues to prioritize, which developers to pull off other work, whether to wake someone up at 2am) before you fully understand:
• Root cause
• Blast radius
• Ramifications
This is how engineering leaders triage production issues: 👇
What this matrix doesn't show: the hidden cost of triaging blind.
AI tools boost velocity but erode deep system knowledge.
Debugging and system understanding are the next challenge.
Great article by Stephane Moreau: open.substack.com/pub/blog4ems...
Teams are rushing to add AI debugging to their observability stacks. But if the underlying data is:
• Aggressively sampled
• Missing payloads
• Scattered across disconnected tools
…then adding AI on top just means faster access to incomplete data.
Fix the data problem first.
Your AI tools either (a) don't have access to the data they need, or (b) require humans to manually gather and correlate it.
AI agents need correlated, contextual data to be useful. Right now, most teams don't have that, and their observability tools weren't built to provide it.
The data exists, but it's scattered and unstructured:
‣ Frontend errors live in Sentry
‣ Backend traces live in Datadog
‣ User actions live in... Screen recordings? Support tickets?
So when an AI tries to answer "why did checkout fail for this user?", it can't.
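What "correlated" means in practice: every event carries a shared session ID, so an agent can join frontend errors to the backend activity from the same session. A minimal sketch in TypeScript, with hypothetical event shapes (real tools like Sentry and Datadog use their own schemas):

```typescript
// Hypothetical event shapes, assumed to share a sessionId field.
type FrontendError = { sessionId: string; message: string; ts: number };
type BackendTrace = { sessionId: string; service: string; status: number; ts: number };

// Join frontend errors with backend traces from the same session,
// so "why did checkout fail for this user?" becomes answerable.
function correlate(errors: FrontendError[], traces: BackendTrace[]) {
  const bySession = new Map<string, BackendTrace[]>();
  for (const t of traces) {
    const list = bySession.get(t.sessionId) ?? [];
    list.push(t);
    bySession.set(t.sessionId, list);
  }
  return errors.map((e) => ({
    error: e,
    // Backend activity from the same session, ordered by time.
    relatedTraces: (bySession.get(e.sessionId) ?? []).sort((a, b) => a.ts - b.ts),
  }));
}
```

The join itself is trivial. The hard part, and the point of these posts, is getting a shared ID onto every event across tools in the first place.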
Imagine you're looking for a specific email, but:
‣ Your inbox has 100,000 of them
‣ They're all labeled "Email"
‣ There's no search function
‣ Some emails are in Gmail, some in Outlook, some in Yahoo
That's what AI agents face when trying to debug your system. 🧵
If your observability data isn't correlated across frontend and backend (or you're missing critical data due to sampling or lack of instrumentation), adding AI on top won't fix it.
It'll just give you faster access to incomplete information.
AI debugging is only as good as the data you feed it.
That's the skill gap that's emerging:
not who can ship features fastest, but who can explain why their system behaves the way it does (and fix it with confidence when it doesn't).
AI has lowered the barrier to writing code.
But it hasn't made systems easier to understand.
When something breaks in production, you still need deep knowledge of your system, the ability to read traces, and the instinct to know where to look.
The best engineers were never the ones who wrote code fastest or reached for “clever” solutions.
The gap between top and bottom performers continues to widen.
PS. Multiplayer captures all of this 👆 automatically (request/response content and headers from internal services AND external dependencies), correlated in a single session recording.
There's a debugging bottleneck few talk about: the hours engineers spend reconstructing what happened in production because critical context is missing. For example:
• What payload did we send?
• What did the external API return?
• Which headers were set?
• What did the middleware modify?
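Capturing that context means wrapping outbound HTTP calls so payloads and headers are recorded as they happen, not reconstructed later. A minimal sketch (not Multiplayer's actual implementation; the client interface and names here are assumptions for illustration):

```typescript
// What gets recorded for each outbound call.
type CapturedCall = {
  url: string;
  requestHeaders: Record<string, string>;
  requestBody: string | null;
  responseStatus: number;
  responseBody: string;
};

// A simplified fetch-like client interface (assumption, not a real library API).
type Client = (
  url: string,
  init: { headers?: Record<string, string>; body?: string }
) => Promise<{ status: number; text(): Promise<string> }>;

// Wrap a client so every call appends its full context to a session log.
function withCapture(client: Client, log: CapturedCall[]): Client {
  return async (url, init) => {
    const res = await client(url, init);
    const body = await res.text();
    log.push({
      url,
      requestHeaders: init.headers ?? {},
      requestBody: init.body ?? null,
      responseStatus: res.status,
      responseBody: body,
    });
    // Re-expose the body so the caller can still read it.
    return { status: res.status, text: async () => body };
  };
}
```

With a log like this attached to the session recording, "what payload did we send?" is a lookup instead of an archaeology project.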
Which pie chart is your team living in?
This is the difference between 3 hours of context switching and 10 minutes of clarity.
Bad debugging = manual correlation across scattered tools.
Good debugging = auto-correlated runtime context in one place.
Not all session replays are built for the same job.
📊 Product analytics tools answer questions about user behavior.
🪲 Debugging tools need to answer questions about system behavior.
When bugs span APIs, services, and data layers, engineers need replays that correlate user actions to backend data.👇
If you’re open to sharing what didn’t click, what felt heavy, or what made you pause, it would genuinely help us build a better experience for all of our users.
You can schedule time with me here: cal.com/multiplayer/...
Building developer tools means constantly stress-testing your own assumptions.
If you signed up for Multiplayer and bounced during onboarding, understanding why is incredibly valuable to us.
We’re offering a $50 gift card for a short conversation about your experience (15–20 min).
Session replay is useful, but when visibility stops at the UI, engineers are left stitching together logs, traces, and payloads by hand. That friction adds up quickly.
Multiplayer is worth a look (and a free try!) if your debugging workflow still involves too much tab-hopping.
Question for teams using LogRocket: how much time do you spend jumping between tools to connect frontend issues to backend problems?
6/6 Grateful to our customers, design partners, and community for supporting us and pushing us forward … we’re excited for what we’re building next.
www.multiplayer.app/blog/multipl...
5/
I’m incredibly proud of our team. Not just for shipping fast, but for shipping thoughtfully, listening closely to our users, and raising the bar on quality with every release. 💜
4/
• An MCP server to feed full-stack context into AI tools
• A VS Code extension to debug from inside the editor
• Mobile (React Native) support
• Notebooks for full-cycle debugging and documentation
• Automatic system architecture maps that stay up to date
3/ Seeing the full list of everything we produced all in one place really brought it home for me.
This year, with a lean team, we shipped:
• Multiple recording modes for capturing issues when they happen
• Annotations and sketches directly on session recordings