
Posts by Steph Johnson

How AI Is Changing What It Means to Be a 10x Engineer: The Role of Mentorship, Infrastructure, and AI in Multiplying Team Velocity

The developers who know that (and build deep visibility into their systems so they can direct AI strategically) are the ones who become genuine multipliers.

Great read from @codingwithroby.bsky.social

codingwithroby.substack.com/p/how-ai-is-...

1 month ago

Raw output was never the real bottleneck. System design, infrastructure decisions, architectural judgment: that's what compounds or creates drag over time.

AI still struggles to get those right.

1 month ago

The 10x engineer has never been the person who coded the fastest. This is now more true than ever.

And it should reframe how you think about AI productivity.

1 month ago

Automatic capture and correlation of every piece of data from a user's session plus the corresponding system behavior.

1 month ago

The problem: AI generates code faster than teams can review (and debug) it.

The constraint: AI tools need complete visibility into runtime context, not sampled fragments, to more accurately generate code or assist with debugging.

The solution: 👇

1 month ago
The hidden costs of tech support: Quantifying the engineering cost of customer support.

Check out this article about the hidden cost of technical support: leaddev.com/software-qua...

1 month ago

This triage matrix assumes visibility. Without full-stack, auto-correlated, unsampled data, you're making expensive decisions based on incomplete information.

How often does that guess turn out wrong?

1 month ago

Oftentimes, you're making high-stakes decisions (what issues to prioritize, which developers to pull off other work, whether to wake someone up at 2am) before you fully understand:

• Root cause
• Blast radius
• Ramifications

1 month ago

This is how engineering leaders triage production issues: 👇

What this matrix doesn't show: the hidden cost of triaging blind.

1 month ago

AI tools boost velocity but erode deep system knowledge.

Debugging and system understanding are the next challenge.

Great article by Stephane Moreau: open.substack.com/pub/blog4ems...

1 month ago

Teams are rushing to add AI debugging to their observability stacks. But if the underlying data is:

• Aggressively sampled
• Missing payloads
• Scattered across disconnected tools

Adding AI on top just means faster access to incomplete data.

Fix the data problem first.

1 month ago

Your AI tools (a) don't have access to the data they need, or (b) require humans to manually gather and correlate the data.

AI agents need correlated, contextual data to be useful. Right now, most teams don't have that, and their observability tools weren't built to provide it.
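As a rough sketch of what "correlated, contextual data" could look like as agent input (field names and values are hypothetical, not any tool's actual schema):

```python
import json

# A hypothetical, already-correlated context record: the shape an AI
# debugging agent would need as a single input. Everything below is
# illustrative data, not a real product schema.
session_context = {
    "session_id": "sess-42",
    "user_action": "clicked 'Pay now'",
    "frontend_error": "CheckoutError: request failed",
    "backend_trace": {
        "service": "payments",
        "endpoint": "POST /charge",
        "status": 502,
        "upstream_response": {"code": "card_declined"},
    },
}

# With everything in one record, the agent's prompt is one dump
# instead of a human pasting fragments from three dashboards.
prompt = (
    "Why did checkout fail for this user?\n"
    + json.dumps(session_context, indent=2)
)
```

The point is the shape, not the tooling: when the record already joins user action, frontend error, and backend trace, the agent has something to reason over.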

2 months ago

The data exists, but it's scattered and unstructured:

‣ Frontend errors live in Sentry
‣ Backend traces live in Datadog
‣ User actions live in... Screen recordings? Support tickets?

So when an AI tries to answer "why did checkout fail for this user?", it can't.
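The fix, at its simplest, is a join on a shared identifier propagated across the stack. A minimal sketch, with made-up events standing in for Sentry, Datadog, and session data:

```python
from collections import defaultdict

# Hypothetical events from three disconnected sources. The only way to
# join them after the fact is a shared identifier (here, trace_id)
# that was propagated across the stack at instrumentation time.
frontend_errors = [
    {"trace_id": "abc123", "error": "CheckoutError: request failed"},
]
backend_traces = [
    {"trace_id": "abc123", "span": "POST /charge", "status": 502},
]
user_actions = [
    {"trace_id": "abc123", "action": "clicked 'Pay now'"},
]

def correlate(*sources):
    """Group events from every source by their shared trace_id."""
    sessions = defaultdict(list)
    for source in sources:
        for event in source:
            sessions[event["trace_id"]].append(event)
    return dict(sessions)

timeline = correlate(user_actions, frontend_errors, backend_traces)
print(len(timeline["abc123"]))  # 3 events in one unified view
```

If any source drops the identifier (or samples the event away), the join silently returns a partial timeline, which is exactly the failure mode described above.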

2 months ago

Imagine you're looking for a specific email, but:

‣ Your inbox has 100,000 of them
‣ They're all labeled "Email"
‣ There's no search function
‣ Some emails are in Gmail, some in Outlook, some in Yahoo

That's what AI agents face when trying to debug your system. 🧵

2 months ago

If your observability data isn't correlated across frontend and backend (or you're missing critical data due to sampling or lack of instrumentation), adding AI on top won't fix it.

It'll just give you faster access to incomplete information.

AI debugging is only as good as the data you feed it.

2 months ago

That's the skill gap that's emerging:

not who can ship features fastest, but who can explain why their system behaves the way it does (and fix it with confidence when it doesn't).

2 months ago

AI has lowered the barrier to writing code.
But it hasn't made systems easier to understand.

When something breaks in production, you still need deep knowledge of your system, the ability to read traces, and the instinct to know where to look.

2 months ago

The best engineers were never the ones who wrote code fast or with “clever” solutions.

The gap between top and bottom performers continues to widen.

2 months ago

PS. Multiplayer captures all of this 👆 automatically (request/response content and headers from internal services AND external dependencies), correlated in a single session recording.

2 months ago

There's a debugging bottleneck few talk about: the hours engineers spend reconstructing what happened in production because critical context is missing. For example:

• What payload did we send?
• What did the external API return?
• Which headers were set?
• What did the middleware modify?
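One minimal way to keep that context is to wrap outbound calls so the exact payload, headers, and response are recorded at call time. This is an illustrative sketch under hypothetical names, not Multiplayer's implementation:

```python
import functools

CAPTURED = []  # in a real system this would stream to a session recorder

def capture(fn):
    """Record exactly what was sent and what came back, so nobody has
    to reconstruct it from logs after the fact."""
    @functools.wraps(fn)
    def wrapper(payload, headers):
        record = {
            "call": fn.__name__,
            "sent_payload": payload,
            "sent_headers": headers,
        }
        try:
            response = fn(payload, headers)
            record["response"] = response
            return response
        finally:
            # record even when the call raises
            CAPTURED.append(record)
    return wrapper

@capture
def external_payment_api(payload, headers):
    # stand-in for a real external dependency
    return {"status": 402, "body": "card_declined"}

external_payment_api({"amount": 1999}, {"Idempotency-Key": "k-1"})
print(CAPTURED[0]["response"]["status"])  # 402
```

With the record captured at the boundary, the four questions above have answers on file instead of requiring reconstruction.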

2 months ago

Which pie chart is your team living in?

This is the difference between 3 hours of context switching and 10 minutes of clarity.

Bad debugging = manual correlation across scattered tools.
Good debugging = auto-correlated runtime context in one place.

2 months ago

Not all session replays are built for the same job.

📊 Product analytics tools answer questions about user behavior.
🪲 Debugging tools need to answer questions about system behavior.

When bugs span APIs, services, and data layers, engineers need replays that correlate user actions to backend data.👇

3 months ago
30 Min Meeting | Multiplayer | Cal.com

If you’re open to sharing what didn’t click, what felt heavy, or what made you pause, it would genuinely help us build a better experience for all of our users.

You can schedule time with me here: cal.com/multiplayer/...

3 months ago

Building developer tools means constantly stress-testing your own assumptions.

If you signed up for Multiplayer and bounced during onboarding, understanding why is incredibly valuable to us.

We’re offering a $50 gift card for a short conversation about your experience (15–20 min).

3 months ago

Session replay is useful, but when visibility stops at the UI, engineers are left stitching together logs, traces, and payloads by hand. That friction adds up quickly.

Multiplayer is worth a look (and a free try!) if your debugging workflow still involves too much tab-hopping.

3 months ago

Question for teams using LogRocket: how much time do you spend jumping between tools to connect frontend issues to backend problems?

3 months ago
Multiplayer 2025: year in review
In 2025 we focused on a simple but ambitious goal: making debugging faster, less fragmented, and less manual. Check out all the releases that made it possible.

6/6 Grateful to our customers, design partners, and community for supporting us and pushing us forward … we’re excited for what we’re building next.

www.multiplayer.app/blog/multipl...

3 months ago

5/

I’m incredibly proud of our team. Not just for shipping fast, but for shipping thoughtfully, listening closely to our users, and raising the bar on quality with every release. 💜

3 months ago

4/

• An MCP server to feed full-stack context into AI tools
• A VS Code extension to debug from inside the editor
• Mobile (React Native) support
• Notebooks for full-cycle debugging and documentation
• Automatic system architecture maps that stay up to date

3 months ago

3/ Seeing the full list of everything we produced all in one place really brought it home for me.

This year, with a lean team, we shipped:

• Multiple recording modes for capturing issues when they happen
• Annotations and sketches directly on session recordings

3 months ago