
Posts by Mehdi Hasan Khan

Yeah, we're agreeing here; that's the trend I'm seeing too. For a small team or a greenfield project, I'd jump right into that. Larger brownfield migrations are multi-step because of the humans in the system: unlearning is often harder than learning. Tech is pointless without adoption. :)

2 days ago 0 0 1 0

It's good to know Honeycomb is leading the way there to help with that. Some other vendors are still on the fence about providing this support officially (although they acknowledge it's a very common ask). I suspect it has something to do with owning the risk of small, hard-to-detect errors in the process.

2 days ago 0 0 0 0

I understand that you see that mix-up a lot (monitoring vs. observability), and I understand the difference. But investing in being able to answer unknown-unknowns doesn't mean you get to ignore known, expected monitoring needs. Monitoring isn't obsolete, and never will be; the two coexist in production systems.

2 days ago 0 0 1 0

Yep, I'm betting on LLMs and agents to solve this migration pain at some point. I've tried already; I suspect the vendor-specific query languages and conventions aren't well represented in the training data of generic models, so they still hallucinate massively. Maybe soon.

2 days ago 0 0 1 0

There's no solution to this, for practical reasons: vendors innovate in the storage, query-efficiency, and correlation layers. It's their differentiator.

Just using OTel will not take the migration pain away; a big part of the pain is still there, and people need to share that too. That's my point.

2 days ago 0 0 1 0

Not sure how SLOs solve that migration pain. Define an error budget, set SLO-based alerts: the fundamentals are the same, but the query languages still differ among vendors.

Dashboards aren't yesterday's thing either. You often need to see trends.
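To put numbers on the error-budget fundamentals (a sketch; the SLO target and window below are hypothetical examples, not from any vendor):

```python
# Error-budget arithmetic that stays the same no matter which vendor
# evaluates the alert. The target and window below are made up.

def error_budget_minutes(slo: float, window_days: int) -> float:
    """Minutes of allowed unavailability for an SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes of budget.
print(error_budget_minutes(0.999, 30))
```

The vendor-specific part is not this arithmetic; it's expressing the burn-rate query in each vendor's dialect.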

2 days ago 0 0 2 0
Preview
Before OpenTelemetry, switching observability vendors meant ripping out instrumentation and rewiring integrations across every service. With OTel, standardized agents, vendor-neutral collection, the...

And I don't think I'm the only one saying or seeing that: www.linkedin.com/posts/tiwari...

(Not affiliated with the post author or their product)

2 days ago 0 0 0 0

After migrating between a few o11y vendors, switching instrumentation/agents was never the hardest part. The major friction was always the existing dashboards and alerts. Whether you call that lock-in or not is semantics; either way, it's a major barrier that OTel doesn't aim to solve.

2 days ago 0 0 1 0

Vendor lock-in is still there though, just in a different layer.

Sending consistent telemetry to a different vendor only solves half of the problem. Because every vendor uses its own query language(s), migrating alerts and dashboards still requires a major migration effort.
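To make the point concrete, here's the same 5xx error-rate alert condition sketched in two dialects. The PromQL line is standard; the Datadog-style line is only illustrative of the shape, not exact product syntax:

```
# PromQL (Prometheus / Grafana)
sum(rate(http_requests_total{status=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m])) > 0.01

# Datadog-style metric query (illustrative shape only)
sum:http.requests.errors{*}.as_rate() / sum:http.requests{*}.as_rate() > 0.01
```

Same signal, two rewrites. Multiply that by every alert and dashboard panel and that's the migration effort.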

2 days ago 0 0 1 0
Preview
VictoriaMetrics
VictoriaMetrics is a fast, cost-effective and scalable open source monitoring solution and time series database typically used for processing high volumes of data and for long-term data storage.

The latest update for #VictoriaMetrics includes "Benchmarking #Kubernetes Log Collectors: vlagent, Vector, Fluent Bit, #OpenTelemetry Collector, and more" and "VictoriaMetrics February 2026 Ecosystem Updates".

#DevOps #TimeSeries #OpenSource https://opsmtrs.com/3Lk7JnI

2 weeks ago 2 1 0 0
Preview
Deprecating Span Events API
OpenTelemetry is deprecating the Span Event API. This post explains why we're making this change, what it means at a high level, and how you can prepare. In short: We want to remove confusion and…

‼️ #OpenTelemetry is deprecating the Span Events API.

Our aim is to remove the confusion caused by having overlapping ways to emit events: span events and log-based events.

Read our latest post to learn more about how you can prepare.

buff.ly/0hOwfuk

2 weeks ago 9 3 0 1
Preview
Building a dry-run mode for the OpenTelemetry Collector | Ubuntu
Teams continuously deploy programmable telemetry pipelines to production, without having access to a dry-run mode. At the same time, most organizations lack staging environments that resemble producti...

I just published a blog post on an experiment I did with building a dry-run mode for the OpenTelemetry Collector. It's a work in progress, and was mainly built to scratch my own itch, but nonetheless -- enjoy!

ubuntu.com/blog/buildin...

#observability #otel #o11y #sre

2 weeks ago 8 4 0 0

CloudWatch charges you $0.50/GB to ingest logs, $0.03/GB/month to store them, and $0.005/GB every time you search them.

During an incident, your query volume spikes and so does your bill.
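Back-of-the-envelope with those rates (the workload volumes below are made up for illustration):

```python
# CloudWatch Logs rates quoted above, in USD per GB.
INGEST_PER_GB = 0.50
STORE_PER_GB_MONTH = 0.03
SCAN_PER_GB = 0.005

def monthly_bill(ingested_gb: float, stored_gb: float, scanned_gb: float) -> float:
    """Rough monthly cost: ingest + storage + query scans."""
    return (ingested_gb * INGEST_PER_GB
            + stored_gb * STORE_PER_GB_MONTH
            + scanned_gb * SCAN_PER_GB)

quiet_month = monthly_bill(500, 2000, 100)      # light querying
incident_month = monthly_bill(500, 2000, 3000)  # incident-driven scan spike
print(quiet_month, incident_month)
```

The scan term is the only one that scales with query volume, which is exactly why incidents show up on the bill.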

2 weeks ago 4 2 1 1
Preview
OpenTelemetry roadmap: Sampling rates and collector improvements ahead
At OTel Unplugged EU, OpenTelemetry's maintainers laid out an ambitious roadmap — from smarter sampling and entity definitions to Arrow's stateful OTLP.


1 month ago 3 2 0 0

Maybe I’m just mad because I missed the easiest grift in history: take existing o11y vocabulary, swap “request_id” for “prompt_id”, ship a dashboard, print money

1 month ago 2 1 0 0

We cut @nodejs memory in half with a one-line Docker image swap.

No code changes. No new APIs. Just smaller heaps.

Here's what happened when we tested pointer compression on real workloads 🧵

1 month ago 48 7 3 1
Preview
evlog - Logging that makes sense
Wide events and structured errors for TypeScript. One log per request, full context, errors that explain why and how to fix.

Been having a lot of fun pushing evlog lately: zero‑dependency core, wide events, structured errors, and fresh adapters for PostHog, Sentry, OTLP, Axiom & Workers.

Into observability with good DX? This might be your new favorite logging layer → dub.sh/better-log

1 month ago 3 1 0 0
Preview
Mastering the OpenTelemetry OTLP HTTP Exporter · Dash0
Learn how to configure and optimize the OTLP HTTP exporter for secure, reliable and scalable telemetry delivery

The OTLP HTTP exporter is the boundary against data loss.

To build a resilient #observability pipeline, you need to tune persistence and backoff.

We explain how to configure #OpenTelemetry so a simple network blip doesn’t turn into an #SRE incident.

Full deep dive 👉 dash0.link/otel-otlp-ht...
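For reference, the persistence and backoff knobs live on the exporter itself. A rough sketch of an otlphttp exporter with retry and a file-backed sending queue (the endpoint and directory are placeholders; verify field names against your Collector version):

```yaml
exporters:
  otlphttp:
    endpoint: https://otlp.example.com:4318
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:
      enabled: true
      storage: file_storage   # persist the queue across restarts

extensions:
  file_storage:
    directory: /var/lib/otelcol/queue

service:
  extensions: [file_storage]
```

Without the storage extension, the sending queue is in-memory only and a crash during a network blip drops whatever was buffered.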

2 months ago 2 1 0 0
Preview
Why the OpenTelemetry Batch Processor is Going Away (Eventually) · Dash0
An analysis of why the OpenTelemetry community is moving away from the in-memory batch processor in favor of exporter-level batching. This post explains the architectural limitations of memory bufferi...

Still using the #OpenTelemetry Batch Processor?
In-memory buffering can mean 100% data loss.

The community now favors exporter-level batching for better durability. Julia breaks down why #observability and #CloudNative teams are making the switch:

👉 dash0.link/the-otel-bat...

2 months ago 3 2 0 0
Preview
OpenTelemetry News | Datadog Open Source Hub
OpenTelemetry News is a place to share the latest updates from across the OpenTelemetry ecosystem. You'll find a mix of community announcements, cross-company contributions, and Datadog updates relate...

Excited to see Juliano Costa’s OTel newsletter go public.

This has been running internally at Datadog for the past year, tracking project updates across the Collector, specs, and the broader OTel community.

Now available publicly:
opensource.datadoghq.com/category/ote...

2 months ago 1 1 0 0
Preview
OpenTelemetry JS Statement on Node.js DOS Mitigation
You may have seen a recent Node.js security advisory and related coverage discussing a potential denial-of-service issue involving async_hooks. OpenTelemetry (and other APM tools) were mentioned…

Read the statement from #OpenTelemetry about the Node.js denial-of-service issue reported recently. 👇

opentelemetry.io/blog/2026/ot...

2 months ago 8 3 0 0

Yep, the last resort if nothing is out there :)

3 months ago 0 0 0 1
Preview
hyperdx-js/packages/instrumentation-sentry-node at main · hyperdxio/hyperdx-js
HyperDX for Node.js and Browsers. Contribute to hyperdxio/hyperdx-js development by creating an account on GitHub.

Found this github.com/hyperdxio/hy...

But didn't test that yet. Looks a bit incomplete to me (from reading the code).

3 months ago 1 0 0 0

Hey OpenTelemetry experts, does anyone know of a drop-in replacement for the Sentry SDKs (for error tracking) that records exceptions as OTel span events?

Mostly for backend error tracking. A Sentry-compatible SDK/API is enough to migrate away from Sentry; I'm not looking for a complete error-tracking product.
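For anyone wondering what the mapping looks like: a minimal sketch of a Sentry-style capture_exception backed by span events. The Span class here is a tiny stand-in that mimics the shape of OTel's record_exception and its exception.* semantic-convention attributes; it is not the real opentelemetry-api class.

```python
import traceback

class Span:
    """Stand-in span that collects events, mimicking OTel's record_exception."""
    def __init__(self):
        self.events = []

    def record_exception(self, exc: BaseException) -> None:
        # OTel semantic conventions name these exception.type/message/stacktrace.
        self.events.append({
            "name": "exception",
            "exception.type": type(exc).__name__,
            "exception.message": str(exc),
            "exception.stacktrace": "".join(
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),
        })

def capture_exception(span: Span, exc: BaseException) -> None:
    """Sentry-shaped entry point, routed to a span event."""
    span.record_exception(exc)

span = Span()
try:
    1 / 0
except ZeroDivisionError as e:
    capture_exception(span, e)

print(span.events[0]["exception.type"])  # ZeroDivisionError
```

A real shim would call record_exception on the current active span from the OTel API instead of a local object; the point is that Sentry's capture surface maps cleanly onto span events.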

3 months ago 2 0 3 0
Preview
SigNoz
Open-source Observability platform. Understand issues in your deployed applications & solve them quickly.

The latest update for #SigNoz includes "Reducing #OpenTelemetry Bundle Size in Browser Frontend" and "OpenTelemetry Agents - The Complete Beginner's Guide (2025)".

#Monitoring #Observability #OpenSource https://opsmtrs.com/3kBqQNU

3 months ago 5 1 0 0

We also do tail sampling with traces, which needs OTel end to end. The DD agent doesn't work for that (it can only export to Datadog), so we went straight to DDOT.
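For context, tail sampling in the Collector is the tail_sampling processor, which buffers whole traces before deciding what to keep. A rough policy sketch (thresholds and percentages are illustrative; check field names against your Collector version):

```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow
        type: latency
        latency:
          threshold_ms: 500
      - name: sample-the-rest
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
```

Because the decision happens after spans from all services arrive, every service has to ship spans through the same OTel pipeline, which is why an export-only vendor agent can't do it.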

3 months ago 1 0 0 0

The primary benefit is official vendor support when you need it, with DDOT as the agent.

Tons of maintenance benefits as well: it comes bundled with the Helm chart alongside everything else.

If you need additional OTel Collector components, you can build on top of it too.

3 months ago 2 0 1 0

We are using OTel (traces) with DD, and the DDOT collector is amazing. It's the best of both worlds, imo: you get all the flexibility in telemetry transformation from the huge OTel ecosystem, plus cost control and the usability of DD.

OTel metrics came later and are still a mess, unfortunately.

3 months ago 1 0 2 0

DD is so far ahead on usability that if you've experienced it and then gone back to LGTM, it feels like a sharp downgrade, unfortunately. LGTM is flexible, but the o11y team needs to put in the work. The company pays either way in the end: either the vendor, or the internal investment.

3 months ago 1 0 1 0

RED metrics, runtime metrics, or business/custom metrics?

OTel traces work fine, and RED metrics can be generated from traces in the collector too.

Also, doesn't the LGTM stack prefer Prometheus metrics? Any reason to use OTel for metrics there?
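On generating RED metrics from traces in the collector: the spanmetrics connector bridges a traces pipeline into a metrics pipeline. A rough sketch (receiver and exporter names are placeholders for whatever the stack actually uses):

```yaml
connectors:
  spanmetrics: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics]   # also add your trace backend exporter here
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheusremotewrite]
```

The connector acts as the exporter of the traces pipeline and the receiver of the metrics pipeline, deriving request/error/duration series from span data.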

3 months ago 2 0 1 0