The OpenTelemetry spec is absolutely what folks have been waiting for, for as long as I've been in computing (~20 years): a single standard, implemented in nearly every popular language, with very close feature parity. It's honestly wonderful to work with compared to the old vendor-supplied frameworks.
I took it upon myself to write a library for my current employer (four years ago now?) that abstracted and standardized the way our Rust services instantiate and use the metrics and tracing fundamentals that OpenTelemetry provides. I recently added OTLP logging (technically using tracing events) to allow forwarding baggage / context / metadata with the log lines. The `tracing` crate in Rust also has a macro called `instrument` that mostly auto-instruments your functions for tracing: the tracing context is extracted and propagated into your function, so the trace / span can be attached to subsequent HTTP / gRPC requests.
We did all kinds of other stuff too, like adding a method for attaching the trace-id to our Kafka messages so we can see how long the entire lifetime of a request takes (including time spent sitting on the queue). It's been extremely insightful.
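The Kafka trick boils down to injecting the current trace context into the message headers, usually as a W3C `traceparent` header, so the consumer can resume the trace and the queue time shows up as the gap between spans. Here is a minimal sketch of just the injection step, with a plain `HashMap` standing in for Kafka headers (real code would go through the `opentelemetry` propagator API and your Kafka client's header type, so treat the names here as illustrative):

```rust
use std::collections::HashMap;

// Render a W3C `traceparent` header value: version-traceid-spanid-flags,
// all lowercase hex. This is the wire format OpenTelemetry propagators emit.
fn traceparent(trace_id: u128, span_id: u64, sampled: bool) -> String {
    format!(
        "00-{:032x}-{:016x}-{:02x}",
        trace_id,
        span_id,
        if sampled { 1 } else { 0 }
    )
}

// Inject the current trace context into a message's headers before producing,
// so the consumer can pick up the same trace and measure time spent queued.
fn inject_trace_context(headers: &mut HashMap<String, String>, trace_id: u128, span_id: u64) {
    headers.insert("traceparent".to_owned(), traceparent(trace_id, span_id, true));
}

fn main() {
    let mut headers = HashMap::new();
    inject_trace_context(&mut headers, 0x4bf92f3577b34da6a3ce929d0e0e4736, 0x00f067aa0ba902b7);
    // Prints: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
    println!("{}", headers["traceparent"]);
}
```

On the consume side you parse the same header back out and start the consumer span as a child of that context; the delta between the producer span's end and the consumer span's start is your queue time.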
SigNoz is newer to the game, and I'm glad there are more competitors and vendors supporting OpenTelemetry natively. We originally talked to some of the big vendors; they would gladly accept OpenTelemetry data, but they marked every metric as a "custom" metric and would charge out the wazoo for each one, far in excess of whatever was instrumented natively with their APM plugin thingamabob.
The more the better. I love OpenTelemetry, and using it in Rust has been mostly great.
I have OTEL + Rust in production, alongside some other languages (+ OTEL), and it is by far the most useful and predictable of the bunch. I often find myself monkey-patching logging into other languages' libraries, whereas with Rust it just works.
(Except for this, that is: https://github.com/tokio-rs/tracing/issues/2519)
Opened the link. Saw my own comment. I'm still as confused today as I was then about how this was ever supposed to work—either the quoted code is wrong or there's some weird unstated interface contract. I gather from other issues the maintainers are uninterested in a semver break any time soon. Unsure if they'd accept a performance regression (even if it makes the thing actually work). So I feel stuck. In the meantime, I don't use per-layer filtering. That's a trap.
I've got a whole list of puzzling bugs in the tracing <-> opentelemetry <-> datadog linkage.
The interesting meta-pattern here is how often the tooling around a problem lags the problem itself by 5-10 years. The operational complexity exists, the pain is real, but because it's distributed across many small actors rather than concentrated in a few large ones, the market for structured solutions is slower to develop. That's usually a signal rather than a dead end — it means the first tool that actually fits the workflow, rather than a generic workflow tool, has real leverage.
Speaking of OpenTelemetry: I try to use it with my personal projects, an ASP.NET app as well as a .NET console app. I don't have the corporate background in OpenTelemetry that others here seem to. I needed my own file log exporter, though I didn't write it myself: I used Claude to write it for me in JSONL format, which seemed like a good way to keep each row as a JSON object. For the console app, I get a file something like this:
```
logs_2025-12-24_0003.jsonl
```
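For what it's worth, a JSONL exporter doesn't have to be much more than "append one JSON object per line." A rough sketch of the core of such an exporter, written in Rust for brevity rather than .NET (the field names are my own invention, not the official OTLP log schema):

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Append one log record as a single JSON line (JSONL): one object per row,
// so the file can be tailed, grepped, or bulk-loaded line by line.
fn write_log_line(path: &str, timestamp: &str, level: &str, body: &str) -> std::io::Result<()> {
    // Naive escaping of backslashes and quotes; a real exporter would use a
    // JSON library instead of hand-rolled string formatting.
    let esc = |s: &str| s.replace('\\', "\\\\").replace('"', "\\\"");
    let line = format!(
        "{{\"timestamp\":\"{}\",\"severity\":\"{}\",\"body\":\"{}\"}}\n",
        esc(timestamp),
        esc(level),
        esc(body)
    );
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    f.write_all(line.as_bytes())
}

fn main() -> std::io::Result<()> {
    write_log_line("logs_example.jsonl", "2025-12-24T00:03:00Z", "INFO", "app started")
}
```

The nice property of JSONL is that every line is independently parseable, so you can tail, grep, or bulk-import the file without reading the whole thing.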
I asked Claude to keep it in an XDG folder, and it chose:

```
/home/{username}/.local/share/{applicationName}/telemetry/logs
```
I also have folders for metrics and traces but those are empty.
I have never needed to look at the logs for the .NET console app, and the only reason I have looked at the logs on the ASP.NET app was to review errors when I ran into one, which frankly I don't need OpenTelemetry for.
What am I missing here? Am I using it wrong?
If you use OpenTelemetry, where do your logs, metrics, and traces go? Do you write your own custom classes to write them to a file on disk? Do you pay for something like Datadog (congratulations on winning the lottery, I guess)?
I appreciate your reply. Thank you for helping me learn.