Echoing the comment below, I guess one obvious thing is that we are a team at ClickHouse building an official first-party product on top of it. That translates into:
- We're flexible on top of any ClickHouse instance: you can use virtually any schema in ClickHouse and things will still work. Custom schemas are pretty important either for tuned high performance or once you're at a scale like Anthropic's. This also makes it incredibly easy to get started (especially if you already have data in ClickHouse).
- The above also means you don't need to buy into OTel. I love OTel, but some companies choose to use Vector, Cribl, S3, a custom writing script, etc. for good reasons. All of that is supported natively thanks to the various ClickHouse integrations, which naturally means you can use ClickStack/HyperDX in those scenarios as well (a rough sketch of these two points follows this list).
- We also have some cool tools for wrangling telemetry at scale, from Event Deltas (high-cardinality correlation between slow spans and normal spans to root-cause issues) to Event Patterns (automatically clustering similar logs or spans together with ML). All of these help users dive into their data in easier ways than just searching & charting.
- We also have session replay capability - to truly unify everything from click to infra metrics.
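To make the first two points a bit more concrete, here's a minimal sketch of the "bring your own schema, no OTel required" idea. The table, columns, and connection details are hypothetical and purely illustrative; this isn't ClickStack's actual schema or tooling, just what a hand-rolled setup might look like.

```python
# A minimal sketch of the "bring your own schema" idea: a hypothetical,
# hand-rolled log table (not ClickStack's actual schema) written to by a
# custom script instead of an OTel pipeline. All names here are assumptions.
from datetime import datetime, timezone

import clickhouse_connect  # pip install clickhouse-connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

# Hypothetical custom schema -- whatever layout your team already uses.
client.command("""
    CREATE TABLE IF NOT EXISTS app_logs (
        ts       DateTime64(3),
        service  LowCardinality(String),
        level    LowCardinality(String),
        trace_id String,
        message  String
    )
    ENGINE = MergeTree
    ORDER BY (service, ts)
""")

# "A custom writing script": push rows straight into ClickHouse,
# with no OTel collector in the path.
client.insert(
    "app_logs",
    [[datetime.now(timezone.utc), "checkout", "error", "abc123", "payment timed out"]],
    column_names=["ts", "service", "level", "trace_id", "message"],
)

# Anything that can read the table (a UI pointed at this schema, or plain SQL)
# can now search it.
rows = client.query(
    "SELECT ts, service, message FROM app_logs "
    "WHERE level = 'error' ORDER BY ts DESC LIMIT 10"
)
print(rows.result_rows)
```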
We're built to work at the 100PB+ scale we run internally here for monitoring ClickHouse Cloud, but we're flexible enough to pinpoint, end to end, a specific user issue that comes up once in a support case.
There's probably a lot more I'm missing. Ultimately, from a product philosophy standpoint, we aren't big believers in the "3 pillars" concept, which tends to manifest as three silos/tabs for "logs", "metrics", and "traces" (this isn't just SigNoz - it's across the industry). I'm a big believer that we're building tools to unify and centralize signals/clues in one place and to give the engineer the right datapoint at the right time. During an incident I just think about the next clue I can get to root-cause the issue, not whether I'm in the logging product or the tracing product.
hey, SigNoz maintainer here.
> an official first-party product on top
So it seems like the direction you're going is to enable ingestion into different ClickHouse instances (Cloud/BYOC/self-hosted) and then use HyperDX as the query & visualization layer on top.
I think the fundamental difference in how we approach this at SigNoz is that we want to solve for observability, and the fact that we use ClickHouse today is just a point-in-time detail. In the future, we are open to using any other datastore that may be more performant for observability. We can also use different databases to augment different observability use cases.
>Ultimately from a product philosophy standpoint, we aren't big believers in the "3 pillars" concept, which tends to manifest as 3 silos/tabs for "logs", "metrics", "traces" (this isn't just SigNoz - but across the industry).
I am not too sure how this works in practice: do you expect people to write metrics and logs queries in the same explorer? In our experience, the query-writing experience is very different for logs and metrics, and you need different defaults to make the query-writing UX easier for users.
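To make that concrete, here's a rough sketch of the two default query shapes I mean, assuming hypothetical OTel-exporter-style tables (`otel_logs`, `otel_metrics_gauge`); the names are illustrative, not either product's actual schema.

```python
# Toy illustration of the UX difference described above. Table and column
# names are assumptions loosely modeled on OTel-exporter-style schemas.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

# Logs exploration usually defaults to "filter + raw rows, newest first".
logs_sql = """
    SELECT Timestamp, ServiceName, Body
    FROM otel_logs
    WHERE SeverityText = 'ERROR'
      AND Timestamp > now() - INTERVAL 1 HOUR
    ORDER BY Timestamp DESC
    LIMIT 100
"""

# Metrics exploration usually defaults to "aggregate over time buckets",
# which needs a rollup function and an interval chosen up front.
metrics_sql = """
    SELECT
        toStartOfInterval(TimeUnix, INTERVAL 1 MINUTE) AS bucket,
        avg(Value) AS avg_value
    FROM otel_metrics_gauge
    WHERE MetricName = 'system.cpu.utilization'
      AND TimeUnix > now() - INTERVAL 1 HOUR
    GROUP BY bucket
    ORDER BY bucket
"""

for sql in (logs_sql, metrics_sql):
    print(client.query(sql).result_rows[:3])
```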
That said, I agree the ability to query across signals is an important point, and we are already doing work on this at SigNoz (https://signoz.io/blog/observability-requires-querying-acros...).
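As a rough sketch of the kind of cross-signal query I have in mind (again with illustrative, OTel-exporter-style table and column names), starting from slow spans and pulling the logs that share their trace IDs:

```python
# Rough sketch of a cross-signal query: find the slowest recent spans, then
# fetch the logs correlated to them by trace ID. Table/column names are
# assumptions for illustration, not either product's actual schema or UI.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)

cross_signal_sql = """
    WITH slow_traces AS (
        SELECT TraceId, SpanName, Duration
        FROM otel_traces
        WHERE Timestamp > now() - INTERVAL 1 HOUR
        ORDER BY Duration DESC
        LIMIT 20
    )
    SELECT l.Timestamp, l.SeverityText, l.Body, s.SpanName
    FROM otel_logs AS l
    INNER JOIN slow_traces AS s ON l.TraceId = s.TraceId
    ORDER BY l.Timestamp
"""

print(client.query(cross_signal_sql).result_rows)
```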
"You" here is ClickHouse
Yes, but that is because they got acquired by ClickHouse. My question still remains.