Comment by smithclay
5 hours ago
Kudos to Ben for speaking to one of the elephants in the room in observability: data waste and the impact it has on your bill.
All major vendors have a nice dashboard and sometimes alerts to understand usage (broken down by signal type or tags) ... but there's clearly a need for more advanced analysis, which Tero seems to be going after.
Speaking of the elephant in the room in observability: why does storing data with a vendor cost so much in the first place? With most new observability startups choosing to store data in columnar formats on cheap object storage, I think this is also getting challenged in 2026. The combination of cheap storage with meaningful data could breathe some new life into the space.
Excited to see what Tero builds.
The problem has never been the storage. It's running those queries to return in milliseconds - whether it's for a dashboard, an alert, or your new AI agent trying to make sense of it.
Thank you! And you're right, it shouldn't cost that much. Financials are public for many of these vendors: 80%+ margins. The cost-to-value ratio has gotten way out of whack.
But even if storage were free, there's still a signal problem. Junk has a cost beyond the bill: infrastructure works harder, pipelines work harder, network egress adds up. And then there's noise. Engineers are inundated with it, which makes it harder to debug, understand their systems, and iterate on production. And if engineers struggle with noise and data quality, so does AI.
It's all related. Cheap storage is part of the solution, but understanding has to come first.