Comment by otterley
3 days ago
At the end of the day, it comes down to what sort of functionality you want out of your observability. Modest needs usually require modest resources: sure, you could just append to log files on your application hosts and ship them to a central aggregator where they're stored as-is. That's cheap and fast, but you won't get a lot of functionality out of it. If you want more, like real-time indexing, transformation, analytics, alerting, etc., it requires more resources. Ain't no such thing as a free lunch.
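To make the "cheap and fast" path concrete, here is a minimal sketch (not from the original comment) of shipping raw log lines to a central aggregator with no parsing, indexing, or transformation. The file path and aggregator endpoint are hypothetical placeholders.

```python
# Tail a local application log and forward raw lines to a central aggregator
# over TCP, stored as-is downstream. No indexing or transformation happens here.
import socket
import time

LOG_PATH = "/var/log/app/app.log"              # hypothetical application log
AGGREGATOR = ("logs.internal.example", 5140)   # hypothetical aggregator endpoint

def ship_raw_lines():
    with open(LOG_PATH, "r") as fh, socket.create_connection(AGGREGATOR) as sock:
        fh.seek(0, 2)  # start at end of file, like `tail -f` (rotation not handled)
        while True:
            line = fh.readline()
            if not line:
                time.sleep(0.5)  # nothing new yet; poll again shortly
                continue
            # Forward the bytes untouched -- all the "functionality" lives downstream.
            sock.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    ship_raw_lines()
```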
Surely you aren’t doing real-time indexing, transformation, analytics, etc. in the same service that is producing the logs.
A catastrophic increase in logging could certainly take down your log-processing pipeline, but it should not create cascading failures that compromise your service.
Of course not. Worst case should be backpressure, which means processing, indexing, and storage delays. Your service might be fine but your visibility will be reduced.
For sure. You can definitely tip over your logging pipeline and impact visibility.
I just wanted to make sure we weren’t still talking about “causing a cascading outage due to increased log volumes” as was mentioned above, which would indicate a significant architectural issue.
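To illustrate the decoupling discussed above, here is a minimal sketch, assuming a Python service, of keeping log shipping off the request path with the standard library's queue handlers. The queue size and logger name are illustrative; the point is that a slow or backed-up pipeline costs you records, not the service.

```python
# Records go into a bounded in-process queue; a separate listener thread
# forwards them to the (possibly slow) pipeline. If the pipeline backs up
# and the queue fills, records are dropped -- reduced visibility, not a
# cascading failure in the application itself.
import logging
import logging.handlers
import queue

log_queue = queue.Queue(maxsize=10_000)  # bounded: a slow pipeline can't grow memory forever

# QueueHandler enqueues with put_nowait, so a full queue drops the record
# (logging reports the error) instead of blocking the application thread.
queue_handler = logging.handlers.QueueHandler(log_queue)

# Stand-in for the real shipper (e.g. a socket or HTTP handler to the aggregator).
downstream = logging.StreamHandler()

listener = logging.handlers.QueueListener(log_queue, downstream)
listener.start()

logger = logging.getLogger("app")
logger.addHandler(queue_handler)
logger.setLevel(logging.INFO)

logger.info("request handled")  # returns immediately; shipping happens off-thread

# On shutdown, flush whatever the listener has already picked up.
listener.stop()
```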