Comment by dpark
4 days ago
Surely you aren’t doing real time indexing, transformation, analytics, etc in the same service that is producing the logs.
A catastrophic increase in logging could certainly take down your log processing pipeline, but it should not create cascading failures that compromise your service.
Of course not. Worst case should be backpressure, which means processing, indexing, and storage delays. Your service might be fine but your visibility will be reduced.
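To make "worst case should be backpressure" concrete, here's a minimal sketch (hypothetical, not anyone's actual pipeline) of a bounded in-process log buffer: when the downstream pipeline falls behind, writes fail fast and records are dropped, so the service stays healthy at the cost of visibility.

```python
import queue

class BoundedLogBuffer:
    """Hypothetical bounded buffer between a service and its log pipeline."""

    def __init__(self, maxsize=1000):
        self.q = queue.Queue(maxsize=maxsize)
        self.dropped = 0

    def emit(self, record):
        try:
            # Non-blocking put: never stalls the request path.
            self.q.put_nowait(record)
            return True
        except queue.Full:
            # Shed load instead of blocking or crashing:
            # reduced visibility, but no cascading failure.
            self.dropped += 1
            return False

buf = BoundedLogBuffer(maxsize=2)
results = [buf.emit(f"log {i}") for i in range(5)]
print(results)      # [True, True, False, False, False]
print(buf.dropped)  # 3
```

Blocking puts (plain `q.put(record)`) are the alternative, but that turns a slow pipeline into slow requests, which is exactly the cascading-failure mode being discussed.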
For sure. You can definitely tip over your logging pipeline and impact visibility.
I just wanted to make sure we weren’t still talking about “causing a cascading outage due to increased log volumes” as was mentioned above, which would indicate a significant architectural issue.