Comment by dirtbag__dad
18 hours ago
In my experience, which is Series A or earlier data-intensive SaaS, you can gauge whether a program is taking a reasonable amount of time just by running it and applying common sense.
Say P50 latency for a FastAPI service's endpoint is 30+ seconds, or your ingestion pipeline, which has a data ops person on your team waiting for it to complete, takes more than a business day to run.
Your program is obviously unacceptable, and your problems are most likely completely unrelated to these heuristics. You either have an inefficient algorithm or, more likely, you are using the wrong tool (e.g. OLTP for OLAP) or the right tool the wrong way (bad relational modeling or an outdated LLM).
If you are interested in shaving off milliseconds in this context, then you are wasting your time on the wrong thing.
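For what it's worth, the kind of common-sense check I mean is just back-of-the-envelope arithmetic. A minimal sketch; the row count and throughput below are made-up assumptions, not anything I've measured:

    # Back-of-the-envelope check: is "more than a business day" plausible
    # for this ingestion job? All numbers are assumptions for the sketch.
    rows = 50_000_000          # hypothetical daily batch size
    conservative_rate = 5_000  # rows/sec one worker should manage even unoptimized
    expected_seconds = rows / conservative_rate
    print(expected_seconds / 3600)  # ~2.8 hours, nowhere near a full business day
    # If the real job takes 8+ hours, the gap points at the wrong tool or a bad
    # access pattern (e.g. row-by-row OLTP inserts), not at shaving milliseconds.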
All that being said, I’m sure there’s a very good reason to know this stuff in other domains, organizations, or company sizes and stages. I just suspect these metrics are irrelevant to a disproportionate share of the people reading this.
At any rate, as someone who likes to learn, I still found this valuable, though it is by no means common knowledge.
I'm not sure it's common knowledge, but it is general knowledge. Not all HNers are writing web apps. Many may be writing truly compute-bound applications.
In my experience writing computer vision software, people really struggle to develop an intuition for how fast computers actually are. Knowing, say, how many nanoseconds an add takes can be very illuminating when judging whether an algorithm's runtime makes any sense, and it can shake loose the realization that the algorithm itself is somehow wrong. I often see people fail to put bounds on their expectations; numbers like these help set those bounds.
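To make that concrete, this is the kind of bound I mean; the frame size, per-pixel op count, and 1 ns-per-op figure are rough assumptions for the sketch, not measurements:

    # Order-of-magnitude bound for a per-pixel pass over a 1080p frame.
    # Assumptions: ~10 simple arithmetic ops per pixel, ~1 ns per op on one core.
    width, height = 1920, 1080
    ops_per_pixel = 10
    ns_per_op = 1.0
    expected_ms = width * height * ops_per_pixel * ns_per_op / 1e6
    print(expected_ms)  # ~20 ms per frame
    # If a "simple" filter takes 2 seconds per frame, it's roughly 100x off this
    # bound, which usually means the algorithm or data layout is wrong.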
Thanks, this is helpful framing!