
Comment by marc_abonce

2 days ago

> The company is still standing and seems to be doing well financially, so I guess things turned out well enough, or maybe some of the technical decisions started trending more reasonable.

Perhaps I've been lucky or I haven't been observant enough, but I've never seen a company suffer financially because of inefficient code. Don't get me wrong, I still value good code for its own sake, but in my experience there is no correlation between that and profits.

I've seen customers be driven away by poorly performing interfaces. I've seen downtime caused by exponentially growing queries. I've seen poorly written queries return such large datasets that they cause servers to run out of memory processing the request.
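To make the "huge result set" failure mode concrete, here is a minimal sketch using Python's built-in sqlite3 and a hypothetical `events` table (both are illustrative assumptions, not anything from the thread). The point is just the difference between materialising an entire result set in memory and streaming it in bounded batches:

    # Minimal sketch of the unbounded-query problem, using Python's built-in
    # sqlite3 and a hypothetical `events` table; the same pattern applies to
    # any database driver.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany(
        "INSERT INTO events (payload) VALUES (?)",
        (("x" * 100,) for _ in range(100_000)),
    )

    # The risky version: fetchall() materialises the whole result set at once.
    # Fine at 100k rows, an out-of-memory crash at 100M.
    rows = conn.execute("SELECT id, payload FROM events").fetchall()

    # The safer version: stream rows in fixed-size batches so memory use stays
    # bounded no matter how large the table grows.
    cursor = conn.execute("SELECT id, payload FROM events")
    while True:
        batch = cursor.fetchmany(1_000)
        if not batch:
            break
        for row in batch:
            pass  # process one row at a time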

Unless you're doing stock trades or Black Friday sales, though, it can be pretty hard to pin a specific inefficiency to a concrete measure of lost income. Instead, people move on from products because of general "we don't like it" vibes.

The most concrete measure, as someone else pointed out, is heavily inflated PaaS spend caused by poorly written code that requires beefier-than-necessary servers. In theory, moving faster means you're spending less money on developer salaries (the Ruby/Rails mantra of old), but there's a distinct tipping point where you have to pony up and invest in performance improvements.

My previous job designed their data lake and operations on it with horrific incompetence, and their solution was just to use AWS Lambdas scaling into the thousands and tens of thousands to do stuff over it.

They made so much money but would then squander it on hopelessly inefficient designs that required an AWS spend that basically prevented them from ever financially scaling.
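For a sense of how that kind of design adds up, here's a hypothetical back-of-envelope in Python. None of the traffic numbers come from the comment; they're invented for illustration, and the unit prices are the commonly published Lambda rates, which may have changed:

    # Back-of-envelope Lambda cost, purely illustrative. The traffic numbers
    # are made up, and the unit prices are representative published rates
    # (us-east-1, x86) at the time of writing -- check current AWS pricing.
    PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
    PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second

    invocations_per_day = 20_000_000       # assumed fan-out over the data lake
    memory_gb = 1.0                        # assumed memory allocation
    avg_duration_s = 3.0                   # assumed average runtime

    gb_seconds = invocations_per_day * memory_gb * avg_duration_s
    daily_cost = (
        invocations_per_day * PRICE_PER_REQUEST
        + gb_seconds * PRICE_PER_GB_SECOND
    )
    print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 30:,.0f}/month")
    # ~$1,004/day, ~$30,120/month -- before S3, Athena, and other charges,
    # which is how "just throw Lambdas at it" quietly eats the margin.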

IME, the suffering from badly performing code is mostly secondary. It increases compute costs, mostly because it requires beefier VMs than strictly necessary, which is still benign and possibly more cost-efficient than spending more engineering effort. Sometimes the lack of performance means more scaling and orchestration is required, which brings additional complexity and therefore compute and staffing cost. This rarely gets noticed and fixed, due to organisational momentum.

The worst is when the performance is so bad it starts to prevent onboarding new features or customers.

  • The real cost is an opportunity cost. It doesn't show up in the financials. Your ability to react quickly to new business opportunities is hurt. Most CEOs and boards don't notice it, until it's too late.

I bet Atlassian could make even more money with Jira if it weren't this slow. They are not struggling as it is, but it's bad enough that it is costing them customers.

But generally I would agree.

  • Do you have any source for the "they are losing customers" claim? I always thought that they consciously decided to have that shit of an interface because no one relevant to them (i.e. purchasing departments) cares?

  • They are an example where someone decides "we will use Jira" but they aren't necessarily the ones using it every day, so the **ness doesn't matter to them.

Only if the bad code affects the customer experience significantly. That only happens to a big enough degree if you really let things grow out of control. At some point you'll get bugs that take forever to solve, and angry customers as a result.

As a general rule it is hard to measure lost opportunity costs. That doesn't mean they don't exist or shouldn't be considered. I mean... why do humans even acknowledge efficiency at all, let alone treat it almost always as a virtue?