Comment by friendly_chap

12 years ago

Machines are cheap. Developers are not. I would rather optimise for the latter.

Edit: since when are ADTs so taxing for machines? I did a quick search and found no source saying they are expensive to implement.

I'm getting pretty sick of hearing this trope regurgitated all the time.

Developer time is often more expensive, but it's always a one-off cost. Machine costs are ongoing. Machines can also impose hard limits on scalability.

In the real world, software optimisation is often necessary. Ever play a computer game? Those have pretty hard limits on how much time they can spend processing. You can't just throw hardware at the problem to make it go away when you don't control the client.

"Just add more hardware" is also an ecologically unsound and unsustainable approach. Ever wondered about the carbon footprint of the average data centre?

Maybe one day we'll have optimising compilers so good that thinking about machine-level data structures won't be necessary. They've come a long way in the last 20 years, but aren't quite there yet. In the meantime, if you ever find yourself actually needing to make something run within some hard limits on CPU time, RAM, etc., listen to people like Rob Pike: he knows what he's talking about, even if you don't like what he's saying.

If, on the other hand, you're working on an MVP, by all means optimise for developer time. In that context it's almost always the right decision.

  • "I'm getting pretty sick of hearing this"

    I hope I don't prove to be too detrimental to your health.

    "Developer time is often more expensive, but it's always a one-off cost. Machine costs are ongoing. Machines can also impose hard limits on scalability."

    A one-off cost? I have never seen a codebase which gained consciousness, became self-operating, fixed its own bugs, and implemented new features. I hope I will; that's gonna be a truly glorious moment for humanity.

    "Ever play a computer game?"

    I did, but Go is rarely used for creating games. Typical use case: backend server services.

    "In the meantime if you're working on an MVP, by all means optimise for developer time. In that context it's almost always the right decision."

    Yes! On this site most people are working on some startup that will fail within two years. Performance is barely an issue.

    • >A one off cost? I have never seen a codebase which gained consciousness, became self operating, fixed the bugs in itself and implemented new features, I hope I will, that's gonna be a truly glorious moment for humanity.

      No, but I've seen lots of projects that were completed, shrink-wrapped, and shipped, with the team disbanded or moved on to other projects (sometimes with a few people left behind for bug fixing).

      Most large-scale enterprise/government/organisational projects in particular are one-off, fire-and-forget affairs: a large team is assembled to create them, then support is offloaded to a smaller team for fixes (and some tacked-on new features), and they run for decades on end.

      >I did, but Go is rarely used for creating games. Typical use case: backend server services.

      Which is beside the point. The discussion was about those "programming principles" Rob Pike put forward, and the cost of developer time vs machine time etc -- not about Go in the least.

  • "Developer time is often more expensive, but it's always a one-off cost."

    Bear in mind that opportunity cost is an unrecoverable loss. Game programming where you press a DVD is an exception, but in the majority of areas the greater cost is actually in maintenance.

  • Exactly. I cannot imagine this kind of sloppy thinking being applied to any other form of engineering.

    Machine time eventually translates to user time. Slow code leads to poor user experience and if your product is successful, wasting the time of millions of people.

Machines have a cost, developers have a cost. One must optimize for both costs, and assuming machines or developers were free compared to the other would be very stupid.

There are many problems that require some efficiency to solve effectively, especially in Pike's field of systems programming.

  • And who else is in a similar field? 3% of all programmers? In that case, fine. But most people I see raving about Go can afford the performance hit any time (if there is any).

    PS: Coding go is my day job.

      The systems community is quite large, probably accounting for around 30-50% of the dev jobs at companies like Microsoft, Google, Facebook, and Apple.

      Go was designed as a systems language; I think it just eventually went in a different direction when people realized it would never match C++ or even Java in performance.


Re: your edit regarding ADTs: it's not that they are expensive to implement, it's that they give you no control over the physical structure of the data, which is the really, really important thing when it comes to how expensive it is to perform operations on that data. That's the point Rob was making in #5.

BTW, if you want to gain an understanding of this stuff, the difference it can make and why, I'd recommend reading basically anything by Michael Abrash.

  • I think we can agree that having that level of control is unnecessary for most programming tasks. In fields where it matters, fine. Otherwise I let the compiler do its job; it will probably get better at it than me, if it isn't better already. I don't trust humans, myself included. My compiler has never made a bad decision because it woke up with a hangover or in a bad mood.

    • Your compiler has no idea whether data structure A will perform 100 times faster than data structure B because B is scattered all over memory and incurs a cache miss (or worse, a page fault) on every reference while A has locality of reference.

      No compiler optimization is going to get you a 100 to 1 improvement with a conventional language, but choosing the right data structure and algorithm for a task certainly can.
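      To make the locality point concrete, here's a hypothetical micro-benchmark sketch in Go (all names are mine, not from the thread): the same sequence of integers stored once as a contiguous slice and once as a pointer-chasing linked list. Both layouts compute the same sum, but the slice walks memory sequentially while the list hops between separate heap allocations. One caveat, flagged in the comments: in a short fresh program the list nodes may happen to land near each other, so the gap is widest in real programs with interleaved allocations.

      ```go
      package main

      import (
      	"fmt"
      	"time"
      )

      // node is a classic pointer-chasing linked list: each element lives in
      // its own heap allocation, so in a real program (with other allocations
      // interleaved) successive nodes can end up far apart in memory.
      type node struct {
      	val  int64
      	next *node
      }

      func main() {
      	const n = 1 << 20 // ~1M elements

      	// Layout A: one contiguous block of memory.
      	slice := make([]int64, n)
      	for i := range slice {
      		slice[i] = int64(i)
      	}

      	// Layout B: the same values, one heap allocation per element.
      	var head *node
      	for i := n - 1; i >= 0; i-- {
      		head = &node{val: int64(i), next: head}
      	}

      	// Sum the contiguous layout: sequential access, prefetcher-friendly.
      	start := time.Now()
      	var sumA int64
      	for _, v := range slice {
      		sumA += v
      	}
      	tA := time.Since(start)

      	// Sum the linked list: each step is a dependent pointer load.
      	start = time.Now()
      	var sumB int64
      	for p := head; p != nil; p = p.next {
      		sumB += p.val
      	}
      	tB := time.Since(start)

      	// Identical logical result; very different memory behaviour.
      	fmt.Printf("slice: sum=%d in %v\n", sumA, tA)
      	fmt.Printf("list:  sum=%d in %v\n", sumB, tB)
      }
      ```

      The compiler sees two loops over the same abstract sequence; it's the programmer's choice of physical layout that decides how many cache lines each pass touches.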

And if your problem requires only one of each, and needs them for the same amount of time, then your optimization would be the correct one.

Now scale this to a situation where solving the problem requires ten thousand machines for each developer working on code, and where each minute of time spent writing code translates into two days of machine time running that code, and the numbers start to look different.

  • When I solve problems which require 10k machines per developer, I will be paid so much that I will happily write in Brainfuck or LOLCODE.

    Meanwhile I just want to deliver features for my product owner as quickly as possible, so we can both go home to our families on time and still deliver tons of business value.