Comment by robert_tweed

12 years ago

I don't think you've fully understood Rob's point. Go is pretty good at creating machine-friendly data structures.

So is Rust, and it has algebraic data types, type parametrization, lifetimes and its representation is like C's.

  • Nobody said anything negative about Rust; in fact, it has nothing to do with this thread.

    • It was poorly expressed, but I think gnuvince was referring to the fact that it is possible to implement machine-friendly ADTs. Rust is just one example of a language that has them.

      3 replies →
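The "machine-friendly ADT" referred to above can be sketched in Rust. This is an illustrative example invented for this note, not code from the thread: a sum type has a fixed, compiler-known size and tag, so a vector of them is one contiguous block of memory.

```rust
// A sum type (ADT): a Shape is exactly one of these variants.
// The compiler packs the tag plus the largest payload into one
// fixed-size value, so Vec<Shape> is a single contiguous buffer.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // Exhaustive match: forgetting a variant is a compile error.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { w: 2.0, h: 3.0 },
    ];
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area = {total}");
    // The whole enum, tag included, has a known fixed size:
    println!("size of Shape = {} bytes", std::mem::size_of::<Shape>());
}
```

Nothing heap-allocated per element, no pointer chasing: the ADT is as "machine-friendly" as a tagged C struct/union pair.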

Machines are cheap. Developers are not. I would rather optimise for the latter.

Edit: since when are ADTs so taxing for machines? I did a quick search and found no source saying they are expensive to implement.

  • I'm getting pretty sick of hearing this trope regurgitated all the time.

    Developer time is often more expensive, but it's always a one-off cost. Machine costs are ongoing. Machines can also impose hard limits on scalability.

    In the real world, software optimisation is often necessary. Ever play a computer game? Those have pretty hard limits on how much time they can spend processing. You can't just throw hardware at the problem to make it go away when you don't control the client.

    "Just add more hardware" is also an ecologically unsound and unsustainable approach. Ever wondered about the carbon footprint of the average data centre?

    Maybe one day we'll have optimising compilers so good that thinking about machine-level data structures won't be necessary. They've come a long way in the last 20 years, but aren't quite there yet. In the meantime, if you ever find yourself actually needing to make something run within hard limits on CPU time, RAM, etc., listen to people like Rob Pike: he knows what he's talking about, even if you don't like what he's saying.

    In the meantime if you're working on an MVP, by all means optimise for developer time. In that context it's almost always the right decision.

    • "I'm getting pretty sick of hearing this"

      I hope I don't prove to be too detrimental to your health.

      "Developer time is often more expensive, but it's always a one-off cost. Machine costs are ongoing. Machines can also impose hard limits on scalability."

      A one-off cost? I have never seen a codebase that gained consciousness, became self-operating, fixed its own bugs and implemented new features. I hope I will; that's gonna be a truly glorious moment for humanity.

      "Ever play a computer game?"

      I did, but Go is rarely used for creating games. Typical use case: backend server services.

      "In the meantime if you're working on an MVP, by all means optimise for developer time. In that context it's almost always the right decision."

      Yes! On this site most people are working on some startup which will fail in 2 years. Performance is barely an issue.

      1 reply →

    • "Developer time is often more expensive, but it's always a one-off cost."

      Bear in mind that opportunity cost is an unrecoverable loss. Game programming, where you press a DVD and ship it, is an exception, but in most areas the greater cost is actually maintenance.

    • Exactly. I cannot imagine this kind of sloppy thinking being applied to any other form of engineering.

      Machine time eventually translates to user time. Slow code leads to poor user experience and, if your product is successful, wastes the time of millions of people.

  • Machines have a cost, developers have a cost. One must optimize for both costs, and assuming machines or developers were free compared to the other would be very stupid.

    There are many problems that require some efficiency to solve effectively, especially in Pike's field of systems.

    • And who else is in a similar field? 3% of all programmers? In that case, fine. But most people I see raving about Go can afford the performance hit any time (if there is any).

      PS: Coding go is my day job.

      2 replies →

  • Re: your edit regarding ADTs: it's not that they are expensive to implement, it's that they give you no control over the physical structure of the data, which is the really, really important thing when it comes to how expensive it is to perform operations on that data. That's the point Rob was making in #5.

    BTW, if you want to gain an understanding of this stuff, the difference it can make and why, I'd recommend reading basically anything by Michael Abrash.

    • I think we can agree that that level of control is unnecessary for most programming tasks. In fields where it matters, fine. Otherwise I let the compiler do its job; it will probably get better at this than me, if it isn't already. I don't trust humans, including myself. My compiler has never made a bad decision because it woke up with a hangover or in a bad mood.

      1 reply →
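The kind of layout control being argued about can be illustrated with a small Rust sketch, invented here for illustration and not taken from the thread: field order plus `#[repr(C)]` determines padding, and padding determines the stride of every element in an array, which is exactly the cost Abrash-style optimisation worries about.

```rust
use std::mem::size_of;

// With #[repr(C)] the layout is exactly as written, like C,
// so the padding cost of a careless field order is visible.
#[repr(C)]
struct CPadded {
    a: u8,  // 1 byte + 7 bytes padding to align `b`
    b: u64, // 8 bytes
    c: u8,  // 1 byte + 7 bytes trailing padding for array stride
}

#[repr(C)]
struct CPacked {
    b: u64, // widest field first
    a: u8,
    c: u8,  // only 6 bytes of trailing padding
}

fn main() {
    // Same three fields, different physical structure:
    println!("careless order: {} bytes per element", size_of::<CPadded>()); // 24
    println!("careful order:  {} bytes per element", size_of::<CPacked>()); // 16
}
```

An array of a million of these is 24 MB versus 16 MB for identical logical content, which is the difference between fitting in cache and not. (Without `#[repr(C)]`, Rust's default representation is free to reorder fields and do this for you; the point of the sketch is only that physical layout, not the abstraction, sets the cost.)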

  • And if your problem requires only one of each, and needs them for the same amount of time, then your optimization would be the correct one.

    Now scale this to a situation where solving the problem requires ten thousand machines for each developer working on code, and where each minute of time spent writing code translates into two days of machine time running that code, and the numbers start to look different.

    • When I solve problems that require 10k machines per developer, I will get paid so much that I will be happy to write Brainfuck or LOLCODE.

      Meanwhile I just want to deliver features for my product owner as quickly as possible, so we can both go home to our families on time and still deliver tons of business value.