Comment by comboy

5 years ago

Holy cow. I'm a very casual gamer; I was excited about the game, but when it came out I decided I didn't want to wait that long and would hold off until they sorted it out. Two years later it still sucked, so I abandoned it. But... this?! This is unbelievable. I'm certain that many people left this game because of the waiting time. And then there are the man-years wasted (in a rather different way than intended).

Parsing JSON?! I thought it was some network session-finding magic in the game logic. If this is true, it's the biggest WTF I've seen in the last few years, and we've just finished 2020.

Stunning work, with only the binary at hand. But how could R* not do this? GTAV is so full of great engineering. If it was a CPU bottleneck, who works there that wouldn't be irked enough to nail it down? Trying to understand what's going on when something takes far longer than expected seems like a natural instinct even when performance isn't crucial, and it was crucial here; it almost directly translates to profits. Unbelievable.

I don’t think the lesson here is “be careful when parsing json” so much as it’s “stop writing quadratic code.” The quadratic behaviour in the JSON parsing was subtle: most people’s mental model of sscanf is that it’s linear in the number of bytes it actually scans, not in the length of the whole input string. With smaller test data this would have been much harder to catch. The deduplication’s linear search was the other case: quadratic code that works fine for small inputs.
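
For anyone skimming: a minimal sketch of that pattern (mine, not the actual R* code). On common C library implementations, as the article describes, sscanf wraps the string in a stdio-style stream, which means a strlen of the entire remaining input on every call, so a loop that looks linear is quadratic:

```cpp
// Minimal sketch of the accidentally-quadratic sscanf pattern (illustrative,
// not the game's code). Each call looks like it costs O(one token), but on
// common C library implementations sscanf effectively does a strlen() of the
// whole remaining string first, making the loop O(n^2) over the buffer.
#include <cstdio>

long sum_numbers(const char* buf) {
    long sum = 0;
    int value = 0;
    int consumed = 0;
    const char* p = buf;
    // %n reports how many characters this call consumed, so we can advance.
    while (std::sscanf(p, " %d%n", &value, &consumed) == 1) {
        sum += value;   // O(1) work per token...
        p += consumed;  // ...but each call re-scans the whole remaining buffer.
    }
    return sum;
}
```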

Some useful lessons might be:

- try to make test more like prod.

- actually measure performance and try to improve it

- it’s very easy to write accidentally quadratic code; the canonical example is this sort of triangular computation where, for each item you process, you do a linear amount of work over all the finished or remaining items (see the sketch after this list).
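
A hedged illustration of that triangular pattern next to the fix, loosely modelled on the article's deduplication step (the Item type and function names are invented for illustration, not the game's code):

```cpp
// Illustrative only; the identifiers are invented, not the actual game code.
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

struct Item { std::uint64_t hash; std::string name; };

// Accidentally quadratic: every new item triggers a linear scan over
// everything accepted so far, so n items cost roughly n^2/2 comparisons.
void dedup_quadratic(const std::vector<Item>& in, std::vector<Item>& out) {
    for (const Item& item : in) {
        bool seen = false;
        for (const Item& existing : out) {   // O(out.size()) per item
            if (existing.hash == item.hash) { seen = true; break; }
        }
        if (!seen) out.push_back(item);
    }
}

// Expected linear time: remember what has already been seen in a hash set.
void dedup_linear(const std::vector<Item>& in, std::vector<Item>& out) {
    std::unordered_set<std::uint64_t> seen;
    for (const Item& item : in) {
        if (seen.insert(item.hash).second) out.push_back(item);
    }
}
```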

As I read the article, my guess was that it was some terrible synchronisation bug (e.g. download a bit of data -> hand off to two sub-tasks in parallel -> each tries to take the same lock on something (e.g. some shared data, or worse, a hash bucket where the hash function is so bad that collisions are frequent) -> one task takes a while doing something, the other doesn’t take long but more data can’t be downloaded until it’s done -> the slow task consistently wins the race on some machines -> downloads get blocked and only one CPU is being used).

  • > actually measure performance and try to improve it

    This really rings truest to me: I find it hard to believe nobody ever plays their own game, but I’d easily believe that the internal culture doesn’t encourage anyone to do something about it. It’s pretty easy to imagine a hostile dev-QA relationship, or management keeping everyone busy enough that this has sat in the backlog because it doesn’t cause crashes. After all, if you cut enough “overhead” you might turn a $1B game into a $1.5B one, right?

    • Lots of possibilities. Another one I imagined is that "only the senior devs know how to use a profiler, and they're stuck in meetings all the time."


  • - do not implement your own JSON parser (I mean, really?).

    - if you do write a parser, don’t use scanf (which is complex and subtle) for the parsing; write a plain loop that dispatches on characters in a switch (a sketch below). But really, don't.
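
    For illustration only (a tiny fragment, nowhere near a full JSON parser): the kind of plain switch-on-characters loop meant here, pulling unsigned integers out of a buffer in a single O(n) pass:

    ```cpp
    // Illustrative sketch of a switch-on-character loop: one pass, no sscanf,
    // cost proportional to the bytes actually visited.
    #include <cstdint>
    #include <vector>

    std::vector<std::uint64_t> scan_numbers(const char* p, const char* end) {
        std::vector<std::uint64_t> out;
        std::uint64_t value = 0;
        bool in_number = false;
        for (; p != end; ++p) {
            switch (*p) {
                case '0': case '1': case '2': case '3': case '4':
                case '5': case '6': case '7': case '8': case '9':
                    value = value * 10 + static_cast<std::uint64_t>(*p - '0');
                    in_number = true;
                    break;
                default:  // any non-digit ends the current number
                    if (in_number) { out.push_back(value); value = 0; in_number = false; }
                    break;
            }
        }
        if (in_number) out.push_back(value);
        return out;
    }
    ```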

    • I think sscanf is subtle precisely because what you think it does (for a given format string) seems reasonably straightforward. The code in question did sscanf("%d", ...), which you read as “parse the digits at the start of the string into a number,” which is obviously linear. The subtlety is that sscanf doesn’t do what you expect, and “don’t use library functions that don’t do what you expect” is impossible advice.

      I don’t use my own JSON parser, but I very nearly do. If this were some custom format rather than JSON and the parser still used sscanf, the bug would still be there. So I think JSON is somewhat orthogonal to the matter.


    • This is probably good advice but not even relevant. It's down one level from the real problem: when your game spends 6 minutes on a loading screen, *profile* the process first. You can't optimize what you haven't measured. Now, if you've identified that JSON parsing is slow, you can start worrying about how to fix that (which, obviously, should be "find and use a performant and well-tested library".)
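
      Even a crude scope timer is enough to see which loading phase is eating the six minutes; a minimal sketch (a sampling profiler does the real job better, and parse_catalogue below is a hypothetical stand-in, not a real function):

      ```cpp
      // Crude RAII scope timer for narrowing down where loading time goes.
      // A real sampling profiler is the better tool; this is the minimal version.
      #include <chrono>
      #include <cstdio>

      struct ScopeTimer {
          const char* label;
          std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
          explicit ScopeTimer(const char* l) : label(l) {}
          ~ScopeTimer() {
              auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                            std::chrono::steady_clock::now() - start).count();
              std::printf("%s: %lld ms\n", label, static_cast<long long>(ms));
          }
      };

      // Usage: wrap each suspect phase of the loading screen, e.g.
      // { ScopeTimer t("parse catalogue JSON"); parse_catalogue(); }
      ```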

  • > actually measure performance and try to improve it

    This reminds me that I used to do that all the time when programming in Matlab. I stopped investigating performance bottlenecks after switching to Python; it is as if I traded performance profiling for unit testing in the switch from Matlab to Python.

    I wonder if there are performance profilers which I could easily plug into PyCharm to do what I used to do with Matlab's default IDE (with a built-in profiler) and catch up with good programming practices. Or maybe PyCharm does that already and I was not curious enough to investigate.

  • The JSON parsing is forgivable (I actually didn't know that scanf computed the length of the string for every call) but the deduplication code is a lot less so, especially in C++ where maps are available in the STL.

    It also reinforces my decision never to use scanf, preferring manual parsing with strtok_r, strtol and friends instead. scanf is just not robust or flexible enough.
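
    A small illustrative sketch of that style: strtol reports where it stopped via its end pointer, so you walk a large buffer exactly once instead of re-scanning it on every call:

    ```cpp
    // Illustrative sketch: strtol tells us where it stopped, so we advance
    // through the buffer exactly once rather than re-scanning it each call.
    #include <cctype>
    #include <cstdlib>
    #include <vector>

    std::vector<long> parse_longs(const char* p) {
        std::vector<long> out;
        while (*p != '\0') {
            if (std::isdigit(static_cast<unsigned char>(*p)) || *p == '-' || *p == '+') {
                char* end = nullptr;
                long v = std::strtol(p, &end, 10);
                if (end != p) { out.push_back(v); p = end; continue; }
            }
            ++p;  // separator or lone sign character: move on
        }
        return out;
    }
    ```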

  • I thought the lesson was "listen to your customers and fix the issues they complain about".

> Parsing JSON?!

Many developers I have spoken to out in the wild, in my role as a consultant, have wildly distorted mental models of performance, often off by many orders of magnitude.

They hear somewhere that "JSON is slow", which it is, but you and I know that it's not this slow. But "slow" can encompass something like 10 orders of magnitude, depending on context. Is it slow relative to a non-validating linear binary format? Yes. Is it minutes slow for a trivial amount of data? No. But in their mind... it is, and there's "nothing" that can be done about it.

Speaking of which: an HTTPS REST API call using JSON encoding between two PaaS web servers in Azure takes about 3-10 ms, while a local function call takes 3-10 ns. In other words, a lightweight REST call is a million times slower than a local function call, yet many people assume that a distributed mesh microservices architecture has only "small overheads"! Nothing could be further from the truth.

Similarly, a disk read on a mechanical drive is hundreds of thousands of times slower than a main-memory access, which in turn is roughly a hundred times slower than an L1 cache hit.

With ratios like that being involved on a regular basis, it's no wonder that programmers make mistakes like this...

  • The funny thing is, as a long-time SDET, I had to give up trying to get people to write or architect in a more "local first" manner.

    Everyone thinks the network is free... until it isn't. Every bit moved in a computer has a time cost, and yes, it's small... but with processors as fast as the ones that exist today, it seems a sin that we delegate so much functionality to some other machine across a network boundary when the same work could be done locally. The reason why, though?

    Monetizability and trust. All trivial computation must be done on my services so they can be metered and charged for.

    We're hamstringing the programs we run for the sole reason that we don't want to make tools. We want to make invoices.

    • And like so many things, we're blind to how our economic systems are throwing sand in the gears of our technical ones.

      I love your point that shipping a library (of code to execute locally) with a good API would outperform an online HTTPS API for almost all tasks.

> But how could R* not do this? GTAV is so full of great engineering

I assume there were different people working on the core game engine and mechanics vs. the loading. It could just as well be some modular system where someone simply implemented the task "load items during the online-mode loading screen".

My twice-a-week gaming group and I enjoyed GTA V but abandoned it years ago simply because of the load times. We have two short slots (90-120 minutes) each week to play and don't want to waste them on loading screens.

We would all have picked this game back up in a second if the load times were reduced. Although I must say that even with the same result this author got, 2 minutes is still too long. But I'll bet that, given the source code, there are other opportunities to improve.

  • I wonder if a paid subscription would have fixed this? If you left a paid MMO, they'd probably ask you to fill out an exit survey, and you could say "I'm canceling because load times are terrible", which would (hopefully) raise the priority of reducing load times. But since GTA online is "free", there's not a single exit point where they can ask "why did you stop playing".

    • GTA has made billions off of its Shark Card microtransaction system, so the incentives for player retention are probably pretty similar. Granted, the players leaving over load times are probably not the players who are invested enough to spend thousands on microtransactions.


It gets worse: their brand-new game, Red Dead Online, does the same thing. As soon as it did it the first time, I logged out and did a chargeback.