Comment by ChrisMarshallNY

18 days ago

Interesting. It's one of those things that I’ve always “just assumed,” without thinking about it.

I did a lot of raster graphics programming, in my career, and graphics work makes heavy use of lookup tables.

Yesterday, I posted a rather simple tool I wrote[0]: a server that “frontloads” a set of polygons into a database, and then uses them, at query time. It’s fairly fast (but I’m sure it could be a lot faster). I wrote it in a few hours, and got pretty good performance, right out of the starting gate.

Pretty basic stuff. I doubt the pattern is unique, but it’s one that I’ve used for ages. It’s where I try to do as much “upfront” work as possible, and store “half-baked” results into memory.
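
Something like this, in spirit (a minimal C++ sketch; the names and the bounding-box "half-baked" result are made up for illustration, and it isn't the linked server's actual code):

    #include <vector>

    struct Point { double x, y; };
    struct BBox  { double minX, minY, maxX, maxY; };

    struct PreparedPolygon {
        std::vector<Point> vertices;   // loaded once, up front
        BBox bounds;                   // the "half-baked" result, computed at load time
    };

    static BBox computeBounds(const std::vector<Point>& pts) {
        BBox b{pts.front().x, pts.front().y, pts.front().x, pts.front().y};
        for (const Point& p : pts) {
            if (p.x < b.minX) b.minX = p.x;
            if (p.y < b.minY) b.minY = p.y;
            if (p.x > b.maxX) b.maxX = p.x;
            if (p.y > b.maxY) b.maxY = p.y;
        }
        return b;
    }

    // Startup: pay the cost once, keep the results in memory.
    std::vector<PreparedPolygon> frontload(std::vector<std::vector<Point>> raw) {
        std::vector<PreparedPolygon> prepared;
        prepared.reserve(raw.size());
        for (auto& poly : raw) {
            BBox b = computeBounds(poly);           // assumes each polygon is non-empty
            prepared.push_back({std::move(poly), b});
        }
        return prepared;
    }

    // Query time: the cheap, precomputed check runs before any expensive test.
    bool mightContain(const PreparedPolygon& p, Point q) {
        return q.x >= p.bounds.minX && q.x <= p.bounds.maxX &&
               q.y >= p.bounds.minY && q.y <= p.bounds.maxY;
    }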

Like I said, I always “just worked that way,” and never really thought about it. There’s been a lot of “rule of thumb” stuff in my work. Didn’t have an MIT professor to teach it, but it’s sort of gratifying to see that it wasn’t just “old wives’” stuff.

There’s probably a ton of stuff that we do, every day, that we don’t think about. Some of it almost certainly originated from really smart folks like him, finding the best way (like the “winding number” algorithm, in that server[1]), and some of it also probably comes from “grug-brained programmers,” simply doing what makes sense.
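
For what it's worth, the winding-number test itself looks roughly like this in its textbook form (a generic sketch, not the linked server's implementation):

    #include <cstddef>
    #include <vector>

    struct Pt { double x, y; };

    // > 0 if p2 is left of the directed line p0 -> p1, < 0 if right, 0 if collinear.
    static double isLeft(Pt p0, Pt p1, Pt p2) {
        return (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y);
    }

    // A nonzero winding number means the query point is inside the polygon.
    int windingNumber(Pt q, const std::vector<Pt>& poly) {
        int wn = 0;
        const std::size_t n = poly.size();
        for (std::size_t i = 0; i < n; ++i) {
            const Pt& a = poly[i];
            const Pt& b = poly[(i + 1) % n];            // wrap around to close the polygon
            if (a.y <= q.y) {
                if (b.y > q.y && isLeft(a, b, q) > 0)   // upward crossing, q strictly left
                    ++wn;
            } else {
                if (b.y <= q.y && isLeft(a, b, q) < 0)  // downward crossing, q strictly right
                    --wn;
            }
        }
        return wn;
    }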

[0] https://github.com/LittleGreenViper/LGV_TZ_Lookup/blob/e247f...

I'm in the depths of optimization on a game right now, and it's interesting how the gains I'm making currently all seem to be a matter of scaling the concept of lookup tables, and using the right tool for the job.

What I mean is that, traditionally, I think people's idea of a lookup table is something like a statically baked array set up at compile time, or first thing at runtime, that never changes. But if you loosen your adherence to that last part a bit, so that a lookup table can change slightly over time, you can get a ton of mileage out of a comparatively small amount of memory, instead of wasting cycles every frame.
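
A rough sketch of what I mean (hypothetical names, nothing engine-specific): keep the table in memory, mark an entry dirty when its inputs change, and recompute it only when it's next read, instead of rebuilding everything every frame.

    // Sketch only: hypothetical names; the "expensive math" is a stand-in.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    class FalloffTable {
    public:
        explicit FalloffTable(std::size_t size)
            : values_(size, 0.0f), dirty_(size, 1) {}

        // Something in the game changed this entry's inputs.
        void markDirty(std::size_t i) { dirty_[i] = 1; }

        // Reads stay cheap; only dirty entries pay the recompute cost.
        float at(std::size_t i) {
            if (dirty_[i]) {
                values_[i] = std::exp(-0.05f * static_cast<float>(i));  // stand-in for expensive math
                dirty_[i] = 0;
            }
            return values_[i];
        }

    private:
        std::vector<float> values_;
        std::vector<char>  dirty_;   // char, not bool, to avoid the bitset proxy
    };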

As for the right tool for the job, I've read tons of dev logs and research papers over the years about moving work to the GPU, but these last few months of ripping my game inside out have really made me see the light. It's not just lookup tables built at compile time or early in the run, but lookup tables modified slightly over time, sent to the GPU as textures, and used there.
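
On the GPU side, the table can literally just be a small data texture. This sketch assumes an OpenGL 3+ context with GLEW (or any other loader) already set up elsewhere, and the function names are my own: upload the table once, then re-upload only the rows that changed each frame, and sample it from the shader with texelFetch.

    // Sketch only: assumes a GL 3+ context exists and the loader is initialized.
    #include <GL/glew.h>
    #include <vector>

    // One-time upload: the lookup table becomes a single-channel float texture.
    GLuint createLUTTexture(int width, int height, const std::vector<float>& data) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // No filtering: the shader reads exact entries with texelFetch().
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
                     GL_RED, GL_FLOAT, data.data());
        return tex;
    }

    // Per frame: re-upload only the rows whose entries actually changed.
    void updateLUTRows(GLuint tex, int width, int firstRow, int rowCount,
                       const float* changedRows) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, firstRow, width, rowCount,
                        GL_RED, GL_FLOAT, changedRows);
    }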

Follow this train of thought long enough, and now we're just calling memory writes and reads "lookup tables" when they aren't really that anymore, but who's to say where the barrier really lies?

  • To some extent it is required that all serious work by a computer be the kind of repetitious thing that can be at least partially addressed by a lookup table.

    If you take the example of a game, drawing sprites, say. Drawing a single preloaded sprite of reasonable size is always cheap, so a slow frame must have an excessive number. It's very hard to construct a practical scenario of a large number of truly distinct sprites, though. A level has a finite tile palette, a finite cast of characters, abilities, etc. It's hard to logistically get them all into a scene together, and even then it won't be that many. So the only scenario left where sprite drawing will be slow is drawing the same handful of sprites over and over again. By contrast, that's super common: just spam a persistent projectile, tap the analog stick to generate dust particles, etc.
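
    A rough sketch of exploiting that repetition (invented names, no particular engine): bucket the frame's draw requests by sprite, so the per-sprite setup cost is paid once per distinct sprite rather than once per instance.

      // Sketch only: invented helper names standing in for real engine calls.
      #include <unordered_map>
      #include <vector>

      struct Vec2 { float x, y; };
      using SpriteID = int;
      struct DrawRequest { SpriteID sprite; Vec2 position; };

      static void bindSprite(SpriteID) {}                       // stand-in: texture/material setup
      static void drawInstances(const std::vector<Vec2>&) {}    // stand-in: one instanced draw call

      void drawFrame(const std::vector<DrawRequest>& requests) {
          // A small table keyed by sprite: the repeated draws all land in the same bucket.
          std::unordered_map<SpriteID, std::vector<Vec2>> batches;
          for (const DrawRequest& r : requests)
              batches[r.sprite].push_back(r.position);

          // Pay the per-sprite cost once per distinct sprite, however many copies are on screen.
          for (const auto& [sprite, positions] : batches) {
              bindSprite(sprite);
              drawInstances(positions);
          }
      }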

  • Without getting too much into detail (because the people I worked for were really paranoid, and I don't want to give them agita), we used to build lookup tables "on the fly," sometimes, deep inside iterators.

    For example, each block of pixels might have some calculated characteristics that were accessed by a hash into a LUT, but the characteristics would change, as we went through the image.

    We'd do a "triage" run, where we'd build the LUT, then a "detailed" run, where we'd apply the LUT to the pixels.

    It could get pretty hairy.
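
    A toy version of that two-pass shape might look something like this (invented names and a trivial "characteristic"; the real pipeline was, of course, much hairier):

      // Sketch only: invented names, toy math, and it assumes the image
      // dimensions are multiples of the block size.
      #include <cstdint>
      #include <unordered_map>
      #include <vector>

      struct Image {
          int width = 0, height = 0;
          std::vector<uint8_t> pixels;   // grayscale, width * height bytes
      };

      static constexpr int kBlock = 8;

      // Cheap per-block "characteristic": here, just the block's mean level.
      static uint32_t blockKey(const Image& img, int bx, int by) {
          uint32_t sum = 0;
          for (int y = 0; y < kBlock; ++y)
              for (int x = 0; x < kBlock; ++x)
                  sum += img.pixels[(by * kBlock + y) * img.width + (bx * kBlock + x)];
          return sum / (kBlock * kBlock);
      }

      void process(Image& img) {
          std::unordered_map<uint32_t, float> lut;

          // "Triage" run: walk the blocks and build the LUT as new keys appear.
          for (int by = 0; by < img.height / kBlock; ++by)
              for (int bx = 0; bx < img.width / kBlock; ++bx) {
                  const uint32_t key = blockKey(img, bx, by);
                  if (!lut.count(key))
                      lut[key] = (key > 127) ? 0.9f : 1.1f;   // stand-in for an expensive calculation
              }

          // "Detailed" run: apply the cached result to every pixel in each block.
          for (int by = 0; by < img.height / kBlock; ++by)
              for (int bx = 0; bx < img.width / kBlock; ++bx) {
                  const float gain = lut[blockKey(img, bx, by)];
                  for (int y = 0; y < kBlock; ++y)
                      for (int x = 0; x < kBlock; ++x) {
                          uint8_t& p = img.pixels[(by * kBlock + y) * img.width + (bx * kBlock + x)];
                          const float v = p * gain;
                          p = static_cast<uint8_t>(v > 255.0f ? 255.0f : v);
                      }
              }
      }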