Comment by Jyaif
It's important to note that the approach described focuses on giving fast results, not the best results.
Simply trying every character, comparing its entire bitmap against the target, and keeping the character that minimizes the distance gives better results, at the cost of more CPU.
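A minimal sketch of that brute-force matcher in Python, assuming pre-rendered 8x8 grayscale glyph bitmaps (the sizes and names are only illustrative, not from the article):

```python
import numpy as np

def best_char(tile, glyphs):
    """Pick the glyph whose bitmap is closest to one image cell.

    tile:   (8, 8) grayscale cell of the target image
    glyphs: (n_chars, 8, 8) pre-rendered character bitmaps
    """
    # Sum of squared differences against every candidate character,
    # then keep the one that minimizes the distance to the target.
    errors = ((glyphs.astype(np.float32) - tile) ** 2).sum(axis=(1, 2))
    return int(np.argmin(errors))
```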
This is a well-known problem, because early computers with monitors could only display characters.
At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex: which new characters do you create to reproduce an image optimally?
And separately, we could choose the foreground/background color of individual characters, which opened up even more possibilities.
Yeah, this is good to point out. The primary constraint I was working around was "this needs to run at a smooth 60FPS on mobile devices" which limits the type and amount of work one can do on each frame.
I'd probably arrive at a very different solution if coming at this from a "you've got infinite compute resources, maximize quality" angle.
You said “best results”, but I imagine that the theoretical “best” may not necessarily be the most aesthetically pleasing in practice.
For example, limiting output to a small set of characters gives it a more uniform look, which may be nicer. There’s also the “retro” effect of using certain characters over others.
> limiting output to a small set of characters gives it a more uniform look which may be nicer
And in the extreme that could totally change things. Maybe you want to reject ASCII and instead use the Unicode block that has every 2x3 and 2x4 braille pattern.
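For what it's worth, the mapping into that block is mechanical: the Braille Patterns block (U+2800–U+28FF) encodes the eight dots of a 2x4 cell as bits of the code point, so a thresholded pixel block converts directly to a character. A sketch (`braille_cell` is just an illustrative name):

```python
# Bit index for each dot of a 2x4 braille cell (Unicode Braille
# Patterns, U+2800-U+28FF): dots 1-6 fill the top three rows column
# by column, dots 7-8 are the bottom row.
BRAILLE_BITS = [[0, 3],
                [1, 4],
                [2, 5],
                [6, 7]]  # BRAILLE_BITS[row][col] -> bit index

def braille_cell(block):
    """Map a 4x2 array of 0/1 pixels (rows x cols) to one character."""
    code = 0x2800
    for row in range(4):
        for col in range(2):
            if block[row][col]:
                code |= 1 << BRAILLE_BITS[row][col]
    return chr(code)

# braille_cell([[1, 1], [0, 1], [0, 1], [1, 0]]) -> one braille char
```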
Thinking more about the "best results". Could this not be done by transforming the ASCII glyphs into bitmaps, and then using some kind of matrix multiplication or dot-product calculation to find the ASCII character with the highest similarity to the underlying pixel grid? This would presumably lend itself to SIMD or GPU acceleration. I'm not that familiar with this type of image processing, so I'm sure someone with more experience can clarify.
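Something like this, I think: flatten every glyph bitmap into a row of a matrix, flatten each image tile into a vector, and one matrix multiply scores every character against every tile at once. A sketch assuming grayscale bitmaps (normalizing matters, because a raw dot product would favor dense glyphs):

```python
import numpy as np

def match_tiles(tiles, glyphs):
    """Pick the best glyph for every tile with one matrix multiply.

    tiles:  (n_tiles, h, w) grayscale cells of the target image
    glyphs: (n_chars, h, w) pre-rendered character bitmaps
    Returns an (n_tiles,) array of character indices.
    """
    G = glyphs.reshape(len(glyphs), -1).astype(np.float32)
    T = tiles.reshape(len(tiles), -1).astype(np.float32)
    # L2-normalize so the dot product becomes cosine similarity.
    G /= np.linalg.norm(G, axis=1, keepdims=True) + 1e-8
    T /= np.linalg.norm(T, axis=1, keepdims=True) + 1e-8
    # (n_tiles, n_chars) similarity matrix: exactly the dense matmul
    # that BLAS, SIMD, and GPUs are built for.
    return (T @ G.T).argmax(axis=1)
```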
In practice, isn’t a large HashMap the best option for lookup, keyed on compile-time or static constants describing the character space?
In the appendix, he talks about reducing the lookup space by quantising the sampled points to just 8 possible values. That allowed him to make a lookup table about 2MB in size, which was apparently incredibly fast.
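If I'm reading it right, the arithmetic works out: with, say, seven sample points quantised to eight levels each, the packed key has 8^7 ≈ 2M possible values, so one byte per entry is about 2MB. A sketch of that idea (the point count is a guess, not from the article):

```python
import numpy as np

LEVELS = 8      # each sampled point quantised to 3 bits
N_POINTS = 7    # assumed count: 8**7 one-byte entries ~= 2MB

def pack_key(samples):
    """Pack quantised samples (each 0..7) into a table index."""
    key = 0
    for s in samples:
        key = (key << 3) | s
    return key

def build_table(best_char_for):
    """Precompute (once, offline) the best character for every
    possible sample tuple.

    best_char_for: any matcher (e.g. the brute-force one above)
    mapping a tuple of quantised samples to a character index.
    """
    table = np.empty(LEVELS ** N_POINTS, dtype=np.uint8)
    for key in range(len(table)):
        samples = tuple((key >> (3 * (N_POINTS - 1 - i))) & 7
                        for i in range(N_POINTS))
        table[key] = best_char_for(samples)
    return table

# Per frame, each cell then costs one quantisation pass plus a
# single array read: table[pack_key(samples)].
```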
I've been working on something similar (haven't gotten to this stage yet) and was planning an approach much like the circle-sampling method, but the staggering of circles is a really clever idea I had never considered. I was planning on sampling character pixels' alignment along orthogonal and diagonal axes; you could probably combine these approaches. But yeah, such an approach seemed particularly powerful because you could encode it all in a table.
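Roughly what I mean by axis alignment, as a sketch (one possible reading; names are illustrative): project each tile onto its rows, columns, and the two diagonals, then match glyphs by nearest profile.

```python
import numpy as np

def axis_features(tile):
    """Profile an (n, n) tile along orthogonal and diagonal axes."""
    rows = tile.mean(axis=1)                        # horizontal profile
    cols = tile.mean(axis=0)                        # vertical profile
    diags = np.array([np.diagonal(tile).mean(),     # main diagonal
                      np.diagonal(np.fliplr(tile)).mean()])
    return np.concatenate([rows, cols, diags])

def match(tile, glyph_features):
    """Nearest glyph by squared distance in feature space.

    glyph_features: (n_chars, 2n + 2) precomputed axis_features of
    every character bitmap.
    """
    d = ((glyph_features - axis_features(tile)) ** 2).sum(axis=1)
    return int(np.argmin(d))
```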
And a (the?) solution is to use an algorithm like k-means clustering to find the tileset of size k that can represent a given image the most faithfully. Of course that’s only for a single frame at a time.
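A minimal single-frame sketch of that, treating each cell of the image as a flattened grayscale vector (the centroids it returns would still need to be snapped to actual character bitmaps):

```python
import numpy as np

def kmeans_tileset(cells, k, iters=20, seed=0):
    """Find k representative tiles for one frame with plain k-means.

    cells: (n_cells, h*w) flattened grayscale cells of the image
    Returns the (k, h*w) tileset and each cell's assigned tile index.
    """
    cells = cells.astype(np.float32)
    rng = np.random.default_rng(seed)
    # Seed the tileset with k randomly chosen cells.
    tiles = cells[rng.choice(len(cells), k, replace=False)].copy()
    for _ in range(iters):
        # Assign every cell to its nearest candidate tile.
        d = ((cells[:, None, :] - tiles[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each candidate tile to the mean of its assigned cells.
        for j in range(k):
            members = cells[labels == j]
            if len(members):
                tiles[j] = members.mean(axis=0)
    return tiles, labels
```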