Comment by ant6n

3 years ago

Yeah, just define a 1-bit type with the values {0, MAX}, for some very large number MAX. And now we've represented a very large number with a 1-bit value!
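
A toy sketch in Python (names made up, just to show that the single bit only works because the mapping {0, MAX} carries all the information):

    MAX = 10**100  # some staggeringly large constant baked into the "type"

    def decode(bit):
        # the single bit selects one of exactly two representable values
        return MAX if bit else 0

    def encode(value):
        # anything that isn't exactly MAX collapses to 0 -- total loss of accuracy
        return 1 if value == MAX else 0

    print(decode(1))  # a 101-digit number, recovered from a 1-bit input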

It's trivial in the extreme case, but similar logic went into floating point: it compromises on the accuracy of results, but gives us a number system that's useful across a range of scales.
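
You can see that compromise directly with ordinary doubles, which spend bits on range instead of precision:

    # IEEE 754 doubles reach ~1e308, but above 2**53 the spacing between
    # representable values exceeds 1, so nearby integers become indistinguishable
    print(1e16 + 1 == 1e16)        # True: the +1 is rounded away
    print(2.0**53 + 1 == 2.0**53)  # True, for the same reason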

If we could find some set of staggeringly large values, perhaps even infinite ones, with a useful set of operations on them (operations that map back into the set), then we could come up with ridiculous answers that aren't necessarily useless.
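
For instance (just one possible example of such a set, not necessarily what's meant here): keep only the logarithm of each value, so astronomically large numbers fit in an ordinary double, and multiplication, one useful operation that maps back into the set, becomes addition:

    import math

    # represent x by log(x); a googol (10**100) fits comfortably this way
    a = 100 * math.log(10)   # stands for 10**100
    b = 200 * math.log(10)   # stands for 10**200

    product = a + b          # multiplying the huge values = adding their logs
    print(product / math.log(10))  # ~300.0, i.e. the product is about 10**300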