https://en.wikipedia.org/wiki/Fixed-point_arithmetic : this gives you something that is integer math but works like floats. It's integer operations and bit shifts, so it's really fast.
The limitation is the minimal quantization level. But for a 3D engine, say your base increment is a nanometer and your maximum dimension is 1000 km. Then you only have to represent numbers up to 10^15 (1000 km is 10^15 nm), so a 64-bit fixed-point number is good enough.
Do everything in 128-bit fixed-point numbers, and floats are no longer a problem for anything scientific.
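A minimal sketch of the idea in C, assuming a hypothetical Q32.32 layout (32 integer bits, 32 fractional bits); the type and function names are made up for illustration, and the wide intermediate relies on the GCC/Clang `__int128` extension:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Q32.32 fixed-point type: 32 integer bits, 32 fractional bits. */
typedef int64_t q32_32;
#define FRAC_BITS 32

static q32_32 fix_from_double(double x) { return (q32_32)(x * (1LL << FRAC_BITS)); }
static double fix_to_double(q32_32 x)   { return (double)x / (1LL << FRAC_BITS); }

/* Addition and subtraction are plain integer ops. */
static q32_32 fix_add(q32_32 a, q32_32 b) { return a + b; }

/* Multiplication needs a wider intermediate, then a shift back down. */
static q32_32 fix_mul(q32_32 a, q32_32 b) {
    return (q32_32)(((__int128)a * b) >> FRAC_BITS);
}

/* Division pre-shifts the dividend to preserve the fractional bits. */
static q32_32 fix_div(q32_32 a, q32_32 b) {
    return (q32_32)(((__int128)a << FRAC_BITS) / b);
}

int main(void) {
    q32_32 x = fix_from_double(3.25), y = fix_from_double(0.5);
    printf("%f\n", fix_to_double(fix_mul(x, y))); /* prints 1.625000, exactly */
    printf("%f\n", fix_to_double(fix_add(x, y))); /* prints 3.750000, exactly */
    return 0;
}
```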
On modern systems float ops are often as fast as the corresponding integer ops, so fixed-point numbers are not necessarily faster anymore.
For general computation, I think rationals (https://raku.org) are a good choice - and Raku has big integers as standard too.
Nevertheless, we Weitek guys made 32-bit FPUs to do 3D graphics (pipelined, 1 instruction per clock) to the P754, IBM, and DEC standards, to power SGI, Sun, etc.
This is still the best format for graphics throughput per transistor (although the architectures have gotten a bit more parallel).
Then 64-bit became popular for CAD (with 32-bit, the wallpaper in your aircraft carrier might sometimes end up under the surface of your wall).
An alternative numerical notation uses decimals but marks which trailing digits repeat. With enough digits, this format can represent any rational number that can be written in the standard numerator/denominator form.
It works with base 2 and exponents as well, of course, so you could still use a floating-point format, only with additional metadata indicating the repeating range. When a result degenerates into a number that can't fit within the available digits, you would be left with a regular floating-point number.
I'd like to write a simple calculator that uses this numerical format, but I have only been able to find algorithms for addition and subtraction. Every description I've found of the format converts to the regular numerator/denominator form before multiplication and division.
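For what it's worth, here's a minimal C sketch of that conversion step (the fallback every description uses before multiplying or dividing). The function name, the parameter layout, and the assumption of at least one repeating digit are all mine:

```c
#include <stdint.h>
#include <stdio.h>

static int64_t gcd(int64_t a, int64_t b) { while (b) { int64_t t = a % b; a = b; b = t; } return a; }

static int64_t pow10i(int n) { int64_t p = 1; while (n--) p *= 10; return p; }

/* Convert a repeating decimal intpart.pre(rep) to a reduced fraction,
   e.g. 0.1(6) = 0.1666... = 1/6. Assumes nrep >= 1 repeating digits.
   Derivation: x = intpart + pre/10^npre + rep / (10^npre * (10^nrep - 1)). */
static void repeating_to_frac(int64_t intpart, int64_t pre, int npre,
                              int64_t rep, int nrep,
                              int64_t *num, int64_t *den) {
    int64_t p = pow10i(npre);
    int64_t q = pow10i(nrep) - 1;       /* 9, 99, 999, ... */
    *num = intpart * p * q + pre * q + rep;
    *den = p * q;
    int64_t g = gcd(*num, *den);
    *num /= g; *den /= g;
}

int main(void) {
    int64_t n, d;
    repeating_to_frac(0, 1, 1, 6, 1, &n, &d);  /* 0.1(6) */
    printf("%lld/%lld\n", (long long)n, (long long)d);  /* prints 1/6 */
    return 0;
}
```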
No 3D engine in the real world uses 64-bit coordinates. With 32-bit coordinates, you could not hope to represent things in nanometers (you'd be stuck in a cube roughly 4x4x4 meters). Realistically you might choose millimeters, but that would certainly start to produce visible artifacts.
For games and most simulation, the "soft failure" of gradual precision loss is much more desirable than the wildly wrong effects you would get from fixed-point overflow.
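To make that "gradual precision loss" concrete: a 32-bit float loses whole-unit resolution past 2^24, so sub-unit offsets silently disappear instead of anything wrapping around, as this small demo shows:

```c
#include <stdio.h>

int main(void) {
    float near_origin = 1.0f;
    float far_away    = 16777216.0f;  /* 2^24: float spacing here is 1.0 */

    printf("%f\n", near_origin + 0.5f);  /* 1.500000 - offset preserved */
    printf("%f\n", far_away + 0.5f);     /* 16777216.000000 - offset silently lost */
    return 0;
}
```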
This kind of problem also appears with floats, just later with 32-bit floats than with 64-bit ints.
And the solution is to adjust your coordinate space, e.g. represent every nanometer as `1` but set the containing object matrix's scale fields to 1e-9.
So this is not a theoretical problem, just a practical one: the z-fighting you get with floats would happen much more often with integers. You absolutely can avoid it in both cases, but in practice 3D engines are designed with performance in mind, so certain assumptions lead to limitations, and you would hit more of them with integers.
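A sketch of that coordinate-space adjustment, with invented names: vertices stay exact as integer nanometers, and the nm-to-meter conversion lives in the object's transform, so the integers only become small, well-conditioned floats at transform time:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical column-major 4x4 transform. */
typedef struct { float m[16]; } Mat4;

static Mat4 mat4_scale(float s) {
    Mat4 r = {{ s,0,0,0,  0,s,0,0,  0,0,s,0,  0,0,0,1 }};
    return r;
}

int main(void) {
    /* Vertices stored exactly, as integer nanometers. */
    int64_t vertex_nm[3] = { 1500000000, 250000000, 0 };  /* 1.5 m, 0.25 m, 0 */

    /* The model matrix carries the nm->m scale. */
    Mat4 model = mat4_scale(1e-9f);

    for (int i = 0; i < 3; i++)
        printf("%g m\n", (float)vertex_nm[i] * model.m[0]);  /* 1.5, 0.25, 0 */
    return 0;
}
```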
The https://en.wikipedia.org/wiki/Z-fighting issue is proof that you often need those 64 bits.
It's kind of a chicken-and-egg problem: people use floats because FPUs are available. All the engineering effort that went into dealing with floats and the problems that come with them would have been better invested in making integers faster.
We went down the wrong path, and inertia keeps us on it. And now the wrong path is even more tempting, because all that effort has made it more practical and almost as good. We hide the precision complexity from the programmer, but it's still lurking instead of being tamed.
The absolute GPU cluster-fuck, with as many floating-point types as you can write on a napkin while drunk at the bar, means that at the end of the day your neural network is non-deterministic, and you can't replicate any result from your program from 6 months ago, or from the last library version. Your simulation results are therefore perishable.
Inability to replicate results means you can't verify that weight modifications to your neural network haven't been tampered with by an adversary. So you lose any fighting chance of building a secure system.
You also can't share work in a distributed fashion: since verification is not possible, you can't trust any computation that you haven't done yourself.
Regarding your second paragraph, those issues are equally catastrophic for game engines. That's why they generally use (float x, y, z, int zone_id) to reset the origin and avoid floating-point errors. Think MMOs, open-world games, etc. There are talks about this going all the way back to Dungeon Siege and up to Uncharted.
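A rough sketch of that (float x, y, z, int zone_id) layout, with invented names and an invented zone spacing: the per-entity floats stay small and precise, while a zone table holds the large world offsets at double precision:

```c
#include <stdio.h>

/* Hypothetical zone table: each zone's origin in world space, kept at
   high precision. The 1024 m spacing is an invented example. */
typedef struct { double ox, oy, oz; } ZoneOrigin;
static const ZoneOrigin zones[] = { {0,0,0}, {1024,0,0}, {2048,0,0} };

/* Per-entity position: small local floats plus the zone they belong to. */
typedef struct { float x, y, z; int zone_id; } Pos;

/* Reconstruct a full-precision world position only when needed (e.g. for
   cross-zone queries); rendering works directly in the small local floats. */
static void world_pos(const Pos *p, double *wx, double *wy, double *wz) {
    const ZoneOrigin *z = &zones[p->zone_id];
    *wx = z->ox + p->x; *wy = z->oy + p->y; *wz = z->oz + p->z;
}

int main(void) {
    Pos player = { 12.5f, 0.0f, 3.0f, 2 };  /* 12.5 m into zone 2 */
    double wx, wy, wz;
    world_pos(&player, &wx, &wy, &wz);
    printf("world x = %.1f\n", wx);  /* prints 2060.5 */
    return 0;
}
```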