Comment by frollogaston

19 days ago

Also, the O(1) random memory access assumption makes it easy to take memory access costs for granted. Really it's something like O(n^(1/3)) once you scale the machine to the size of the problem, since n memory cells packed at fixed density fill a region whose radius grows like n^(1/3), and you can see this in practice in datacenters.

I forget the name of the O(1) access model. Not UMA, something else.
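A quick back-of-the-envelope sketch of that n^(1/3) claim (a toy model of my own, not anything stated in the thread): if memory cells are packed at fixed density around the processor and latency is roughly proportional to straight-line distance, the worst-case access distance grows like the cube root of the cell count.

```python
# Toy model: n memory cells at unit density around the processor, latency
# proportional to straight-line distance (assumptions are mine).
import math

def worst_case_distance(n_cells: int, dims: int = 3) -> float:
    """Radius of a dims-dimensional ball holding n_cells at unit density."""
    if dims == 3:
        unit_ball_volume = 4.0 / 3.0 * math.pi
    elif dims == 2:
        unit_ball_volume = math.pi
    else:
        raise ValueError("only 2 or 3 dimensions handled here")
    return (n_cells / unit_ball_volume) ** (1.0 / dims)

if __name__ == "__main__":
    for n in (10**6, 10**9, 10**12):
        print(f"n = {n:>15,}  farthest cell at distance ~ {worst_case_distance(n):,.0f}")
    # Every 1000x increase in n multiplies the distance by ~10, i.e. n^(1/3).
```

Swapping dims=3 for dims=2 in the same sketch gives the n^(1/2) growth discussed below.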

O(n^(1/2)) really, since data centers are 2 dimensional, not 3 dimensional.

(Quite apart from the practical "we build on the surface of the earth" consideration, heat dissipation limits you to an effectively 2-dimensional circuit in 3-space.)
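To put a number on the heat-dissipation point (again a toy model of my own, ignoring real cooling engineering): if every component dissipates a fixed wattage and the heat can only leave through the bounding surface, a solid 3D arrangement pushes the surface flux up like n^(1/3), while a flat slab of fixed thickness keeps it constant.

```python
# Toy heat model (my assumptions): n components, 1 W each, unit density.
# Only the watts per unit of bounding surface area matter here.
import math

def flux_3d(n: float) -> float:
    """W per unit area when n components are packed into a solid ball."""
    radius = (3.0 * n / (4.0 * math.pi)) ** (1.0 / 3.0)   # ball at unit density
    surface = 4.0 * math.pi * radius**2
    return n / surface

def flux_2d(n: float, thickness: float = 1.0) -> float:
    """W per unit area for a slab of fixed thickness (edges ignored)."""
    footprint = n / thickness      # top-face area at unit density
    return n / (2.0 * footprint)   # heat leaves through top and bottom

if __name__ == "__main__":
    for n in (10**6, 10**9, 10**12):
        print(f"n = {n:>15,}  3D flux ~ {flux_3d(n):>8,.1f}   2D flux ~ {flux_2d(n):.2f}")
    # The 3D flux grows like n^(1/3); the slab stays at 0.5 W per unit area,
    # which is the sense in which big machines end up effectively planar.
```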

  • More fundamentally O(n^(1/2)) due to the holographic principle, which states that the maximal amount of information encodable in a given region of space scales with its surface area rather than its volume. (Rough numbers for this bound are sketched below, after the replies.)

    (Even more of an aside than your practical heat dissipation constraint.)

    • Just need to make sure all your computation is done in a volume with infinite surface area and zero volume. Encoding problem solved. Now then, how hyperbolic can we make the geometry of spacetime before things get too weird?

  • If you have rows of racks of machines, isn't that 3 dimensions? A machine can be on top of, behind, or next to another that it's directly connected to. And the components inside have their own non-uniform memory access.

    Or if you're saying heat dissipation scales with surface area and is therefore 2D, I don't know. I'd think water cooling makes it more about volume, but I'm not an expert on that.

    • That example would still be two-dimensional in the limit, since you can keep building outwards (adding buildings) but not keep scaling upwards (adding floors).

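Rough numbers for the holographic-principle point above (this is the standard form of the bound; the constants are mine, just to check the scaling): the maximum information behind a closed surface grows with its area, so the radius needed to enclose n bits, and hence the worst-case access distance, grows like sqrt(n).

```python
# Holographic bound: max entropy behind a surface of area A is A / (4 * l_p^2)
# nats, with l_p the Planck length. Constants below are for scale only.
import math

PLANCK_LENGTH_M = 1.616255e-35            # metres
NATS_TO_BITS = 1.0 / math.log(2.0)

def max_bits_for_area(area_m2: float) -> float:
    """Upper bound on bits storable behind a closed surface of this area."""
    return area_m2 / (4.0 * PLANCK_LENGTH_M**2) * NATS_TO_BITS

def min_radius_for_bits(n_bits: float) -> float:
    """Smallest sphere radius whose surface area can bound n_bits."""
    area = n_bits / NATS_TO_BITS * 4.0 * PLANCK_LENGTH_M**2
    return math.sqrt(area / (4.0 * math.pi))

if __name__ == "__main__":
    print(f"max bits per square metre: {max_bits_for_area(1.0):.2e}")   # ~1.4e69
    for n_bits in (1e69, 1e71, 1e73):
        print(f"{n_bits:.0e} bits need a sphere of radius >= {min_radius_for_bits(n_bits):.2e} m")
    # 100x more bits -> 10x the radius: information ~ area, distance ~ sqrt(n).
```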

On the other hand, actual computers can work in parallel when you scale the hardware, something the TM formulation doesn't cover. It can be interesting to see which algorithms work well with lots of computing power subject to data locality constraints. (Brains being the classic example of this.)
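For the parallelism point, a minimal sketch (my own example, nothing from the thread): a pairwise tree reduction does the same Theta(n) work as a sequential sum but finishes in about log2(n) rounds if each round's pair-sums run on separate cores, and how cheap those rounds really are is exactly the data-locality question that a single-tape TM or unit-cost RAM analysis doesn't surface.

```python
# Tree reduction: Theta(n) total work, but only ~log2(n) parallel rounds.
def tree_sum_rounds(values):
    """Pairwise-reduce a list, returning (total, number of parallel rounds)."""
    rounds = 0
    while len(values) > 1:
        # Every pair in this round could be summed by a different core; the
        # cost of shipping operands between cores is the locality question.
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

if __name__ == "__main__":
    total, rounds = tree_sum_rounds(list(range(1_000_000)))
    print(total, rounds)   # 499999500000 20  (20 ~= log2 of a million)
```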