
Comment by jfengel

3 days ago

The formulas are really not very complex. The Standard Model is a single Lagrangian with a couple of dozen constants.

https://visit.cern/node/612

You can expand that Lagrangian out to look more complex, but that's just a matter of notation rather than a real illustration of its complexity. There's no need to treat all of the quarks as different terms when you can compress them into a single matrix.
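
For example (a schematic sketch of the idea, not the exact Standard Model term), the kinetic terms for all the quark flavors compress into one summed expression:

    L_quarks = \sum_f \bar{\psi}_f (i \gamma^\mu D_\mu - m_f) \psi_f

where f runs over the flavors; written out flavor by flavor, the same content looks six times longer.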

General relativity adds one more equation, in a matrix notation.
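
For reference, that one equation is the Einstein field equation:

    G_{\mu\nu} + \Lambda g_{\mu\nu} = (8 \pi G / c^4) T_{\mu\nu}

which unpacks into ten coupled equations once the symmetric tensor indices are written out.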

And that's almost everything. That's the whole model of the universe. It just so happens that there are a few domains where the two parts conflict, but those occur only under insanely extreme circumstances (points within black holes, the universe at less than 10^-43 seconds old, etc.).

These all rely on real numbers, so there's no computational complexity to talk about. Anything you represent in a computer is an approximation.
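
The familiar demo makes the point (a trivial Python sketch):

    # 0.1 and 0.2 have no exact binary floating-point representation,
    # so even this "real number" arithmetic is an approximation.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False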

It's conceivable that there is some version out there that doesn't rely on real numbers and could be computed with integers on a Turing machine. It need not have high computational complexity; there's no reason it couldn't be linear. But it would be linear in an insane number of terms, and therefore computationally intractable in practice.

>The Standard Model is a single Lagrangian with a couple of dozen constants.

I hear it's a bit more complex than that!

https://www.sciencealert.com/this-is-what-the-standard-model...

  • It's a single Lagrangian with a couple of dozen constants in their pics there as well; it's just expanded out to different degrees.

  • Nah, it really is simpler than that: that picture has exploded the summations to make it look complicated. Although it is strangely hard to find the compressed version written down anywhere...

    The thing about Lagrangians is that they compose systems by adding terms together: L_AB = L_A + L_B if A and B don't interact. Each field acts like an independent system, plus some interaction terms if the fields do interact. So most of the time, e.g. on Wikipedia[0], people write the terms down in little groups. But still, note that the Lagrangian section on the Wikipedia page doesn't have that many terms, thanks to the internal summations. (There's a small sketch of this additivity after the footnote below.)

    [0]: https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...
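
    Here's that additivity in a minimal sympy sketch (two non-interacting harmonic oscillators standing in for fields A and B; the names are mine, purely for illustration): adding the Lagrangians yields equations of motion that stay completely decoupled.

        import sympy as sp

        t = sp.symbols('t')
        x, y = sp.Function('x')(t), sp.Function('y')(t)

        # L_AB = L_A + L_B for two non-interacting unit oscillators
        L_A = sp.Rational(1, 2) * sp.diff(x, t)**2 - sp.Rational(1, 2) * x**2
        L_B = sp.Rational(1, 2) * sp.diff(y, t)**2 - sp.Rational(1, 2) * y**2
        L = L_A + L_B

        def euler_lagrange(L, q):
            # Euler-Lagrange equation: d/dt (dL/dq') - dL/dq = 0
            return sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)

        print(euler_lagrange(L, x))  # x'' + x: no y terms appear
        print(euler_lagrange(L, y))  # y'' + y: no x terms appear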

I can't help but wonder if, under extreme conditions, the universe has some sort of naturally occurring floating-point error condition, where precision erodes and weird things can occur.

  • That would occur if a naked singularity could exist. If black holes contain a singularity, removing the event horizon would expose it. In general relativity, the mathematical condition for a black hole to have an event horizon is simple. It is given by the following inequality (in geometrized units): M^2 > (J/M)^2 + Q^2, where M is the mass of the black hole, J is its angular momentum, and Q is its charge.

    Getting rid of the event horizon is simply a question of increasing the angular momentum and/or charge of this object until the inequality is reversed. When that happens the event horizon disappears and the exotic object beneath emerges.
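
    A tiny Python sketch of that condition (geometrized units, G = c = 1; the function name is mine, purely for illustration):

        # Kerr-Newman horizon condition: a horizon exists iff
        # M^2 >= (J/M)^2 + Q^2 in geometrized units.
        def has_event_horizon(M, J, Q):
            return M**2 >= (J / M)**2 + Q**2

        print(has_event_horizon(1.0, 0.5, 0.3))  # True: ordinary black hole
        print(has_event_horizon(1.0, 1.2, 0.3))  # False: over-spun, naked singularity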

  • I doubt it. Even the simplest physical system requires a truly insane number of basic operations. Practically everything is integrals-over-infinity. If these were implemented in a floating-point system, you'd need umpteen gazillion bits to keep flagrant errors from happening all the time.

    It's not impossible that the universe is somehow implemented in an "umpteen gazillion bits, but not more" system, but it strikes me as a lot more likely that it really is just a real-number calculation.
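
    To make the bit-count point concrete, here's a small Python sketch using mpmath (arbitrary-precision floats): the same integral over [0, inf) evaluated at two different working precisions.

        from mpmath import mp, quad, exp, sqrt, pi, inf

        # Gaussian integral over [0, inf); the exact value is sqrt(pi)/2.
        for digits in (15, 50):
            mp.dps = digits  # decimal digits of working precision
            approx = quad(lambda x: exp(-x**2), [0, inf])
            print(digits, approx, abs(approx - sqrt(pi) / 2))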

  • That could very well be what the quantum uncertainty principle is: nondeterministic floating-point errors. Or it could just be me drawing comparisons among different problem domains.

    • The QUP is indeed what allows us to quantize continuous equations with h, and once they have been turned into integers like this, we can meaningfully calculate our lack of information (aka 'entropy').
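
      A minimal sketch of that last step (pure illustration, not physics): once the states are discrete, "lack of information" is just Shannon entropy over them.

          import math

          # Probabilities of four discrete (quantized) states.
          p = [0.5, 0.25, 0.125, 0.125]

          # Shannon entropy in bits: H = -sum(p_i * log2(p_i))
          H = -sum(p_i * math.log2(p_i) for p_i in p)
          print(H)  # 1.75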