Comment by pclmulqdq

1 day ago

Somehow, we have all these schemes to factor huge numbers, and yet the current record for an actual implementation of Shor's algorithm and similar algorithms comes from factoring the number 15 in 2012. There was a recent paper about "factoring" 31, but it relied on a number of simplifying steps that assumed the number in question was a Mersenne number. People in this field keep presenting "algorithm improvements" or "new devices" that are good enough to write a paper, and yet somehow there is always an implementation problem or a translation problem when someone comes asking about actually using them.

If this algorithm exists and works, and there are chips with 1,000 noisy qubits, why has nobody used it to factor a 16-bit number? Why hasn't anyone used it to factor the number 63? Factoring 63 on a quantum computer with a generic algorithm would be a huge advance in capability, but there's always some reason why your fancy algorithm doesn't work on the other guy's fancy hardware.
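For context on what "a generic algorithm" means here: the only quantum part of Shor's algorithm is the order-finding subroutine; everything around it is classical post-processing. Below is a minimal sketch of that post-processing for a small N like 63, with order finding simulated by brute force (the step a quantum computer would be needed to speed up for large numbers). The function names are mine, purely illustrative:

```python
from math import gcd
from random import randrange

def find_order(a, n):
    # Brute-force multiplicative order of a mod n (requires gcd(a, n) == 1).
    # This is the step Shor's algorithm delegates to a quantum computer.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    # Classical reduction: turn an order-finding oracle into a factor of n.
    if n % 2 == 0:
        return 2
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g  # lucky draw: a already shares a factor with n
        r = find_order(a, n)
        if r % 2 == 1:
            continue  # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue  # trivial square root of 1; retry with another a
        return gcd(y + 1, n)  # nontrivial factor

factor = shor_factor(63)
```

The point of the complaint stands either way: the classical wrapper above is trivial, so a demonstration on 63 would only need the quantum order-finding step to actually work.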

At the same time, we still have no real understanding of the underlying physics of quantum superposition, the principle on which this whole thing relies. We know that it happens, we have lots of equations showing that it happens, and we have lots of algorithms that rely on it working, but we remain blissfully unaware of why it happens (other than that the math of our theory says so). In the year 3000, physicists will look back at these magical parts of quantum theory with the same ridicule with which we look back at the magical parts of Newton's gravity.

It’s clear you don’t know what you’re talking about.

  • If you are claiming to know what you're talking about, use one of these algorithms to factor the number 63 and you will get tenure.

    The easiest way to prove that you do know what you're doing is to demonstrate it by making progress, which is something this field refuses to do.