Comment by peepeepoopoo101

13 hours ago

There's an awful lot of handwaving in this blog post. I'm sorry, but I'm not convinced. The author mentions that some devices which can seemingly solve problems of exponential time complexity also require exponentially high precision, but there doesn't seem to be a strong argument for why the same objection doesn't apply to quantum computers. We haven't experimentally demonstrated quantum computing at sufficient scale to prove that the number of physical qubits required for error correction doesn't grow exponentially.

We don't need exponentially more physical qubits, because we have quantum error correction schemes that suppress the logical error rate exponentially for only a polynomial increase in the number of physical qubits. The key result is the threshold theorem: as long as the physical error rate stays below a fixed threshold, increasing the code distance drives the logical error rate down exponentially while the qubit overhead grows only polynomially. There are in fact many such schemes (https://errorcorrectionzoo.org/), with the surface code mentioned in the blog post being a leading approach.
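To make that scaling concrete, here is a minimal sketch of the surface-code trade-off. The suppression formula p_L ~ A*(p/p_th)^((d+1)/2) is a standard heuristic, but the constants A, p, and p_th below are illustrative values I picked, not measurements from any device:

```python
# Surface-code scaling sketch: the logical error rate falls exponentially
# in the code distance d, while physical qubits grow only quadratically.
# All constants are illustrative assumptions, not measured numbers.

A = 0.1       # fitting constant (assumed)
p = 1e-3      # physical error rate per operation (assumed)
p_th = 1e-2   # threshold, often quoted as roughly 1% for the surface code

for d in [3, 7, 11, 15, 19, 23, 27]:
    p_logical = A * (p / p_th) ** ((d + 1) / 2)
    n_qubits = 2 * d * d - 1  # data + measurement qubits for one logical qubit
    print(f"d={d:2d}  physical qubits={n_qubits:4d}  p_L ~ {p_logical:.1e}")
```

Each step up in distance multiplies the qubit cost by a constant-ish factor but multiplies the error suppression, which is what "polynomial increase" buys you in practice.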

Details of how this could work for factoring are in Gidney and Ekerå's resource estimate: https://arxiv.org/abs/1905.09749
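A back-of-the-envelope for why the overhead is polylogarithmic rather than exponential (this is my own toy calculation using the same heuristic as above, not numbers from the paper): pick the code distance d so that the total logical error across n_ops operations stays under a fixed budget, and note that d only needs to grow logarithmically with n_ops:

```python
import math

# Toy resource estimate: choose the smallest (odd) code distance d such
# that the total logical error over n_ops operations stays below `budget`.
# Inverts p_L ~ A*(p/p_th)^((d+1)/2); all constants are assumptions.

A, p, p_th = 0.1, 1e-3, 1e-2

def distance_for(n_ops, budget=0.01):
    per_op = budget / n_ops                              # allowed error per op
    d = math.ceil(2 * math.log(per_op / A) / math.log(p / p_th) - 1)
    return d + (d % 2 == 0)                              # distances are odd

for n_ops in (10**6, 10**9, 10**12):
    d = distance_for(n_ops)
    print(f"{n_ops:.0e} ops -> d={d}, ~{2*d*d - 1} physical qubits per logical qubit")
```

With these (assumed) constants, a million-fold increase in the operation count only bumps d from 13 to 25, so the cost per logical qubit grows roughly like log^2 of the computation size, not exponentially.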

There will be engineering challenges in scaling up these implementations, but in principle you shouldn't need exponential resources (unless there is something wrong with quantum mechanics). No comparable error-correction scaling exists for classical analog computing, for example.
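Here's a toy contrast, with purely illustrative numbers: without correction, independent per-step errors compound, so the chance of a clean run decays exponentially with circuit depth, whereas error correction drives the per-step error rate itself down:

```python
# Toy contrast (illustrative numbers only): uncorrected errors compound
# with depth, while error correction suppresses the per-step rate itself.

def survival(per_step_error, n_steps):
    """Probability of zero errors over n_steps, assuming independence."""
    return (1 - per_step_error) ** n_steps

p_raw = 1e-3         # uncorrected per-step error rate (assumed)
p_corrected = 1e-12  # per-step rate after error correction (assumed)

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}: uncorrected={survival(p_raw, n):.3g}  "
          f"corrected={survival(p_corrected, n):.6f}")
```

An analog machine has to fight the first curve with ever-finer precision; a fault-tolerant digital one gets to move to the second curve instead.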

Note that you don't need error correction for quantum computing per se; you only need it for digital (gate-based) quantum computing. There's a separate branch, analog quantum computing, that is also very promising.