
Comment by dmurray

3 days ago

But the reason factoring numbers is used as the quantum benchmark is exactly that we have a quantum algorithm for that problem which is meant to scale better than any known algorithm on a classical computer.

So it seems like it takes an exponentially bigger device to factor 21 than 15, then 35 than 21, and so on; but if I understand right, at some point this levels out and it's only, relatively speaking, a little harder to factor, say, 10^30 than 10^29.
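A rough way to see the claimed crossover is to compare the growth rates directly: Shor's algorithm uses roughly O(n^3) gates for an n-bit number, while the best known classical method (the general number field sieve) is sub-exponential in n. A quick sketch in Python (constant factors are dropped, so only the shapes of the curves mean anything):

    # Rough comparison of asymptotic factoring costs for an n-bit number N.
    # Illustrative only: constant factors are dropped.
    import math

    def shor_gates(bits):
        # Shor's algorithm: polynomial, roughly O(n^3) gates in textbook form.
        return bits ** 3

    def gnfs_ops(bits):
        # General number field sieve: exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3))
        ln_n = bits * math.log(2)  # ln N for an n-bit N
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    for bits in (32, 64, 128, 256, 512, 1024, 2048):
        print(f"{bits:5d} bits: Shor ~ {shor_gates(bits):.1e} gates, GNFS ~ {gnfs_ops(bits):.1e} ops")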

Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

> Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

I don't think we have any "real" experience scaling from 15 to 21. Or at least not in the way Shor's algorithm would be implemented in practice on fault-tolerant qubits.

We haven't even done 15 in a "real" way yet. I suspect the amount of time to factor 15 on fault-tolerant qubits will be a lot longer than the time to go from 15 to 21.

The algorithm in question is a hypothetical algorithm for a hypothetical computer with certain properties. The properties in question are assumed to be cheap.

In the case of quantum algorithms in BQP, though, one of those properties is the signal-to-noise ratio (SNR) of analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.

  • > In the case of quantum algorithms in BQP, though, one of those properties is the signal-to-noise ratio (SNR) of analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.

    As far as I understand, that isn't an assumption.

    The assumption is that the SNR of logical (error-corrected) qubits is near infinite, and that such logical qubits can be constructed from noisy physical qubits.
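    The standard justification is the threshold theorem: once physical error rates are below some threshold, growing the code distance suppresses the logical error rate exponentially while the qubit overhead grows only polynomially. A minimal sketch of that scaling (Python; the threshold p_th, the prefactor A, and the 2*d^2 overhead are rough illustrative values, not measured ones):

        # Threshold-theorem scaling: logical error per round ~ A * (p / p_th)^((d+1)/2)
        # for code distance d. All constants here are illustrative, not measured.
        p, p_th, A = 1e-3, 1e-2, 0.1  # physical error rate, assumed threshold, prefactor

        for d in (3, 5, 7, 11, 15, 21):
            p_logical = A * (p / p_th) ** ((d + 1) / 2)
            physical_qubits = 2 * d ** 2  # rough surface-code overhead per logical qubit
            print(f"d={d:2d}: ~{physical_qubits:4d} physical qubits, logical error ~ {p_logical:.1e}")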

    • There are several properties that separate real quantum computers from the "BQP machine," including decoherence and SNR. Error correction of qubits is mainly aimed at decoherence, but I'm not sure it really improves the SNR of gates on logical qubits. SNR dictates how precisely you can manipulate the signal (these are, in a sense, a weird kind of analog computer), and the QFTs involved in Shor's algorithm need some very precise rotations of qubits. Noise in the operation creates an error in that rotation angle. If your rotation is bad to begin with, I'm not sure the error correction actually helps.
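      For a sense of scale on those rotations: the textbook QFT on n qubits uses controlled phase gates with angles 2*pi/2^k down to k = n, so the smallest angle shrinks exponentially with the register size. A quick sketch (Python, just computing the angles):

          # Smallest controlled-phase angle in a textbook n-qubit QFT: 2*pi / 2^n.
          import math

          for n in (8, 16, 32, 64):
              print(f"n={n:3d} qubits: smallest QFT rotation ~ {2 * math.pi / 2 ** n:.3e} rad")

      (In fault-tolerant proposals the smallest rotations are typically dropped, as in the approximate QFT, or synthesized from a discrete gate set rather than applied as exact analog rotations, which is part of why this point is debated.)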

    • > The assumption is that the SNR of logical (error-corrected) qubits is near infinite, and that such logical qubits can be constructed from noisy physical qubits.

      This is an argument I've heard before and I don't really understand it[1]. I get that you can make a logical qubit out of physical qubits and build in error correction so the logical qubit has perfect SNR, but surely if, say, the number of physical qubits you need to get the nth logical qubit is O(n^2), then the SNR of the whole system isn't near infinite; it's really bad (rough numbers in the sketch below).

      [1] Which may well be because I don't understand quantum mechanics ...
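      One way to put rough numbers on that worry: with n logical qubits run for T steps, the whole computation fails with probability about 1 - (1 - p_L)^(n*T), so the question is whether the per-qubit logical error p_L can be pushed down faster than n*T grows. A sketch under the same kind of illustrative assumptions as the threshold-theorem numbers above (none of these values are measured):

          # Whole-system failure ~ 1 - (1 - p_L)^(n*T). All numbers illustrative.
          import math

          p, p_th, A = 1e-3, 1e-2, 0.1  # physical error rate, assumed threshold, prefactor

          def logical_error(d):
              return A * (p / p_th) ** ((d + 1) / 2)

          n, T = 4000, 10 ** 9  # ballpark register size and step count for a big factoring run
          for d in (15, 21, 27, 33):
              p_fail = -math.expm1(-n * T * logical_error(d))  # ~ 1 - (1 - p_L)^(nT) for small p_L
              print(f"d={d}: ~{n * 2 * d ** 2:.1e} physical qubits, failure ~ {p_fail:.2e}")

      The point of the sketch is that the exponent does the work: each step up in code distance multiplies the physical qubit count by a constant factor but divides the logical error rate by orders of magnitude.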
