Comment by bawolff

3 days ago

I always find this argument a little silly.

Like if you were building one of the first normal computers, how big a number you can multiply would be a terrible benchmark, since once you've figured out how to multiply small numbers it's fairly trivial to multiply big ones. The challenge is making the computer multiply numbers at all.

This isn't a perfect metaphor, since scaling is harder in a quantum setting, but we are mostly at the stage where we are trying to get the things to work at all. Once we reach the stage where we can factor small numbers reliably, the amount of time to go from smaller numbers to bigger numbers will probably be relatively short.

From my limited understanding, that's actually the opposite of the truth.

In QC systems, the engineering "difficulty" scales very badly with the number of gates or steps of the algorithm.

It's not like addition, where you can repeat a process in parallel and bam, ALU. From what I understand as a layperson, the size of the inputs is absolutely part of the scaling.

  • But the reason factoring numbers is used as the quantum benchmark is exactly that we have a quantum algorithm for that problem which is meant to scale better than any known algorithm on a classical computer.

    So it seems like it takes an exponentially bigger device to factor 21 than 15, and 35 than 21, and so on; but if I understand right, at some point this levels out, and it's only a little harder, relatively speaking, to factor say 10^30 than 10^29.

    Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

    • > Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

      I don't think we have any "real" experience scaling from 15 to 21. Or at least not in the way Shor's algorithm would be implemented in practice on fault-tolerant qubits.

      We haven't even done 15 in a "real" way yet. I suspect the amount of time to factor 15 on fault-tolerant qubits will be a lot longer than the time to go from 15 to 21.

    • The algorithm in question is a hypothetical algorithm for a hypothetical computer with certain properties. The properties in question are assumed to be cheap.

      In the case of quantum algorithms in BQP, though, one of those properties is SNR of analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.

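For concreteness, the asymptotic claim being debated can be sketched numerically. The cost models below are illustrative assumptions only (a plain n^3 polynomial for Shor's gate count and the standard GNFS heuristic for classical factoring), not engineering estimates for any real device:

```python
import math

def shor_gates(n_bits):
    # Rough polynomial gate-count model for Shor's algorithm on an
    # n-bit modulus, often quoted as O(n^3). Constant factors and
    # error-correction overhead are deliberately omitted (assumption).
    return n_bits ** 3

def gnfs_ops(n_bits):
    # Heuristic sub-exponential cost of the General Number Field Sieve:
    # exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)), with N ~ 2^n_bits.
    ln_n = n_bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

for n in (64, 512, 1024, 2048):
    print(f"{n:5d} bits: shor ~ {shor_gates(n):.2e}, gnfs ~ {gnfs_ops(n):.2e}")
```

Under these assumptions, doubling the key size multiplies the quantum cost by a small constant while the classical cost explodes, which is the "levels out" intuition above; the open question in the thread is whether the hardware overhead hidden in those constants behaves anything like this.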

This is quite fallacious and wrong. The first computers were built to immediately solve problems that were already being solved, slowly, by manual methods. There was never a period where people built computers slower than adding machines and slide rules just because they seemed cool and might one day be much faster.

  • Charles Babbage started on the difference engine in 1819. It took a very long time after that before computers were useful.

    • Additionally, part of the problem was that metal working at the time wasn't really advanced enough to make the required parts to the necessary precision at a reasonable price. Which sounds really quite similar to how modern quantum computers are right at the edge of current engineering technology.

The fact that it does appear to be so difficult to scale things up would suggest that the argument isn't silly.

Actually yes, how many numbers you can crunch per second, and how big they are, were among the first benchmarks for actual computers. Also, these prototypes were almost always immediately useful. (Think of the computer that cracked Enigma.)

In comparison, there is no realistic path forward for scaling quantum computers. Anyone serious that is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live. That is a fundamental physical truth. And since they're still struggling to do anything at all with a quantum computer, don't get your hopes up too much.

  • That would be the bombe, which didn't really crunch numbers at all, but was an electromechanical contraption to automate physically setting Enigma rotors to enumerate what combinations were possible matches.

  • > Anyone serious that is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live.

    If what you are saying is that error rates increase exponentially such that quantum error correction can never correct more errors than it introduces, I don't think that is a widely accepted position in the field.
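The standard counter-argument here is the threshold theorem: below some threshold physical error rate, increasing the code distance suppresses the logical error rate exponentially rather than letting instability compound. A minimal sketch of the commonly quoted surface-code heuristic; the formula shape, the ~1% threshold, and the prefactor are illustrative assumptions, not measured values:

```python
def logical_error_rate(p_phys, d, p_th=0.01, a=0.1):
    # Commonly quoted surface-code scaling heuristic (assumption):
    #   p_L ~ a * (p_phys / p_th) ** ((d + 1) // 2)
    # Below threshold (p_phys < p_th), bigger code distance d means
    # exponentially fewer logical errors, not exponentially more.
    return a * (p_phys / p_th) ** ((d + 1) // 2)

# Physical error rate of 0.1%, i.e. 10x below the assumed threshold.
for d in (3, 11, 25):
    print(f"d={d:2d}: p_L ~ {logical_error_rate(1e-3, d):.1e}")
```

Under this model the logical error rate drops by roughly an order of magnitude per unit of (d+1)//2, which is why "exponentially less stable the bigger they are" is not the accepted picture once error correction is in play; whether real hardware stays below threshold at scale is the actual open engineering question.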