
Comment by prof-dr-ir

3 days ago

I am confused, since even factoring 21 is apparently so difficult that it "isn’t yet a good benchmark for tracking the progress of quantum computers." [0]

So the "useful quantum computing" that is "imminent" is not the kind of quantum computing that involves the factorization of nearly prime numbers?

[0] https://algassert.com/post/2500

Factoring will be okay for tracking progress later; it's just a bad benchmark now. Factoring benchmarks have little visibility into fault tolerance spinning up, which is the important progress right now. Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.

  • > Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.

    Either this relation is not that strong, or factoring should "imminently" become a reasonable benchmark, or useful quantum computing cannot be "imminent". So which one is it?

    I think you are the author of the blog post I linked to? Did I maybe interpret it too negatively, and was it not meant to suggest that the second option is still quite some time away?

    • Under a Moore's-law-like scenario, factoring 21 happens only about five years before factoring a 1024-bit number. With all the known optimizations, factoring an n-bit number only requires ~n logical qubits, but most of those optimizations only kick in at large n, so a machine that factors 21 is only about five qubit-count doublings away from one that can handle a 1024-bit modulus (see the rough sketch below).

      The other problem is that factoring 21 is so easy that it is actually harder to prove you have factored it with a functioning quantum computer. For big numbers, your program can fail 99% of the time, because a single correct output is easy to verify and essentially impossible to stumble on by luck, so one success proves the algorithm worked. 21 is small enough that it is hard *not* to factor, so demonstrating that a quantum computer genuinely did the factoring is fairly hard. As a result, I wouldn't be surprised if the first number publicly factored by an error-corrected quantum computer was in the thousands rather than 21: using a number that is not absolutely tiny makes it much easier to show that the system works.
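
      A back-of-the-envelope sketch of both points. The logical-qubit counts and the "one doubling per year" cadence are illustrative assumptions, not figures from these comments, and 104729 and 1299709 are just two known primes used to build a big semiprime.

      ```python
      import math
      import random

      # (1) Qubit-count doublings between "factors 21" and "factors RSA-1024".
      # Assumed figures: a 21-factoring demo needs ~30 logical qubits (small-n
      # overhead dominates), while an optimized attack on a 1024-bit modulus
      # needs ~1000.
      qubits_for_21 = 30
      qubits_for_rsa_1024 = 1000
      doublings = math.log2(qubits_for_rsa_1024 / qubits_for_21)
      print(f"doublings needed: {doublings:.1f}")  # ~5; at roughly one doubling
                                                   # per year, the ~5-year gap above

      # (2) Why a big semiprime is easier to *verify*: one correct output proves
      # real work was done, because guessing a factor of a large semiprime is
      # hopeless, while 21 gets factored by blind luck.
      def lucky_guess_factors(n, tries=100):
          """Return True if random guessing finds a nontrivial factor of n."""
          return any(n % random.randrange(2, n) == 0 for _ in range(tries))

      print(lucky_guess_factors(21))                  # almost always True
      print(lucky_guess_factors(104729 * 1299709))    # essentially never True
      ```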

Perhaps? The sort of quantum computers people are talking about now are not general purpose. So you might be able to build a useful quantum computer whose usefulness has nothing to do with Shor's algorithm.

  • Simulating the Hubbard model for superconductors at large scales is significantly more likely to happen sooner than factoring RSA-2048 with Shor’s algorithm.

    Google have been working on this for years.

    Don't ask me whether they've got the top supercomputers beat, ask Gemini :)

  • I don't think that's correct; the research projects the article is talking about all seem to aim at making general-purpose quantum computers eventually. Obviously they haven't succeeded yet, but general-purpose machines do seem to be what they are talking about.

I always find this argument a little silly.

Like if you were building one of the first normal computers, how big a number you can multiply would be a terrible benchmark, since once you have figured out how to multiply small numbers it's fairly trivial to multiply big ones. The challenge is making the computer multiply numbers at all.

This isn't a perfect metaphor, as scaling is harder in a quantum setting, but we are mostly at the stage where we are trying to get the things to work at all. Once we reach the stage where we can factor small numbers reliably, the time to go from smaller numbers to bigger numbers will probably be relatively short.

  • From my limited understanding, that's actually the opposite of the truth.

    In QC systems, the engineering "difficulty" scales very badly with the number of gates or steps of the algorithm.

    It's not like addition, where you can repeat a process in parallel and bam, you have an ALU. From what I understand as a layperson, the size of the inputs is absolutely part of the scaling.

    • But the reason factoring numbers is used as the quantum benchmark is exactly that we have a quantum algorithm for that problem which is meant to scale better than any known algorithm on a classical computer.

      So it seems like it takes an exponentially bigger device to factor 21 than 15, and 35 than 21, and so on; but if I understand right, at some point this levels out and it's only, relatively speaking, a little harder to factor, say, 10^30 than 10^29.

      Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?
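
      A minimal sketch of where that confidence comes from, comparing textbook asymptotics only: Shor's gate count is taken as ~n^2 log n and the classical cost as the heuristic GNFS exponent. Constant factors and error-correction overhead are ignored, so this shows the shape of the scaling, not a prediction.

      ```python
      import math

      def shor_cost(n_bits):
          """Rough gate-count scaling for Shor's algorithm (assumed ~n^2 log n)."""
          return n_bits**2 * math.log2(n_bits)

      def gnfs_cost(n_bits):
          """Heuristic GNFS cost: exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3))."""
          ln_n = n_bits * math.log(2)  # ln N for an n-bit number N
          return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

      for n in (64, 256, 1024, 2048):
          print(f"{n:5d} bits   quantum ~{shor_cost(n):.1e}   classical ~{gnfs_cost(n):.1e}")

      # The "levels out" intuition: going from a ~97-bit number (~1e29) to a
      # ~100-bit number (~1e30) changes the quantum cost by only ~7%.
      print(shor_cost(100) / shor_cost(97))
      ```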

      7 replies →

  • This is quite fallacious and wrong. The first computers were built to immediately solve problems that were already being solved slowly by manual methods. There was never a period where people built computers so slow that they were slower than adding machines and slide rules just because they seemed cool and might one day be much faster.

  • The fact that it does appear to be so difficult to scale things up would suggest that the argument isn't silly.

  • Actually yes, how many numbers you can crunch per second and how big they are were among the first benchmarks for actual computers. Also, these prototypes were almost always immediately useful. (Think of the computer that cracked Enigma.)

    In comparison, there is no realistic path forward for scaling quantum computers. Anyone serious who is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they get and the longer they live. That is a fundamental physical truth. And since they're still struggling to do anything at all with a quantum computer, don't get your hopes up too much.

    • That would be the Bombe, which didn't really crunch numbers at all, but was an electromechanical contraption that automated physically setting Enigma rotors to enumerate which combinations were possible matches.

    • > Anyone serious that is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live.

      If what you are saying is that error rates increase exponentially, such that quantum error correction can never correct more errors than it introduces, I don't think that is a widely accepted position in the field.
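
      A minimal sketch of the standard counter-argument (the threshold theorem), using the commonly quoted ballpark scaling for the surface code. The constants A and P_TH below are illustrative assumptions, not measured values for any particular device.

      ```python
      # Below a threshold physical error rate, increasing the code distance d
      # suppresses the logical error rate roughly as A * (p / p_th)^((d+1)/2),
      # i.e. adding qubits helps exponentially rather than hurting.
      A, P_TH = 0.1, 1e-2   # assumed prefactor and threshold

      def logical_error_rate(p_physical, distance):
          """Approximate logical error rate for a distance-d surface code."""
          return A * (p_physical / P_TH) ** ((distance + 1) / 2)

      for d in (3, 5, 7, 11, 15, 25):
          print(f"d={d:2d}: p_logical ~ {logical_error_rate(1e-3, d):.1e}")
      ```

      Whether real hardware stays below that threshold at scale is exactly what the current fault-tolerance experiments are trying to establish.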

      1 reply →