"if Microsoft’s claim stands, then topological qubits have finally reached some sort of parity with where more traditional qubits were 20-30 years ago. I.e., the non-topological approaches like superconducting, trapped-ion, and neutral-atom have an absolutely massive head start: there, Google, IBM, Quantinuum, QuEra, and other companies now routinely do experiments with dozens or even hundreds of entangled qubits, and thousands of two-qubit gates. Topological qubits can win if, and only if, they turn out to be so much more reliable that they leapfrog the earlier approaches—sort of like the transistor did to the vacuum tube and electromechanical relay. Whether that will happen is still an open question, to put it extremely mildly."
money quote:
The quote that struck me was
> I foresee exciting times ahead, provided we still have a functioning civilization in which to enjoy them.
If you are shocked by this, I suggest not reading his other recent topics.
There seems to be a bit of a disconnect between the first and the second sentence (to my completely uneducated mind).
If topological qubits turn out to be so much more reliable then it doesn't really matter how much time was spent trying to make other types of qubits more reliable. It's not really a head start, is it?
Or are there other problems besides preventing unwanted decoherence that might take that much time to solve?
The point, I think, is this: if topological qubits are merely similar to other types of qubits, then investing in them is going to be disappointing, because the other approaches have so much more work put into them.
So he is saying that this approach will only pay off if topological qubits are a fundamentally better approach than the others being tried. If they turn out to be, say, merely twice as good as trapped-ion qubits, they'll still only reach the achievements of current trapped-ion designs with another, say, 10-15 years of continued investment.
Yeah, I mean, that's exactly what MS are talking about - only requiring 1/20th of the error-correction qubits, or something.
https://www.ft.com/content/a60f44f5-81ca-4e66-8193-64c956b09...
What Microsoft claims in the marketing copy reported by the FT (aimed at the average reader) and what a third-party, well-known expert in the field thinks are on very different levels, AFAIC.
Microsoft is saying: we did it!
Everyone else is saying: prove it!
A very important statement is in the peer review file that everyone should read:
"The editorial team wishes to point out that the results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices. The work is published for introducing a device architecture that might enable fusion experiments using future Majorana zero modes."
https://static-content.springer.com/esm/art%3A10.1038%2Fs415...
Thanks for your interest. I'm part of the Microsoft team. Here are a few comments that might be helpful:
1) The Nature paper just released focuses on our technique of qubit readout. We interpret the data in terms of Majorana zero modes, and we also do our best to discuss other possible scenarios. We believe the analysis in the paper and supplemental information significantly constrains alternative explanations but cannot entirely exclude that possibility.
2) We have previously demonstrated strong evidence of Majorana zero modes in our devices, see https://journals.aps.org/prb/pdf/10.1103/PhysRevB.107.245423.
3) On top of the Nature paper, we have recently made additional progress, which we just shared with various experts in the field at the Station Q conference in Santa Barbara. We will share more broadly at the upcoming APS March meeting. See also https://www.linkedin.com/posts/roman-lutchyn-bb9a382_interfe... for more context.
>signal-to-noise ratio of 1
Hmmm.. appreciate the honesty :)
That's from the abstract of the upcoming conference talk (Mar 14):
>Towards topological quantum computing using InAs-Al hybrid devices
Presenter: Chetan Nayak (Microsoft)
The fusion of non-Abelian anyons is a fundamental operation in measurement-only topological quantum computation. In one-dimensional topological superconductors, fusion amounts to a determination of the shared fermion parity of Majorana zero modes. Here, we introduce a device architecture that is compatible with future tests of fusion rules. We implement a single-shot interferometric measurement of fermion parity in indium arsenide-aluminum heterostructures with a gate-defined superconducting nanowire. The interferometer is formed by tunnel-coupling the proximitized nanowire to quantum dots. The nanowire causes a state-dependent shift of these quantum dots' quantum capacitance of up to 1 fF. Our quantum capacitance measurements show flux h/2e-periodic bimodality with a signal-to-noise ratio of 1 in 3.6 microseconds at optimal flux values. From the time traces of the quantum capacitance measurements, we extract a dwell time in the two associated states that is longer than 1 ms at in-plane magnetic fields of approximately 2 T. These measurements are discussed in terms of both topologically trivial and non-trivial origins. The large capacitance shift and long poisoning time enable a parity measurement with an assignment error probability of 1%.
As the recent results from CS and math on the front pages have shown, one doesn't have to be unknown or underfunded in order to produce verifiable breakthroughs, but it might help..
Seems like John Baez didn't notice those lines in the peer review either
https://mathstodon.xyz/@johncarlosbaez/114031919391285877
TIL: read the peer review first
Wait so this tech just...doesn't work yet? Like at all?
Microsoft claims that it works. However, the Nature reviewers apparently do not yet feel comfortable vouching for this claim.
It's worse, it will (likely) never work at all.
Another recent writeup that adds some nuance to this (and other claims), summarizing the quantum-skeptic positions:
https://gilkalai.wordpress.com/2025/02/17/robert-alicki-mich...
I think that Kalai here is very seriously understating how fringe/contrarian his views are. He's not merely stating that there's too much optimism about potential future results, or that there's some kind of intractable theoretical or practical bottleneck that we'll soon reach and won't be able to overcome. He's saying that any kind of quantum advantage—a thing that numerous experiments, from different labs in academia and industry, using a wide variety of approaches, have demonstrated over the past decade—is impossible, and therefore all of those experimental results were wrong and need to be retracted. His position was scientifically respectable back when the possibility he was denying hadn't actually happened yet, but I don't think it is anymore.
I think what many people are missing in the discussion here is that topological qubits are essentially a different type of component. This is analogous to the relay-triode-transistor technology progression.
It is still speculation whether the topological approach will be effective, but there are significant implications if it is. Scalability, reliability, and speed are all on the table here.
While the other technologies have a significant head start, much of that "head start" is transferable knowledge, similar to the relay-triode-transistor-integrated-circuit progression. Each new component type multiplies the effectiveness of the advances made by the previous generation of technologies; it doesn't start over.
IF topological qubits can be made reliable and they live up to their scalability promises, it COULD be a revolutionary step, enabling exponential gains in cost, scalability, and capability. IF.
Recent and related:
Microsoft unveils Majorana 1 quantum processor - https://news.ycombinator.com/item?id=43104071 - Feb 2025 (150 comments)
Topological analysis shows that, under exchanges of identical particles, multiple distinct pathways exist in 2D (in 3D the topology does not have distinct pathways for these exchanges). This permits real anyon particles to exist when the physics is confined to 2D within quantum limits, such as in a layer of graphene. Certain configurations of layers ("moiré materials") can be made periodic to provide a lattice of suitable scale for anyons to localize and adopt particular quantum states.
Anyons lie somewhere between fermions and bosons in their state occupancy and statistics: no two fermions may occupy the same state, bosons can all occupy the same state, and anyons follow rational-number patterns, e.g. up to 2 anyons can occupy 3 states.
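For concreteness (this is the standard textbook formulation, not something from the comment above): exchanging two identical particles multiplies the two-particle wavefunction by a phase,

    psi(x2, x1) = exp(i*theta) * psi(x1, x2)

with theta = 0 for bosons and theta = pi for fermions. In 2D any intermediate theta is allowed, which is what "anyon" means; non-Abelian anyons (the kind relevant to topological qubits) replace the phase by a unitary matrix acting on a degenerate set of states, so the order of exchanges matters.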
I enjoy the quality of "it's too early to say" in Aaronson's writing. It won't stop share price movement or hopeless optimism amongst others.
I do wonder if he is running a simple first-order differential on his own beliefs. He certainly has the chops here, and some self-introspection on the trajectory of his highs and lows, and where they're trending, would interest me.
A bit off-topic: I really like Scott Aaronson and his blog, but hate the comment section. He engages a lot with the comments (which is great!), but it's really hard to follow, as each comment is presented as a new message.
I made this small, silly Chrome extension to restructure the comments into a more readable format, if anyone is interested:
https://github.com/eliovi/shtetl-comment-optimized
I find the opposite: he often makes some ridiculous claim in the post, the comments (the ones he lets through) rightly point out how wrong he was, and then he cherry-picks and engages one of the more outrageous comments, so a superficial observer is left with the impression that the original claim was OK.
I remain curious if you can actually calculate anything with these gadgets? I mean can it add 2 and 2 or work out the factors of 30 or anything?
This experiment only created one qubit, so no.
The experiments with lots of qubits... technically, yes, they can do things. I think the factoring record is 21. But you might be disappointed a) when you see that most algorithms using quantum computers require conventional computation to transform the problem before and after the quantum steps, b) when you learn we only have a few quantum algorithms - they are not general calculation machines - and c) when you look under the hood and see that the error-correction machinery makes it actually kind of hard to tell how much is really being done by the actual quantum device.
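To make point (a) concrete, here is a minimal Python sketch (my own illustration, not anything from the paper or this thread) of the classical wrapper around Shor's algorithm for N = 21. Only the order-finding step is quantum; here it is replaced by a classical brute-force stand-in so the surrounding classical pre- and post-processing is visible:

    from math import gcd
    from random import randrange

    def find_order(a, n):
        # Stand-in for the quantum subroutine: on a real quantum computer this
        # order-finding step would use phase estimation; here we brute-force it.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical_wrapper(n):
        # Classical pre- and post-processing around the (quantum) order-finding core.
        while True:
            a = randrange(2, n)
            d = gcd(a, n)
            if d > 1:
                return d, n // d          # lucky: a already shares a factor with n
            r = find_order(a, n)
            if r % 2:                     # need an even order, otherwise retry
                continue
            y = pow(a, r // 2, n)
            if y == n - 1:                # trivial square root, retry with another a
                continue
            p = gcd(y - 1, n)
            if 1 < p < n:
                return p, n // p

    print(shor_classical_wrapper(21))     # e.g. (3, 7)

Everything above except find_order runs on an ordinary CPU; the quantum device would only ever see the order-finding subproblem.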
Thanks for the reply. I've always been a bit puzzled, from my limited knowledge of quantum mechanics, as to how they are supposed to work. I mean, you make a measurement on a quantum system, and sure, the probability amplitude is the result of adding up all sorts of possible paths, but you still only get the one measurement out, and I'm not sure how that's supposed to tell you much. All a bit beyond me.
We should celebrate this for what it is: another brick in the wall that we’re building to achieve practical quantum computing systems.
That's the best-case scenario. It remains possible that topological qubits, even if they are theoretically achievable, will turn out to be an engineering dead end. Presumably the competing quantum computing labs think this is likely, since they're not working on topological qubits; only Microsoft thinks they'll end up being important.
Yes, just like putting two bricks onto each other is a first step to the moon.
I wonder if this means that AI will have more capabilities with quantum computing.
So far, I haven't read how those chips are programmed, but it seems like it requires relearning almost everything.
I don't even know if there is an OS for those.
So far, the only known algorithm relevant to AI that would run faster on a (theoretical) quantum computer is unstructured search, where quantum computers offer a modest speedup: linear search is O(n) on a classical computer, while Grover's algorithm is O(sqrt(n)) on a quantum computer. This means that for a list of a million elements, you could scan it in about 1,000 steps on a quantum computer instead of 1,000,000 steps on a classical one.
However, even this is extremely theoretical at this time: no quantum computer built so far can execute Grover's algorithm at useful sizes - they are not reliable enough to get any result with probability higher than noise, and in any case can't apply the number of steps required for even a single pass without losing coherence. So we are still very, very far away from a quantum computer that could reach anything like the computing performance of a single consumer-grade GPU. We're actually very far away from a quantum computer that even reaches the performance of a hand calculator at this time.
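To illustrate the quadratic speedup discussed above, here is a tiny NumPy statevector sketch of Grover's algorithm on an 8-element search space (my own toy illustration - a classical simulation, not something any current device runs):

    import numpy as np

    N = 8                  # search space of 8 items (3 qubits' worth of states)
    marked = 5             # index of the single marked item

    # Start in the uniform superposition over all N basis states.
    state = np.full(N, 1 / np.sqrt(N))

    # Oracle: flip the phase of the marked state.
    oracle = np.eye(N)
    oracle[marked, marked] = -1

    # Diffusion operator: reflect the amplitudes about their mean.
    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

    # ~(pi/4) * sqrt(N) iterations are optimal for a single marked item.
    iterations = int(np.pi / 4 * np.sqrt(N))
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)

    print(iterations, "iterations; P(marked) =", round(float(state[marked] ** 2), 3))
    # -> 2 iterations; P(marked) = 0.945

The same sqrt(N) scaling at a million items is what gives the roughly 1,000-step figure mentioned above.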
Pure Quantum Gradient Descent Algorithm and Full Quantum Variational Eigensolver https://arxiv.org/abs/2305.04198
https://en.wikipedia.org/wiki/Quantum_optimization_algorithm...
There is not an "OS" or anything even remotely like it. For now these things behave more like physics experiments than computers.
You can play around with "quantum programming" through (e.g.) some of IBM's offerings, and there has been work on quantum programming languages like Q# from Microsoft, but it's unclear (to me) how useful these are.
That's not the way to think about quantum computing, AFAIK.
Think of these as accelerators you use to run some specific algorithm, the result of which your "normal" application uses.
More akin to GPUs: your "normal" applications running on "normal" CPUs offload some specific computation to the GPU and then use the result.
> "an OS for those"
Or at least an OS driver for the devices supporting quantum computing if/when they become more standard.
Other than fast factorization and linear search, is there anything that Quantum Computing can do? Those do seem important, but limited in scope - is this a solution in search of a problem?
I've heard it could get us very accurate high-detail physics simulations, which has potential, but don't know if that's legit or marketing BS.
Hey, anyons in a 2D electron gas. Wrote about it a while ago and got downvoted!
I had a thought while reading this:
Are we, in fact, in the very early stages of gradient descent toward what I want to call "software defined matter?"
If we're learning to make programmable quantum physics experiments and use them to do work, what is that the very beginning of? Imagine, say, 300 years from now.
TVs: displays already are software-defined matter. But ya.
This seems like quite a bold claim, that Microsoft proved that neutrinos are Majorana particles...
Huh? This has nothing to do with neutrinos.
The chip is literally called Microsoft Majorana 1
Indeed, Majorana fermions are completely unseen/unconsidered outside of neutrinos. In fact, all Standard Model fermions except neutrinos are known to be Dirac fermions.