The underlying argument this article seems to be making is that an appropriate algorithm for any given application isn't always the one with the most efficient asymptotic performance for sufficiently large n -- for a given application (in this case routing), we have data on typical values of n that appear in reality and we can choose an algorithm that offers good enough (or optimal) performance for n in that constrained range, as well as potentially having other benefits such being simpler to implement correctly in a short amount of engineering time.
This argument is very much in line with Mike Acton's data-oriented design philosophy [1] -- understand the actual specific problem you need to solve, not the abstract general problem. Understand the statistical distribution of actual problem instances; they'll have parameters in some range. Understand the hardware you're writing software for; understand its finite capacity & capability.
It's common that new algorithms or data-structures with superior asymptotic efficiency are less performant for smaller problem sizes vs simpler alternatives. As always, it depends on the specifics of any given application.
[1] see Mike's CppCon 2014 talk "Data-Oriented Design and C++" https://www.youtube.com/watch?v=rX0ItVEVjHc
> understand the actual specific problem you need to solve, not the abstract general problem. Understand the statistical distribution of actual problem instances; they'll have parameters in some range. Understand the hardware you're writing software for; understand its finite capacity & capability.
Very much in line with what James Coplien and colleagues described with "Commonality and Variability Analysis" and "Family-oriented Abstraction, Specification, and Translation" (FAST) for software engineering in the '90s. Coplien's PhD thesis, Multi-Paradigm Design, and his book Multi-Paradigm Design for C++ are based on this approach.
Commonality and Variability in Software Engineering (pdf) - https://www.dre.vanderbilt.edu/~schmidt/PDF/Commonality_Vari...
Multi-Paradigm Design (pdf of PhD thesis) - https://tobeagile.com/wp-content/uploads/2011/12/CoplienThes...
I came to realize that logs don't matter. O(log(n)) and O(1) are effectively the same thing.
The reason is that real computers have memory, accessing memory takes time, and bigger n needs more memory, simply to store the bits that represent the number. Already, we have a factor of O(log(n)) here.
But also each bit of memory takes physical space, and we live in a 3D world, so, best case, on average, the distance to a memory cell is proportional to the cube root of the number of memory cells we will have to access. And since the speed of light is finite, it is also a cube root in time.
So "constant time" is actually cube root on a real computer. And I am generous by saying "real". Real real computers have plenty of stuff going on inside of them, so in practice, complexity analysis at the O(log(n)) level is meaningless without considering the hardware details.
Going from log(n) to log^(2/3)(n) is just an exercise in mathematics; it may lead to practical applications, but by itself it doesn't mean much for your software.
>logs don't matter. O(log(n)) and O(1) are effectively the same thing.
Ever heard of a hashtable? That's because O(1) is better than O(log(n)). If they were the same, you would only have heard of binary search.
Most of this is very true, except for one caveat I'll point out: a space complexity of O(P(n)) for some function P implies at least cube-root(P(n)) time complexity, but many algorithms don't have high space complexity. If you have constant space complexity, this doesn't factor into time complexity at all. Some examples: exponentiation by squaring, Miller-Rabin primality testing, Pollard's rho factorization, etc.
Of course, if you include the log(n) bits required just to store n, then sure, you can factor the log of the cube root of n into the time complexity, but that's just log(n) / 3, so the cube root doesn't matter here either.
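To make the constant-space point concrete, here's a minimal sketch of exponentiation by squaring (illustrative Python, not production code): O(log(exp)) multiplications, while the working set stays the same size no matter how large the exponent gets.

    def pow_mod(base, exp, mod):
        # Square-and-multiply: O(log(exp)) multiplications, O(1) extra space
        # beyond the operands themselves.
        result = 1
        base %= mod
        while exp > 0:
            if exp & 1:                   # low bit set: fold this power of base in
                result = (result * base) % mod
            base = (base * base) % mod    # square for the next bit
            exp >>= 1
        return result

    # A single Fermat-style check, the building block of Miller-Rabin.
    # (561 is a Carmichael number, so this one check alone proves nothing.)
    assert pow_mod(2, 560, 561) == 1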
> Already, we have a factor of O(log(n)) here.
Doesn’t that mean that O(log(n)) is really O(log²(n))?
Each time a discussion about sorting starts, I'm reminded of a "lively debate" I had with my boss/friend about the optimal sorting approach. He claimed it's O(n), pointing to counting sort as an example. This didn't sit well with me. A sorting algorithm, I insisted, should be defined as something like "a function taking an unsorted array of elements and returning a sorted one". But it seems there is no agreed-upon definition, and you can have something like counting sort where you get an array in O(n) but still need O(m) to get the index of a specific element. I argued that then the best sorting algorithm is the do-nothing algorithm: it returns the initial array in O(1), but needs O(m log m) to get you the index of an element (it uses merge sort to do it).
Usually when one talks about sorting without specifying further, one means comparison sorting [1], which indeed has an average-case lower bound of Ω(n log n). In more specialized cases, all kinds of other runtimes are possible.
1: https://en.wikipedia.org/wiki/Comparison_sort
Counting sort is O(nW), where W is the largest value.
If you don't care about W, or it is essentially constant, then it can be dropped,
but it is an input parameter that will change execution time.
It's O(n+W), not O(n*W).
> if you don't care about W or it is essentially constant - then it can be dropped
Also, every algorithm that ends up running on a real computer is bounded by a constant time, since the inputs it can actually handle are finite. That's still not a practical way to analyze it.
W is the span or range.
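Putting the corrections above together, a minimal counting-sort sketch (illustrative Python, assuming small non-negative integer keys) that shows where the n and the W come from:

    def counting_sort(xs):
        # One O(n) pass to tally, one O(W) pass over the value range,
        # O(n) total emitted: O(n + W) time and O(W) extra space,
        # where W is the largest value (the span of the keys).
        if not xs:
            return []
        counts = [0] * (max(xs) + 1)
        for x in xs:                          # O(n)
            counts[x] += 1
        out = []
        for value, c in enumerate(counts):    # O(W) bucket walk
            out.extend([value] * c)           # O(n) total across the whole loop
        return out

    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]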
I can’t think of a single time I’ve needed a sorted list of only numbers. It’s always numbers and something else, like names or dates. Maybe for median calculations, but I don’t even use those that much either. Especially in telemetry, where mean is easy and median is not.
To be pedantic, median is cheaper than sorting: expected O(n) with quickselect, a quicksort-like algorithm.
Also, if you're taking an average of floating point numbers, you might want to sort them first and add from smallest to largest, to better preserve precision.
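For the curious, a minimal quickselect sketch (illustrative Python; expected O(n), worst case O(n^2) without a median-of-medians pivot):

    import random

    def quickselect(xs, k):
        # Return the k-th smallest element (0-based), expected O(n).
        xs = list(xs)
        while True:
            if len(xs) == 1:
                return xs[0]
            pivot = random.choice(xs)
            lo = [x for x in xs if x < pivot]
            eq = [x for x in xs if x == pivot]
            if k < len(lo):
                xs = lo                        # answer lies in the smaller part
            elif k < len(lo) + len(eq):
                return pivot                   # answer equals the pivot
            else:
                k -= len(lo) + len(eq)
                xs = [x for x in xs if x > pivot]

    def median(xs):
        # Lower median for even-length input; fine for telemetry-style use.
        return quickselect(xs, (len(xs) - 1) // 2)

    print(median([5.0, 1.0, 9.0, 3.0, 7.0]))   # 5.0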
If the primary key is the number, it still works (and dates are just numbers, by the way) because you can sort a heterogeneous dataset by a single numeric key pretty trivially.
But sorting by arbitrary strings like names can’t avoid comparison sort.
"ordering" means arranging things in order by some metric.
"sorting" means assigning things into bins (which are usually ordered).
This is news to me. Source?
You reminded me of "Sleep Sort" [0]
[0] https://news.ycombinator.com/item?id=2657277
I don't know if Sleep Sort even counts as a valid sorting algorithm? Is this even real?
Obligatory mention of the linear time analog sorting algorithm known as the “spaghetti algorithm”.
https://en.wikipedia.org/wiki/Spaghetti_sort
https://every-algorithm.github.io/2023/11/25/spaghetti_sort....
The equivalent for positive link weight shortest paths is the “string algorithm” — build the network out of string and pull taut between the two knots/nodes, the resulting set of taut links is the shortest path.
Deja Vu.
I read this article a few days ago I'm sure, word-for-word, but it wasn't on this site in OP? It stood out because when it mentioned textbooks and said "including ours" I looked at the site and thought to myself "they do textbooks?".
> and thought to myself "they do textbooks?".
Indeed: https://systemsapproach.org/books-html/
If you are cheap on money, but you do have time, and like to get into networking, I can only highly recommend https://book.systemsapproach.org/
The link to https://book.systemsapproach.org/internetworking/routing.htm... is in the first paragraph.
This submission was made three days ago and bumped two hours ago: https://news.ycombinator.com/submitted?id=drbruced
Interestingly sorting is O(N) for a surprisingly large class of datatypes. Anything that behaves well with lexicographic sorting really.
Supposing one uses a 'trie' as a priority queue, the inserts and pops are effectively constant.
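A toy version of that idea, as I read it (illustrative Python, fixed-width integer keys only): a bitwise trie where insert and pop-min each touch at most WIDTH nodes, so they're constant in n as long as the key width is bounded.

    WIDTH = 32   # fixed key width; "effectively constant" only because this is bounded

    class BitTrie:
        # Bitwise trie over fixed-width integer keys, used as a min-priority queue.
        # Internal nodes map bit -> child dict; leaves hold a duplicate counter.
        def __init__(self):
            self.root = {}
            self.size = 0

        def insert(self, key):
            node = self.root
            for i in range(WIDTH - 1, -1, -1):       # walk bits, most significant first
                node = node.setdefault((key >> i) & 1, {})
            node['count'] = node.get('count', 0) + 1
            self.size += 1

        def pop_min(self):
            if self.size == 0:
                raise IndexError("pop from empty trie")
            node, path, key = self.root, [], 0
            for _ in range(WIDTH):
                bit = 0 if 0 in node else 1          # always take the smaller branch
                path.append((node, bit))
                node = node[bit]
                key = (key << 1) | bit
            node['count'] -= 1
            self.size -= 1
            if node['count'] == 0:                   # unlink empty nodes bottom-up
                del node['count']
                for parent, bit in reversed(path):
                    if not parent[bit]:
                        del parent[bit]
                    else:
                        break
            return key

    pq = BitTrie()
    for x in [13, 7, 7, 42, 0]:
        pq.insert(x)
    print([pq.pop_min() for _ in range(pq.size)])    # [0, 7, 7, 13, 42]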
Only if you assume a finite alphabet and bounded length. Relax either and you're back to O(n log n) for a fully general solution. Examples of both: tuples and strings.
(There's also the problem of how you define your computational model. You can do better than O(n log n) in transdichotomous models. I'm assuming the hand-wavy, naive model the average algorithms class goes along with.)
> Only if you assume a finite alphabet and bounded length
You can generally reduce the problem to a finite alphabet by taking the finite subset that actually appears in the input.
If you have an unbounded length then you can make sorting O(l n), where `l` is a bound on the lengths of your input. It's still linear in n, and also better than the O(l n log n) you would get with traditional comparison-based algorithms once you factor in the O(l) complexity of the comparison function for such elements.
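A minimal sketch of that kind of sort (illustrative Python): MSD-radix-style bucketing on strings, where each character is examined at most once and the per-level bucket walk relies on the finite-subset-of-the-alphabet trick mentioned above.

    from collections import defaultdict

    def msd_sort(strings, depth=0):
        # Bucket by the character at `depth`, then recurse into each bucket.
        # Each character of each string is inspected at most once overall,
        # so the work is roughly O(l * n) for n strings of length <= l.
        if len(strings) <= 1:
            return list(strings)
        done, buckets = [], defaultdict(list)
        for s in strings:
            if len(s) == depth:
                done.append(s)                    # exhausted strings sort first
            else:
                buckets[s[depth]].append(s)
        out = done
        for ch in sorted(buckets):                # only characters that actually occur here
            out.extend(msd_sort(buckets[ch], depth + 1))
        return out

    print(msd_sort(["banana", "band", "ban", "apple", "bandana"]))
    # ['apple', 'ban', 'banana', 'band', 'bandana']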
Both are true in practice, so not unreasonable. For graph weights that is, not sorting.
That said the finite alphabet and bounded length requirements can be softened a bit. Even for general sorting algorithms.
I mean, for the kind of lexicographically sortable data we're talking about, you can basically pick a convenient alphabet size without cost.
And unbounded length is not that big an obstruction. Sure, you are going to need O(n log(n)) comparisons. But you can't compare data of unbounded length in constant time anyway. In the end you end up taking an amount of time that is at least proportional to the amount of data, which is optimal up to a constant factor. And if you fiddle with radix sort enough, you can get it to within something similar.
Basic ASCII strings and tuples aren't that big an obstruction. Fractions are more complicated.
Really, the O(n log(n)) for comparison-based sorts and the O(N) for radix sort mean different things. One is the number of comparisons relative to the number of elements, and the other is closer to the number of operations per amount of data. Though that assumes O(1) swaps, which is technically incorrect for data that doesn't fit in a 64-bit machine word.
In the most extreme case of this, you can use Counting Sort. Tangential to this, Spaghetti Sort makes me wonder about which parts of CS (especially the data structures, like arrays) are objective or just accidental.
The transdichotomous model looks interesting.
Should have a more conspicuous link to the Quanta article that it references if not the original. (It’s there, just not on the top.)
https://www.quantamagazine.org/new-method-is-the-fastest-way...
https://arxiv.org/abs/2504.17033
He links to those in the first two sentences. How is that not "on the top"?
am I missing something?
this was a lot of words that sum up to "I heard that new algorithm exists but spent zero effort actually evaluating it"
No, I don't think you're missing anything. He never answered the title of the post ("Faster Than Dijkstra?"). Instead he went on a huge tangent about his experience writing software for routers and is dismissive of the algorithm because the router problem space he was working in did not deal with a node count high enough to warrant a more complex algorithm. Dijkstra's algorithm is used for problem spaces with a far higher number of nodes than he mentions... basically an article that talks about some interesting things but doesn't say much about its central question.
What I'm missing is certainly what the hell the algorithm even is and what is its complexity. This guy just rambles about old switches.
> What I'm missing is certainly what the hell the algorithm even is and what is its complexity.
https://arxiv.org/pdf/2504.17033 - Linked from the second sentence of the submission, not hard to track down. And the complexity (by which I presume you mean algorithmic complexity) is stated in the submission and in the PDF linked by the submission and that I just shared with you.
Intriguing article. Sometimes practical issues override theoretical ones, and it would be interesting to see which one dominates in networking.
(side note: does anyone else get thrown off by the Epilogue font? It looks very wide in some cases and very narrow in others... makes me want to override it with Stylus if my employer hadn't blocked browser extensions for security reasons, which raises the question of why I am even reading the article on this computer....)
Graph Theory and AI have been forever linked because it turns out the human brain is ridiculously good at finding a path through a graph that is somewhere in the neighborhood of 97% optimal, and we can do it in minutes while a computer would take weeks or until the heat death of the universe to do it better.
It's vexatious how good we are at it, and it's exactly the sort of problem that Science likes. We know it's true, but we can't reproduce it outside of the test subject(s). So it's a constant siren song to try to figure out how the fuck we do that and write a program that does it faster or more reliably.
Traveling Salesman was the last hard problem I picked up solely to stretch my brain and I probably understand the relationship between TSP and linear programming about as well as a Seahawks fan understands what it is to be a quarterback. I can see the bits and enjoy the results but fuck that looks intimidating.
>while a computer would take weeks or until the heat death of the universe to do it better.
I don't buy this, approximation algorithms are an entire field of CS, if you're OK with an approximate solution I'm sure computers could do that quickly as well.
standard algorithms for single-source/single-dest pathfinding scale log-linearly in the size of the graph, so compared to other combinatorial optimisation problems, optimal pathfinding is incredibly easy for computers to do & scales pretty well to industrial-sized problems. computers can also do optimal pathfinding for problems that humans would not be able to solve easily (because the graphs don't easily embed in 2d or 3d, say, so we can't bring our vision systems to bear)
other combinatorial optimisation problems - like the traveling salesman you mention - are much harder than pathfinding to solve optimally or even approximately
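For reference, the textbook binary-heap Dijkstra being described above, as a minimal sketch (illustrative Python): O((n + m) log n) with non-negative edge weights.

    import heapq

    def dijkstra(graph, source):
        # graph: {node: [(neighbor, weight), ...]} with non-negative weights.
        # Returns {node: shortest distance} for everything reachable from source.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                          # stale heap entry, node already settled
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(g, "a"))                       # {'a': 0, 'b': 1, 'c': 3}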
I can only recommend (for all Germans here) this video from "dorfuchs":
https://youtu.be/3ge-AywiFxs?si=TbcRsBNkzGhpOxQ4&t=842
(Timestamped.) He shows a derivation that the best a sorting algorithm can do is O(n log(n)) for n real positive numbers.
From who now?
Many years ago, as an undergrad, I had a conversation with a grad student friend about the Selection algorithm (which will find the kth largest item in an unsorted list in O(n) time). I loved it, but when I tested it in practice it was slower than just sorting and selecting well into the billions of elements.
My friend said "that may be true, but consider the philosophical implication: because that algorithm exists, we know it's possible to answer the question in O(n) time. We once didn't know that, and there was no guarantee that it was possible. We might still be able to find a better O(n) algorithm."
I feel the same way about this. Sure, this might not be faster than Dijkstra's in practice, but now we know it's possible to do that at all.
I struggle to see the point. The paper in question doesn't claim to be practically faster or to want to "replace" Dijkstra; they are just saying "we got a better big-O", and I don't see any reason to think they're wrong about that.
It's actually common for algorithms with a lower asymptotic complexity to be worse in practice, a classic example is matrix multiplication.
Also please, please, can we stop with the "eww, math" reactions?
> The new approach claims order (m log^(2/3) n) which is clearly going to be less for large enough n. (I had to take a refresher course on log notation before I could even write that sentence with any confidence.)
I'm sure the author is just exaggerating, he's clearly very competent, but it's a sentence with the vibes of "I can't do 7x8 without a calculator."
The Quanta article on the paper was considerably more breathless in describing a fine piece of work in mathematics. The author here points out that one of the things that makes Dijkstra's result iconic is that it could be used practically in a straightforward way. As an engineer, beautiful mathematics is useless if I can't convert it to running code.
Actually there's a whole bunch of mathematics which I find useful as an engineer because it tells me that the perfection I have vaguely imagined I could reach for is literally not possible and so I shouldn't expend any effort on that.
E.g., for two-body gravity I can just do the math and get exact answers out. But for N > 2 bodies that doesn't work, and it's not that I need to think a bit harder or crack open some graduate textbooks to find a formula; if I did, hopefully the grad books would say "the three-body problem is generally not amenable to a closed-form solution". I will need to use an approximation; exact answers are not available (except in a few edge cases).
More important is that the new algorithm has a multiplicative factor in m (edges), so it's only efficient for extremely sparse graphs.
If m > n (log n)^{1/3}, then this algorithm is slower.
For 1 million nodes, if the average degree is > 3.5, the new algorithm has worse complexity (ignoring unstated constant factors).
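A quick numeric sanity check of that crossover (illustrative Python, ignoring all constant factors; the exact threshold shifts depending on the log base and on which Dijkstra bound you compare against, so read 3.5 as "a small single-digit average degree"):

    import math

    n = 1_000_000
    for avg_degree in (2, 3, 3.5, 5, 10):
        m = avg_degree * n
        new_bound = m * math.log2(n) ** (2 / 3)   # m * log^(2/3) n
        dijkstra = m + n * math.log2(n)           # Fibonacci-heap Dijkstra: m + n log n
        print(f"deg {avg_degree:>4}: new {new_bound:.2e} vs Dijkstra {dijkstra:.2e}")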
"Any sufficiently sparse graph is indistinguishable from a linked list" comes to mind ;)
Yeah, just based on this article that really stood out. It seems to be for a different use-case than Dijkstra's. An average degree of 3.5 seems like an extremely practical and useful use-case in real life, so I just don't see any reason to put it and Dijkstra's against each other in a head-to-head comparison.
I read it as a musing on the folly of improvements that don’t deliver benefits within the practical bounds of actual problems. Which is a lesson seen everywhere in physical systems and manufacturing. Is it an amazing insight? No, but it’s a lesson that is relearned by everyone several times.
> I struggle to see the point. The paper in question doesn't claim to be practically faster...
I struggle to see the point of your comment. The blog post in question does not say that the paper in question claims to be faster in practice. It simply is examining if the new algorithm has any application in network routing; what is wrong with that?