Comment by bbrubaker

17 days ago

I'm the author of this article. If you ask a complexity theorist, they will tell you that they did in fact have a general intuition that certain problems require space close to linear in their runtime to solve (see e.g., Ryan's comment #22 on Scott Aaronson's blog post about the result: https://scottaaronson.blog/?p=8680, and the comments after that). The most intuitive way to see this is in a circuit/DAG picture, where the goal is to get from the input nodes of the graph to the output nodes. Some graphs are very "wide": cut the graph at some intermediate point, and there will be a lot of edges crossing the cut, each of which represents some information from an earlier stage in the computation that you'll need to remember to get to the output. Ryan's first result is a general-purpose method for doing any computation, even ones whose graph structure looks like this, in asymptotically far less space. That is precisely what makes the result so surprising!
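To make the "wide graph" intuition concrete, here's a toy sketch (all parameters invented, purely illustrative): a layered DAG where every gate in one layer reads all the outputs of the previous layer, so any cut between layers is crossed by many wires, and a straightforward evaluation has to remember one value per wire on the frontier.

```python
import random

# Toy "wide" computation DAG: L layers of W gates; every gate in layer
# i reads all W outputs of layer i-1. W and L are invented numbers.
W, L = 8, 6
random.seed(0)
weights = [[[random.choice([0, 1]) for _ in range(W)]
            for _ in range(W)] for _ in range(L - 1)]

def evaluate(inputs):
    # Straightforward evaluation keeps one value per gate in the
    # current layer: memory proportional to the cut width W, since all
    # W wires cross any cut placed between two layers.
    frontier = list(inputs)
    for layer in weights:
        frontier = [sum(w * v for w, v in zip(gate, frontier)) % 2
                    for gate in layer]
    return frontier

out = evaluate([1] * W)
print(len(out))  # frontier size = W = 8 values held at any one time
```

The surprise in Williams' result is that even computations shaped like this can be simulated in asymptotically much less space than the naive frontier-at-a-time evaluation suggests.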

My article was quite explicit in multiple places that the universal/comprehensive character of the result was that counterintuitive part:

- In the first paragraph: "memory was more powerful than computer scientists believed: A small amount would be as helpful as a lot of time in all conceivable computations."

- Further down in the introduction, in the passage you quoted: "Until now, the only known algorithms for accomplishing certain tasks required an amount of space roughly proportional to their runtime, and researchers had long assumed there’s no way to do better. Williams’ proof established a mathematical procedure for transforming any algorithm — no matter what it does — into a form that uses much less space."

- In the third section, I explicitly state that researchers do believe space is more powerful than time in the specific sense that you're criticizing my article for misrepresenting: "But complexity theorists suspect that PSPACE is a much larger class, containing many problems that aren’t in P. In other words, they believe that space is a far more powerful computational resource than time. This belief stems from the fact that algorithms can use the same small chunk of memory over and over, while time isn’t as forgiving — once it passes, you can’t get it back."

- In the fourth section, I explain why researchers didn't think the HPV75 result could be improved further, despite their intuition that space is more powerful than time in the above sense: "While many problems can be solved with much less space than time, some intuitively seemed like they’d need nearly as much space as time."

TCS (and complexity theory specifically) is a complicated subject. I spend a lot of time interviewing researchers and thinking about how to distill the results of my reporting into a form that is accessible to readers with widely varying levels of familiarity with the subject matter. You are of course well within your rights to critique my stylistic choices, the narrative aspects of the story, and the order in which I presented information, but I will push back against the claim that my article is spreading misinformation about complexity theory. You're referring to a misconception that arises, by your own admission, when you don't read carefully. If it's the headline you object to, you could lodge a similar complaint against the complexity theorist Lance Fortnow: https://blog.computationalcomplexity.org/2025/02/you-need-mu....

Thanks for the detailed response to that comment. I just read and enjoyed your article, and your response seems accurate.

I can imagine it's tough to put that much effort into communicating something complex to a wide audience, only to have a bunch of very smart people here attempt to tear it apart.

  • Thanks for the kind words! I do get lots of gratifying positive feedback as well. I don't make a habit of arguing with strangers online, but I felt obliged to correct the record here for anyone who might encounter this later and come away thinking "wow, I can't believe Quanta got it completely backward."

    Here on HN people often complain about the level of detail, which is fair! But I think they are often falling prey to a common fallacy about conditional probabilities. P("X reads Quanta"|"X has some technical background (college STEM major or more)") is likely larger than it is for most popular science magazines. But P("Y has some technical background"|"Y reads Quanta") is much lower than many people realize. There is a limit on how much technical stuff I can put in an article and still have it be accessible to many of our readers, and I care a lot about making things accessible.
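    The flipped-conditional point can be sketched with Bayes' rule (all numbers below are invented for illustration): even if technical people are ten times more likely than average to read the magazine, most readers can still be non-technical, simply because non-technical people vastly outnumber technical ones.

    ```python
    # Hypothetical numbers, for illustration only.
    p_technical = 0.05            # P(technical background) in the population
    p_read_given_tech = 0.20      # P(reads magazine | technical)
    p_read_given_nontech = 0.02   # P(reads magazine | not technical)

    # Total probability of being a reader, then Bayes' rule to flip
    # the conditional.
    p_read = (p_read_given_tech * p_technical
              + p_read_given_nontech * (1 - p_technical))
    p_tech_given_read = p_read_given_tech * p_technical / p_read

    print(round(p_tech_given_read, 3))  # ~0.345: most readers are NOT technical
    ```

    Under these made-up numbers, technical readers are heavily over-represented relative to the population, yet still a minority of the readership.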