Comment by sho_hn
1 day ago
I do understand the contention: an LLM would be less thoughtful in editorializing which bits to make interactive, and in reasoning about the user's progression of understanding and delight.
I'm not so sure it's that far out of reach, though. From what I've seen reasoning models do, they're not far from being able to run a strategy of identifying interesting increments of a problem, parameterizing them, and building an interactive scene for those parameters. It feels within reach.
I said nothing about LLMs. I said this page was not simply regurgitation of facts.
I personally doubt LLMs are close to producing anything like this, but that wasn't the point. You indicated this should be easy for an LLM because it's just a fact dump. Regardless of whether some future LLM can generate something like this, it's much more complicated and interesting than a simple fact dump.