
Comment by jodrellblank

3 hours ago

"large corpus" - large compared to the amount of Python on Github or the amount of JavaScript on all the webpages Google has ever indexed? Quantum Prolog doesn't have any relevant looking DuckDuckGo results, I found it in an old comment of yours here[1] but the link goes to a redirect which is blocked by uBlock rules and on to several more redirects beyond which I didn't get to a page. In your linked comment you write:

> "has convenient built-in recursive-decent parsing with backtracking built-in into the language semantics, but also has bottom-up parsing facilities for defining operator precedence parsers. That's why it's very convenient for building DSLs"

which I agree with, for humans. What I am arguing is that LLMs don't have the same notion of "convenient". LLMs dumping hundreds of lines of convoluted, 'unreadable' Python (or C, or Go, or anything else) to implement "half of Common Lisp" or "half of a Prolog engine" for a single task is fine: they don't have to read it, and it gets the same result. What would be different is if Prolog got a significantly better result; I would find that interesting, but I haven't seen a good reason why it would.
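For concreteness, this is roughly what the quoted built-in facilities look like - a toy sketch with a made-up arithmetic grammar and a made-up ===> operator, nothing taken from the linked comment:

    % Sketch only: hypothetical example, not from the linked comment.
    % DCG rules compile to ordinary predicates, so recursive-descent
    % parsing with backtracking falls out of Prolog's own execution model.
    expr(add(L, R)) --> term(L), [+], expr(R).
    expr(T)         --> term(T).
    term(num(N))    --> [N], { number(N) }.

    % ?- phrase(expr(T), [1, +, 2, +, 3]).
    % T = add(num(1), add(num(2), num(3))).

    % op/3 (standard Prolog) extends the reader's operator-precedence
    % table, so DSL statements parse as ordinary terms:
    :- op(700, xfx, ===>).
    dsl_rule(lights ===> off).
    dsl_rule(door   ===> locked).

    % ?- dsl_rule(Device ===> State).
    % Device = lights, State = off ;
    % Device = door,   State = locked.

Whether an LLM gains anything from emitting a dozen lines like that instead of a few hundred lines of Python is exactly the question above.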

[1] https://news.ycombinator.com/item?id=40523633