Comment by PaulHoule

8 days ago

I can sure talk your ear off about that one as I went way too far into the semantic web rabbit hole.

Training LLMs to use 'tools' of various types is a great idea, as is running them inside frameworks that check that their output satisfies various constraints. Still, fundamental obstacles remain: the NP-complete nature of SAT solving (many intelligent-systems problems, such as the word problems you'd expect an A.I. to solve, boil down to SAT), the halting problem, Gödel's incompleteness theorems, and so on. I understand Doug Hofstadter has softened his positions lately, but I think many of the problems set up in this book

https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

(particularly the Achilles & Tortoise dialog) still stand today, as cringey as that book seems to me in 2025.
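The "word problems boil down to SAT" point is concrete: even a tiny logic puzzle reduces to boolean satisfiability, and the general problem is NP-complete, so brute force blows up as 2^n. A minimal sketch (the puzzle and all names are mine, and this enumerates assignments rather than using a real solver like MiniSat):

```python
from itertools import product

# Toy word problem: three suspects A, B, C; exactly one is guilty.
# A says "B did it", B says "I didn't do it", C says "I didn't do it".
# Exactly one of the three statements is true. Who is guilty?
# Encode booleans a, b, c = "A/B/C is guilty" and check all 2^3
# assignments -- fine here, hopeless in general (SAT is NP-complete).

def satisfying_assignments():
    out = []
    for a, b, c in product([False, True], repeat=3):
        exactly_one_guilty = (a + b + c) == 1
        statements = [b, not b, not c]        # what A, B, C claim
        exactly_one_true = sum(statements) == 1
        if exactly_one_guilty and exactly_one_true:
            out.append((a, b, c))
    return out

print(satisfying_assignments())   # [(False, False, True)] -> C did it
```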

I am hoping for an SLM "Turing tape": a small language model where the tokens are instructions for a copycat engine.

As somebody who considers himself something of a Semantic Web enthusiast / advocate, and has also read GEB, I can totally relate. To me, this is really one of those "THE ISSUE" things: how can we use some notion of formal logic to solve problems without being forced to give up hope in the face of incompleteness and/or the Halting Problem? Clearly you have to give up something as a tradeoff for making this stuff tractable, but I suppose it's an open question what you can trade off and how exactly that factors into the algorithm, as well as what guarantees (if any) remain...
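One common version of that tradeoff is to keep soundness and guaranteed termination while giving up completeness: the checker may answer "unknown". A toy sketch (function names and the step budget are mine; real systems make the same move via solver timeouts, or by restricting to decidable fragments such as the description logics behind OWL):

```python
from itertools import product

def bounded_sat(clauses, num_vars, budget):
    """Brute-force SAT check with a step budget.

    clauses: list of lists of ints, DIMACS-style (3 = x3, -3 = not x3).
    Returns "sat", "unsat", or "unknown" -- sound and terminating,
    but incomplete once the budget is exhausted.
    """
    checked = 0
    for bits in product([False, True], repeat=num_vars):
        if checked >= budget:
            return "unknown"          # budget exhausted: no verdict
        checked += 1
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return "sat"              # found a concrete witness
    return "unsat"                    # exhaustive search completed

# (x1 or x2) and (not x1) and (not x2) is unsatisfiable:
print(bounded_sat([[1, 2], [-1], [-2]], 2, budget=100))   # "unsat"
print(bounded_sat([[1, 2], [-1], [-2]], 2, budget=2))     # "unknown"
```

The point is that the "unknown" answer is exactly the guarantee you surrender in exchange for an algorithm that always halts.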

  • I would start with the fact that there is nothing consistent or complete about humans. Penrose's argument that he is a thetan because he can do math doesn't hold water.