Comment by chii
7 hours ago
> they don't actually understand how
but if it empirically works, does it matter if the "intelligence" doesn't "understand" it?
Does a chess engine "understand" the moves it makes?
If it empirically works, then sure. If instead every single solution it provides beyond a few trivial lines falls somewhere between "just a little bit off" and "relies entirely on core library functionality that doesn't actually exist", then I'd say it does matter, and it's only slightly better than an opaque box that spouts random nonsense (which will soon include ads).
This sounds like you're copy-pasting code from ChatGPT's web interface, which is very 2024.
Agentic LLMs will notice if something is crap and won't compile; they'll retry, use the tools available to them to figure out the correct approach, edit, and try again.
Those are 2024-era criticisms of LLMs for code.
Late 2025 models very rarely hallucinate nonexistent core library functionality - and they run inside coding agent harnesses, so if they DO, they notice that the code doesn't work and fix it.
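For what it's worth, the loop those harnesses run isn't magic. A minimal sketch (with a hypothetical `ask_model` call standing in for whatever model API the harness actually uses):

    # Minimal sketch of an agentic generate -> check -> retry loop.
    # `ask_model` is hypothetical; it stands in for the harness's LLM API.
    import subprocess

    def ask_model(prompt: str) -> str:
        """Hypothetical call to a code-generating model; returns source code."""
        raise NotImplementedError

    def agent_loop(task: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = ask_model(task + feedback)
            with open("candidate.py", "w") as f:
                f.write(code)
            # Run the candidate (or its tests) and capture any error output.
            result = subprocess.run(
                ["python", "candidate.py"], capture_output=True, text=True
            )
            if result.returncode == 0:
                return code  # ran cleanly, we're done
            # Feed the error back so the next attempt can correct it.
            feedback = "\n\nPrevious attempt failed with:\n" + result.stderr
        return None  # gave up after max_attempts

The point being: a hallucinated function shows up as a traceback, and the traceback goes straight back into the next prompt.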
Get ready to tick those numbers over to 2026!
This is a semantic dead end when discussing results and career choices.
It matters if AGI is the goal. If it remains a tool to make workers more productive, then it doesn't need to truly understand, since the humans using the tools understand. I'm of the opinion AI should have stood for Augmented (Human) Intelligence outside of science fiction. I believe that's what early pioneers like Douglas Engelbart thought. Clearly that's what Steve Jobs and Alan Kay thought computing was for.
AGI is such a meaningless concept. We can't even fully define what human intelligence is (or whether a human failing at it means they lack human intelligence). It's just philosophy.
AGI is about as well defined as "full self-driving" :D
It's a useless philosophical discussion.