Comment by torginus
5 days ago
First, false equivalence. The 'strawberry' problem arose because LLMs operate not on raw text but on tokens mapped to embedding vectors, which makes it hard for them to manipulate the individual characters of a word directly. That limitation does not prevent them from properly doing math proofs.
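To illustrate the tokenization point: a model never sees the letters of "strawberry", only opaque subword units. The token split below is hypothetical (real BPE vocabularies vary); it is just a sketch of why character counting is awkward at the token level.

```python
# The word as humans (and character-level code) see it:
word = "strawberry"

# A plausible subword segmentation -- assumed for illustration only;
# an actual tokenizer's split depends on its learned vocabulary.
tokens = ["str", "aw", "berry"]

# Character-level counting is trivial when the letters are visible:
char_count = word.count("r")

# At the token level, each 'r' is buried inside an opaque unit.
# We can only count here because we inspect the strings; the model
# receives integer token IDs, not the characters themselves.
token_level_count = sum(tok.count("r") for tok in tokens)

print(char_count)         # count over raw characters
print(token_level_count)  # same total, but required unpacking tokens
```

A math proof, by contrast, is expressed in whole tokens (symbols, words), so the character-blindness above doesn't apply.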
Second, we know almost nothing about these models, how they work, or how they were trained, and so we don't actually know whether they can do these things or not. But a smart human could (by smart I mean someone who gets good grades at engineering school effortlessly, not Albert Einstein).