Comment by numba888
6 months ago
> but an almost-memorized computation or proof is likely to be plain wrong
Hard to tell; I've never seen anyone try it. The model may almost-memorize and then fill in the gaps at inference time, since it's still doing some 'thinking'. But the main point here is the risk that the model will spill out pieces of its training data. OAI likely wouldn't risk that at a $100B++ valuation.