Comment by catlifeonmars
21 days ago
It seems kind of silly that you can’t teach an LLM new tricks though, doesn’t it? This doesn’t sound like an intrinsic limitation and more an artifact of how we produce model weights today.
getting tricks embedded into the weights is expensive; it doesn't happen in a single pass
that's why we teach them new tricks on the fly (in-context learning) with instruction files
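(for illustration only: a minimal sketch of what "teach on the fly" means here, assuming a hypothetical instructions.md file -- the new trick lives only in the prompt context, the weights never change)

    # Sketch of in-context learning via an instruction file.
    # The file name "instructions.md" and the skill it describes are
    # hypothetical; nothing here modifies model weights.
    from pathlib import Path

    def build_prompt(user_question: str, instructions_path: str = "instructions.md") -> str:
        """Prepend the instruction file to the user's question so the model
        follows the new 'trick' for this conversation only."""
        instructions = Path(instructions_path).read_text()
        return (
            "Follow these project-specific instructions:\n"
            f"{instructions}\n\n"
            f"User question: {user_question}"
        )

    if __name__ == "__main__":
        # The assembled prompt would be sent to any chat model; once the
        # context window is gone, the "trick" is forgotten.
        print(build_prompt("Summarise yesterday's build failures."))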
Right, it sounds like an artificial limitation.
it's more a mathematical / algorithmic limitation