
Comment by in-silico

3 days ago

> How do we explain humans who invent new concepts

Simple: they are hallucinations that turn out to be correct or useful.

Ask ChatGPT to create a million new concepts that weren't in its training data and some of them are bound to be similarly correct or useful. The only difference is that humans have hands and eyes to test their new ideas.

How does a NN create a concept that's not in its training data? (Does it explore negative idea-space?) What if a concept uses a word that hasn't been invented yet? How does an LLM produce that word, and what cosine similarity would such a word have if it has never appeared next to any others? And how would we know if such a word is useful?
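
On the cosine-similarity question, one partial answer is that current LLMs never deal in whole words: a word nobody has ever typed still gets broken into known subword tokens, so it inherits an embedding composed from pieces that *have* appeared next to other words. A toy sketch (made-up vectors, not a real model; the subwords and the averaging step are just assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subword embeddings a model already "knows".
subword_emb = {
    "neuro": rng.normal(size=8),
    "plastic": rng.normal(size=8),
    "ity": rng.normal(size=8),
    "banana": rng.normal(size=8),
}

def embed_novel_word(subwords):
    """Compose a vector for an unseen word by averaging its subword vectors."""
    return np.mean([subword_emb[s] for s in subwords], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A freshly coined word, never seen as a whole token.
novel = embed_novel_word(["neuro", "plastic", "ity"])

# Shares a subword with the coinage, so similarity tends to be higher.
print(cosine(novel, subword_emb["plastic"]))
# No overlap; with random vectors this hovers near zero.
print(cosine(novel, subword_emb["banana"]))
```

So a coined word isn't similarity-less, it just sits wherever its pieces put it. Whether that location is *useful* is still the open question the comment ends on.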