LLMs are great at knowledge transfer; the real question is how well they can demonstrate intelligence on "unknown unknown" types of questions. This model has the benefit of being released after that issue became public knowledge, so it's hard to know how it would have performed beforehand.
Every word, and every hierarchy of words, in natural language is understood by LLMs as embeddings (vectors).
Each vector has many dimensions, and when we train an LLM, its internal understanding of a word is spread across all of those dimensions. A simple way to visualize this is a word's vector being <1, 180, 1, 3, ...>, where each position holds a value along one dimension. Say the dimensions are <gender, height in cm, kindness, social title/job, ...>. In that case, our example LLM could have learned that the word maps to <woman, 180 cm, 100% kind, politician, ...>. The vectors undergo transformations, so in practice no dimension is that discretely clear-cut.
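A minimal sketch of that idea in Python; the word, the dimension labels, and the numbers are all invented for illustration, since real learned dimensions aren't human-interpretable like this:

```python
# Toy "embedding table" matching the example above. Everything here is made up.
toy_embeddings = {
    #               <gender, height_cm, kindness, job_code>
    "example_word": [1.0,    180.0,     1.0,      3.0],
}

def embed(word):
    """Look up the (made-up) embedding vector for a word."""
    return toy_embeddings[word]

print(embed("example_word"))  # [1.0, 180.0, 1.0, 3.0]
```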
In this case, elephant and car both look semantically similar to "vehicle": most of their vector dimensions would end up close to each other.
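A hedged sketch of what "most dimensions close to each other" could look like, using hand-made vectors and cosine similarity; the numbers are invented, and real embeddings have hundreds or thousands of opaque dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means 'points the same way'."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented dimensions: <is_large, has_wheels, is_alive, carries_people>
vehicle  = [0.9, 0.8, 0.0, 0.9]
car      = [0.8, 1.0, 0.0, 0.9]
elephant = [1.0, 0.0, 1.0, 0.7]  # big and can carry people -> overlaps with 'vehicle'

print(cosine_similarity(car, vehicle))       # high, ~0.99
print(cosine_similarity(elephant, vehicle))  # noticeably positive too, ~0.65
```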
See this article. It shows that once you train an LLM and assign an embedding vector to each token, you can see how the model distinguishes king from queen the same way it distinguishes man from woman.
https://informatics.ed.ac.uk/news-events/news/news-archive/k...
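A minimal, self-contained sketch of the king/queen vs. man/woman point, using tiny hand-made vectors rather than a real trained model; with real word embeddings (word2vec, GloVe, etc.) the same arithmetic, king - man + woman, lands near queen:

```python
import numpy as np

# Hand-made 3-D vectors; pretend the dimensions loosely encode
# <royalty, gender, person-ness>. Real embeddings are learned, not designed.
emb = {
    "king":  np.array([0.9,  0.9, 1.0]),
    "queen": np.array([0.9, -0.9, 1.0]),
    "man":   np.array([0.1,  0.9, 1.0]),
    "woman": np.array([0.1, -0.9, 1.0]),
}

def nearest(vec, vocab):
    """Return the word in vocab whose vector is most cosine-similar to vec."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(vec, vocab[w]))

# The classic analogy: king - man + woman ~= queen
result = emb["king"] - emb["man"] + emb["woman"]
print(nearest(result, emb))  # 'queen'
```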
Sure it is, but it's a different set of smarts than the kind of gotcha logic puzzle being tested with the car wash question.
My gut says you’re right, but I don’t know if this is indeed true. It might be the same thing.