Yes, but it doesn't generalize very well, even on simple features like gender. If you go look at embeddings you'll find that man and woman are neighbors, just as king and queen are[0]. That's a better explanation for the result: you're just taking very small steps in the latent space.
Here, play around[1]:
mother - parent + man = woman
father - parent + woman = man
father - parent + man = woman
mother - parent + woman = man
woman - human + man = girl
Or some that should be trivial:
woman - man + man = girl
man - man + man = woman
woman - woman + woman = man
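If you want to reproduce these outside the demo page, here's a minimal sketch using gensim and its downloadable pretrained GloVe vectors (the model name "glove-wiki-gigaword-50" and the exact outputs are my assumptions; any word2vec-style vectors behave similarly):

    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe vectors

    # the words in each analogy pair are already close neighbors
    print(kv.similarity("man", "woman"))   # high cosine similarity
    print(kv.similarity("king", "queen"))

    # analogy arithmetic: positive terms are added, negative ones subtracted
    print(kv.most_similar(positive=["mother", "man"], negative=["parent"], topn=3))
    print(kv.most_similar(positive=["woman", "man"], negative=["human"], topn=3))

Since the answer is just the nearest neighbor of a point one small step away, tiny differences in the vectors are enough to flip which word comes out on top.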
Working in very high dimensions is funky stuff. Embedding high dimensions into low dimensions results in even funkier stuff.

[0] https://projector.tensorflow.org/
[1] https://www.cs.cmu.edu/~dst/WordEmbeddingDemo/
I have seen this particular example work. You don't get an exact match, but the closest one is indeed "queen".
Thank you for the comment!
This led me to do a bit more research, and I see the queen result is in fact "cheating" a bit: https://blog.esciencecenter.nl/king-man-woman-king-9a7fd2935...
#TheMoreYouKnow
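The "cheating" is easy to check by hand: do the arithmetic yourself and don't exclude the input words from the candidate set (gensim's most_similar drops them silently). A minimal sketch, with the model name assumed as before:

    import numpy as np
    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-50")
    v = kv["king"] - kv["man"] + kv["woman"]

    # cosine similarity against the whole vocabulary, input words included
    sims = kv.vectors @ v / (np.linalg.norm(kv.vectors, axis=1) * np.linalg.norm(v))
    top = np.argsort(-sims)[:3]
    print([kv.index_to_key[i] for i in top])  # "king" typically ranks first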
so addition is not associative?
Shouldn't this itself be a part of training?
Having a set of "king - male + female = queen"-like relations, including more complex phrases, to align embeddings.
It seems like a terse, lightweight, information-dense way to address the essence of knowledge.
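A hedged sketch of how such a constraint could be bolted onto embedding training as an extra loss term (all names and ids below are made up for illustration; this isn't a description of any standard recipe):

    import torch
    import torch.nn.functional as F

    emb = torch.nn.Embedding(50_000, 300)  # toy vocabulary size / dimension

    # each row (a, b, c, d) encodes "a - b + c should equal d",
    # e.g. (king, male, female, queen); the ids are placeholders
    analogies = torch.tensor([[1012, 734, 951, 2048]])

    a, b, c, d = (emb(analogies[:, i]) for i in range(4))
    analogy_loss = (1.0 - F.cosine_similarity(a - b + c, d)).mean()

    # total_loss = task_loss + lam * analogy_loss  # lam weights the constraint

One catch: as the examples upthread show, the raw vectors satisfy these relations only approximately at best, so the weight on such a term would matter a lot.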