Comment by n2d4
2 days ago
For fun, I pasted these into ChatGPT o4-mini-high and asked it for an opinion:
data + plural = datasets
data - plural = datum
king - crown = ruler
king - princess = man
king - queen = prince
queen - king = woman
king + queen = royalty
boy + age = man
man - age = boy
woman - age = girl
woman + age = elderly woman
girl + age = woman
girl + old = grandmother
The results are surprisingly good; I don't think I could've done better as a human. But keep in mind that this doesn't do embedding math like OP! It does, though, show how generic LLMs can solve some tasks better than traditional NLP.
The prompt I used:
> Remember those "semantic calculators" with AI embeddings? Like "king - man + woman = queen"? Pretend you're a semantic calculator, and give me the results for the following:
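(If you want to reproduce this programmatically rather than in the ChatGPT UI, a minimal sketch with the OpenAI Python client would look something like the below; the model id and the word list are placeholders I picked, not exactly what I ran.)

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        'Remember those "semantic calculators" with AI embeddings? '
        'Like "king - man + woman = queen"? Pretend you\'re a semantic calculator, '
        "and give me the results for the following:\n"
        "king - crown\nking - queen\nwoman + age"
    )

    # Model id is an assumption; substitute whichever model you have access to.
    response = client.chat.completions.create(
        model="o4-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)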
This is an LLM approximating a semantic calculator, based solely on trained-in knowledge of what that is and probably a good amount of sample output, yet somehow beating the results of a "real" semantic calculator. That's crazy!
The more I think about it the less surprised I am, but my initial thought was quite simply "no way" - surely an approximation of an NLP model made by another NLP model can't beat the original. But the LLM training process (and data volume) is just so much more powerful, I guess...
This is basically the whole idea behind the transformer. Attention is much more powerful than embedding alone.
The transformers are initialized by embedding models...
Your embedding model is literally the translation layer converting the text to numbers. The transformer is the main processing unit operating on those embeddings. You can even see some self-reflection in the model, as each transformer block is composed of an attention mechanism and an MLP sub-network. The attention mechanism captures the interrelational dependence of the data, and the MLP projects up into a higher dimension before coming back down, which helps untangle those relationships. The idea is that you just repeat this process over and over.

The attention mechanism has an advantage over CNN models in that it has a larger receptive field, so it can better process long-range relationships (long range meaning across the input data), whereas CNNs are biased toward local relationships.
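To make the attention-plus-MLP structure concrete, here's a rough sketch of a single transformer block in PyTorch; the dimensions, pre-norm layout, and block count are illustrative choices, not any specific model's:

    import torch
    import torch.nn as nn

    class Block(nn.Module):
        """One transformer block: attention mixes information across positions;
        the MLP projects up to a higher dimension and back down."""
        def __init__(self, d_model=512, n_heads=8, expansion=4):
            super().__init__()
            self.norm1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, expansion * d_model),  # project up
                nn.GELU(),
                nn.Linear(expansion * d_model, d_model),  # project back down
            )

        def forward(self, x):
            h = self.norm1(x)
            # Every position attends to every other: a global receptive field.
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.mlp(self.norm2(x))
            return x

    # Token ids -> embedding layer -> the same block repeated over and over.
    vocab, d_model = 50_000, 512
    embed = nn.Embedding(vocab, d_model)
    blocks = nn.Sequential(*[Block(d_model) for _ in range(6)])
    tokens = torch.randint(0, vocab, (1, 16))
    print(blocks(embed(tokens)).shape)  # torch.Size([1, 16, 512])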
I hate to be pedantic, but the LLM is definitely doing embedding math. In fact, that's all it does.
Sure! Although I think we both agree that the way those embeddings are transformed is significantly different ;)
(what I meant to say is that it doesn't do embedding math "LIKE" the OP — not that it doesn't do embedding math at all.)
Yeah we'd be impressed if an LLM calculated the product of a couple of 1000x1000 matrices.
I'm actually surprised that the performance is so poor and would expect a human to do much better. The GPT model has embedding PLUS a whole transformer model that can untangle the embedded structure.
To clarify some of the issues:
I think you are misunderstanding the architecture of these models. The embedding sub-network is the translation of text into the numeric vectors the rest of the network operates on. You'll find mention of the embedding sub-networks in both the GPT3[3] and GPT4[4] papers, though they are given less prominence than other components. While much smaller than the main network, don't forget that embedding networks are still quite large; for the smaller models they constitute a significant part of the total parameter count[5].
After the embedding sub-network comes your main transformer network. The purpose of this network is also to perform embedding math! It is just that the goal is to do significantly more complicated math. Remember, these are learnable mappings (see Optimal Transport); we're just breaking the model down into its two main intermediate mappings. But the embeddings still end up being a bottleneck: they are your literal gateway from words to numbers.
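As a rough sanity check of that parameter-count point, here's a sketch using the public GPT-2 small checkpoint via Hugging Face (GPT-2 as a stand-in, since the GPT-3/GPT-4 weights aren't available):

    from transformers import GPT2Model

    model = GPT2Model.from_pretrained("gpt2")  # GPT-2 small, ~124M parameters
    emb_params = model.wte.weight.numel() + model.wpe.weight.numel()  # token + position embeddings
    total_params = model.num_parameters()
    print(f"embedding params: {emb_params:,}")
    print(f"total params:     {total_params:,}")
    print(f"fraction:         {emb_params / total_params:.0%}")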
[0] https://en.wikipedia.org/wiki/Mass_noun
[1] https://www.merriam-webster.com/dictionary/data
[2] https://www.sciotoanalysis.com/news/2023/1/18/this-data-or-t...
[3] https://arxiv.org/abs/2005.14165
[4] https://arxiv.org/abs/2303.08774
[5] https://www.lesswrong.com/posts/3duR8CrvcHywrnhLo/how-does-g...
You are being unnecessarily cynical. These are all subjective. I thought "datum" and "datasets" were quite clever, and while I would've chosen "man" for "king - crown" myself, I actually find "ruler" a better solution after seeing it. But each to their own.
The rant about network architecture misses my point, which is that an LLM does not just do a linear transformation and a similarity search. Sure, in the most abstract sense it still computes an output embedding from two input embeddings, but only in a very distant, pedantic way. (Actually, to be VERY pedantic, even that isn't true, because ChatGPT's tokenizer embeds tokens, not words. The input and output of the model are more than just the semantic embeddings of words; using two different but semantically equivalent words may result in different outputs with a transformer LLM, but not with a word-semantics model.)
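A quick illustration of the token-vs-word point, sketched with tiktoken (the exact splits depend on which encoding you pick):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for word in ["queen", "monarch", "sovereign", "datasets"]:
        ids = enc.encode(word)
        print(word, "->", ids, [enc.decode([i]) for i in ids])
    # Semantically similar words can tokenize very differently, so the model's
    # input embeddings are per-token, not per-word-meaning.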
I just thought it was cool that ChatGPT is so good at it.
I'm an engineer and researcher; it's my job to find problems so that they can be resolved. I'd say this is different from being cynical, as cynicism tends to be dismissive. I understand how my comment can come off that way, though that wasn't my intention, so I'm clarifying.
You're right that there's subjectivity, but not infinitely so. There is a bound to it, and that bound is required both for language to work and for us to build these models. I did agree that the data one was tricky, so I'm not really going to argue; I was just pointing out a critical detail, given that the models learn through pattern matching rather than a dictionary. It's why I made the comment about humans. As for king minus crown = ruler, I gave my explanation; would you care to share yours? I'd like to understand your point of view so I can better my interpretation of the results, because frankly I don't understand. What is the semantic relationship being changed, if not the attribute of ruler?
The architecture part was a miscommunication. I hope you understand how I misunderstood you when you said "this doesn't do embedding math like OP!". It is clear I'm not alone either.
To be pedantic, people generally refer to the tokenization and embedding steps together simply as "embedding". It's the common usage, because with BPE you are performing these steps simultaneously, and the term fits given its longer history in math.
I was just trying to help you understand a different viewpoint.
"King-crown=ruler" is IMO absolutely apt. Arguing that "crown" can be used metaphorically is a bit disingenuous because first, it's very rarely applied to non-monarchs, and is a very physical, concrete symbol of power that separates monarchs from other rulers.
"King-princess=man" can be thought to subtract the "royalty" part of "king"; "man" is just as good an answer as any else.
"King-queen=prince" I'd think of as subtracting "ruler" from "king", leaving a male non-ruling member of royalty. "gender-unspecified non-ruling royal" would be even better, but there's no word for that in English.
"King - queen = male" strikes me as logical: if we take king = (+human, +male, +royal) and queen = (+human, -male, +royal), then the difference is (0 human, 2 male, 0 royal).
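A toy version of that arithmetic, just to make it concrete (the features and values are hand-picked for illustration, not learned):

    import numpy as np

    # Hand-crafted (human, male, royal) features -- nothing learned here.
    words = {
        "king":  np.array([1.0,  1.0, 1.0]),
        "queen": np.array([1.0, -1.0, 1.0]),
        "man":   np.array([1.0,  1.0, 0.0]),
        "woman": np.array([1.0, -1.0, 0.0]),
    }

    diff = words["king"] - words["queen"]  # -> [0., 2., 0.]: only the "male" axis survives
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    nearest = max(words, key=lambda w: cos(words[w], diff))
    print(diff, "-> closest word:", nearest)  # "man" points the same way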
I take your point but highly disagree that it's disingenuous to view this metaphorically. The crown has always been a symbol of the seat of power, and that usage dates back centuries. I've seen it commonly used to refer to leadership in general; more often than the literal sense, actually.
Notably, even the passage in Henry IV that the idiom draws from uses it in the metaphorical sense, despite being about a ruler who would wear a literal crown. There's similar frequent usage in widely popular shows like Game of Thrones. So I hope you can see why I really do not think it's fair to call me disingenuous. The metaphorical usage is extremely common.
I'll buy the king-prince relationship. That's fair. But it also seems to be in disagreement with the king-queen one.
The specific cherry-picked examples from GP make sense to me.
If +/- plural can be taken to mean "make explicitly plural or singular", then this roughly works.
Rearrange (because embeddings are just vector math), and you get "king = ruler + crown". Yes, a king is a ruler who has a crown.
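For anyone who wants to check what a "real" static-embedding model returns for that rearrangement, a sketch with gensim's pre-trained GloVe vectors (it downloads the vectors on first use; I haven't verified what it actually outputs):

    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-100")  # pre-trained GloVe word vectors
    # "ruler + crown" -- does it land near "king"?
    print(kv.most_similar(positive=["ruler", "crown"], topn=5))
    # And the original direction, "king - crown":
    print(kv.most_similar(positive=["king"], negative=["crown"], topn=5))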
This isn't great, I'll grant, but there are many YA novels where someone becomes king (eventually) through marriage to a princess, or there is intrigue for the princess's hand for reasons of kingly succession, so "king = man + princess" roughly works.
I agree it's hard to make sense of "king - queen = prince". "A queen is a woman king" is often how queens are described to young children. In Chinese, it's actually the literal breakdown of 女王. I also agree there's a gender bias, but also literally everything about LLMs and various AI trained on large human-generated data encodes the bias of how we actually use language and thought patterns. It's one of the big concerns of those in the civil liberties space. Search "llm discrimination" or similar for more on this.
Playing around with age/time-related words gives a lot of interesting results:
I think a lot of words are hard to distill into a single embedding. A word may embed a number of conceptually distinct definitions, but my (incomplete) understanding of embeddings is that they are not context-sensitive, right? So averaging those distinct definitions into one vector per label is probably fraught with problems when trying to do meaningful vector math with them, problems that context/attention are able to help with.
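For example, a rough sketch of the static-vs-contextual contrast using BERT (model choice and token lookup are simplified assumptions, just to show the idea):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def bank_vector(sentence):
        """Contextual hidden state of the token 'bank' in the given sentence."""
        enc = tok(sentence, return_tensors="pt")
        idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids("bank"))
        with torch.no_grad():
            return model(**enc).last_hidden_state[0, idx]

    a = bank_vector("she sat on the river bank")
    b = bank_vector("he robbed the bank downtown")
    print(torch.cosine_similarity(a, b, dim=0))  # < 1: same word, different vectors in context

    # By contrast, the static embedding table has exactly one row for "bank",
    # averaging all of its senses into a single vector:
    static = model.embeddings.word_embeddings.weight[tok.convert_tokens_to_ids("bank")]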
[EDIT: formatting is hard without preview]
Can you do the same, but with each line done in a separate context?
...welcome to ChatGPT, everyone! If you've been asleep since...2022?
(some might say all an LLM does is embeddings :)