Comment by mhitza

3 days ago

My impression was that text embeddings are better suited for classification. Of course, the big caveat is that the embeddings must have "internalized" the semantic concept you're trying to map.

From an article I have in draft, experimenting with open-source text embeddings:

    ./match venture capital
    purchase           0.74005488647684
    sale               0.80926752301733
    place              0.81188663814236
    positive sentiment 0.90793311875207
    negative sentiment 0.91083707598925
    time               0.9108697315425
 
    ./store silicon valley
    ./match venture capital
    silicon valley     0.7245139487301
    purchase           0.74005488647684
    sale               0.80926752301733
    place              0.81188663814236
    positive sentiment 0.90793311875207
    negative sentiment 0.91083707598925
    time               0.9108697315425

Of course, you need to figure out what these black boxes understand. For sentiment analysis, for example, instead of matching against "positive" and "negative" you might have the matching terms be "kawaii" and "student debt", depending on how the text embedding internalized positives and negatives from its training data.
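The matching shown above can be sketched as nearest-neighbor ranking over embedding vectors. This is a minimal illustration, not the actual `./match` implementation: the vectors below are made-up stand-ins for what a real embedding model (e.g. a sentence-transformer) would produce, and the term names are assumptions for the example.

```python
import math

# Hypothetical embedding table; in practice these vectors would come
# from a text-embedding model and be high-dimensional.
EMBEDDINGS = {
    "venture capital":    [0.9, 0.8, 0.1],
    "silicon valley":     [0.8, 0.9, 0.2],
    "purchase":           [0.7, 0.3, 0.5],
    "positive sentiment": [0.1, 0.2, 0.9],
}

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means semantically closer."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

def match(query, candidates):
    """Rank candidate terms by embedding distance to the query."""
    q = EMBEDDINGS[query]
    return sorted(
        ((term, cosine_distance(q, EMBEDDINGS[term])) for term in candidates),
        key=lambda pair: pair[1],
    )

ranked = match("venture capital",
               ["silicon valley", "purchase", "positive sentiment"])
for term, dist in ranked:
    print(f"{term:20s} {dist:.4f}")
```

To classify, you would embed the input text once and pick the label term with the smallest distance; adding a new term (like `./store` above) is just inserting one more vector into the table.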