Comment by balazstorok

3 days ago

Does anyone have a good understanding of how 2B models can be useful in production? What tasks are you using them for? I wonder what tasks you can fine-tune them on to produce 95-99% results (if anything).

Use cases for small models include sentiment and intent analysis, spam and abuse detection, and classification of various sorts. LLMs are generally thought of as chat models, but the output need not be a conversation per se.
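
A minimal sketch of that kind of non-chat use: prompt-based sentiment classification with a small local model. The Ollama-style endpoint and the model name here are assumptions for illustration, not anything established above.

    import requests

    LABELS = ["positive", "negative", "neutral"]

    def classify(text):
        # Ask the model to emit exactly one label, nothing conversational.
        prompt = (
            "Classify the sentiment of the following text as exactly one of "
            + ", ".join(LABELS) + ". Reply with the label only.\n\nText: " + text
        )
        resp = requests.post(
            "http://localhost:11434/api/generate",  # assumed local Ollama server
            json={"model": "gemma:2b", "prompt": prompt, "stream": False},
            timeout=60,
        )
        answer = resp.json()["response"].strip().lower()
        # Constrain free-form output back onto the label set.
        return answer if answer in LABELS else "neutral"

    print(classify("The checkout flow kept crashing and support never replied."))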

  • My impression was that text embeddings are better suited for classification. Of course, the big caveat is that the embeddings must have "internalized" the semantic concept you're trying to map.

    From an article I have in draft, experimenting with open-source text embeddings:

        ./match venture capital
        purchase           0.74005488647684
        sale               0.80926752301733
        place              0.81188663814236
        positive sentiment 0.90793311875207
        negative sentiment 0.91083707598925
        time               0.9108697315425
     
        ./store silicon valley
        ./match venture capital
        silicon valley     0.7245139487301
        purchase           0.74005488647684
        sale               0.80926752301733
        place              0.81188663814236
        positive sentiment 0.90793311875207
        negative sentiment 0.91083707598925
        time               0.9108697315425
    

    Of course, you need to figure out what these black boxes understand. For example, for sentiment analysis, instead of having it match against "positive" and "negative", you might have the matching terms be "kawaii" and "student debt", depending on how the text embedding internalized positives and negatives from its training data.
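
    As a rough sketch of this matching approach, assuming the open-source sentence-transformers library and an arbitrary model choice (neither is specified above):

        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")

        store = ["silicon valley", "purchase", "sale", "place",
                 "positive sentiment", "negative sentiment", "time"]
        store_emb = model.encode(store, convert_to_tensor=True)

        def match(query):
            q = model.encode(query, convert_to_tensor=True)
            scores = util.cos_sim(q, store_emb)[0]
            # Higher cosine similarity means a closer match; the CLI
            # output above prints a distance instead, where lower is closer.
            for term, score in sorted(zip(store, scores.tolist()),
                                      key=lambda p: -p[1]):
                print(f"{term:<19}{score:.4f}")

        match("venture capital")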

Anything you'd normally train a smaller custom model for, except that with an LLM you can use a prompt instead of training.

2B models by themselves aren't so useful, but they're very interesting as a proof of concept, because the same technique used to train a 200B model could produce one that's much more efficient (cheaper and more environmentally friendly) than existing 200B models, especially with specialised hardware support.

I'm more interested in how users are taking 95-99% to 99.99% for generation-assisted tasks. I haven't seen a review or study of techniques, even though on the ground it's pretty trivial to think of some candidates.

  • Three strategies seem to be (sketched after this list):

    - Use an LLM to evaluate the result and retry if it doesn't pass.

    - Let users trigger a retry.

    - Let users edit the result.
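
    A hedged sketch of how these might combine into one loop; generate() and judge() are hypothetical stand-ins for real model calls:

        import random

        def generate(task):
            return "draft answer for: " + task  # imagine an LLM call here

        def judge(task, result):
            # Imagine a second LLM call that checks format and constraints;
            # a coin flip stands in for an imperfect generator here.
            return random.random() > 0.3

        def generate_with_retry(task, max_attempts=3):
            # Strategy 1: automatic evaluate-and-retry.
            for _ in range(max_attempts):
                result = generate(task)
                if judge(task, result):
                    return result
            # Strategies 2 and 3 take over here: surface the failure so
            # the user can trigger another retry or edit the draft by hand.
            return None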

I'm just playing and experimenting with local LLMs, to see what I can do with them. One thing that comes to mind is gaming, e.g. text/dialog generation in procedural worlds/adventures.
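
A toy sketch of that gaming idea, turning procedural world state into an NPC dialog prompt; llm() is a hypothetical stand-in for a locally served small model:

    def llm(prompt):
        return "..."  # e.g. a call to a 2B model served locally

    def npc_line(npc, event):
        # Build the prompt from structured, procedurally generated state.
        prompt = (
            "You are " + npc["name"] + ", a " + npc["mood"] + " " + npc["role"]
            + " in a fantasy town. React in one short line of dialog to: " + event
        )
        return llm(prompt)

    print(npc_line({"name": "Mira", "mood": "wary", "role": "innkeeper"},
                   "strangers arriving after dark"))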