
Comment by Imnimo

2 years ago

An interesting question: if you can get ChatGPT to generate high-quality data for you, should you just cut out the middle-model and use ChatGPT as your classifier?

The answer probably depends a lot on your specific problem domain and constraints, but a non-trivial fraction of the time the answer will be that your task could be solved by a wrapper around the ChatGPT API.
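A minimal sketch of what that wrapper could look like, assuming the openai Python SDK (pre-1.0 style) and a hypothetical label set; the prompt and fallback logic are illustrative, not a definitive implementation:

    import openai

    LABELS = ["positive", "negative", "neutral"]  # hypothetical label set

    def classify(text: str) -> str:
        # Ask the model to pick exactly one label from the list.
        prompt = (
            f"Classify the following text as one of: {', '.join(LABELS)}. "
            f"Reply with the label only.\n\nText: {text}"
        )
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        label = response["choices"][0]["message"]["content"].strip().lower()
        # Fall back to a default if the reply is off-list.
        return label if label in LABELS else "neutral"

    print(classify("The product arrived broken and support never answered."))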

You definitely can use LLMs to do your modeling. But sometimes you need much faster, cheaper, smaller models instead. There's also research showing that using an LLM to generate training data for smaller, task-specific models can yield better performance.
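That "LLM labels the data, small model serves the traffic" idea looks roughly like this sketch, assuming you already have LLM-generated (text, label) pairs; the tiny dataset and TF-IDF + logistic regression pipeline are just placeholders for whatever lightweight model you'd actually train:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training examples labeled by ChatGPT instead of human annotators.
    texts = ["great service", "refund never arrived", "it was okay"]
    llm_labels = ["positive", "negative", "neutral"]

    # A small TF-IDF + logistic regression model: far cheaper and lower-latency
    # at inference time than calling an LLM API for every example.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, llm_labels)

    print(clf.predict(["support was fantastic"]))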

> should you just cut out the middle-model and use ChatGPT as your classifier?

Oh you certainly could.

See here: GPT-3.5 outperforming elite crowdworkers on MTurk for text annotation: https://arxiv.org/abs/2303.15056

GPT-4 going toe to toe with experts (and significantly outperforming crowdworkers) on NLP tasks:

https://www.artisana.ai/articles/gpt-4-outperforms-elite-cro...

I guess it will take some time before the reality really sinks in, but the days of the artificial SOTA being obviously behind human efforts for NLP have come and gone.

> should you just cut out the middle-model and use ChatGPT as your classifier?

And hope OpenAI forever provides the service, and at a reasonable price, latency, and volume?

  • > And hope OpenAI forever provides the service, and at a reasonable price, latency, and volume?

    They are enjoying being the market leader for now, but OpenAI will soon face real competition, and LLM services will become a commodity product. That must be partly why they sought Microsoft backing: to be part of "big tech".