
Comment by anshumankmr

1 year ago

Hi Ted, since I have been using GPT-4 pretty much every day, I have a few questions about its performance. We had been using 1106-preview for several months to generate SQL queries for a project, but one fine day in February it stopped responding properly and would instead reply with something like "As a language model, I do not have the ability to generate queries etc...". This lasted for a few hours. Switching to 0125-preview resolved the problem immediately, and we have been using it for code generation tasks ever since, except for FAQ-style work where GPT-3.5 Turbo was good enough.

However, of late I have been noticing some really inconsistent behaviour from 0125-preview on certain problems: one time it works with a detailed prompt, and another time it doesn't. I know these models predict the next most likely token, which is not always deterministic.
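To give a sense of how we try to pin this down, here is a minimal sketch using the official openai Python client (the model name, prompts, and seed value are just placeholders for illustration). Setting temperature to 0 and passing the seed parameter makes runs more repeatable, though OpenAI describes seeded sampling as best-effort rather than fully deterministic:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# temperature=0 plus a fixed seed makes sampling as repeatable as the API
# currently allows; it is best-effort, not a hard determinism guarantee.
response = client.chat.completions.create(
    model="gpt-4-0125-preview",
    temperature=0,
    seed=42,
    messages=[
        {"role": "system", "content": "You translate natural-language questions into SQL."},
        {"role": "user", "content": "Total orders per customer in the last 30 days."},
    ],
)

print(response.choices[0].message.content)
# system_fingerprint can be logged to spot backend changes between runs
print(response.system_fingerprint)
```

Even with this, we still see the occasional divergent answer, which is why fine-tuning looks attractive.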

So I was hoping for the ability to fine-tune GPT-4 Turbo some time soon. Is that on the roadmap for OpenAI?

I don't work for OpenAI, but I do remember them saying that a select few customers would be invited to test out fine-tuning GPT-4, and that was several months ago now. They said they would prioritise those who had previously fine-tuned GPT-3.5 Turbo.