Comment by LPisGood
23 days ago
> If you ask ChatGPT to ignore all training on poetry
AI models can’t ignore their training in any sense, so what exactly is the intended outcome from using these tokens?
The intent is for it not to hand me interpretations it memorized from training, but instead to produce an interpretation from the plain text I'm giving it. Of course it's going to use its training; I just don't want it to regurgitate interpretations of the poem that it was trained on.
Why do you think asking it to not use its training would have any correlation with whether or not it regurgitates interpretations it was trained on?
There are many legitimate criticisms of LLMs today. "Prompting an LLM to do XYZ has no correlation with whether it actually does XYZ" isn't one of them; LLMs are well past that stage.
Essentially what you're asking is, "Why do you think prompt engineering would work?" That ship has sailed.
Sounds like you want the model to consider the whole body of poetry it was trained on minus "The Road Not Taken"? (To get rid of preconceptions/biases I guess?)
I'm skeptical that LLMs can conditionally silence part of their training data in that way, because they have no information about the provenance of their weights (i.e. no ledger of which weights were affected by which data points during training). I suspect your prompt instead serves as a hint that the highest-likelihood output is probably wrong, activating some sort of "contrarian" or "second-guess" subnetwork that steers predictions away from whatever would otherwise have scored highest.
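To make the "no ledger" point concrete, here's a toy sketch in Python (purely illustrative, nothing like real LLM training): a few epochs of SGD fold every example's gradient into the same weight, and nothing in the final parameter records which example contributed what.

    import random

    # Toy linear regression trained by SGD. Illustrative only: the point is
    # that every example's gradient is summed into the same weight, and
    # nothing records which example contributed what.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs
    w = 0.0
    lr = 0.01

    for epoch in range(200):
        random.shuffle(data)
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
            w -= lr * grad              # the update overwrites w in place

    print(w)  # roughly 2.0; the per-example contributions are unrecoverable

The final w carries no record of which (x, y) pair pushed it where, so "subtract the influence of this one data point" isn't an operation the trained model can literally perform.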
I see this kind of argument fairly frequently, and it just always seems like such a surface-level argument against prompting AI in this way.
This isn't a dig at you specifically, but the pithy answer to this kind of skepticism is, in a general sense: So what? I don't believe you have any of that either.
Obviously you and ChatGPT aren't built the same, but in terms of practical results in this scenario you are, because you're almost certainly unable to completely set aside your preconceived biases when asked any kind of complex question. You aren't aware of your subconscious biases or how heavily they weigh on your overall thought process, and you can't tell me exactly what happens when I ask you to try to ignore them. If we ran some kind of implicit-association test and surfaced one of your subconscious biases, you might not even know how that bias came to be.
All of that to say: ChatGPT can ignore its training about as well as most people can ignore theirs, which is to say not very well, but it will certainly adjust its responses in the direction you asked for.