Comment by spprashant

7 hours ago

I think that's part of the issue I have with it constantly.

Let's say I am solving a problem. I suggest strategy Alpha, and a few prompts later I realize it's not going to work. So I suggest strategy Bravo, but for whatever reason the model will hold on to ideas from Alpha, and the output is a mix of the two. Even if I say "forget about Alpha, we don't want anything to do with it," there will be certain pieces in the Bravo solution that only make sense under Alpha. I usually just start a new chat at that point and hope the model is not relying on previous chat context.

This is a hard problem to solve because it's hard to communicate our internal compartmentalization to a remote model.

Unfortunately, if it's in context then it can stay tethered to the subject. Asking it not to pay attention to a subject doesn't remove attention from it, and probably actually reinforces it.

If you use the API playground, you can edit out dead ends and other subjects you don't want addressed anymore in the conversation.
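As a minimal sketch of the same idea against the API directly (assuming the OpenAI Python client; the model name, message contents, and which indices count as "dead ends" are all illustrative):

```python
# Prune abandoned turns from the conversation before resending it,
# rather than asking the model to ignore them -- what never enters
# the context window can't be attended to.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "Let's try strategy Alpha: ..."},        # dead end
    {"role": "assistant", "content": "Here's an Alpha-based plan ..."},  # dead end
    {"role": "user", "content": "New idea, strategy Bravo: ..."},
    {"role": "assistant", "content": "Here's a Bravo-based plan ..."},
]

# Keep only the Bravo turns (indices here are illustrative; in practice
# you'd track which turns belong to the abandoned strategy yourself).
pruned = history[2:]

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-completions model works here
    messages=pruned + [{"role": "user", "content": "Continue with Bravo only."}],
)
print(response.choices[0].message.content)
```

The playground does this edit for you through the UI; the point either way is that dropping the Alpha turns from the request is more reliable than instructing the model to disregard them.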