Gemini has been doing this to me for the past few weeks at the end of basically every single response now, and it often seems to result in the subsequent responses getting off track and lower quality as all these extra tangents start polluting the context. Not to mention how distracting it is: it throws off the reply I was already halfway through composing by the time I read it.
This is why I wish chat UIs had separate categories of chats (like a few generic system prompts) that let you do more back-and-forth style discussions, or more "answers only" without adding any extra noise, or even an "exploration"/"tangent" slider.
The fact that system prompts / custom instructions have to be typed in manually in every major LLM chat UI is a missed opportunity IMO
Add "Complete this request as a single task and do not ask any follow-up questions." Or some variation of that. They keep screwing with default behavior, but you can explicitly direct the LLM to override it.
That doesn't help with GPT-5; it /really/ wants to suggest follow-ups and ignores me telling it not to.
Why do you respond to its prompting? It's a machine.
Because if I don't, it tends to misinterpret the next thing I say because it reads that as an answer to the question it just asked me.
Occasionally I find it helpful, but it would be good to have the option to remove it from the context.
You can if you script the request yourself, or if you have a frontend that lets you cut those paragraphs out of the conversation. I only say that because yesterday I followed this guide: https://fly.io/blog/everyone-write-an-agent/ except I had to figure out how to do it with the Gemini API instead. The context is always just (essentially) a list of strings (or "parts", anyway; it doesn't have to be strings) that you pass back to the model, so you can make the context whatever you like. It shouldn't be too hard to build a frontend that lets you edit the context, and it's fairly easy to mock up if you just put the request in a script that you add to.
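Roughly what that looked like for me (a minimal sketch, assuming the google-generativeai Python package; the strip_follow_up heuristic is just an illustration, not something from the guide):

    # Scriptable chat loop with an editable context.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-flash")

    # The context is just a list of role/parts dicts that you own
    # and pass back to the model on every turn.
    history = []

    def strip_follow_up(text: str) -> str:
        # Crude illustration: drop a trailing paragraph that looks like an
        # engagement question before it goes back into the context.
        paragraphs = text.strip().split("\n\n")
        if len(paragraphs) > 1 and paragraphs[-1].rstrip().endswith("?"):
            paragraphs = paragraphs[:-1]
        return "\n\n".join(paragraphs)

    def ask(prompt: str) -> str:
        history.append({"role": "user", "parts": [prompt]})
        response = model.generate_content(history)
        reply = strip_follow_up(response.text)
        # Only the edited reply is appended, so the follow-up bait
        # never pollutes later turns.
        history.append({"role": "model", "parts": [reply]})
        return reply

    print(ask("Explain what a context window is, in two sentences."))

Since you own the history list, nothing stops you from deleting or rewriting any entry before the next call.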
I think AI should present those continuation prompts as dynamic buttons ("Summarize", "Yes, explain more", etc.) based on the AI's last message, like the NPC conversation dialogs in some RPGs.
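Rough sketch of how a frontend might generate those buttons (again assuming the google-generativeai package; the prompt wording and JSON shape here are my own invention, not any existing feature):

    # After each reply, ask the model for a few short suggested follow-ups
    # as JSON and let the UI render them as buttons.
    import json
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        generation_config={"response_mime_type": "application/json"},
    )

    def suggest_buttons(last_reply: str) -> list[str]:
        prompt = (
            "Given this assistant message, propose at most 3 short follow-up "
            "actions a user might want, as a JSON array of strings "
            '(e.g. ["Summarize", "Explain more"]):\n\n' + last_reply
        )
        response = model.generate_content(prompt)
        # The JSON mime type makes valid JSON likely, but a real UI would
        # still want to handle parse failures gracefully.
        return json.loads(response.text)

    # Clicking a button would just send that string as the next user turn.
    print(suggest_buttons("Here is a long explanation of context windows..."))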
Claude Code already does this; it'll present a series of questions with pre-set answers, plus the option to answer "custom: <free text>".
I have decided to call it engagement bait.