Comment by IMTDb

3 days ago

Bait what, exactly? Getting the user to type "yes"? Great accomplishment.

Sometimes I want the extra paragraph, sometimes I don't. Sometimes I like the suggested follow up, sometimes I don't. Sometimes I have half an hour in front of me to keep digging into a subject, sometimes I don't.

Why should the LLM "just write the extra paragraph" (consuming electricity in the process) for a potential follow-up question a user might, or might not, have? If I ask a simple question, I expect a simple answer, not a whole essay covering things I did not explicitly ask about. And if I want to go deeper, typing three letters is not exactly a huge cost.

You send all the tokens at least one extra time, though.

  • I’m not privy to their data on what this does to engagement, but intuitively it seems like the extra inference/token cost this incurs doesn’t align with their current model.

    If they were doing it to API customers, sure, but getting the free or flat-rate customers to use more tokens seems counterproductive.
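The "you send all the tokens an extra time" point comes from chat APIs being stateless: each follow-up request resends the full conversation history as input. A minimal sketch, assuming a chat-completions-style message list and a made-up word-count stand-in for a real tokenizer (the messages and helper names are hypothetical):

```python
# Sketch of why a 3-letter "yes" follow-up is not cheap server-side:
# chat APIs are stateless, so every turn resends the entire history.
# Token counts are approximated by whitespace word count; a real
# tokenizer (e.g. tiktoken) would give different numbers.

def approx_tokens(text):
    """Very rough token estimate: whitespace-split word count."""
    return len(text.split())

def prompt_cost(history):
    """Input tokens billed for one request = all messages so far."""
    return sum(approx_tokens(m["content"]) for m in history)

history = [
    {"role": "user", "content": "What is HTTP keep-alive?"},
    {"role": "assistant", "content": "It reuses one TCP connection "
                                     "for multiple HTTP requests."},
]

first_turn = prompt_cost(history[:1])   # just the original question
history.append({"role": "user", "content": "yes"})
follow_up = prompt_cost(history)        # whole exchange resent + "yes"

# The follow-up pays for the full conversation again, not 3 letters.
assert follow_up > first_turn
print(first_turn, follow_up)  # → 4 14
```

So the marginal cost of a "yes" turn is roughly the whole prior context, which is why it matters more for flat-rate users than for per-token API customers.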