Comment by carlos_rpn

7 months ago

Somehow I copied your prompt and got a knife with a single handle on the first try: https://chatgpt.com/s/m_689647439a848191b69aab3ebd9bc56c

Edit: ChatGPT translated the prompt from English to Portuguese when I copied the share link.

I think that is one of the most frustrating issues I currently face when using LLMs. One can send the same prompt in two separate chats and receive two drastically different responses.

  • It is frustrating that it’ll still give a bad response sometimes, but I consider the variation in responses a feature. If it’s going down the wrong path, it’s nice to be able to roll the dice again and get it back on track.

I’ve noticed inconsistencies like this. Everyone said it couldn’t count the b’s in “blueberry,” but it worked for me on the first try, so I assumed it was just haters; then I played with a few other variations and found failures. (Famously, it couldn’t count the r’s in “strawberry.”)

I guess we know it’s non-deterministic, but there must be some fairly basic randomization in there somewhere, maybe around tuning its creativity?

  • Temperature is a very basic concept that makes LLMs work as well as they do in the first place. That’s just how it works, and how it’s always been supposed to work.
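
  For concreteness, here is a minimal sketch of temperature sampling as commonly described: the model's output logits are divided by a temperature before the softmax, and a token is drawn from the resulting distribution. The function name and toy logits are illustrative, not from any particular model's API.

  ```python
  import math
  import random

  def sample_with_temperature(logits, temperature, rng=random):
      """Draw a token index from logits after temperature scaling.

      temperature < 1 sharpens the distribution (more deterministic);
      temperature > 1 flattens it (more varied / 'creative').
      """
      scaled = [l / temperature for l in logits]
      m = max(scaled)  # subtract the max for numerical stability
      exps = [math.exp(s - m) for s in scaled]
      total = sum(exps)
      probs = [e / total for e in exps]
      # sample from the categorical distribution
      r = rng.random()
      cum = 0.0
      for i, p in enumerate(probs):
          cum += p
          if r < cum:
              return i
      return len(probs) - 1

  # Toy example: at a very low temperature the highest logit
  # (index 1 here) is chosen almost every time; at a high
  # temperature the other tokens get sampled too.
  print(sample_with_temperature([1.0, 3.0, 2.0], 0.01))
  ```

  This is why two chats with the same prompt can diverge: each token is a fresh draw, and an early unlucky draw can steer the whole response down a different path.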