Comment by a_wild_dandan

2 years ago

> What’s the analog for LLM context windows?

“Time to think.” The unit of time for an LLM is tokens rather than seconds. Each token is another pass through the model: another chance to weigh concepts and decide what to do next. This is why “think step-by-step” works so well: you’re giving the model significantly more “time” to think, and it stores its game plan in the context to execute later. Demanding an answer right now is like screaming a question at a sleeping person and taking whatever the poor soul first blurts out from their surprised, reactionary stupor.
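
To make the “tokens as compute” point concrete, here’s a minimal sketch of an autoregressive decoding loop (names like `model` are illustrative stand-ins, not any real library’s API). Every generated token triggers another full pass over the growing context, so a longer chain of thought literally buys the model more computation, and the intermediate “thoughts” stay in the context for later steps to build on:

```python
def model(tokens: list[str]) -> str:
    """Stand-in for a real LLM forward pass; returns the next token."""
    return "..."  # a real model would score the whole vocabulary here

def generate(prompt: list[str], max_new_tokens: int) -> list[str]:
    context = list(prompt)
    for _ in range(max_new_tokens):
        next_token = model(context)  # one forward pass per generated token
        context.append(next_token)   # the "thought" persists in the context
    return context
```

Asking for the answer immediately forces everything into the first few iterations of that loop; “think step-by-step” spends many more iterations before committing to an answer.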