Comment by Tycho

2 years ago

I like thinking about analogs between computers and brains. For instance, working memory as RAM, or déjà vu as orphaned linked lists.

What’s the analog for LLM context windows?

Maybe consciousness is essentially a context window, and dreaming during sleep is how we compress the day's knowledge to free up context space, or something like that.

> What’s the analog for LLM context windows?

“Time to think.” An LLM's unit of time is the token, not the second: each generated token is one more forward pass in which the model can weigh concepts and decide what to do next. This is why “think step-by-step” works so well: you're giving the model significantly more “time” to think, and it can lay out a game plan to execute later instead of being forced to answer right now. Demanding an immediate answer is like screaming a question at a sleeping person and taking whatever the poor soul blurts out in a surprised, reactionary stupor.
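The token-as-time point can be made concrete with a toy sketch of autoregressive decoding (the `decode` helper and the dummy `forward` function here are made up for illustration, not any real library's API): every output token triggers another full pass over the growing context, so a model prompted to think step-by-step literally buys itself more computation.

```python
def decode(prompt_tokens, forward, max_new_tokens):
    """Toy autoregressive loop: one forward pass per generated token."""
    tokens = list(prompt_tokens)
    passes = 0
    for _ in range(max_new_tokens):
        next_tok = forward(tokens)  # one full pass over the whole context
        passes += 1
        tokens.append(next_tok)    # the new token becomes part of the context
    return tokens, passes

# Dummy "model" that just predicts the next integer.
toks, n = decode([1, 2, 3], lambda ts: ts[-1] + 1, max_new_tokens=4)
# A longer answer (bigger max_new_tokens) means more passes, i.e. more "time".
```

Note that each pass also rereads everything generated so far, which is the sense in which the model can “store its game plan” in earlier tokens and execute it in later ones.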