Comment by bavell
4 hours ago
> I would also expect to see it taking exponentially longer to process a prompt. I don't believe LLMs work like that.
Try this out using a local LLM. You'll see that as the conversation grows, your prompts take longer to process. It's not exponential, but it is significant: each new token attends to every previous token, so per-token cost grows linearly with context length, and reprocessing a full prompt from scratch grows roughly quadratically. This is in fact how all autoregressive LLMs work.
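A back-of-envelope cost model makes the shape of the growth concrete (this is an illustrative sketch of attention interaction counts, not a measurement of any particular model; the function names are made up for the example):

```python
# Illustrative cost model for autoregressive attention.
# Counts pairwise attention interactions, ignoring constants and hardware.

def prefill_ops(n_tokens: int) -> int:
    """Interactions to process an n-token prompt from scratch:
    token i attends to tokens 0..i, so the total is ~n^2 / 2."""
    return sum(i + 1 for i in range(n_tokens))

def next_token_ops(context_len: int) -> int:
    """With a KV cache, generating one new token still attends to the
    entire context, so per-token cost grows linearly with history."""
    return context_len + 1

for n in (1_000, 2_000, 4_000):
    print(f"context={n}: prefill={prefill_ops(n)}, per-token={next_token_ops(n)}")
```

Doubling the context roughly quadruples the prefill work and doubles the per-token work, which matches what you observe locally: noticeably slower as the chat grows, but polynomial, not exponential.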