They do, but that doesn't mean entire texts remain in their context. Increasingly they use agentic reading: spawning a sub-agent to read a long text and report a condensed version back to the parent LLM, which creates an opportunity for information loss.
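As a toy sketch of that loss (not any real agent framework; the function names and the condensation rule are invented for illustration), the parent only ever sees the sub-agent's summary, so anything the summary drops is simply gone from the parent's context:

```python
def sub_agent_read(document: str, max_sentences: int = 2) -> str:
    """Hypothetical sub-agent: condenses a document to its first few sentences."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def parent_context(document: str) -> str:
    # The parent never sees `document` itself, only the condensation.
    return sub_agent_read(document)

doc = ("The release notes cover three changes. The API was renamed. "
       "A subtle cache bug was fixed in the fourth paragraph.")
ctx = parent_context(doc)
# The cache-bug detail never reaches the parent's context:
assert "cache bug" not in ctx
assert "release notes" in ctx
```

Whether the dropped detail matters depends entirely on how well the sub-agent's condensation matches what the parent needed.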
It doesn't mean they're paying attention.
Only because they are architecturally unable to not read something.
Well, the LLMs architecturally have to read everything they see. The agents attached to LLMs can choose what to look at.
until one day