hiccuphippo 10 hours ago
Can LLMs compress those documents into smaller files that still retain the full context?

thellimist 10 hours ago
What do you mean?

hiccuphippo 9 hours ago
The article says the LLM has to load 15540 tokens every time. I wonder whether that can be reduced while retaining the context, maybe through deduplication, removing superfluous words, using shorter expressions with the same meaning, or things like that.
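For a rough sense of how much that kind of trimming buys, here is a minimal sketch in Python using the tiktoken tokenizer. The file name and the cleanup steps (collapsing whitespace, dropping blank and exact-duplicate lines) are illustrative assumptions, not anything from the article, and they say nothing about semantic compression like rewording.

    # Compare token counts before and after a naive, mostly lossless cleanup
    # of a context file. "claude.md" is a hypothetical file name.
    import re
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    def count_tokens(text: str) -> int:
        return len(enc.encode(text))

    def naive_compress(text: str) -> str:
        seen = set()
        kept = []
        for line in text.splitlines():
            line = re.sub(r"\s+", " ", line).strip()  # collapse runs of whitespace
            if not line or line in seen:              # drop blanks and exact duplicates
                continue
            seen.add(line)
            kept.append(line)
        return "\n".join(kept)

    if __name__ == "__main__":
        original = open("claude.md", encoding="utf-8").read()
        compressed = naive_compress(original)
        print("before:", count_tokens(original), "tokens")
        print("after: ", count_tokens(compressed), "tokens")

Mechanical cleanup like this usually shaves only a modest fraction of the tokens; bigger wins would need actual rewriting or summarization, which risks losing context.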