Comment by WhitneyLand
14 days ago
>>When an LLM processes a document image, it first embeds it into a high-dimensional vector space through the attention mechanism…
This is a confusing way to describe attention, and it gets a bit off topic: the attention mechanism is not really what's causing any of the issues in the article. The embedding into a vector space happens before attention, not through it.
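To make the distinction concrete, here is a minimal NumPy sketch of a ViT-style pipeline (all dimensions and weights are made-up toy values, not from the article): the image is first cut into patches and embedded by a plain linear projection; self-attention only runs afterwards, mixing tokens that are already vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a 32x32 "document image" split into 8x8 patches,
# each projected to a 16-dim embedding (all sizes are illustrative).
patch, dim = 8, 16
img = rng.standard_normal((32, 32))

# Step 1: patchify and embed via a linear projection.
# This is where pixels become vectors -- no attention involved yet.
patches = img.reshape(4, patch, 4, patch).transpose(0, 2, 1, 3).reshape(16, patch * patch)
W_embed = rng.standard_normal((patch * patch, dim)) / np.sqrt(patch * patch)
tokens = patches @ W_embed            # shape: (16 tokens, 16 dims)

# Step 2: self-attention then operates on the already-embedded tokens,
# re-weighting them against each other.
Wq, Wk, Wv = (rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(3))
q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
scores = q @ k.T / np.sqrt(dim)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # softmax over tokens
out = attn @ v                             # attention output, same shape as tokens
```

The point of the sketch is just the ordering: the projection in step 1 produces the high-dimensional vectors, and attention in step 2 consumes them.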