
Comment by wrsh07

1 year ago

Ok so the paper presents a central metaphor for reasoning about LLMs.

The metaphor: a book containing all possible conversations/human writings, and a [good] LLM finds the spot in the book that exactly matches the context and reads off what follows as its response.
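A toy sketch of the metaphor (everything here is hypothetical illustration, not how an LLM actually works): treat a corpus string as the "book", locate the context verbatim, and read off whatever comes next.

```python
# Toy illustration of the "book" metaphor: the "book" is a corpus of text,
# and "generation" is locating the context verbatim and reading what follows.
# A real LLM interpolates between passages instead of doing exact lookup.

BOOK = (
    "Q: What is the capital of France? A: Paris. "
    "Q: What is two plus two? A: Four. "
)

def read_from_book(context: str, n_chars: int = 6) -> str:
    """Find the context in the book and return the next n_chars characters."""
    i = BOOK.find(context)
    if i == -1:
        return ""  # the metaphor breaks down here; an LLM would still answer
    start = i + len(context)
    return BOOK[start:start + n_chars]

print(read_from_book("What is two plus two? A: "))  # → "Four. "
```

The interesting part is the failure case: exact lookup returns nothing for unseen contexts, which is exactly where the metaphor stops explaining what real models do.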

Certainly, if you've experimented with a base model that hasn't been fine-tuned (e.g. via RLHF), this metaphor will be resonant.

Is it useful?

(How does it help me understand LLMs with different capabilities? How does it help me understand models with different fine tunings?)