Comment by grey-area
1 month ago
I wonder if the style shift has anything to do with training for conversation (i.e. tuning models to respond well in a chat situation)?
Probably. One common trait of LLM output is a cluster of grammatical features associated with high information density: nominalizations, longer words, participial clauses, and so on. Perhaps training tasks that ask LLMs for concise explanations or summaries encourage these features as a way to pack more content into each answer.
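For a rough sense of what "density features" might look like in practice, here is an illustrative sketch that computes two naive proxies: average word length and the share of nominalizations, detected crudely by common derivational suffixes. The suffix list and thresholds are assumptions for illustration, not an established linguistic metric.

```python
# Naive proxies for information density in a text: average word length
# and a crude nominalization rate based on derivational suffixes.
# These heuristics are illustrative assumptions, not a validated metric.
import re

NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ness", "ity", "ance", "ence")

def density_features(text):
    words = re.findall(r"[a-zA-Z]+", text.lower())
    if not words:
        return {"avg_word_len": 0.0, "nominalization_rate": 0.0}
    avg_len = sum(len(w) for w in words) / len(words)
    # Count words that look like nominalizations (e.g. "utilization",
    # "density"); the length cutoff avoids short false positives.
    nominals = sum(1 for w in words
                   if len(w) > 5 and w.endswith(NOMINAL_SUFFIXES))
    return {"avg_word_len": avg_len,
            "nominalization_rate": nominals / len(words)}

chatty = "So yeah, I think it does that because it was trained that way."
dense = "The utilization of nominalization increases informational density."
print(density_features(chatty))
print(density_features(dense))
```

On these two example sentences the denser, more "written" style scores higher on both proxies, which is the kind of shift the comment is speculating about.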