Comment by thorum

1 year ago

Reminds me of the article “Language Models Model Us”:

> “On a dataset of human-written essays, we find that gpt-3.5-turbo can accurately infer demographic information about the authors from just the essay text, and suspect it's inferring much more.

> Every time we sit down in front of an LLM like GPT-4, it starts with a blank slate. It knows nothing about who we are, other than what it knows about users in general. But with every word we type, we reveal more about ourselves -- our beliefs, our personality, our education level, even our gender. Just how clearly does the model see us by the end of the conversation, and why should that worry us?”

https://www.lesswrong.com/posts/dLg7CyeTE4pqbbcnp/language-m...