Comment by co_king_5

8 days ago

> I think AI writing is better used for ideation

It shocks me when proponents of AI writing for ideation aren't concerned with *Metaphoric Cleansing* and *Lexical Flattening* (to use two of the terms defined in the article).

Doesn't it concern you that the AI's explanation of a concept may be only a highly distorted caricature of how that concept is actually understood by those who use it fluently?

Don't get me wrong, I think that LLMs are very useful as a sort of search engine for yet-unknown terms. But once you know *how* to talk about a concept (meaning you understand enough jargon to do traditional research), I find I'm far better off tracking down books and human-authored resources than trying to get the LLM to regurgitate its training data.

It's a concern, but it definitely depends on the context and my self-perceived blast radius of what I'm researching. I often find myself re-searching results to see what a more authoritative source says, and I still read source books on topics I'm diving deep into. I have definitely seen current LLMs get my domain wrong.