Comment by friendzis

2 days ago

This little experiment of yours highlights the issue at hand quite well. In every language there is a thing called "voice": academic, formal, informal, intimate, etc. The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.

To continue the experiment, I fed the paragraph above to Gemini with this prompt: "Fix grammar and wording issues in the following paragraphs, if needed reword to fit with and be well received in the hacker news community."

This experiment highlights the core issue. Every language has its own voice—academic, formal, informal, or intimate. Your rewritten paragraph leans into the notorious "LLM voice": it’s less direct, feels slightly pandering, and strips away the hooks that usually spark further discussion.

> The rewritten paragraph sounds written in the notorious "LLM voice". It's less direct, more pandering and removes injection points for further discussion.

Does it? I don't see it. If anything, it is more direct and clear, not less, i.e. "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" instead of the more convoluted "to search for a way to formulate my thoughts like I intend them to be received by the reader". How is it pandering? And how exactly does it remove "injection points"?

It basically chose more precise words where that was possible, resulting in a net improvement, AFAICS.

  • The task of helping to find wording that conveys your thoughts could cover several methods. It could mean you run one-shot reword prompts and those help you find wording. Or it could mean you're adopting its output more substantially. Or you're going back and forth, with the LLM suggesting and you suggesting too. It's incredibly vague what portion of the "helping" the LLM is doing!

    Whereas "search" implies (to me) a kind of direct and analytical process of listing and throwing out brainstormed suggestions, like you would with a search engine.

    When I read the human version I actually get a sense of what that process looks like, and the LLM response definitely clouds or changes it by focusing on the result instead.

    • Absolutely. That was exactly how I meant it! Indeed, that meaning was a bit lost in the LLM version.