
Comment by yunohn

1 day ago

I have reached a point where any AI smell (of which this article has many) makes me want to exit immediately. It feels torturous to my reading sensibilities.

I blame fixed AI system prompts - they forcibly collapse all inputs into the same output space. Truly disappointing that OpenAI et al. have no desire to change this before everything on the internet sounds the same forever.

You're probably right about the latter point, but I do wonder how hard it'd be to mask the default "marketing copywriter" tone of the LLM by asking it to assume some other tone in your prompt.

As you said, reading this stuff is taxing. What's more, this is a daily occurrence by now. If there's a silver lining, it's that the LLM smells are so obvious at the moment; I can close the tab as soon as I notice one.

  • > do wonder how hard it'd be to mask the default "marketing copywriter" tone of the LLM by asking it to assume some other tone in your prompt.

    Fairly easy, in my wife's experience. She repeatedly got accused of using ChatGPT in her original writing (she's not a native English speaker, and was taught to use many of the same idioms that LLMs use) until she started actually using ChatGPT with about two pages of instructions for tone to "humanize" her writing. The irony is staggering.

  • It’s pretty easy. I’ve written a fairly detailed guide to help Claude write in my tone of voice. It also coaxes it to avoid the obvious AI tells such as ‘It’s not X it’s Y’ sentences, American English and overuse of emojis and em dashes.

    It’s really useful for taking my first drafts and cleaning them up ready for a final polish.
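
    For the curious, a minimal sketch of the shape of the thing via Anthropic's Python SDK. The guide text and model name below are illustrative placeholders, not my actual guide:

        import anthropic  # pip install anthropic

        # Illustrative stand-in for the real tone guide, which runs to a
        # couple of pages of concrete do/don't rules and examples.
        TONE_GUIDE = """
        Rewrite the user's draft in their own voice. Use British English.
        No emojis. No em dashes. Never use the 'It's not X, it's Y'
        construction. Keep the draft's structure and word choices where
        possible; fix only what is awkward or unclear.
        """

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; any recent Claude model
            max_tokens=2000,
            system=TONE_GUIDE,                 # the guide does all the work
            messages=[{"role": "user", "content": "Clean up this draft: ..."}],
        )
        print(response.content[0].text)

    The plumbing is trivial; all the value is in the guide itself.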

    • https://ember.dev ’s deeper pages (not the blog, but the “resumelike” project pages) were written by Claude with guidance and a substantial corpus of my own writing, and I still couldn’t squash out all the GPTisms in the generation passes. Probably a net waste of time, for me, for writing.

  • It’s definitely partially solved by extensive custom prompting, as evidenced by sibling comments. But that’s a lot of effort for normal users and not a panacea either. I’d rather AI companies introduce noise/randomness themselves to solve this at scale.

    • I don’t think that’s a solution.

      The problem isn’t the surface tics—em dashes, short exclamatory sentences, lists of three, “Not X: Y!”.

      Those are symptoms of the deep, statistically built tissue of LLM “understanding” of “how to write a technical blog post”.

      If you randomize the surface choices, you’re effectively running into the same problem Data did on Star Trek: The Next Generation when he tried to get the computer to give him a novel Sherlock Holmes mystery on the holodeck. The computer created a nonsense mishmash of characters, scenes, and plot points from the stories in its data bank.

      Good writing uses a common box of metaphorical & rhetorical tools in novel ways to communicate novel ideas. By design, LLMs are trying to avoid true (unpredictable) novelty! Thus they’ll inevitably use these tools to do the reverse of what an author should be attempting.