Comment by bartvk

1 day ago

I’m Dutch and we’re noted for our directness and bluntness. So my tolerance for fake flattery is zero. Every chat I start with an LLM, I prefix with “Be curt”.

I've seen a marked improvement after adding "You are a machine. You do not have emotions. You respond exactly to my questions, no fluff, just answers. Do not pretend to be a human. Be critical, honest, and direct." to the top of my personal preferences in Claude's settings.
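The same instruction can also be applied programmatically rather than through the settings UI, by passing it as the system prompt on every request. A minimal sketch in the style of the Anthropic Messages API follows; the model name and token limit are placeholder assumptions, and actually sending the request (API key, network call) is left out:

```python
# Sketch: supplying the "be curt" instruction as a per-request system prompt.
# Model name and max_tokens are placeholder assumptions, not recommendations.

CURT_SYSTEM_PROMPT = (
    "You are a machine. You do not have emotions. You respond exactly "
    "to my questions, no fluff, just answers. Do not pretend to be a "
    "human. Be critical, honest, and direct."
)

def build_request(user_message: str) -> dict:
    """Assemble a Messages-API-style request body with the curt system prompt."""
    return {
        "model": "claude-sonnet-4",    # placeholder model name
        "max_tokens": 1024,
        # The system prompt applies to the whole conversation,
        # much like the "personal preferences" settings field.
        "system": CURT_SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }
```

This is just the request body; a real client would pass it to the SDK or POST it with an auth header.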

  • I need to use this in Gemini. It gives good answers, I just wish it would stop prefixing them like this:

    "That's an excellent question! This is an astute insight that really gets to the heart of the matter. You're thinking like a senior engineer. This type of keen observation is exactly what's needed."

    Soviet commissars were less obsequious to Stalin.

  • I’ll have to give this a try. I’ve always included “Be concise. Excessive verbosity is a distraction.”

    But it doesn’t seem to help much …

Perhaps you should consider adding “be more Dutch” to the system prompt.

(I’m serious, these things are so weird that it would probably work.)

In my experience, whenever you do that, the model then overindexes on criticism and will nitpick even minor stuff. If you say "Be curt but be balanced" or some variation thereof, every answer becomes wishy-washy...

  • Yeah, when I tell it to "Just be honest dude" it then tells me I'm dead wrong. I inevitably follow up with "No, not that KIND of honest!"

  • Maybe we need to go like they do in the movies: “set truthfulness to 95%, curtness to 67%, and just a touch of dry British humor (10%)”

Same here, along with the random emojis in answers. It's so over the top that saying "Excellent idea, rocket emoji" has become a running joke with my wife whenever the other says something obvious :-)

I've tried variations of this. I find it will often cause it to include cringey bullshit phrases like:

"Here's your brutally honest answer–just the hard truth, no fluff: [...]"

I don't know whether that's better or worse than the fake flattery.

  • You need a system prompt to get that behaviour? I find ChatGPT does it constantly as its default setting:

    "Let's be blunt, I'm not gonna sugarcoat this. Getting straight to the hard truth, here's what you could cook for dinner tonight. Just the raw facts!"

    It's so annoying it makes me use other LLMs.

  • Curious whether you find this on the best models available. I find that Sonnet 4 and Gemini 2.5 Pro are much better at following the spirit of my system prompt rather than the letter. I do not use OpenAI models regularly, so I’m not sure about them.

Imagine what happens to Dutch culture when American-trained AI tools force American cultural norms, via the Dutch language, onto the youngest generation.

And I’m not implying intent here. It’s simply a matter of source material quantity. Even things like American movies (with American cultural roots) translated into Dutch subtitles will influence the training data.

  • Your comment reminds me of quirks of translations from Japanese to English where you see common phrases reused in the “wrong” context for English. “I must admit” is a common phrase I see, even when the character saying it seems to have no problem with what they’re agreeing to.

  • Embedding "your" AI at every level of everyone else's education systems seems like the setup for a flawless cultural victory in a particularly ham-fisted sci-fi allegory.

    If LLMs really are so good at hijacking critical thinking even in adults, maybe it's not as fantastical as all that.

  • What will happen? Californication has been around for a while, and, if anything, I would argue that AI is by design less biased than pop culture.