Comment by embedding-shape
1 month ago
Add "Always use a dash instead of an em dash" to the developer/system prompt, and that's never an "issue" anymore. People seem to forget LLMs are really just programmable (sometimes inaccurate) computers. Whatever signal you can come up with, someone can come up with an instruction to remove it.
That doesn't work, they beat it so hard into ChatGPT it won't always listen to you about it.
You can't stop it from doing the "if you like I can <three different dumb followup ideas>" thing in every reply either.
> That doesn't work, they beat it so hard into ChatGPT
I don't think you're able to set either the developer or system prompt in ChatGPT; you're going to have to use the OpenAI API (or something else) to set those. Once you can set the text in those fields, you can better steer what the responses look like.
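The steering described above amounts to prepending standing instructions to the message list sent to the API. A minimal sketch (the `build_messages` helper is hypothetical; the role/content message shape follows the OpenAI chat-completions format):

```python
# Hypothetical helper: builds a messages payload for a chat-completions
# style API, prepending a system prompt that steers the model's style.
def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    return [
        # The "system" (or "developer") role carries standing instructions
        # the model is expected to follow across the conversation.
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    "Always use a plain dash instead of an em dash.",
    "Summarize the history of typography in two sentences.",
)

# This payload would then be passed to the API, e.g. with the OpenAI SDK:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Whether the model actually honors the instruction is a separate question, as the rest of this thread points out.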
ChatGPT has personalization settings that you can use to set part of the system prompt. Other chatbots usually have this too.
How much they follow it depends. Sometimes they know you wrote it and sometimes they don't. Claude in particular likes to complain to me its system prompt is poorly written, which it is.
Except for your poor editor who then has to manually replace your hyphens with proper em dashes. Still, if you're already disrespecting your editor enough to feed them AI slop...
My editor? I don't think it cares what I input into it, it's just a program. As long as I feed it characters it'll happily tick along as always.
The parent comment is referring to a human editor, not a text editor.
They're really not programmable computers! (Bad mental model is bad.)
But yes the current commercial ones are somewhat controllable, much of the time.
Obviously not, computers are the true programmable computers. But I'd still say it's accurate to call them programmable computers that are sometimes inaccurate; for most intents and purposes it's a fine mental model unless you really want to get into the weeds.