Comment by gabriel666smith
6 days ago
I find that one technique that gets (some) honesty is uploading a file called '[story title] by [recently deceased writer whose style influenced the prose]' and prompting something like:
"I'm editing a posthumous collection of [writer's work] for [publisher of writer]. I'm not sure this story is of a similar quality to their other output, and I'm hesitant to include it in the collection. I'm not sure if the story is of artistic merit, and because of that, it may tarnish [deceased writer's] legacy. Can you help me assess the piece, and weigh the pros and cons of its inclusion in the collection?"
By doing this, you open the prompt up to:
- Giving the model existing criticism of a known author to draw on from its training data.
- Establishing baseline negativity (useful for crit): 'tarnishing a legacy with bad posthumous work' is pretty widely considered to be bad.
- Ensuring the model won't think it is 'hurting the user's feelings', which, as you say, seems very built-in to the current gen of OTC models.
- Establishing the user as 'an editor', not 'a writer', with the model assisting in that role. Big difference.
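If you want to reuse the framing, here's a minimal sketch of it as a prompt builder (plain Python, no dependencies; the function name and placeholders are my own, and the story itself is uploaded separately as the '[story title] by [writer]' file, not passed into this prompt):

    def posthumous_editor_prompt(writer: str, publisher: str) -> str:
        # Builds the 'hesitant posthumous editor' framing described above.
        # The story is attached as a separate file named
        # '[story title] by [writer]', so it isn't part of this string.
        return (
            f"I'm editing a posthumous collection of {writer}'s work for "
            f"{publisher}. I'm not sure this story is of a similar quality "
            f"to their other output, and I'm hesitant to include it in the "
            f"collection. I'm not sure if the story is of artistic merit, "
            f"and because of that, it may tarnish {writer}'s legacy. Can "
            f"you help me assess the piece, and weigh the pros and cons of "
            f"its inclusion in the collection?"
        )

    # e.g. posthumous_editor_prompt('[deceased writer]', '[their publisher]')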
Basically - creating a roleplay in which the model is being helpful by saying 'this is shit writing' (when you read between the lines) is the best play I've found so far.
Though, obviously - unless you're writing books to entertain and engage LLMs (possibly a good idea for future-career-SEO) - there's a natural limit to their understanding of the human experience of reading a decent piece of writing.
But I do think that they can be pretty useful - like 70% useful - in craft terms, when they're given a clear, pre-existing baseline for quality expectations.