Comment by BoorishBears
2 days ago
I suspect (but don't know) that this had to be edited somewhat heavily or generated in isolated chunks. I've generated a lot of fiction with Claude, and it has a chronic issue: once any literary device one might associate with good writing appears in the context window, it starts overusing it.
I think if you left it to its own devices, some of the narrative exposition stuff that humanized it would go off the rails
Yeah, there's a lot more work and personal touch that went into this (and the previous piece) than just "write prompt -> copy/paste into substack".
It's really interesting to hear about others who have been exploring generating fiction with Claude. Based on some of the comments, I clearly need to do some more work, but it has been really interesting discovering and coming up with different techniques, both LLM-assisted and manual, to end up with something I felt confident enough about to put out.
I'd be curious to hear more about your experience!
I run a product that generates interactive fiction (for search engine reasons I don't mention it in my comments, but there's a link to an April Fool's landing page in my post history where you can try it)
Because it's productized I need to "one-shot" the output, so I focus a lot on post-training models these days, but I've also used tricks like running wordfreq to find recently overused words and feeding the list back to the model as words that cannot be used in the next generation.
Models couldn't always follow instructions like that (the pink elephant problem), but recently they've been getting better at it.
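The trick described above can be sketched roughly like this: tally word frequencies across recent generations, flag the most-repeated content words, and fold them into the next prompt as a ban list. This is a minimal stdlib-only sketch; the commenter uses the wordfreq package to compare against baseline English frequencies, whereas this version just counts raw occurrences, and the stopword list, thresholds, and function names are all illustrative assumptions.

```python
import re
from collections import Counter

# Tiny illustrative stopword list so common function words never get banned.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "it",
             "was", "she", "he", "her", "his", "like", "over"}

def overused_words(recent_texts, min_count=3, top_n=5):
    """Return up to top_n content words repeated at least min_count times
    across the recent generations."""
    counts = Counter()
    for text in recent_texts:
        counts.update(w for w in re.findall(r"[a-z']+", text.lower())
                      if w not in STOPWORDS and len(w) > 3)
    return [w for w, c in counts.most_common(top_n) if c >= min_count]

def ban_list_prompt(words):
    """Format the overused words as a negative instruction for the next prompt."""
    return ("Do not use any of these words in the next passage: "
            + ", ".join(words))

# Example: two recent generations leaning hard on the same imagery.
texts = [
    "The tapestry of shimmering light hung over the tapestry of stars.",
    "A shimmering thread wove through the tapestry, shimmering like glass.",
]
print(ban_list_prompt(overused_words(texts)))
```

In practice the ban list would be prepended to (or appended after) the system prompt for the next generation, and refreshed as the overused vocabulary shifts.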
Yeah, there's often a heavy instruction and recency bias that just squeezes all of the nuance and subtlety out of it.