
Comment by vessenes

9 days ago

Meh. Semantic ablation, but toward a directed goal. If I say, "How would Hemingway have said this, provided he had the same mindset he did post-war while writing for Collier's?"

Then the model will look for clusters that don't fit what it considers to be Hemingway/Collier's/post-war and suggest edits in that fashion.

"edit this" -> blah

"imagine Tom Wolfe took a bunch of cocaine and was getting paid by the word to publish this after his first night with Aline Bernstein" -> probably less blah

These kinds of prompts don't really improve the writing IME. The output still gets riddled with the same tropes and phrases, or it veers off into textual vomit.

  • FWIW, I agree. Frontier LLMs are on their way to becoming competent stylists (I ask every major model release to write a sample essay as Hemingway, and they are improving), but the imitations are often skin-deep.