Comment by keeda
1 day ago
I feel like your expectations have been swayed by the average sentiment of HN on the capabilities of LLMs. These things can be shockingly good at humour and satire.
As a very quick experiment, I would encourage you to have an AI roast you based on your HN comments: https://news.ycombinator.com/item?id=42857604
Mine: "You write like you’re trying to hit a word count on a philosophy undergraduate essay, but you’re posting in a Y Combinator comment section... You sound like a Victorian ghost haunting a server room, lamenting the loss of the card catalog."
And
"Go compile your kernel, Matt. Maybe if you stare at the build logs long enough, you won't have to face the fact that you're just as much of a "Lego builder" as the rest of us—you just use more syllables to describe the bricks."
Both are pretty good!
That is good, and I feel like the first part of the roast could work for me as well.
Mine gave me a brutal double-roast:
"You were one of only two people in 2017 to post a story about Mastodon and gave it a single point. You essentially predicted the platform’s entire future relevance in one brutally honest data point."
OMG, no, thank you, I'm not sure I'm ready for this -- I once took several LLMs for a ride through my whole Reddit posting history (it dug into some interesting archives), and some of the insights were shockingly accurate and/or uncomfortable (could be accidental).
Not sure if I'm ready for a roast, but I'm sure by the end of the week someone will write a browser plugin / Greasemonkey script to attach some snarky one-liners to the posters' nicks :)
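For what it's worth, the skeleton of that script is tiny. A minimal sketch in TypeScript -- the a.hnuser selector is an assumption about HN's current markup, and the canned one-liners are placeholders standing in for whatever LLM call a real version would make:

    // ==UserScript==
    // @name     HN Snark
    // @match    https://news.ycombinator.com/*
    // ==/UserScript==

    // Placeholder roasts; a real version would ask an LLM,
    // feeding it the user's comment history.
    const cannedRoasts: string[] = [
      "writes like a Victorian ghost haunting a server room",
      "still trying to hit a word count on a philosophy essay",
      "uses more syllables to describe the bricks",
    ];

    function roastFor(username: string): string {
      // Cheap string hash so each nick keeps the same roast across reloads.
      let hash = 0;
      for (const ch of username) hash = (hash * 31 + ch.charCodeAt(0)) | 0;
      return cannedRoasts[Math.abs(hash) % cannedRoasts.length];
    }

    // Assumption: HN marks comment author links with the "hnuser" class;
    // adjust the selector if the markup changes.
    document.querySelectorAll<HTMLAnchorElement>("a.hnuser").forEach((link) => {
      const snark = document.createElement("span");
      snark.textContent = ` (${roastFor(link.textContent ?? "")})`;
      snark.style.color = "#828282";
      snark.style.fontStyle = "italic";
      link.after(snark);
    });

Compile to JS and load it through Greasemonkey or Tampermonkey; the little hash just keeps each nick's roast stable across page loads.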
The issue is that nothing in his prompt asked the LLM to be satirical, so it sounds like he fed some tone and ideas to it.
Also, the recently discussed HN Simulator: https://news.ycombinator.com/item?id=46036908
It’s more that the prompt didn’t ask for humor or satire, not that I expect it to be unable to do this with a different prompt.
It didn't have to, not explicitly. The tone and the context already hint at that - if you saw someone creating a fake cover of an existing periodical, dated 10 years into the future, you'd likely assume it's part of some joke or a commentary related to said periodical, and not a serious attempt at predicting the future. And so would an LLM.
People keep forgetting (or worse, still disbelieving) that LLMs can "read between the lines" and infer intent with good accuracy - because that's exactly what they're trained to do[0].
Also there's prior art for time-displaced HN, and it's universally been satire.
--
[0] - The goal function for LLM output is basically "feels right, makes sense in context to humans" - in the fully general meaning of that statement.
It’s quite hilarious and accurate, although it’s weirdly only making fun of stuff I said very recently; I assume it has a small context window or is only seeing the last few weeks of my comments.
This roast of my comments is good and not wrong:
Your comment threads will increasingly devolve into an 'Amish vs. Fascists' philosophical cage match, with you refereeing, all while simultaneously explaining the intricacies of truck towing capacity to an audience that drives Teslas.
Amazing! 100% accurate roast for me.
haha, that's pretty hilarious :) score one for the LLMs.