Comment by throwaway2037
2 days ago
I also had no idea this was LLM generated. After reading your comment, I had a similar emotional reaction.
Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.
My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.
> preface explaining it was written by an LLM
why can't the quality of the works stand on its own? Whether there's LLM generation or not should be irrelevant.
because we typically want to know the writer of a piece. we want to know where to lay credit.
every book you buy has an author credited. articles in newspapers and magazines have photographer and author attributions.
asking an ai to write you a story does not make you an author. if you ask someone to take a photo for you, you don’t magically get to say “look at this photograph, i’m a photographer.” if you ask someone to bake you a wedding cake, and then claim you baked it, you’re a fraud.
we deserve to know the actual writer.
> want to know the writer of a piece
but you dodged the question i asked - why can't a piece stand on its contents, rather than its pedigree?
Would you care if a writer used a pen name? Does that in any way diminish their works? What about the unknown editors that contributed?
11 replies →
I’ve said this many times before
AI is just a tool
If you used a fancy auto bake cake machine instead of an oven, you still get to claim that you made the cake.
100 years ago someone would be making the claim that using an oven to make cakes “doesn’t count”
All AI did was raise the bar
It’s quite clear here that the author spent a lot of time on this so he absolutely gets credit as the author
15 replies →
Largely, I agree with you. One famous counterpoint about labeling works of arts with the author: The Economist (the magazine) does not add the author to most of their articles.
> because we typically want to know the writer of a piece. we want to know where to lay credit.
Does the average person really care all the time? Maybe about the outlet it comes from as a whole (factuality, political lean), but more rarely the exact author. Many don’t even have the critical skills for any of it and consume whatever content is chosen for them by whatever algorithm is there. We probably should care, I just don’t think a lot of us do.
For me, needing to know that something’s written by AI serves three purposes:
1) acknowledging that it might be slop that someone threw together with no effort (important in regards to spam)
2) acknowledging that depending on the model the factuality might be low when it comes to anything niche (though people are wrong too, often enough)
3) mentally preparing myself for AI bullshit slop language, like “It’s not X, it’s Y.”, or choosing not to engage with it at all (it's the same disgust reaction as when I find a PDF and realize it's just scanned images, not proper text)
In general, unless the goal is either human interaction or a somewhat rare case of wanting to read a specific blog etc., most of the time I don’t categorically care whether something was lovingly created by a human or shoved out by a half baked version of Skynet - only that it’s good enough for whatever metrics I want to evaluate it by. I’m not ashamed of it and maybe that’s why I don’t take an issue with AI generated code either, as long as it’s good enough (sometimes better than what people write, other times quite shit when the models and harnesses are bad).
1 reply →
can't reply to your comment below so i will comment here
> why does it bother you to give attribution? why do you think crediting the writer impacts how the piece stands?
clearly it does to you?
thing is, this is a fool's errand to try to police what people credit when there is zero capability of verification and enforcement
the current social norms still value authorship, so people will just take or omit credit as they see most advantageous, even if it's merely an ego advantage (which it typically is) or a proxy for brand building
what will happen if/when the currency of attribution is completely altered? hard to predict
my prediction is that track record will be considerably more important, not less, but human merit will be increasingly seen as irrelevant
Because 'quality' is a misnomer. LLM writing has quality in the same way that a press release from a big company has quality, or a professional contract written by a lawyer has quality. It is functional, generally typo-free and conforms to most standards but that doesn't mean it has flavor or spice to it.
Creative writing is the intent to convey feelings, thoughts, to create atmosphere. Here's a great example of the failure to do so here, in a way that even most terrible writers would avoid.
> “It just said harvest,” she told Tom. She was sitting in one of the plastic chairs, holding a cup of the adequate coffee.
The coffee in this story is conveyed as being 'perfectly adequate'. But how do you convey adequacy? When you simply say 'the coffee is adequate', there's nothing there. It could be conveyed by establishing that the coffee is always perfectly room temperature, or has the mere hint of bitterness and sweetness, or tastes like every other brand out there. In many respects this story is the exact same as the 'perfectly adequate' coffee: functional, unexciting and ultimately flavorless.
Well-put.
This "flavorlessness" is all over the story, and, paired with the obviously genAI images, is how I realized as I read that this was either generated or at least deeply driven by AI.
It constantly described facial expressions, tones of voice, and other emotional cues in generic, dry terms that communicated nothing but the abstract notion of "this person felt a particular way about what happened and it's up to you, the reader, to imagine what that feeling was."
It felt very much like it was prompted to "show, don't tell," by someone who has no idea what that phrase actually means.
As a professional programmer with a deep background in literature and music, this is yet another example that if you aren't an expert in a field, you will get mediocre results at best from an LLM, while being deceived into thinking they're great.
2 replies →
I took that phrase differently. The story makes the point that the AIs fail when metrics of quality can't be expressed in words. The use of a bare "adequate" reinforces the opacity of the coffee's quality. Certainly it would have worked well to use more words to convey specifics of the "adequacy" as you mention, but IMO that would have undercut the link back to the theme of human ineffability.
Obviously everyone's mileage may vary, but I didn't see this as a huge defect, and actually felt it worked pretty well.
1 reply →
I started reading it, found it waffling on quite a bit, then came to the HN comments and saw: ah, LLM. I could have saved time if I'd known.
Also I feel a bit conned. I was curious what Tom Hartmann was up to and now it seems he doesn't exist and it's just some slop?
For a while, people found solace in denial: "it's not good, it will never be good, and I will always be able to tell"
next stop will be to ask for some sort of regulation
People don’t want to self-disclose their use of AI, I’ve noticed, especially the ones that put the least effort into using it. So this will only work for a small portion of AI content.
We really need to stop thinking that every AI-assisted thing is bound to be slop. "Shit in, shit out" often applies in reverse as well.