Comment by eterm
3 hours ago
You may be sleeping on just how good LLMs have got at writing blog-posts.
Go ahead and ask your favourite one this:
> Can you draft a blog post titled, "All my clients wanted a carousel, now it's an AI chatbot!"
> Don't search the web, just go with vibes.
I did, and this was the result: https://richardcocks.github.io/chum/blogexample.html
Okay, it's not quite there; it's much more obviously LLM than the OP. But after a bit of tweaking, and some feedback to drop the headings and the table:
https://richardcocks.github.io/chum/blogexample2.html
And that's with zero blog-writing "skills", with no memories, a fresh incognito session and only the title to prompt.
Complete with call-out:
> The feature was never really about the users. It was about the client feeling like they were keeping up. The technology changes. The psychology doesn't.
Complete with the horse-shit, "Honest dispatches from a decade in the web trenches"
You may have a point. The example you posted was a bit more obviously the work of an LLM, but not by much.
The interesting bit is that I don't really care about the subject matter. I was browsing the comments section, and the discussion of whether the blog post was AI generated piqued my interest, so I read it myself to see whether I agreed.
I wonder what to make of this. Once the lines between LLM written and human written are blurred, what is the outcome?
In some scenarios I think it's alright; I honestly don't care whether a tutorial on how to set up an application is AI generated, as long as it is correct. Hell, I routinely use LLMs as a glorified web search for that exact thing.
Sometimes, however, it becomes pointless. An AI-generated opinion piece is little more than noise. What is even being attempted there? Raking in some AdSense from page views? As long as people willingly engage with it, why stop?
The web has long been a low-trust environment, and this exacerbates it. Why even bother sharing an opinion?