Comment by willio58

3 years ago

AI generated content is only as successful as the content it generates.

If you were to read some content that's useful to you, does it matter the source? If the content generated by this platform is just "spam", then wouldn't it fail on its own without the need for an HN brigade against them?

I get that google is increasingly being degraded by SEO content, but that's a problem with google, not this site allowing users to use AI generated content as a writing tool.

> If you were to read some content that's useful to you, does it matter the source?

Yes, very much. The source of the content is half of its value.

There is a big difference between reading a physics paper from Richard Feynman, one from a high school student, and one generated with AI.

I expect the Feynman paper to be thoroughly researched. I expect the high schooler's paper to be a collection of interesting facts that the student learned and is excited about, and I expect the AI paper to be a random collection of stuff found on the internet.

The problem is that it has already been proven possible to write good-sounding nonsense and have it published as scientific research. Therefore the only thing that makes me trust any text is the reputation of the author.

An AI-generated text can't possibly be useful, because it is inherently untrustworthy. At best it can function as entertainment, but I believe we already have enough mediocre entertainment; we don't need AI to generate more of it and drown out the rarer, higher-quality work out there.

  • > The problem is that it has already been proven possible to write good-sounding nonsense and have it published as scientific research.

    This seems to be an issue with how science publishing works and how things are 'peer reviewed'. If papers were _truly_ peer reviewed, would the reviewers not catch the nonsense? And if a true review did not catch it, then maybe it's not nonsense after all.

    > an AI generated text can't possibly be useful because it is inherently untrustworthy

    Probably 25% of the code I 'write' these days has been written by AI, through Copilot. Copilot isn't perfect, but it can create a basis to start from that saves me a lot of time. That's how I view these content-writing tools: something to get the ball rolling, not something you would use to generate all the content on your site without editing.

> AI generated content is only as successful as the content it generates.

What does success mean? Let's assume you mean "profit from visits generated by SEO".

> If you were to read some content that's useful to you, does it matter the source?

I am doubtful that there is any intersection in the Venn diagram between:

* Content that is useful to me

* Successful at extracting money through SEO

* AI generated

> I get that google is increasingly being degraded by SEO content, but that's a problem with google, not this site allowing users to use AI generated content as a writing tool.

This is disingenuous. There is no way we can believe that AI-generated content will be as accurate and useful as human-generated content. We want quality human-generated content, at least until we get AIs that are as intelligent as human beings and share our context.

Using AI as a tool, with a human editing the output, is probably OK. But blasting out AI-generated articles with no review is going to be horrible. And as for using Google as justification: Google's weakness is exactly what makes this idea profitable!

> If you were to read some content that's useful to you, does it matter the source?

The problem I have with this is that it presumes knowledge on the part of the reader.

The AI will frequently produce wrong content that I incorrectly believe is valuable to me.

The AI will frequently omit important things that an actual human expert would not have omitted.

It will become increasingly difficult to tell the difference between trustworthy and untrustworthy content.

I agree 100%!

  • I want to preface this by saying that I think the work your company does is important. SEO, much like advertising and marketing, often gets a bad rap from techies, but all three are nonetheless important, because without them one is strongly handicapped when it comes to developing and selling a product.

    Nonetheless, there are degrees to this, and pushing a set of features designed to automatically generate text that closely mimics what other high-ranking content looks like has an outsize impact on the quality of the "commons": in this case Google, or any other index of internet content. As other commenters have pointed out, the current state of AI means that while the style of the text is likely to be in line with other content, the actual content may be either useless or actively wrong.

    Given how prevalent such articles already are in search results, it is unlikely that normal human perusal of search results will be an effective quality filter.

    It is true that, as I alluded to, such articles already exist, and in this sense Contentedge is not enabling a novel harm per se. However, it drastically lowers the cost of producing them, and doing the same thing at a larger scale and higher efficiency can, for sufficiently big values of "larger" and "higher", have a qualitative impact on the harm done.