Comment by ethbr1
2 days ago
> the tool itself isn't the issue, but rather the abdication of responsibility by the author
The biggest current social problem with AI content is our collective lack of transparency into how much human responsibility was taken.
Given a <100% reliable/accurate AI tool, the same post/code may have had {every line vetted by a human} or {no lines vetted by a human}... and readers have no way of telling which it is!
And the distinction matters: even if no edits needed to be made, the former carries far more signal than the latter, because vetting reduces the risk of AI slop and therefore makes the content more valuable.
But vetting also takes more time to produce, so in any competitive marketplace (YouTube, paid comments, startup code, etc.) unvetted AI content will dominate.