Comment by lingrush4

7 hours ago

I fully believe that OpenAI is essentially stealing the work of others by training their models on it without permission. However, giving a corporation infamous for promoting authoritarianism full access to millions of private conversations is not the answer.

OpenAI is right here. The NYT needs to prove their case another way.

> giving a corporation infamous for promoting authoritarianism

The NYT is certainly open to criticism along many fronts, but I don't have the slightest idea what you mean in claiming it promotes authoritarianism.

  • Well, the sponsors of the 1619 Project really don’t have a leg to stand on when it comes to ethics.

    • I already said the NYT is certainly open to criticism. I fail to see any connection between the 1619 Project and authoritarianism.

I'll bet you're right in some cases. I don't think it is as pervasive as it has been made out to be, though; the argument requires some framing, and current rules, regulations, and laws aren't tuned to make legal sense of this. (This is a little tangential, because the complaint seems to be about getting ChatGPT to reproduce content verbatim to a third party.)

There are two things I think about:

First, and generally, an AI ought to be able to ingest content like news articles because it's beneficial for users of AI. I would like to question an AI about current events.

Secondly, however, the legal mechanism by which it does that isn't clear. I think it would be helpful if these outlets provided the information on the condition that the AI won't reproduce the content verbatim. If that does not happen, then another framing might liken AI ingestion to an individual going to the library to read the paper. In that case, we don't require the individual to retroactively pay for the experience or unlearn what he may have learned while at the library.

Well, the court disagrees with you and found that this is evidence the NYT needs to prove its case. No surprise, considering it's direct evidence bearing on exactly what OpenAI is claiming in its defense...