
Comment by zlg_codes

2 years ago

You have no way to prove that Google, MS, et al wouldn't make AI products if they couldn't prevent you from using the output.

Also, what exactly is stopping someone from documenting the output from all possible prompts?

It's legal theater and can't be enforced.

It's not theater, it's very real. Companies are making decisions not to use data generated from OpenAI. They're making that decision because they know that if they go the other way, they risk someone internal leaking that they're doing it, and that it's pretty easy to uncover during a discovery process. I'm involved in this issue right now, and no one is treating it as something to just blow off. I know several other companies in the same boat.

They have many orders of magnitude more money, and attorneys who would work full-time on such a case to ensure that even if they lost the court battle, the person or company doing the thing they didn't like would be effectively bankrupted, so they still win in the end.

  • And what if such an effort leaves the jurisdiction for a country with no obligations to the litigating country?

    We need to dispel the idea that sociopaths in suits hold earned or legitimate power.

    • The courts have power; the companies know it and behave accordingly.

      Everything you are saying is only true for two guys in a garage. The folks with something to lose don't behave in this dreamworld fashion.
