Comment by dvfjsdhgfv
2 days ago
> Their disclaimer already suggests they don't train on your photos.
We know all the major GenAI companies trained extensively on illegally acquired material and hid that fact. Even their engineers felt it wasn't right, yet there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning, but, unlike Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.
I am aware that many companies train on illegally acquired content and that bothers me too.
There is an initial phase where training might qualify as fair use within reason, but the illegal acquisition itself is still a crime. Eventually, once the material has been distilled enough, the result stands on firmer fair-use ground.
So they simply take the legal risk and do it, because after enough training the remaining legal exposure should be within an acceptable range.
That makes sense for publicly released images, books, and data. There is some plausible deniability in sweeping up influences that have already been released into the world. Private data is different: it can contain unique material the world has never seen, and that makes it a much bigger problem.
Meta/Facebook? I do not and will never trust them. Microsoft? I still trust them a lot more than many other companies. Many people are bothered by this precisely because they actually use OneDrive. Why not Dropbox or Google Drive? I certainly trust OneDrive more than I trust Dropbox or Google Drive. That trust is not infinite, but it's there.
If Microsoft abuses that trust in a truly critical way, one that resonates beyond the technically literate, it would not just hurt their consumer business; it would hurt their B2B business as well.