Comment by chwahoo

13 hours ago

I think the most likely case is this: the company is labeling images from Meta AI use by people who opted in to share their data with Meta.

It's certainly possible that it's something much more surprising or sinister, but there is a fairly logical combination of settings that I could see a company arguing lets it use the data for training.

I'm also quite certain that few users with these settings would expect the images to be shown to actual people, so I'm not defending Meta.

What in Meta's history would lead you to give them the benefit of the doubt like this?

  • Perhaps I'm ignorant.

    I know some of the criticism of Meta: many people don't like the way their products are optimized for engagement. I've heard about their weird AI bots interacting on their platform as if they were people. And I know people of all political stripes have had complaints about content moderation and their algorithm.

    But all of that is within the bounds of the law and their terms of service.

    None of it would remotely approach something like bypassing the well-advertised features in the glasses that show when the camera is in use and secretly recording things to train AI. It's hard to imagine any company's lawyers approving something like that. (This sounds like what many commenters believe is happening.)

    FWIW, I suspect this is the relevant section of the privacy policy:

    > "When you use the Meta AI service on your AI Glasses (if available for your device), we use your information, like Media and audio recordings of your voice to provide the service."

    from: https://www.meta.com/legal/privacy-policy/

    If so, "to provide the service" is doing a lot of work.

> there is a fairly logical combination of settings

I think it's anything but logical if users (like yourself) have no idea what those settings are, as is evident from your previous post.