Comment by reaperducer
6 years ago
I don't understand why Google and some other tech companies use their users as involuntary, unpaid guinea pigs. No consent. No opt-out.
What's the motivation? Is it simple laziness because they don't want to deal with wetware? Is it fear that if people knew what was happening they wouldn't be happy? Google has eighty brazillion employees it can test new features on.
> I don't understand why Google and some other tech companies use their users as involuntary, unpaid guinea pigs. No consent. No opt-out.
It's crazy to me to think about. When I was in college (in the mid aughts), I was doing a lot of research into Native American cultures. The number of releases, paperwork, and other hoops you had to jump through just to interview subjects was pretty daunting.
The fact that we have become involuntary research subjects, without any protections as research subjects or an easy way to opt out of these companies' data collection (which is itself an ongoing form of research), is staggering to think about.
I still don't understand how people ask these questions when it's been like this for 30 years.
Isn't that what most A/B testing is?
No, it's what unethical A/B testing is.
" involuntary, unpaid guinea pigs. No consent. No opt-out"
That sounds like all A/B testing...
Bias up front: I work at Google but am not speaking for Google.
> involuntary, unpaid guinea pigs.
I don't see how this is involuntary. You are choosing to use the product. If you choose to use the product, yes, you may be exposed to features that the product has. If you don't want to be exposed to those features, the way to opt out is to not use the product.
> What's the motivation?
It lets the company incrementally roll out and test features in real-world network configurations at scale. As far as I know, almost all tech companies do this.
Let's say you're Fapplebooglezon and you have an idea to put kitten emojis on the "Buy Now" button. Before you ship that, you want to make sure that:
1. The feature works correctly. It doesn't crash or have significant performance problems.
2. Users, in aggregate, like the change. No one wants to ship a "New Coke" debacle. It's bad for the company (they lose money) and bad for users (they don't like the product).
3. Your servers and network can handle the consequences of that change. Maybe users will be so excited that they all click "Buy Now" twice as much. You need to make sure your servers don't crumble under the increased load.
These are reasonable things that benefit both the company and users. So the way features and changes are usually shipped is like:
1. The feature is implemented behind some kind of flag. [0]
2. "Fishfooding" [1]: The team developing the feature starts using it. This gives you some feedback on "does the feature work correctly" but that's about it. The team owns the feature, so they are biased in terms of its usability. And they are on a privileged network and not a large enough population to verify how this affects the distributed system.
3. "Dogfooding": The entire company starts using it. This starts to give you some usability feedback because now people who don't have a stake in the feature are being exposed to it. But it's still skewed since employees are likely not a representative user population.
4. "Canary": The feature is enabled for a randomly selected small population of external users. Now you start getting feedback on how the feature performs in the wild on real-world machines and networks. The percent of users is kept small enough to not crush the servers in case anything goes awry, but you can start getting some performance data too.
5. "A/B testing": Now you start collecting data to see how behavior of users with the feature compares to users without it. You can actually start to get data on whether the feature is good or not.
6. Assuming everything looks OK, you start incrementally rolling it out to a larger and larger fraction of users. All the while, you watch the servers to make sure the load is within expected bounds.
7. Once you get to 100% of users and things look good, you remove the flag and the feature is now permanently enabled.
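The staged rollout described above hinges on deterministic flag evaluation: each user lands in a stable bucket, and ramping the rollout percentage grows the enabled population without reshuffling anyone between groups. Here is a minimal sketch of that idea (the function names and bucket count are illustrative assumptions, not any company's actual flag system):

```python
import hashlib

NUM_BUCKETS = 1000  # resolution of the rollout dial (0.1% steps)

def bucket_for(user_id: str, feature: str) -> int:
    """Map a (user, feature) pair to a stable bucket in [0, NUM_BUCKETS).

    Hashing the feature name together with the user ID means different
    features get independent bucket assignments for the same user.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def is_enabled(user_id: str, feature: str, rollout_percent: float) -> bool:
    """True if this user falls inside the current rollout fraction.

    Because the bucket is deterministic, raising rollout_percent only
    ever adds users to the enabled set; no one flips back and forth.
    """
    return bucket_for(user_id, feature) < rollout_percent / 100 * NUM_BUCKETS
```

Canarying at 1%, then ramping to 10%, 50%, and 100%, is then just a config change to `rollout_percent`, with no redeploy.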
> Is it simple laziness because they don't want to deal with wetware?
Google, like most other companies, also does lots of user testing and user surveys. But that doesn't give you insight into the technical side of the question: how the feature impacts the behavior of your distributed system.
You may not be aware of this, but this kind of in-the-wild product testing is something almost all businesses do, all the time. Food companies test new products in grocery stores in selected cities [2]. Car manufacturers drive camouflaged prototypes on the road [3]. Restaurant chains tinker with recipes to see how sales are affected. There is absolutely no guarantee that the Coke you're drinking today has the same ingredients as the one you had yesterday.
You seem to think this is some nefarious scheme, but it's just basic marketing. You want to make a thing people like, so you make two things and measure which one people like more. People "opt in" and "consent" by using the product. If you don't want to be a "guinea pig" when McDonald's changes their French fry recipe, don't buy the fries. If you don't want to test out new Chrome features, don't use Chrome.
[0]: https://martinfowler.com/articles/feature-toggles.html
[1]: https://www.reddit.com/r/google/comments/3qpdnn/anyone_knows...
[2]: https://smallbusiness.com/product-development/best-u-s-citie...
[3]: https://www.cnbc.com/2017/01/20/camouflage-the-incognito-way...
> I don't see how this is involuntary. You are choosing to use the product
It's involuntary because it's not informed consent. Google doesn't tell people up front or in any meaningful way that this is happening.
That's like saying "Oh, that steak was covered in the chef's experimental hot sauce that we didn't list on the menu? Well, too bad, you chose to come to this restaurant."
> It's involuntary because it's not informed consent.
I think you're making an analogy that doesn't logically apply. "Informed consent" is a property of healthcare administration. When you're putting drugs into someone's blood stream or cutting them open while anaesthetized, yeah, you need to make damn sure you're doing the right thing for them.
> the chef's experimental hot sauce that we didn't list on the menu?
Likewise, when you're serving food that someone will ingest and which may cause allergic reactions or food poisoning, again the bar is pretty high to make sure you are treating people safely.
But we're talking about using a free piece of software. If Chrome changes the color of their tab bar, no one is going into anaphylactic shock. When Facebook adds a new button on the sidebar, there is little risk of that inadvertently severing someone's carotid artery.
If I use a bunch of older Chromes from portableapps, are those affected by feature testing, provided I've disabled Google Update but am not behind a firewall?
In other words, is feature polling just hard-coded, or is it bound to a specific installation?
1. It's about the money.
2. See 1.
... what?
If you aren't paying for it, you are the product. Simple.
Nowadays you are the product even if you pay. (E.g. Subscription news sites including trackers on subscribed users, smartTVs siphoning data etc)
Thing is, with the TVs you're only half the customer. That's why TVs have gotten so cheap: the extra revenue stream from selling data. You can't even buy a dumb TV any more.
I agree completely, that's what's so messed up with this "freemium" model that's so popular these days. If companies need to develop the ad-ridden version with tons of tracking to monetize free users anyway, what's the incentive for them to turn it off for paying users?
It's not like 99% of them are going to care and/or notice anyway, and if anything it would be more work to test and maintain a different version of the code without trackers.
Just pay for the things you use, people, and block everything you can with browser plugins. This model needs to die.
This is a meaningless cliche. Just because users of Google products don't pay in cash to use them doesn't change the fact that Google has to attract the users to their platform in the first place, and keep them there.
No, Google has paid to be the default in most cases.
But what about people like me that are paying google (quite a lot actually)?
I don't understand your group. The company offers everything for free at the price of your privacy, and you also give them money?
If I were paying for a service that didn't respect my privacy, I wouldn't give them my identifying payment info as well. Your fingerprint is connected to all of the credit data providers. If you didn't pay, they'd have to guess or connect you some other way.
> If you aren't paying for it; you are the product. Simple.
This nonsense belongs in Ron Swanson's Pyramid of Greatness, along with: "Capitalism: God's way of determining who is smart, and who is poor."
Do you get the consent to observe everyone you interact with?
It's because most people don't care and if it means that they have a better product at the end of it, they'll take the trade.
Google employees are not a random sample of their user base, so such experiments would be meaningless.
See the fiasco where they broke Terminal Services last year as an example of what can go wrong even when doing experiments on the whole user base.
Also consider how to measure the usage of web features Google's own websites don't use, but are popular on e.g. intranets in Korea.
A/B testing isn't bad, it's a good thing. People are notoriously not very good at giving feedback. Experiments and usage statistics let you get the ground truth about what they really value, and what's really working.
> Google employees are not a random sample of their user base, so such experiments would be meaningless.
This is a lazy argument. Google isn't some scrappy tech startup where 90% of the employees are programmers. Google has legions of lawyers, mailroom clerks, accountants, travel coordinators, janitors, cafeteria workers, middle managers of all stripes, and so much more. Thousands and thousands of people it can test on without violating the privacy of the general public.
A/B testing as implemented in industry:
- evokes emotional responses eerily similar to those evoked when gaslighting is noticed
- is uncompensated
- is inconsistent with any semblance of established research ethics
- is generally non-consensual
- completely undermines trust
I'm not normally one to make a big deal about this sort of thing, but there is a reason research ethics exist. If one can't be trusted to even attempt to follow ethical research protocols, one damn well shouldn't be trusted with anything important.
Your users' time and information are not yours to share, whether you bury it in the fine print or not.
Microsoft Vista was a Windows 7 beta, and was "necessary" to basically experiment on the entire Home market, to make the product stable enough for enterprise.
Although Windows 7 may have been one of the most complex software deployments in history, needing to support decades of poorly written drivers while making the system both stable and compatible.
>Microsoft Vista was a Windows 7 beta, and was "necessary" to basically experiment on the entire Home market, to make the product stable enough for enterprise.
That claim is directly contradicted by the fact that there's Windows Vista enterprise edition[1]. Vista is also supported for a full 10 years just like 7, which would be strange for something that was supposed to be an "experiment".
[1] https://en.wikipedia.org/wiki/Windows_Vista_editions
Most enterprises skipped it.
> No consent. No opt-out.
Do you understand what licensing is? That's one of the underlying aspects that's important with software, and why you can't treat it like other things you buy. I'd add it's also why things that adopt software-style licensing models are bad too.
A company creates a license with terms, and you agree to those terms by using the software. The terms are difficult to change unless you have leverage. The only party other than the company is often the regulatory authority, and regulation in the US is limited at best when compared to the EU. If you're from the EU you probably assume the US works similarly, but most Americans don't recognize issues like this one. When they do, it's hard to fight the incumbents and make something opt-in, or ban it outright.
> What's the motivation? Is it simple laziness because they don't want to deal with wetware? (the start of your first paragraph applies here too)
It's fairly simple. The motivation is making correct decisions based on the gold standard of decision-making that some people aspire to. The model is not dissimilar to clinical trials, where a treatment is given to some individuals and not to others. The hope is that this form of experimentation removes bias and lets the product manager make the best decisions.
Based on this thinking it is not possible to test with just Google's employees. For many decisions, the bias will be significant, and ultimately the belief is that worse decisions will be made for users.
I'm trying to convey that in as neutral a way as possible. I think this can be a useful technique, but there is little discipline and accountability in the wider software world compared to medicine. You have PMs who'll routinely run an A/B test longer to collect more data (that's better, right?), invalidating their results, just to please management.
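The "just run it longer" problem is real: repeatedly peeking at an A/B test and stopping at the first significant-looking result inflates the false-positive rate well past the nominal 5%. A small self-contained simulation of A/A tests (both arms identical, so any declared winner is a false positive) illustrates this; all names and parameters here are illustrative, not any company's actual pipeline:

```python
import random

def peeking_false_positive_rate(n_experiments=200, checks=20, batch=100,
                                z_crit=1.96, seed=42):
    """Simulate A/A tests (two identical variants, no real effect) where
    the experimenter peeks after every batch of users per arm and stops
    as soon as the difference looks 'significant' at the nominal 5%
    level. Returns the fraction of experiments that falsely declare a
    winner, which optional stopping pushes well above 5%."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_experiments):
        a = b = n = 0
        for _ in range(checks):
            for _ in range(batch):
                a += rng.random() < 0.5  # arm A "conversions"
                b += rng.random() < 0.5  # arm B "conversions"
            n += batch
            p = (a + b) / (2 * n)              # pooled conversion rate
            se = (2 * p * (1 - p) / n) ** 0.5  # std error of the difference
            if se > 0 and abs(a - b) / n / se > z_crit:
                false_positives += 1           # "winner" declared early
                break
    return false_positives / n_experiments
```

With 20 peeks per experiment, the observed false-positive rate comes out several times higher than the 5% the z-test nominally promises, which is exactly why fixed-horizon tests require the sample size to be chosen before the experiment starts.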
If anyone is going to implement this approach then I'd trust Google to implement it effectively to meet their needs. They do it on a large scale across their products and have many layers of people to ensure it's effectively meeting their needs. As stated in the previous paragraph, this doesn't mean that other people do it right, or that everyone in Google does it right every time. I'm sure they've had a fair share of failed experiments.
> Do you understand what licensing is?
Nope, no one understands licensing. Which means that arguments grounded on "The user accepted the terms!" have a shaky ethical foundation. Not necessarily a shaky legal foundation, although that wheel seems to be turning.