Comment by at_a_remove
5 years ago
Walter Jon Williams, circa 1987, wrote a story of a far-flung humanity's future, "Dinosaurs," in which humans have been engineered into a variety of specialized forms to better serve humanity. After nine million years of tweaking, most of them are not too bright, but they are perfect at what they do. Ambassador Drill is trying to prevent a newly discovered species, the Shar, from treading on the toes of humanity: if there is even a slight accidental conflict, say, human terraforming ships wiping out Shar colonies because they simply didn't notice them, the terrifyingly adapted military subspecies of humanity will utterly wipe out the Shar, as they have efficiently done with so many others, purely as a reflex. Despite his desire for peace, Ambassador Drill fears the negotiations may not go well, because the terraforming ships will take a long time to receive the information that the Shar are in fact sentient and that billions of them ought not to be wiped out ...
Google, somehow, strikes me as this vision of humanity, but without an Ambassador Drill. It simply lumbers forward, doing its thing. It is to be modeled as a threat not because it is malign, but because it doesn't notice you exist as it takes another step forward. Threat modeling, Lovecraft-style: entities that are alien and unlikely to single you out in particular; the problem is simply what they do.
Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms. I can imagine it still muttering "The algorithms said ..." as anti-trust measures reverse-Frankenstein it into hopefully more manageable pieces.
> Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms
That's fine when you're a plucky growth startup. Less fine when you run half the internet.
If Google doesn't want to admit it's a mature business and pivot into margin-eating, but risk-reducing support staffing, then okay: break it back up into enough startup-sized chunks that the response failure of one isn't an existential threat to everyone.
This lack of staffing is something that really annoys me. It's all over the big tech companies, and is often cited as the reason why (for example) YouTube, Twitter, Facebook, etc. cannot possibly proactively police (before publishing) all their user content, due to the huge volume.
Of course they can; Google and the rest earn enough to throw people at the problems they cause/enable. If they can't, then they should stop. If you cannot scale responsibly, then you should not scale at all as your business has simply externalised your costs onto everyone else you impact.
There is a limit to which problems you can throw people at, though. Facebook's and YouTube's human moderators suffer real trauma from wading through the internet's worst videos every day. Policing provocative posts that are dogwhistling while still allowing satire and legitimate free expression is incredibly challenging and requires lots of context across very different fields. It's not as simple as setting up a side office in the Philippines and hiring a thousand locals for moderation.
I agree. Google is such a large behemoth that it actively avoids customer support wherever it can. Splitting it into smaller businesses with a bit of autonomy, and without ad money fueling everything else, means those smaller businesses have to give a shit about customers and compete on even ground.
The same applies to Facebook and other tech companies. The root issue is funneling huge profits from one area of business into other avenues, which then compete with the market on unfair ground (or outright buy out the competition).
However, antitrust enforcement in the US has eroded significantly.
> However, antitrust enforcement in the US has eroded significantly.
Perhaps compared to the 40s-70s, but certainly not compared to the Reagan era. Starting with the Obama administration, there's been a strong rebirth of the anti-trust movement and it's only gaining momentum (see many recent examples of blocked mergers)[1].
[1] https://hbr.org/2017/12/the-rise-fall-and-rebirth-of-the-u-s...
And as long as the internet giants are on the correct side of the culture war, there will be scant appetite for breaking them up or reining them in. As long as it takes only five phone calls to silence someone and erase them from the internet, there is no chance in hell that the government will try to break them up.
> That's fine when you're a plucky growth startup. Less fine when you run half the internet.
It's never fine.
The abdication of responsibility, and more importantly liability, to algorithms is everything that's wrong with the internet and the economy. The reason these tech conglomerates were able to get so big when companies before them couldn't is that scaling the way they have would otherwise require employing thousands of humans to do the jobs now being done, poorly, by their algorithms. Nothing they're doing is really a new idea; they just cut costs and made the business more profitable. The promise was that the algorithms/AI could do just as good a job as humans, but that was always a lie, and by the time everyone caught on, they were "too big to fail".
> It's never fine.
It kind of is, though.
The idea is that the full algorithm is "automation plus some guy". Automation takes care of 99.9% of it, and some guy handles the 0.1% that's exceptional, falls through the cracks, and so on.
The problem is when you scale from 100,000 events per day to half a trillion, and your fallback is still basically "some guy". At ten failures a day, contacting The Guy means sending an email, and maybe sometimes it takes two. At a million failures a day, your only prayer of reaching The Guy is to get to the top of HN, or write a viral Twitter thread.
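The scaling point above is just arithmetic, and a toy sketch makes it concrete (the event volumes and the 0.1% exception rate are illustrative assumptions, not anyone's real numbers):

```python
def exceptions_per_day(events_per_day: int, exception_rate: float = 0.001) -> int:
    """Events that fall through the automation and land on a human."""
    return round(events_per_day * exception_rate)

# Startup scale: a queue one person can clear over email.
print(exceptions_per_day(100_000))          # 100 exceptions a day

# Half-the-internet scale: the same 0.1% rate is now a flood
# that "some guy" has no hope of handling.
print(exceptions_per_day(500_000_000_000))  # 500 million exceptions a day
```

The point isn't the exact numbers; it's that a fixed relative failure rate becomes an absolute volume that dictates how the fallback has to be staffed.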
There are some things which are important enough that they can't be left up to this formula, and maybe you're thinking of those. I'm not, and I doubt the person you're replying to is either.
This is probably a big part of why Google is invested in (limited) AI, because a good enough "artificial support person" means having their cake and eating it too.
The issue with (limited) AI is that it's seductive. It allows executives to avoid spending actual money on problems, while chalking failures up to technical issues.
The responsible thing would be to (1) staff up a support org to ensure reasonable SLAs & (2) cut that support org when (and if) AI has proven itself capable of the task.
> It simply lumbers forward, doing its thing. It is to be modeled as a threat not because it is malign, but because it doesn't notice you exist as it takes another step forward.
This is a concept that I think deserves more popular currency. Every so often, you step on a snail. People actually hate doing this, because it's gross, and they will actively seek to avoid it. But that doesn't always work, and the fact that the human (1) would have preferred not to step on it; and (2) could, hypothetically, easily have avoided doing so, doesn't make things any better for the snail.
This is also what bothers me about people who swim with whales. Whales are very big. They are so big that just being near them can easily kill you, even though the whales generally harbor no ill intent.
I'm curious if whales are more dangerous on an hour-by-hour basis than driving.
That's generally my rubric for whether a safety concern is possibly worth avoiding an activity over.
> I'm curious if whales are more dangerous on an hour-by-hour basis than driving.
It depends on how many passengers you pack in a whale.
> “You will have killed us,” Gram said, “destroyed the culture that we have built for thousands of years, and you won’t even give it any thought. Your species doesn’t think about what it does any more. It just acts, like a single-celled animal, engulfing everything it can reach. You say that you are a conscious species, but that isn’t true. Your every action is... instinct. Or reflex.
Good story. I can imagine what the specialized humans did to the generalist humans eons ago.
Except in our case, Google's terraforming ships couldn't care less. It's just not part of their programming that there might be some intelligent life out there worth caring about that might be hurt by their actions, so there's no way for them to receive this information. It's not that it's hard to explain, there's nobody to explain it to.
Modern large corporations are just a more inefficient, less effective paperclip maximizer, with humans gumming up the works.
Google is striving hard to remove the "human" part of the problem.
After I finished reading the parent comment,
> Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms. I can imagine it still muttering "The algorithms said ..." as anti-trust measures reverse-Frankenstein it into hopefully more manageable pieces.
I immediately pressed C-f to search the string "paperclip maximizer", and was not disappointed. Thanks for mentioning it.
You're making another perfect case for why Google should be broken up. It's important that we can choose again.
Sounds like a non-aligned AI.
It essentially is a non-aligned AI. AIs don't need to be implemented in silico. Bureaucracy is by itself a computing medium too.
That makes me wonder if anyone has ever written a scientific paper proving that the bureaucratic processes in place at their company are Turing-complete. You could imagine some sort of Rule 110 cellular automaton being implemented in TPS reports.
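For the curious: Rule 110 is a one-dimensional elementary cellular automaton famous for being Turing-complete despite its tiny rule table, which is what makes it the go-to example here. A minimal sketch of the automaton itself (the TPS-report encoding is left to the reader):

```python
RULE = 110  # each 3-cell neighborhood maps to one bit of this number

def step(cells: list[int]) -> list[int]:
    """Apply Rule 110 once, with wrap-around at the edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # value 0..7
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single live cell and watch structures emerge.
row = [0] * 16
row[-2] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The "110" is just the rule table packed into a byte: neighborhood `011` maps to bit 3 of 110, and so on, which is the standard Wolfram numbering for elementary automata.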
Thanks for mentioning this story; I just finished it and it's a great read.
Thanks for the story recommendation!