Comment by ethbr0
5 years ago
> Google's desire for scale, scale, scale, meant that interactions must be handled through The Algorithms
That's fine when you're a plucky growth startup. Less fine when you run half the internet.
If Google doesn't want to admit it's a mature business and pivot into margin-eating, but risk-reducing support staffing, then okay: break it back up into enough startup-sized chunks that the response failure of one isn't an existential threat to everyone.
This lack of staffing is something that really annoys me. It's endemic to the big tech companies, and is often cited as the reason why (for example) YouTube, Twitter, Facebook, etc. cannot possibly police all their user content proactively (before publishing), due to the sheer volume.
Of course they can; Google and the rest earn enough to throw people at the problems they cause/enable. If they can't, then they should stop. If you cannot scale responsibly, then you should not scale at all as your business has simply externalised your costs onto everyone else you impact.
There is a limit to which problems you can throw people at, though. Facebook’s and Youtube’s human moderators suffer from the trauma of watching millions of awful videos every day. Policing provocative posts that are dogwhistling while still allowing satire and legitimate free expression is incredibly challenging and requires lots of context in very different fields. It’s not as simple as setting up a side office in the Philippines and hiring a thousand locals for moderation.
Yes: skilled labor. This is not a novel problem. Other companies create internal training pipelines and pay higher wages to attract those sorts of employees, when they're critical to business success.
I agree. Google is such a large behemoth that it actively avoids customer support wherever it can. Splitting it into smaller businesses with a bit of autonomy, and without ad money fueling everything else, means those smaller businesses have to give a shit about customers and compete on even ground.
The same applies to Facebook and other tech companies. The root issue is funneling huge profits from one area of business into other avenues, which then compete with the market on unfair ground (or outright buy out the competition).
However, anti-trust enforcement in the US has eroded significantly.
> However, anti-trust enforcement in the US has eroded significantly.
Perhaps compared to the 40s-70s, but certainly not compared to the Reagan era. Starting with the Obama administration there has been a strong rebirth of the anti-trust movement, and it's only gaining momentum (see the many recent examples of blocked mergers) [1].
[1] https://hbr.org/2017/12/the-rise-fall-and-rebirth-of-the-u-s...
The Obama admin used it only to attack enemies.
Renata Hesse was part of that effort, and has since worked for Google and Amazon, and is now expected to be in charge of anti-trust at Biden's DOJ.
And as long as the internet giants are on the correct side of the culture war, there will be scant appetite for breaking them up or reining them in. As long as it takes only 5 phone calls to silence someone and erase them from the internet, there is no chance in hell that the government will try to break them up.
> That's fine when you're a plucky growth startup. Less fine when you run half the internet.
It's never fine.
The abdication of responsibility and, more importantly, liability to algorithms is everything that's wrong with the internet and the economy. The reason these tech conglomerates are able to get so big, when companies before them couldn't, is that scaling the way they have would otherwise require employing thousands of humans to do the jobs their algorithms now do poorly. Nothing they're doing is really a new idea; they just cut costs and made the business more profitable. The promise was that algorithms/AI could do just as good a job as humans, but that was always a lie, and by the time everyone caught on, they were "too big to fail".
> It's never fine.
It kind of is, though.
The idea is that the full algorithm is "automation plus some guy". Automation takes care of 99.99% of it, and some guy handles the 0.01% that's exceptional, falls through the cracks, and so on.
The problem is when you scale from 100,000 events per day to half a trillion, and your fallback is still basically "some guy". At ten failures a day, contacting The Guy means sending an email, and maybe sometimes it takes two. At a million failures a day, your only prayer of reaching The Guy is to get to the top of HN, or write a viral Twitter thread.
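The arithmetic here is worth spelling out. A quick back-of-envelope sketch (assuming the 0.01% exception rate from above; the scale figures are the comment's, and the numbers are purely illustrative):

```python
# Back-of-envelope check: how many exceptions land on "The Guy" per day
# at a 0.01% fall-through rate (illustrative numbers, not real data).
exception_rate = 0.0001  # 0.01% of events escape the automation

for events_per_day in (100_000, 500_000_000_000):  # startup vs. half a trillion
    failures = events_per_day * exception_rate
    print(f"{events_per_day:>15,} events/day -> {failures:>12,.0f} exceptions/day")
```

At the small scale the fallback really is one guy's inbox; at the large scale the same fall-through rate produces tens of millions of exceptions a day, which no ad-hoc escalation path can absorb.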
There are some things which are important enough that they can't be left up to this formula, and maybe you're thinking of those. I'm not, and I doubt the person you're replying to is either.
This is probably a big part of why Google is invested in (limited) AI, because a good enough "artificial support person" means having their cake and eating it too.
The issue with (limited) AI is that it's seductive. It allows executives to avoid spending actual money on problems, while chalking failures up to technical issues.
The responsible thing would be to (1) staff up a support org to ensure reasonable SLAs, and (2) cut that support org when (and if) AI has proven itself capable of the task.