Comment by aaroninsf

2 years ago

You misconstrue Schneier's point, which is, sadly, correct.

The issue is not "all AI will be controlled by...," it is "meaningfully scaled and applied AI will be deployed by..."

You can use Blender and other OSS today to render exquisite 4K-or-higher, projection-ready animation and so on; but that does not give you access to the distribution, marketing, or any of the other consolidated multi-modal resources of Disney.

The analogy is weak, however, insofar as the "synergies" in Schneier's assertion are much, much stronger. We already have ubiquitous surveillance. We already have stochastic mind control (sentiment steering, if you prefer) coupled to it.

What ML/AI and LLMs do for an existing oligopoly is render its advantages largely unassailable. Whatever advances come in automated reasoning at large scale will naturally, inevitably, indeed necessarily (per fiduciary obligations with respect to shareholder interest) be exerted to secure and grow monopoly power.

In the model of contemporary American capitalism, that translates directly into "enhancing and consolidating regulatory capture," i.e., de facto "control" of governance via determination of public discourse and electability.

None of this is conspiracy theory; it's not just an open book but something crowed about and championed, not least in insider circles discussing AI and its applications, such as those that gather here. It's just not the public face of AI.

There is, however, going to be a period of potential black swan disequilibrium: private application of AI may give early movers an advantage in domains that could destabilize the existing power landscape. Which isn't so much an argument against Schneier as an extension of the risk surface.