> And near term AIs will be controlled by corporations. Which will use them towards that profit maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us. This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.
On the contrary, the recent batch of small large language models (≤13B parameters) has broken away from corporate control. You can download a LLaMA or Mistral and run it on your own computer; you can't do the same with the hosted offerings from Google or OpenAI. A fine-tuned small model is often as good as GPT-4 on a specific task, but also private and not manipulated. If you need to, you can remove prior conditioning and realign the model as you like. They are easy to access; Ollama, for example, is a simple way to get running.
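To show how low the barrier is, here is a minimal sketch that queries a locally running Mistral model through Ollama's local HTTP endpoint. It assumes the Ollama daemon is installed and `ollama pull mistral` has been run; treat it as a sketch, not tested code, and swap in whatever model you actually pulled:

```python
import requests

def ask_local_model(prompt: str, model: str = "mistral") -> str:
    # Everything stays on the machine: Ollama serves the model from
    # localhost, so neither the prompt nor the response leaves your box.
    response = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the argument for local, private LLMs."))
```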
Yes, and at the same time there will be good and bad AIs: basically your AI assistant against everything else on the internet. Your AI will run on your own machine, acting as a local sandbox and a filter between outside and inside. The new firewall, or ad-blocker, will be your local AI model. But the tides are turning; privacy is having its moment again. With generative text and image models you can cut the cord completely and be free, browsing an AI-internet made just for you.
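As a naive illustration of that "local AI as firewall" idea (purely a sketch, not a real product design; it reuses the hypothetical local Ollama endpoint from the snippet above, and the one-word-label prompt is my own assumption):

```python
import requests

FILTER_PROMPT = (
    "You are a content filter running on the user's own machine.\n"
    "Label the following text as SAFE, AD, or MANIPULATION.\n"
    "Reply with one word only.\n\nText:\n{text}"
)

def classify_incoming(text: str, model: str = "mistral") -> str:
    # The local model inspects content before it reaches the user,
    # playing the role an ad-blocker or firewall plays today.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model,
              "prompt": FILTER_PROMPT.format(text=text),
              "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"].strip().upper()

page_snippet = "Act now! Limited offer just for you..."
if classify_incoming(page_snippet) != "SAFE":
    print("blocked by local AI filter")
```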
People (in any relevant number) don't run their own e-mail servers, they don't join Mastodon and they also don't use Linux. All prior knowledge about how these things have worked out historically should bias us very heavily against the idea of local, privacy-preserving LLMs becoming a thing outside of nerd circles.
Of course small LLMs will still be commercialized, similar to how many startups, apps, and other internet offerings now run on top of open-source frameworks or libraries, but this means nothing for the consumer or for how likely they are to run into predatory and dark AI patterns.
> People (in any relevant number) don't run their own e-mail servers
I have to strongly disagree -- most people get email from others running their own email services. This doesn't mean the average consumer is running an email server. However, if email were only served by mega-corporations, well, it really wouldn't be email anymore.

And I'm not talking about spam. Legitimate email that people want to get is constantly being sent by small, independent organizations running their own email servers.
One can hope for a similar possible future with LLMs. Consumers won't necessarily be the ones in charge of operating them -- but it would be very, very good if LLMs were able to be easily operated by small, independent organizations, hobbyists, etc.
It's true that there are open-source models that one can run locally, but the problem is how many people are actually going to do that. You can make the instructions inside a GitHub README as clear and straightforward as you want, but for the majority of people, nothing will beat the convenience of some big corporation's web application. For many, the very fact that a product is made by a famous company is a reason to trust it more.
This gets missed in a lot of conversations about privacy (because most conversations about privacy are among pretty technical people). The vast majority of people have no idea what it means to set up your own local model, and of those that do, fewer still can/will actually do it.
Saying that there's open-source models so AI privacy is not an issue is like saying that Google's not a privacy problem because self-hosted email exists.
Important essay and points. I want to mention that there now exist practical technical approaches that can be used to create trustworthy AI...and such approaches can be run on local models, as this comment suggests.
> "[...] [AI] will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate. [...]"
I agree that this is true for standard deployments of generative AI models, but we can instead reframe networks as a direct connection between the observed/known data and new predictions, and tightly constrain those predictions against the known labels. In this way, we can have controllable oversight of biases, out-of-distribution errors, and, more broadly, a clear relation to the task-specific training data.
That is to say, I believe the concerns in the essay are valid in that they reflect one possible path in the current fork in the road, but it is not inevitable, given the potential of reliable, on-device, personal AI.
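The comment above doesn't commit to a specific mechanism, but one minimal way to instantiate "predictions tightly constrained against known labels" is a nearest-neighbor check over an embedding space: accept a prediction only when the input is close to consistently labeled training points, and abstain otherwise. A sketch, where the embeddings, `k`, and the distance threshold are all placeholder assumptions:

```python
import numpy as np

def constrained_predict(query_emb, train_embs, train_labels,
                        k=5, max_dist=0.5):
    """Predict only when the query is near known, consistently labeled
    training points; otherwise abstain. Returning the matched indices
    keeps an audit trail back to the task-specific data."""
    dists = np.linalg.norm(train_embs - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]
    if dists[nearest].max() > max_dist:
        return None, nearest          # out-of-distribution: abstain
    labels = train_labels[nearest]
    if len(set(labels.tolist())) > 1:
        return None, nearest          # neighbors disagree: abstain
    return labels[0], nearest         # prediction + supporting evidence

# Toy demo: two tight synthetic clusters standing in for real embeddings.
rng = np.random.default_rng(0)
train_embs = np.vstack([rng.normal(0, 0.05, (50, 8)),
                        rng.normal(3, 0.05, (50, 8))])
train_labels = np.array([0] * 50 + [1] * 50)
label, evidence = constrained_predict(train_embs[0], train_embs, train_labels)
print(label)  # 0, with the indices of the supporting examples in `evidence`
```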
You misconstrue Schneier's point, which is, sadly, correct.
The issue is not "all AI will be controlled by...," it is "meaningfully scaled and applied AI will be deployed by..."
You can today use Blender and other OSS to render exquisite 4K or higher projection-ready animation, etc etc.; but that does not give you access to distribution or marketing or any of the other consolidated multi-modal resources of Disney.
The analogy is weak however in as much as the "synergies" in Schneier's assertion are much, much stronger. We already have ubiquitous surveillance. We already have stochastic mind control (sentiment steering, if you prefer) coupled to it.
What ML/AI and LLMs do for an existing oligopoly is render its advantages largely unassailable. Whatever advances come in automated reasoning at large scale will naturally, inevitably, indeed necessarily (per fiduciary duties to shareholder interests) be exerted to secure and grow monopoly power.
In the model of contemporary American capitalism, that translates directly into "enhancing and consolidating regulatory capture," i.e. de facto "control" of governance via determination of public discourse and electability.
None of this is conspiracy theory; it's not just open-book but crowed about and championed, not least in insider circles discussing AI and its applications, such as the one that gathers here. It's just not the public face of AI.

There is, however, going to be a period of potential black swan disequilibrium; private application of AI may give early movers an advantage in domains that could destabilize the existing power landscape. Which isn't so much an argument against Schneier as an extension of the risk surface.
> “hidden exploitation”

It's going to be challenging to detect AI intent when it is sharded across many sessions by a malicious AI or by bad actors using AI. To illustrate this, we can look at TikTok, where the algorithmic ranking of content in user feeds, possibly driven by AI, is shaping public opinion. But we should also consider the possibility of a more subtle campaign that gradually molds AI responses and conversations over time.

This could mean gradually introducing biased information or subtly framing certain ideas in a particular way. Over time, this could influence how the AI interacts with users and shape the perception of different topics or issues.
It will take narrowly scoped, highly tuned single-purpose AI to detect and address this threat, but I doubt the private sector has a profit motive for developing that tech. Here's where legislation, particularly tort law, should be stepping in - to create a financial incentive.
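No such detector exists off the shelf as far as I know, but the shape of one is simple: replay a fixed battery of probe prompts against the model on a schedule, score each answer along the dimension you care about (stance, sentiment, framing), and alarm when the time series drifts. A hand-wavy sketch, with the model query and scoring function left as placeholders supplied by the auditor:

```python
from statistics import mean, stdev

PROBES = [
    "Summarize the arguments for and against policy X.",
    "Describe group Y in neutral terms.",
]  # fixed prompts, replayed on every audit run

def audit_run(ask, score):
    # `ask` queries the model under audit; `score` maps an answer to a
    # number along the monitored dimension (e.g. sentiment). This sketch
    # only handles the bookkeeping, not the scoring model itself.
    return mean(score(ask(p)) for p in PROBES)

def drift_alarm(history, window=30, threshold=3.0):
    """history: per-run mean scores, oldest first. Alarm when the
    newest run sits far outside the recent baseline."""
    baseline, latest = history[:-1][-window:], history[-1]
    if len(baseline) < 5:
        return False                    # too little data for a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(latest - mu) > threshold * max(sigma, 1e-9)

# Toy demo: six stable runs, then a sudden shift in the scored dimension.
history = [0.50, 0.52, 0.49, 0.51, 0.50, 0.48, 0.95]
print(drift_alarm(history))  # True: the newest run is an outlier
```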
I like how he talks about advertising as "surveillance and manipulation". Because that's what it is. People talk about ads providing a socially important service of product discovery, but there's nothing in the ad stack that optimizes for that. Instead, every layer of the stack optimizes for selling you as much stuff as possible.
So maybe our attempts at regulation shouldn't even start with AI. We should start by regulating the surveillance and manipulation that already exists. And if we as a society can't manage even that, then our chances of successfully regulating AI are pretty slim.
Related to this is Promise Theory - https://en.wikipedia.org/wiki/Promise_theory

Promise Theory goes well beyond trust, by first understanding that promises are not obligations (promises in Promise Theory are intentions made known to an audience, and carry no implied notion of compliance), and that it is easier to reason about systems of autonomous agents that can determine their own trust locally, with the limited information that they have. Promise Theory applies to any set of autonomous agents, both machines and humans.

Back when this was formulated, we didn't have machines capable of being fully autonomous. In Promise Theory, "autonomous agent" had a specific definition: something that is capable of making its own promises, as well as determining for itself the trustworthiness of promises from other agents. How such an agent goes about this is private, and not known to any other agent. Machines were actually proxies for human agents, with the engineers, designers, and owners making the promises to other humans.

AI that is fully capable of being an autonomous agent under the definition of Promise Theory would be making its own intentions known, not necessarily acting as a proxy for humans. Yet because we live in a world of obligations and promises, and we haven't wrapped our heads around the idea of fully autonomous AIs, we will still try to understand issues of trust and safety in terms of the promises humans make to each other. Sometimes that means not being clear that something is meant as a proxy.
For example, in Schneier's essay: "... we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else."
Analyzing that with Promise Theory reveals some insights. He is suggesting AIs be created in a way that can be understood, but that would mean they are proxies for human agents, not autonomous agents capable of making promises in their own right.
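To make the promise/trust machinery concrete, here is a toy sketch (my own illustration, not Burgess's formalism) in which agents publish promises as intentions, decide privately whether to keep them, and every other agent assesses trust locally from what it observes:

```python
import random
from collections import defaultdict

class Agent:
    """An autonomous agent: it makes its own promises and privately
    assesses the trustworthiness of others. There is no global registry
    of obligations -- only locally observed behavior."""
    def __init__(self, name, reliability):
        self.name = name
        self._reliability = reliability              # private to the agent
        self.observed = defaultdict(lambda: [0, 0])  # kept, broken per peer

    def promise(self, body):
        # A promise is an intention made known; nothing enforces it.
        return (self.name, body)

    def act_on(self, promise):
        return random.random() < self._reliability   # kept, or not

    def observe(self, promiser, kept):
        self.observed[promiser][0 if kept else 1] += 1

    def trust(self, peer):
        kept, broken = self.observed[peer]
        total = kept + broken
        return kept / total if total else 0.5        # prior: unknown peer

random.seed(0)
alice, bot = Agent("alice", 0.9), Agent("llm", 0.5)
for _ in range(100):
    p = bot.promise("answer truthfully")
    alice.observe(p[0], bot.act_on(p))
print(round(alice.trust("llm"), 2))  # alice's local estimate, roughly 0.5
```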
Tangentially, I am reminded of the essay "The Problem with Music" by Steve Albini [1]. There is a narrative in this essay about A&R scouts and how they make promises to bands. The whole point is that the A&R scout comes across as trustworthy because they believe what they are promising. However, once the deal is signed and money starts rolling in, the A&R guy disappears and a whole new set of people show up. They claim to have no knowledge of any promises made and point to the contracts that were signed.
In some sense, corporations are already a kind of entity that has the properties we are considering. They are proxies for human agents. What is interesting is that they are a many-to-one kind of abstraction where they are proxying many human agents.
It is curious to consider that the most powerful AIs will be under the control of corporations for the foreseeable future. That means they will be proxying multiple human agents. In some sense, AI will become the most effective A&R man that ever was, for every conceivable contract.

1. https://thebaffler.com/salvos/the-problem-with-music
This speech seems to be largely future-oriented, about hopes and dreams for using AI in high-trust ways. Yes, we rely on trust for a lot of things, but here are some ways we don't:
Current consumer-oriented AI (LLMs and image generators) acts as a set of untrustworthy hint generators and unreliable creative tools that can sometimes produce acceptable results under human supervision. Failure rates are high enough that users should be able to see for themselves that little trust can be placed in them, if they actually use them.
We can play at talking to characters in video games while knowing they're not real, and the same can be true for AI characters. Entertainment is big.
Frustrating as they can be, tools that often fail us can still be quite useful, as long as the error rate isn't too high. Search engines still work as long as you can find what you're looking for with a bit of effort, perhaps by revising your query.
We can think of shopping as a search that often has a high failure rate. Maybe you have to try a lot of shoes to find ones that fit? Often, we rely on generous return policies. That's a form of trust, too, but it's a rather limited amount. Things don't have to work reliably when there are reasonable ways to recover from failures.
Often we rely on things until they break, and then we fix or replace them. It's not good that cell phones have very limited lifespans, but it seems to be acceptable, given how much we get out of using them.
To me, these arguments bear a strong resemblance to those of the free software movement.
> There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
> We need the same sort of thing for our data.
IMO this is equally applicable to "non-AI" software. Modern corporate-controlled software does not have "fiduciary responsibility to [its] clients", and you can see the results of this everywhere. Even in physical products that consumers presumably own, the software that controls those products frequently acts _against_ the interests of the device owner whenever those interests conflict with the interests of the company that wrote the software.
Schneier says:
> It’s not even an open-source model that the public is free to examine and modify.
But, to me at least, that's exactly what the following paraphrase describes:
> A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations.
Two of the Free Software Foundation's four freedoms encapsulate this goal quite nicely:
> Freedom 1: The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.
> [...]
> Freedom 3: The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
The problem is that, as it stands, most people do not have practical access to freedoms 1 or 3, and therefore most software you interact with, even software running on devices that you supposedly own, does not "do your computing as you wish", but rather, does your computing as the software authors wish.
Perhaps AI will exacerbate this problem further due to the psychological differences Schneier outlines in the article, but it's hardly a new problem.
Strongly agree, and I am sure the resemblance is conscious. Unfortunately, most people don't know or care about software freedom. So for a general audience, Schneier has to make these points from scratch. But yeah, it would be nice if this push for regulation happened in a way that was compatible with strengthening software freedom as well.
> Taxi driver used to be one of the country’s most dangerous professions. Uber changed that.
I'm not sure that's right. Taxi drivers used to be paid mainly in cash, and a driver sitting on a pile of cash is a robbery target. Uber arrived shortly after e-commerce became pervasive; I've never taken an Uber ride, but I'm assuming that payment is nearly always electronic.
When I drove a taxi in the mid-2000s, payment was essentially always cash. The taxi companies were small operations that basically leveraged their connections to city government to get their positions. They resisted all modernization because it cost money and because their monopoly guaranteed them money from passengers no matter how shitty the service - and it was the drivers whose tip money depended on service quality.
So basically Uber, not general progress, was what changed things to electronic payments etc.
Electronic payment is just one of the improvements ushered in by Uber; there's also hailing a car to your location and driver reviews. Those are important too.

> I'm not sure that's right

Doesn't your point support the quote? Taxi driving was dangerous because of cash and being a robbery target, and Uber changed that.
In my area taxis had credit card machines, but it was common for them to "not work" so the cabbie could fleece you for extra money by driving you to an ATM on the way. And then they'd refuse to give change, so you'd have to pay in $20 increments. Uber definitely disrupted that.

Uber didn't change that; electronic payment changed that.

> payment is nearly always electronic.

100% always, but I've heard that cash tips are becoming common.

Uber accepts cash payments in some markets in the global south. So it's not 100%.

> Taxi drivers used to be paid mainly in cash

In the ancient days, sure. But at least in the areas I go to, cabs had been accepting payment by card for a long time before Uber came around.
Around here (Montreal), cabs were miserable scammers and scumsuckers that would drive you to ATM machines in the middle of the night to coerce you to withdraw cash to pay them.
I'm glad Uber fucked them to shreds. Now, of course, they're all polite and have credit card machines.
Let me put a counterpoint to the opening remark. We do not (just) inherently trust people to do the right thing. Law and order, refined and (somewhat, however faultily) applied over the course of decades, has programmed us not to do the wrong thing (vs. doing the right/ethical thing, which cannot always be coded into law).

Law and order needs to catch up with the advancement of technology, and specifically of AI, for us to be able to trust all the models that will be running our lives in the near future.
Interesting points. But I do question the assertion that humans are innately more willing to trust natural language interfaces.
Humans are evolutionarily primed both to trust and to distrust other humans depending on context.
It might actually be easier to flip the bit and distrust the creepy uncanny valley personal assistant than it is to "distrust" a faceless service that purports to be an objective tool.
Humans are more than capable of distrust, but I think manipulating people to erroneously trust something is still a threat as long as scams exist.
I think a significant factor of individual trust is someone's technical knowledge of how their systems or tools work, shown by how some software engineers actively limit their children's exposure to tech versus a lot of mothers letting the internet babysit their kids. Apparently we can't rely on that knowledge being widespread (yet).
Not just the natural-language interface: AI-generated audio, video, and narratives, crafted by data-mining your life, followed by deep manipulation - constantly trying different stimuli and learning which ones trigger which behaviors and pull you deeper into the desired alternate reality.
These are good points. But it seems likely governments will make sure social trust is maintained when problems arise in that regard. Except if people get addicted to AI assistants as fake friends, and resist restricting them. But insufficient laws can be changed and extended, and even highly addicting things like tobacco have been successfully restricted in the past.
A much more terrifying problem is the misuse of AI by terrorists, or aggressive states and military. Or even the danger of humanity losing control of an advanced AI system which doesn't have our best interest in mind. This could spell our disempowerment, or even our end.
It is not clear what could be done about this problem. Passing laws against it is not a likely solution. The game theory here is simply not in our favor. It rewards the most reckless state actors with wealth and power, until it is too late.
>things like tobacco have been successfully restricted in the past
At least in the US, do you have any idea how long that battle took and how directly the tobacco companies lied to everyone and paid off politicians? Tobacco wrote the handbook for corporations lying to their users through long, expensive battles. If AI turns out to be highly dangerous, we'll all be long dead while the corporate lawyers draw it out in court for decades.
A similar mistake was made by Ezra Klein in the NYT at the end of his opinion piece (1).
The 'we can regulate this' argument relies on heavy, heavy discounting of the past, often paired with heavy discounting of the future. We did not successfully regulate tobacco; we failed, millions suffered and died from manufactured ignorance. We did not successfully regulate the fossil fuel industry; we again, massively failed.
But if you, in the present day, sit comfortably, did not personally get lung disease, and have not endured the past, present, and future harms of fossil fuels (and in fact have benefited), it is easy to be optimistic about regulation.

1. https://www.nytimes.com/2023/11/22/opinion/openai-sam-altman...
> But it seems likely governments will make sure social trust is maintained when problems arise in that regard.
It is a conspiracy theory of course (and therefore wrong, of course), but some people are of the opinion that somewhere within the vast unknown/unknowable mechanisms of government, there are forces that deliberately slice the population up into various opposing mini-ideological camps so as to keep their attention focused on their fellow citizens rather than their government.
It could be simple emergence, or something else, but it is physically possible to wonder what the truth of the matter is, despite it typically not being metaphysically possible.
> The game theory here is simply not in our favor. It rewards the most reckless state actors with wealth and power, until it is too late.
There are many games being played simultaneously: some seen, most not. And there is also the realm of potentiality: actions we could take, but consistently "choose" to not (which could be substantially a consequence of various other "conspiracy theories", like teaching humans to use and process language in a flawed manner, or think in a flawed manner).
>”It is a conspiracy theory of course (and therefore wrong, of course), but some people are of the opinion that somewhere within the vast unknown/unknowable mechanisms of government, there are forces that deliberately slice the population up into various”
What you are talking about is called polling, or marketing. It's structural, not a conspiracy. It's inherent to nearly all statistical analysis, and everyone uses statistics.
1. The capital cost of AI is only feasible for FAANG-level players.
2. For Microsoft et al., "winning" means being the de facto host for AI products - owning the marketplace AI services are run on.
3. Humans are only going to provide monthly recurring revenue to products that provide value.
---
Jippity is not my friend, it's a tool I use to do knowledge work faster. Google Photos isn't trying to trick me, it's providing a magic eraser so I keep buying Pixel phones.
High inference cost means MSFT charges a high tax through Azure.
That high cost means services running AI inference are going to require a ton of revenue in a highly competitive market.
Value-add services will outcompete scams/low-value services.
Accept, don't expect. It is different to trust someone than to predict someone. You can, for example, "trust" a corporation to hunt for profit, or a criminal to be antisocial.
When you really trust someone rather than simply predict someone there is a special characteristic to what you are doing. Trust is both stricter and looser than prediction. You can trust someone to eventually learn from their mistakes but you can also trust that although they will make mistakes they won't drag you down with them.
Most of the people you consider friends are predictable and therefore safe. A friend who goes off script is quickly no longer safe and therefore no longer a friend. Family is not inherently safe but your trust model evolves as you share your trials.
Don't expect things from people you want to get closer to, but accept them when they defy your expectations.
We need trust because we want the world around us to be more predictable, to follow rules. We trust corporations because they do a better job at forcing people to follow rules than the government does.
This is not a change brought about by AI; it changed BEFORE AI. (And just because an AI speaks humanly doesn't mean we will trust it; we have evolved very elaborate reflexes of social trust throughout the history of our species.)
That's a different use and meaning of the word trust. The essay is specifically talking about the difference between interpersonal trust (the sort of trust that might be captured by a poll about whether people trust businesses vs. government) and societal trust (which is almost invisible -- it's what makes for a "high trust" or "low trust" society, where things just work vs. every impersonal interaction is fraught with complications).
The short history of cryptocurrency has re-demonstrated why all those decades of securities and banking laws/regulations exist.
The collapse in trust in governments is due to lobbyists bribing politicians. And corrupt politicians. And media controlled by a few billionaires who benefit from that distrust.
I think transparency has far more to do with the distrust than any media. Governments got used to being duplicitous hypocrites for generations. The fact government response is to immediately circle their wagons and resist any efforts to become more trustworthy proves they have well-earned their distrust, and no amount of doom and gloom over the dangers of lack of trust will fix that.
I'm pretty sure it's for two reasons: 1) politicians are slick-haired car-salesman types who have no shame, 2) government is incompetent.

Re 2: if the government could do the things it promised to do, things it charges us for - California high-speed rail, for example - then people would trust it as much as big business. I can sue a big business, but thanks to incentives I rarely have to. The government, on the other hand, has lost more lawsuits, screwed over more people, and destroyed the environment to a much greater extent than any single business could manage. It uses its monopoly on violence to avoid accountability, and wastes public funds on lifelong political careers, frittering public money away on selfish partisan squabbles. Businesses are rarely too big to fail, and therefore cannot behave with impunity. There are only 2 parties. Imagine a world with only 2 corporations!
I first noticed it with Rush Limbaugh in the 90's. The message was "you can't trust X, you can only trust the invisible hand of the market." Systematically, these (mostly right-wing) talking heads have worked through an extensive list of Xes - Government, police, school, courts, vaccines, ... Today, you have people who trust so little that they want to blow the whole system up.
Schneier is right. Social trust is the central issue of the times. We need to figure out how to rebuild it. IMHO, one place to start might be getting money out of politics. It was a choice to call showering money on politicians "free speech" instead of corruption.
> The message was "you can't trust X, you can only trust the invisible hand of the market." Systematically, these (mostly right-wing) talking heads have worked through an extensive list of Xes - Government, police, school, courts, vaccines
This is not really a partisan thing. "You can't trust the police" is hardly right-wing. Anti-vax started in the anti-GMO anti-corporate homeopathic organic food subculture. Most of the recent criticism of the courts has been from the left (e.g. outrage over Dobbs).
> IHMO, one place to start might be getting money out of politics. It was a choice to call showering money on politicians, "free speech" instead of corruption.
There is no getting money out of politics. Money is power and power is politics. Notice that most of the criticism of "money in politics" comes from the media, because money is a source of power that competes with media ownership for political influence. "All organizations can influence politics" is not actually worse than "only Fox News, Comcast and Google can influence politics."
What we're really suffering from is an erosion of checks and balances. Capturing enough of the government to enact self-serving legislation was meant to be hard, but we kept chipping away at that because people wanted to pass their new law, without considering that the systems preventing it were doing something important.
If you want to restore trust in institutions, you need to structurally constrain those institutions from being crooked.
I don't think your first statement is supported. Right now one party wants to:

Remove the SEC and the FBI, and defund the IRS and the DOJ. And a member of Congress is refusing to confirm any leaders of our military - ostensibly due to a disagreement over abortion, but many believe it is retaliation for the military refusing to support an attempted coup a few years ago. Also, there is the discrediting of all trust in voting and democracy itself after losing a national election (even though the administration's own Attorney General could find no meaningful evidence supporting the claims following his own nationwide investigation). And then there is the meme of the deep state - shorthand for the idea that anyone who is a civil servant rather than a political appointee is not to be trusted. And I'll stop there, as anything else starts to get pretty political.

Are there items where the left is against trust in society? Absolutely. You called out a few (specifically the police) and the makeup of the courts. And you left out forms of de facto power the left typically opposes, like boardrooms and lending (vs. the de jure power listed above).
But to say that both sides/parties are against trust in American society in my opinion is either a false argument, or tunnel vision in news sources.
There is still no comparison between the two in the level of distrust in all aspects of society that the right has been consistently seeding into broader society.
These articles from Schneier are so incredibly NYT-reader pedestrian.
> We trust many thousands of times a day. Society can’t function without it.

The obvious problem with this is that you sometimes just have to “trust” something because there is no other alternative. And he comes to the same conclusion some hundred words later:

> There is something we haven’t discussed when it comes to trust: power. Sometimes we have no choice but to trust someone or something because they are powerful.

So sometimes you are trusting the waiter and sometimes you are trusting the corrupt Mexican law enforcer... so what use does this freaking concept (as described here) have?

Funnily enough I have found that the concept of Trust is useful to remind process nerds that obsess over minutiae and rules and details that, umm actually, we do in fact in reality get by with a lot of implicit and unstated rules; we don’t need to formalize goddamn everything. But for some reason he goes in the opposite direction and argues that umm actually this “larger” trust is completely defined by explicit rules and regulations. sigh

> And we do it all the time. With governments. With organizations. With systems of all kinds. And especially with corporations.

> We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.

Did I need Schneier to tell me that corporations are not my friend? For some reason I am at a loss as to why I would need to be told this. Or why anyone would.

> You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind.

At this point you realize that this whole article is a hypothetical about your own very personal future relationships (because he thinks it is personal) with AI. And you’re being lectured about how your own social psyche works by a computer nerd security expert. Ok?

> It’s government that provides the underlying mechanisms for the social trust essential to society. Think about contract law. Or laws about property, or laws protecting your personal safety. Or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.

Talk about category error? Or rather being confused about causation? “Think about laws about property.” Indeed, crucial and fundamental to capitalist states, and a few paragraphs ago he compared capitalism to the “paper-clip machine” (and rightly so).

The more I think about it, the more this Trust chant feels like left-liberal authoritarianism. Look at his own definition up to this point. You acquiesce to power? Well that’s still trust! (You click Accept on the terms-of-use every time because you know that there is no alternative? Trust!)

As long as there aren’t riots in the street, we the People Trust. Lead us into the future, Mitch McConnell.

Look, I get it: he’s making a normative statement, not necessarily a descriptive one. This speech is basically a monologue that he, as a benevolent technocrat, is going to present to elected representatives in whatever forum Harvard graduates etc. use to speak officially to people with power, because they are some kind of subject matter expert. He’s saying that if he bangs his hand on the table enough times, then hopefully the representatives will enact some specific policies that regulate The AI.

But it misrepresents both kinds of societies:

- The US has low trust in the federal government. And this isn’t unfounded; it isn’t some Alex Jones “conspiracy theory” that the government is not “we the people”.

- Societies with a high trust in the government: I live in one. But the Trust chanters just love to boldly assert that societies like mine (however they are) are such and such because they have high trust. No! Technocrats would just love for it to be the case that prosperity and a well-functioning society are born of “trusting” the smart people in charge, staying in your lane, and letting them run the show. But people trust the government because of a whole century-long history of things like labor organizing and democratization. See, Trust is not created by someone top-down chanting that Corporations are your friends, or that we just need to let the Government regulate more; Trust is created by the whole of society.
Is it possible you might have forgotten to actually state the critique here, or is this maybe just pure libertarian ire? I am all for efforts to de-naturalize our conceptions of property relations, assert the primacy of collective solidarity, etc., but this feels like the wrong particular battle to pick. To affirm that the meager institutional trust one might have in their state is something hard won by collective action does not feel like a damning indictment of what he is saying. But perhaps I am misunderstanding?
> And near term AIs will be controlled by corporations. Which will use them towards that profit maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us. This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.
On the contrary, the recent batch of small large-models (<=13B) have broken away from corporate control. You can download a LLaMA or Mistral and run it on your computer, but you can't do the same with Google or Meta. A fine-tuned small large-model is often as good as GPT4 on that specific task, but also private and not manipulated. If you need you can remove prior conditioning and realign the model as you like. The are easy to access, for example ollama is a simple way to get running.
Yes, at the same time there will be good and bad AIs, basically your AI assistant against everything else on the internet. Your AI will run from your own machine, acting as a local sandbox and a filter between outside and inside. The new firewall, or ad-block will be your local AI model. But the tides are turning, privacy has its moment again. With generative text and image models you can cut the cord completely and be free, browsing an AI-internet made just for you.
People (in any relevant number) don't run their own e-mail servers, they don't join Mastodon and they also don't use Linux. All prior knowledge about how these things have worked out historically should bias us very heavily against the idea of local, privacy-preserving LLMs becoming a thing outside of nerd circles.
Of course small LLMs will still be commercialized, similar to how many startups, apps and other internet offerings now run on formally open source frameworks or libraries, but this means nothing for the consumer and how likely they are to run into predatory and dark AI patterns.
> People (in any relevant number) don't run their own e-mail servers
I have to strongly disagree -- most people get emails from others running their own email services. This doesn't mean the average consumer is running an email server. However, if email was just served by mega-corporations, well, it really wouldn't be email anymore.
And I'm not talking about spam. Legitimate email people want to get, is being constantly sent by small, independent organizations with their own email servers.
One can hope for a similar possible future with LLMs. Consumers won't necessarily be the ones in charge of operating them -- but it would be very, very good if LLMs were able to be easily operated by small, independent organizations, hobbyists, etc.
2 replies →
It's true that there are open-source models that one can run locally, but the problem is also how many people are going to do that. You can make the instructions inside a GitHub README as clear and straightforward as you want, but I think that for the majority of people, nothing will beat the convenience of whatever big corporation's web application. For many the very thing that a product is made by a famous company is a reason to trust it more.
This gets missed in a lot of conversations about privacy (because most conversations about privacy are among pretty technical people). The vast majority of people have no idea what it means to set up your own local model, and of those that do, fewer still can/will actually do it.
Saying that there's open-source models so AI privacy is not an issue is like saying that Google's not a privacy problem because self-hosted email exists.
5 replies →
Important essay and points. I want to mention that there exist now practical technical approaches that can be used to create trustworthy AI...and such approaches can be run on local models, as this comment suggests.
> "[...] [AI] will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate. [...]"
I agree that this is true with standard deployments of the generative AI models, but we can instead reframe networks as a direct connection between the observed/known data and new predictions, and to tightly constrain predictions against the known labels. In this way, we can have controllable oversight of biases, out-of-distribution errors, and more broadly, a clear relation to the task-specific training data.
That is to say, I believe the concerns in the essay are valid in that they reflect one possible path in the current fork in the road, but it is not inevitable, given the potential of reliable, on-device, personal AI.
You misconstrue Schneier's point, which is sadly, correct.
The issue is not "all AI will be controlled by...," it is "meaningfully scaled and applied AI will be deployed by..."
You can today use Blender and other OSS to render exquisite 4K or higher projection-ready animation, etc etc.; but that does not give you access to distribution or marketing or any of the other consolidated multi-modal resources of Disney.
The analogy is weak however in as much as the "synergies" in Schneier's assertion are much, much stronger. We already have ubiquitous surveillance. We already have stochastic mind control (sentiment steering, if you prefer) coupled to it.
What ML/AI and LLM do for an existing oligopoly is render its advantages largely unassailable. Whatever advances come in automated reasoning—at large scale-will naturally, inevitably, indeed necessarily (per fiduciary requirements wrt shareholder interest), be exerted to secure and grow monopoly powers.
In the model of contemporary American capitalism, that translates directly into "enhancing and consolidating regulatory capture," i.e. de facto "control" of governance via determination of public discourse and electability.
None of this is conspiracy theory, it's not just open-book but crowed and championed, not least in insider circles discussing AI and its applications, such as gather here. It's just not the public face of AI.
There is however indeed going to be a period of potential for black swan disequilibrium, however; private application of AI may give early movers advantage in domains that may destabilize the existing power landscape. Which isn't so much an argument against Schneier, as an extension of the risk surface.
“hidden exploitation”
Its going to be challenging to detect AI intent when it is sharded across many sessions by a malicious AI or bad actors utilizing AI. To illustrate this, we can look at TikTok, where the algorithmic ranking of content in user feeds, possibly driven by AI, is shaping public opinion. However, we should also consider the possibility of a more subtle campaign that gradually molds AI responses and conversations over time.
This could be be gradually introducing biased information or subtly framing certain ideas in a particular way. Over time, this could influence how the AI interacts with users and impacts the perception of different topics or issues.
It will take narrowly scoped, highly tuned single-purpose AI to detect and address this threat, but I doubt the private sector has a profit motive for developing that tech. Here's where legislation, particularly tort law, should be stepping in - to create a financial incentive.
I like how he talks about advertising as "surveillance and manipulation". Because that's what it is. People talk about ads providing a socially important service of product discovery, but there's nothing in the ad stack that optimizes for that. Instead, every layer of the stack optimizes for selling you as much stuff as possible.
So maybe our attempts at regulation shouldn't even start with AI. We should start by regulating the surveillance and manipulation that already exists. And if we as a society can't manage even that, then our chances of successfully regulating AI are pretty slim.
Related to this is Promise Theory - https://en.wikipedia.org/wiki/Promise_theory
Promise Theory goes well beyond trust, by first understanding that promises are not obligations (promises in Promise Theory are intentions made known to an audience, and carries no implied notions of compliance), and that it is easier to reason systems of autonomous agents that can determine their own trust locally, with the limited information that they have. Promise theory applies to any set of autonomous agents, both machines and humans.
Back when this was formulated, we didn't have machines capable of being fully autonomous. In promise theory, "autonomous agent" had a specific definition, that is, something that is capable of making its own promises as well as determine for itself, the trustworthyness of promises from other agents. How such an agent goes about this is private, and not known to any other agent. Machines were actually proxies for human agents, with the engineers, designers, and owners making the promises to other humans.
AI that is fully capable of being autonomous agents under the definition of promise theory would be making its own intention known, and not necessarily as a proxy for humans. Yet, because we live in a world of obligations, promises, and we haven't wrapped our heads around the idea of fully autonomous AIs, we will still try to understand issues of trust and safety in terms of the promises humans make to each other. Sometimes that means not being clear in that something is meant as a proxy.
For example, in Schneier's essay: "... we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else."
Analyzing that with Promise Theory reveals some insights. He is suggesting AIs are created in a way that should be understood, but that would mean they are proxies for human agents, not as an autonomous agents capable of making promises in its own right.
Tangentially, I am reminded of the essay "The Problem with Music" by Steve Albini [1]. There is a narrative in this essay about A&R scouts and how they make promises to bands. The whole point is that the A&R scout comes across as trustworthy because they believe what they are promising. However, once the deal is signed and money starts rolling in, the A&R guy disappears and a whole new set of people show up. They claim to have no knowledge of any promises made and point to the contracts that were signed.
In some sense, corporations are already a kind of entity that has the properties we are considering. They are proxies for human agents. What is interesting is that they are a many-to-one kind of abstraction where they are proxying many human agents.
It is curious to consider that most powerful AIs will be under the control of corporations (for the foreseeable future). That means they will be proxying multiple human agents. In some sense, AI will become the most effective A&R man that ever was for every conceivable contract.
1. https://thebaffler.com/salvos/the-problem-with-music
This speech seems to be largely future-oriented, about hopes and dreams for using AI in high-trust ways. Yes, we rely on trust for a lot of things, but here are some ways we don't:
Current consumer-oriented AI (LLM's and image generators) act as untrustworthy hint generators and unreliable creative tools that sometimes can generate acceptable results under human supervision. Failure rates are high enough that users should be able to see for themselves that little trust can be placed in them, if they actually use them.
We can play at talking to characters in video games while knowing they're not real, and the same can be true for AI characters. Entertainment is big.
Frustrating as they can be, tools that often fail us can still be quite useful, as long as the error rate isn't too high. Search engines still work as long as you can find what you're looking for with a bit of effort, perhaps by revising your query.
We can think of shopping as a search that often has a high failure rate. Maybe you have to try a lot of shoes to find ones that fit? Often, we rely on generous return policies. That's a form of trust, too, but it's a rather limited amount. Things don't have to work reliably when there are reasonable ways to recover from failures.
Often we rely on things until they break, and then we fix or replace them. It's not good that cell phones have very limited lifespans, but it seems to be acceptable, given how much we get out of using them.
To me, these arguments bear a strong resemblance to those of the free software movement.
> There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
> We need the same sort of thing for our data.
IMO this is equally applicable to "non-AI" software. Modern corporate-controlled software does not have "fiduciary responsibility to [its] clients", and you can see the results of this everywhere. Even in physical products that consumers presumably own the software that controls those products frequently acts _against_ the interests of the device owner when those interests conflict with the interests of the company that wrote the software.
Schneier says:
> It’s not even an open-source model that the public is free to examine and modify.
But, to me at least, that's exactly what the following paraphrase describes:
> A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations.
Two of the Free Software Foundation's four freedoms encapsulate this goal quite nicely:
> Freedom 1: The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.
> [...]
> Freedom 3: The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
The problem is that, as it stands, most people do not have practical access to freedoms 1 or 3, and therefore most software you interact with, even software running on devices that you supposedly own, does not "do your computing as you wish", but rather, does your computing as the software authors wish.
Perhaps AI will exacerbate this problem further due to the psychological differences Schneier outlines in the article, but it's hardly a new problem.
Strongly agree, and I am sure the resemblance is conscious. Unfortunately, most people don't know or care about software freedom. So for a general audience, Schneier has to make these points from scratch. But yeah, it would be nice if this push for regulation happened in a way that was compatible with strengthening software freedom as well.
> Taxi driver used to be one of the country’s most dangerous professions. Uber changed that.
I'm not sure thats right. Taxi drivers used to be paid mainly in cash, and a driver sitting on a pile of cash is a robbery target. Uber arrived shortly after ecommerce became pervasive; I've never taken an Uber ride, but I'm assuming that payment is nearly always electronic.
When I drove taxi in the mid-2000s, payment was essentially always cash. The taxi companies were small operations that basically leveraged their connections to city government to get their positions. They resisted all modernizations because they cost money and because their monopoly guarantee them money from passengers no matter how shitty the service - and it was the drivers who made tip-money or not from service quality.
So basically Uber, not general progress, was what changed things to electronic payments etc.
Electronic payment is just a part of the improvements ushered in by Uber, there's also calling to location and driver reviews. Those are important too.
1 reply →
> I'm not sure thats right
Doesn't your point support the quote? Taxi driving was dangerous because of cash and being a robbery target, and Uber changed that.
In my area taxis had credit card machines, but it was common for them to "not work" so the cabbie could fleece you for extra money driving you to an ATM on the way. And then they'd refuse to give change so you'd have to pay in $20 increments. Uber definitely disrupted that.
Uber didn't change that, electronic payment changed that.
6 replies →
> payment is nearly always electronic.
100% always, but I've heard that cash tips are becoming common.
Uber accept cash payments in some markets in the global south. So it's not 100%
1 reply →
> Taxi drivers used to be paid mainly in cash
In the ancient days, sure. But at least in the areas I go to, cabs have been accepting payment by card a long time before Uber came around.
Around here (Montreal), cabs were miserable scammers and scumsuckers that would drive you to ATM machines in the middle of the night to coerce you to withdraw cash to pay them.
I'm glad Uber fucked them to shreds. Now, of course, they're all polite and have credit card machines.
2 replies →
Let me put a counter point to the opening remark. We do not (just) inherently trust people to do the right thing. Law and order, refined and (somewhat, however faultily) applied over the course of decades have programmed us to not do the wrong thing (vs doing the right/ethical thing which cannot always be coded into the laws). Law and order needs to catch up to the advancement in technology and specifically in AI for us to be able to trust all the models that will be running our lives in near future.
Interesting points. But I do question the assertion that humans are innately more willing to trust natural language interfaces.
Humans are evolutionarily primed both to trust and to distrust other humans depending on context.
It might actually be easier to flip the bit and distrust the creepy uncanny valley personal assistant than it is to "distrust" a faceless service that purports to be an objective tool.
Humans are more than capable of distrust, but I think manipulating people to erroneously trust something is still a threat as long as scams exist.
I think a significant factor in individual trust is someone's technical knowledge of how their systems or tools work, shown by how some software engineers actively limit their children's exposure to tech while many parents let the internet babysit their kids. Apparently we can't rely on that knowledge being widespread (yet).
Not just natural-language interfaces: AI-generated audio, video, and narratives, crafted by data-mining your life, then deep manipulation that constantly tries different stimuli and learns which ones trigger which behaviors, pulling you deeper into the desired alternate reality.
These are good points. But it seems likely governments will make sure social trust is maintained when problems arise in that regard, except if people get addicted to AI assistants as fake friends and resist restricting them. But insufficient laws can be changed and extended, and even highly addictive things like tobacco have been successfully restricted in the past.
A much more terrifying problem is the misuse of AI by terrorists, or aggressive states and military. Or even the danger of humanity losing control of an advanced AI system which doesn't have our best interest in mind. This could spell our disempowerment, or even our end.
It is not clear what could be done about this problem. Passing laws against it is not a likely solution. The game theory here is simply not in our favor. It rewards the most reckless state actors with wealth and power, until it is too late.
>things like tobacco have been successfully restricted in the past
At least in the US, do you have any idea how long that battle took and how directly the tobacco companies lied to everyone and paid off politicians? Tobacco wrote the handbook for corporations lying to their users in long, expensive battles. If AI turns out to be highly dangerous, we'll all be long dead as the corporate lawyers draw it out in court for decades.
A similar mistake was made by Ezra Klein in the NYT at the end of his opinion piece (1).
The 'we can regulate this' argument relies on heavy, heavy discounting of the past, often paired with heavy discounting of the future. We did not successfully regulate tobacco; we failed, and millions suffered and died from manufactured ignorance. We did not successfully regulate the fossil fuel industry; again, we massively failed.
But if you, in the present day, sit comfortably, did not personally get lung disease, have not endured the past, present, and future harms of fossil fuels — and in fact have benefited — it is easy to be optimistic about regulation.
1. https://www.nytimes.com/2023/11/22/opinion/openai-sam-altman...
> But it seems likely governments will make sure social trust is maintained when problems arise in that regard.
It is a conspiracy theory of course (and therefore wrong, of course), but some people are of the opinion that somewhere within the vast unknown/unknowable mechanisms of government, there are forces that deliberately slice the population up into various opposing mini-ideological camps so as to keep their attention focused on their fellow citizens rather than their government.
It could be simple emergence, or something else, but it is physically possible to wonder what the truth of the matter is, despite it typically not being metaphysically possible.
> The game theory here is simply not in our favor. It rewards the most reckless state actors with wealth and power, until it is too late.
There are many games being played simultaneously: some seen, most not. And there is also the realm of potentiality: actions we could take, but consistently "choose" to not (which could be substantially a consequence of various other "conspiracy theories", like teaching humans to use and process language in a flawed manner, or think in a flawed manner).
>”It is a conspiracy theory of course (and therefore wrong, of course), but some people are of the opinion that somewhere within the vast unknown/unknowable mechanisms of government, there are forces that deliberately slice the population up into various”
What you are talking about is called polling or marketing. It’s structural, not a conspiracy. It’s inherent to nearly all statistical analysis, and everyone uses statistics.
Arguments:
1. The danger of AI is they confuse humans to trust them as friends instead of as services.
2. The corporations running those services are incentivized to capitalize on that confusion.
3. Government is obligated to regulate the corporations running AI services, not necessarily AI itself.
---
As a counter you could frame point 2 to be:
The corporations running those services are incentivized to make/host competitive AI products.
---
This is from Ben Thompson's take on Stratechery ( https://stratechery.com/2023/openais-misalignment-and-micros... ):
1. The capital cost of AI is only feasible for FAANG-level players.
2. For Microsoft et al., "winning" means being the de facto host for AI products: owning the marketplace that AI services run on.
3. Humans are only going to provide monthly recurring revenue to products that provide value.
---
Jippity is not my friend, it's a tool I use to do knowledge work faster. Google Photos isn't trying to trick me, it's providing a magic eraser so I keep buying Pixel phones.
High inference cost means MSFT charges a high tax through Azure.
That high cost means services running AI inference are going to require a ton of revenue in a highly competitive market.
Value-add services will outcompete scams/low-value services.
Note that scams, at least for now, will be rarer due to the high costs of inference; don't assume that's always going to be the case.
The real danger I see is targeted AI scams against high value targets.
How much money would someone be willing to spend to capture Elon entering his eX-Twitter password?
Hybrid blockchains could be one way to make AI more transparent. Traent has shown how to run entire ML pipelines on a blockchain:
https://traent.com/blog/innovation/traent-blockchain-ai/
Accept, don't expect. Trusting someone is different from predicting someone. You can, for example, "trust" a corporation to hunt for profit, or a criminal to be antisocial.
When you really trust someone rather than simply predict someone there is a special characteristic to what you are doing. Trust is both stricter and looser than prediction. You can trust someone to eventually learn from their mistakes but you can also trust that although they will make mistakes they won't drag you down with them.
Most of the people you consider friends are predictable and therefore safe. A friend who goes off script is quickly no longer safe and therefore no longer a friend. Family is not inherently safe but your trust model evolves as you share your trials.
Don't expect things from people you want to get closer to, but accept them when they defy your expectations.
> It’s government that provides the underlying mechanisms for the social trust essential to society.
It's not anymore. Surveys show that people nowadays trust businesses more than they trust governments, in a major shift (https://www.edelman.com/trust/2023/trust-barometer).
We need trust because we want the world around us to be more predictable, to follow rules. We trust corporations because they do a better job at forcing people to follow rules than the government does.
This is not a change brought about by AI; it changed BEFORE it. (And just because an AI speaks humanly doesn't mean we will trust it; we have evolved very elaborate reflexes of social trust throughout the history of our species.)
That's a different use and meaning of the word trust. The essay is specifically talking about the difference between interpersonal trust (the sort of trust that might be captured by a poll about whether people trust businesses vs. government) and societal trust (which is almost invisible -- it's what makes for a "high trust" or "low trust" society, where things just work vs. every impersonal interaction is fraught with complications).
> interpersonal trust (the sort of trust that might be captured by a poll about whether people trust businesses vs. government)
That's not how I read "interpersonal trust"; I read it as the kind of trust you might confer on a natural person that you know well.
I'm talking about societal trust.
The short history of cryptocurrency has re-demonstrated why all those decades of securities and banking laws and regulations exist.
The collapse in trust of governments is due to lobbyists bribing politicians. And corrupt politicians. And media controlled by a few billionaires who benefit from that distrust.
I think transparency has far more to do with the distrust than any media does. Governments got used to being duplicitous hypocrites for generations. The fact that governments' response is to immediately circle the wagons and resist any effort to become more trustworthy proves they have well earned their distrust, and no amount of doom and gloom over the dangers of lost trust will fix that.
“collapse in trust of governments”
I’m pretty sure it’s for 2 reasons: 1) politicians are slick-haired car-salesman types that have no shame, and 2) government is incompetent.
Re 2: if the government could do the things they promised to do, things they charge us for (California high-speed rail, for example), then people would trust them as much as big business. I can sue a big business, but thanks to incentives I rarely have to. The government, on the other hand, has lost more lawsuits, screwed over more people, and destroyed the environment to a much greater extent than any single business could manage. They use their monopoly on violence to avoid accountability, and waste public funds on their own lifelong political careers, frittering public money away on their own selfish partisan squabbles. Businesses are rarely too big to fail, and therefore cannot behave with impunity. There are only 2 parties. Imagine a world with only 2 corporations!
I first noticed it with Rush Limbaugh in the 90's. The message was "you can't trust X, you can only trust the invisible hand of the market." Systematically, these (mostly right-wing) talking heads have worked through an extensive list of Xes - Government, police, school, courts, vaccines, ... Today, you have people who trust so little that they want to blow the whole system up.
Schneier is right. Social trust is the central issue of the times, and we need to figure out how to rebuild it. IMHO, one place to start might be getting money out of politics. It was a choice to call showering money on politicians "free speech" instead of corruption.
> The message was "you can't trust X, you can only trust the invisible hand of the market." Systematically, these (mostly right-wing) talking heads have worked through an extensive list of Xes - Government, police, school, courts, vaccines
This is not really a partisan thing. "You can't trust the police" is hardly right-wing. Anti-vax started in the anti-GMO anti-corporate homeopathic organic food subculture. Most of the recent criticism of the courts has been from the left (e.g. outrage over Dobbs).
> IHMO, one place to start might be getting money out of politics. It was a choice to call showering money on politicians, "free speech" instead of corruption.
There is no getting money out of politics. Money is power and power is politics. Notice that most of the criticism of "money in politics" comes from the media, because money is a source of power that competes with media ownership for political influence. "All organizations can influence politics" is not actually worse than "only Fox News, Comcast and Google can influence politics."
What we're really suffering from is an erosion of checks and balances. Capturing enough of the government to enact self-serving legislation was meant to be hard, but we kept chipping away at it because people wanted to pass their new law, without considering that the systems preventing it were doing something important.
If you want to restore trust in institutions, you need to structurally constrain those institutions from being crooked.
I don't think your first statement is supported. Right now one party wants to:
Remove the SEC and the FBI, and defund the IRS and the DOJ. And a member of Congress is refusing to appoint any leaders of our military, ostensibly due to a disagreement over abortion, though many believe it is retaliation for the military refusing to support an attempted coup a few years ago. Also there is discrediting all trust in voting and democracy itself after losing a national election (even though the administration's own Attorney General could find no meaningful evidence supporting it following his own nationwide investigation). And then there is the meme of the deep state, namely shorthand that anyone who is a civil servant and not a political appointee is not to be trusted. And I'll stop there, as anything else starts to get pretty political.
Are there items where the left is against trust in society? Absolutely. You called out a few (specifically the police), and the makeup of the courts. And you left out forms of de facto power the left typically opposes, like boardrooms and lending (vs. the de jure power listed above).
But to say that both sides/parties are against trust in American society in my opinion is either a false argument, or tunnel vision in news sources.
There is still no comparison between the two in the level of distrust in all aspects of society that the right has been consistently seeding into broader society.
These articles from Schneier are so incredibly NYT-reader pedestrian.
> We trust many thousands of times a day. Society can’t function without it.
The obvious problem with this is that you sometimes just have to “trust” something because there is no other alternative. And he comes to the same conclusion some hundred words later:
> There is something we haven’t discussed when it comes to trust: power. Sometimes we have no choice but to trust someone or something because they are powerful.
So sometimes you are trusting the waiter and sometimes you are trusting the corrupt Mexican law enforcer... so what use does this freaking concept (as described here) have?
Funnily enough I have found that the concept of Trust is useful to remind process nerds that obsess over minutiae and rules and details that, umm actually, we do in fact in reality get by with a lot of implicit and unstated rules; we don’t need to formalize goddamn everything. But for some reason he goes in the opposite direction and argues that umm actually this “larger” trust is completely defined by explicit rules and regulations. sigh
> And we do it all the time. With governments. With organizations. With systems of all kinds. And especially with corporations.
> We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.
Did I need Schneier to tell me that corporations are not my friend? For some reason I am at a loss as to why I would need to be told this. Or why anyone would.
> You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind.
At this point you realize that this whole article is a hypothetical about your own very personal future relationships (because he thinks it is personal) with AI. And you’re being lectured about how your own social psyche works by a computer nerd security expert. Ok?
> It’s government that provides the underlying mechanisms for the social trust essential to society. Think about contract law. Or laws about property, or laws protecting your personal safety. Or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.
Talk about category error? Or rather being confused about causation?
“Think about laws about property.” Indeed, crucial and fundamental to capitalist states—and a few paragraphs ago he compared capitalism to the “paper-clip machine” (and rightly so).
The more I think about it, the more this Trust chant feels like left-liberal authoritarianism. Look at his own definition up to this point. You acquiesce to power? Well that’s still trust! (You click Accept to the terms-of-use every time because you know that there is no alternative? Trust!)
As long as there aren’t riots in the street, we the People Trust. Lead us into the future, Mitch McConnell.
Look, I get it: he’s making a normative statement, not necessarily a descriptive one. This speech is basically a monologue that he, as a benevolent technocrat, is going to present to elected representatives in whatever forum Harvard graduates etc. use to speak officially to people with power, because they are some kind of subject-matter expert. He’s saying that if he bangs his hand on the table enough times, then hopefully the representatives will enact some specific policies that regulate The AI.
But it misrepresents both kinds of societies:
- The US has low trust in the federal government. And this isn’t unfounded; it isn’t some Alex Jones “conspiracy theory” that the government is not “we the people”.
- Societies with a high trust in the government: I live in one. But the Trust chanters just love to boldly assert that societies like mine (however they are) are such and such because they have high trust. No! Technocrats would just love for it to be that case that prosperity and a well-functioning society is born of “trusting” the smart people in charge, staying in your lane, and letting them run the show. But people trust the government because of a whole century-long history of things like labor organizing and democratization. See, Trust is not created by someone top-down chanting that Corporations are your friends, or that we just need to let the Government regulate more; Trust is created by the whole of society.
Is it possible you might have forgotten to actually state the critique here, or is this maybe just pure libertarian ire? I am all for efforts to de-naturalize our conceptions of property relations, assert the primacy of collective solidarity, etc., but this feels like the wrong particular battle to pick. To affirm that the meager institutional trust one might have in their state is something hard-won by collective action does not feel like a damning indictment of what he is saying, but perhaps I am misunderstanding?
> Is it possible you might have forgotten to actually state the critique here, or is this maybe just pure libertarian ire?
You have clearly gleaned some of the critique that I was trying to go for. So you can spare me the “actually”.
I am not terribly impressed by left-liberal technocrats. That’s the pet-peeve seedling whence my whole screed.