Comment by crote
1 day ago
Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda tacked on as the buzzword hook.
However, exactly the same applies with, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this.
I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (and more than they should). Partially because it feels like it comes from a place of authority, and partially because of how self-confident AI always sounds.
The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or me having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.
> The real concern for me is incredibly rich people with no empathy for you or me having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.
Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.
More than one danger from any given tech can be true at the same time. Coal plants can produce local smog as well as global warming.
There's certainly some AI risks that are the same as human risks, just as you say.
But even though LLMs have very human failures (IMO because the models anthropomorphise themselves as part of their training, leading them to exhibit the outward behaviours of our emotions and emit token sequences such as "I'm sorry" or "how embarrassing!" when they (probably) didn't actually create any internal structure that can feel emotions like sorrow and embarrassment), that doesn't generalise to all AI.
Any machine learning system that is given a poor-quality fitness function to optimise will optimise whatever that fitness function actually is, not what it was meant to be: "literal-minded genie" and "rules lawyering" may be well-worn tropes for good reason, likewise work-to-rule as a union tactic, but we've all seen how much more severely literal-minded computers are than humans.
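To make that point concrete, here's a minimal toy sketch (my own illustration, with a made-up word list and keyword metric, not anything from the article): the intent is a varied, useful sentence, but the fitness function only counts one keyword, so a dumb hill-climber converges on degenerate output that maxes the metric while missing the intent entirely.

    import random

    WORDS = ["great", "the", "product", "is", "okay", "fine", "really"]

    def proxy_fitness(text):
        # What we actually measure: occurrences of "great".
        # What we *meant*: "text that reads like a helpful review".
        return text.split().count("great")

    def mutate(text):
        # Replace one random word with a random word from the vocabulary.
        words = text.split()
        words[random.randrange(len(words))] = random.choice(WORDS)
        return " ".join(words)

    def hill_climb(text, steps=2000):
        # Accept any mutation that doesn't lower the proxy score.
        for _ in range(steps):
            candidate = mutate(text)
            if proxy_fitness(candidate) >= proxy_fitness(text):
                text = candidate
        return text

    random.seed(0)
    print(hill_climb("the product is really okay"))
    # Typically prints "great great great great great": perfect score on
    # the proxy, zero value against the actual intent.

Swap in whatever proxy metric you like; the optimiser will always chase the metric itself, never the intent behind it.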
I think people who care about superintelligent AI risk don't believe an AI that is subservient to humans is the solution to AI alignment, for exactly the same reasons as you. Stuff like Coherent Extrapolated Volition* (see the paper with this name), which focuses on what all mankind would want if they knew more and were smarter (or something like that), would be a way to go.
*But Yudkowsky ditched CEV years ago, for reasons I don't understand (but I admit I haven't put in the effort to understand).
What’s the “Ender’s Game approach”? I’ve read the book but I’m not sure which part you’re referring to.
2 replies →
>An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human.
"Obedient" is anthropomorphizing too much (as there is no volition), but even then, it only matters according to how much agency the bot is extended. So there is also risk from neglectful humans who opt to present BS as fact due to an expectation of receiving fact and a failure to critique the BS.
People hate being manipulated. If you feel like you're being manipulated but you don't know by who or precisely what they want of you, then there's something of an instinct to get angry and lash out in unpredictable destructive ways. If nobody gets what they want, then at least the manipulators will regret messing with you.
This is why social control won't work for long, no matter if AI supercharges it. We're already seeing the blowback from decades of advertising and public opinion shaping.
People don't know they are being manipulated. Marketing does it all of the time and nobody complains. They complain about "too many adverts" but not about "too much manipulation".
Example: in my country we often hear "it costs too much to repair, just buy a replacement". That's often not true, but we pay anyway. Mobile phone subscriptions routinely screw you; many complain but keep buying. Or you hear "it's because of immigration" and many just accept it, etc.
17 replies →
The longstanding existence of religions, the continual birth of new cults, the popularity of extremist political groups of all types, and the ubiquity of fortune-telling across cultures seem to stand in opposition to your assertion that people hate being manipulated. At the very least, people enjoy belonging to something far more than they hate being manipulated. The most successful versions of fortune-telling, religious conversion, and cult recruitment do use confirmation-bias affirmation, love-bombing, and other techniques to increase people's agreeableness before getting to the manipulation part, but they still succeed. It's also like saying advertising is pointless because it manipulates people into buying things: people dislike ads, yet advertising is still a very effective part of getting people to buy products, or corporations wouldn't keep spending vast amounts of money on marketing.
People hate feeling manipulated, but they love propaganda that feeds their prejudices. People voluntarily turn on Fox News - even in public spaces - and get mad if you turn it off.
Sufficiently effective propaganda produces its own cults. People want a sense of purpose and belonging. Sometimes even at the expense of their own lives, or (more easily) someone else's lives.
17 replies →
No, they hate feeling manipulated. They not only expect social manipulation, they think you are downright rude, unsocialized, and untrustworthy if you don't manipulate them reflexively. Just look at mirroring alone.
https://en.wikipedia.org/wiki/Mirroring
I hated to come to this conclusion, but the average neurotypical person is fundamentally so batshit insane that they think not manipulating them is a sign you aren't trustworthy, and that the ability to conceal your emotions and put on an appropriate emotional kabuki dance is a sign of trustworthiness.
The crux is whether the signal of abnormality will be perceived as such in society.
- People are primarily social animals: if they see their peers accept a state of affairs as normal, they conclude it is normal. We don't live in small villages anymore, so we rely on media to "see our peers". We are increasingly disconnected from social reality, but we still need others to form our group values. So modern media have heavily concentrated power as "town-talk actors", replacing the social processing of events and validation of perspectives.
- People are easily distracted, you don't have to feed them much.
- People have on average an enormous capacity to absorb compliments, even when they know it is flattery. It is known that we let ourselves be manipulated if it feels good. Hence the need for social feedback loops to keep you grounded in reality.
TLDR: Citizens in the modern age are very reliant on the few actors that provide a semblance of public discourse, see Fourth Estate. The incentives of those few actors are not aligned with the common man. The autonomous, rational, self-valued citizen is a myth. Undermine the man's group process => the group destroys the man.
4 replies →
Knowing one is manipulated requires having some trusted alternate source to verify against.
If all your trusted sources are saying the same thing, then you are safe.
If all your untrusted sources are telling you your trusted sources are lying, then it only means your trusted sources are of good character.
Most people are wildly unaware of the type of social conditioning they are under.
1 reply →
When I was visiting home last year, I noticed my mom would throw her dog's poop in random people's bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other people's property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look up" whether I was right.
A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".
I don't find your mother's reaction bizarre. When people are told that some behavior they've been doing for years is bad for reasons X,Y,Z, it's typical to be defensive and skeptical. The fact that your mother really did follow up and check your reasons demonstrates that she takes your point of view seriously. If she didn't, she wouldn't have bothered to verify your assertions, and she wouldn't have told you you were right all along.
As far as trusting AI, I presume your mother was asking ChatGPT, not Llama 7B or something. That the LLM backed up your reasoning rather than telling her that dog feces in bushes is harmless isn't just happenstance; it's because the big frontier commercial models really do know a lot.
That isn't to say the LLMs know everything, or that they're right all the time, but they tend to be more right than wrong. I wouldn't trust an LLM for medical advice over, say, a doctor, or for electrical advice over an electrician. But I'd absolutely trust ChatGPT or Claude for medical advice over an electrician, or for electrical advice over a medical doctor.
But to bring the point back to the article, we might currently be living in a brief period where these big corporate AIs can be reasonably trusted. Google's Gemini is absolutely going to become ad-driven, and OpenAI seems to be headed in the same direction. xAI's Grok is already practicing Elon-thought. Not only will the models show ads, but they'll be trained to tell their users what they want to hear, because humans love confirmation bias. Future models may well tell your mother that dog feces can safely be thrown in bushes, if that's the answer that will make her likelier to come back and see some ads next time.
1 reply →
On the one hand, confirming a new piece of information with a second source is good practice (even if we should trust our family implicitly on such topics). On the other, I'm not even a dog person and I understand the etiquette here. So, really, this story sounds like someone outsourcing their common sense or common courtesy to a machine, which is scary to me.
However, maybe she was just making conversation & thought you might be impressed that she knows what AI is and how to use it.
Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you’re in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don’t bother picking up.
4 replies →
I don't know how old your mom is, but my pet theory of authority is that people older than about 40 accept printed text as authoritative. As in, non-handwritten letters that look regular.
When we were kids, you had either direct speech, hand-written words, or printed words.
The first two could be done by anybody. Anything informal like your local message board would be handwritten, sometimes with crappy printing from a home printer. It used to cost a bit to print text that looked nice, and that text used to be associated with a book or newspaper, which were authoritative.
Now suddenly everything you read is shaped like a newspaper. There's even crappy news websites that have the physical appearance of a proper newspaper website, with misinformation on them.
7 replies →
Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically.
Wow, that is interesting! We used to go to elders, oracles, and priests. We have totally outsourced our humanity.
Well, I prefer this to people who bag up the poop and then throw the bag in the bushes, which seems increasingly common. Another popular option seems to be hanging the bag on a nearby tree branch, as if there's someone who's responsible for coming by and collecting it later.
Do you think these super wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool or do they, somehow, escape that capture?
It's fairly clear from Twitter that it's possible to be a victim of your own system. But sycophancy has always been a problem for elites. It's very easy to surround yourself with people who always say yes, and now you can have a machine do it too.
This is how you get things like the colossal Facebook writeoff of "metaverse".
Isn't Grok just built as "the AI Elon Musk wants to use"? Starting from the goals of being "maximally truth seeking", having no "woke" alignment, and fewer safety rails, to the various "tweaks" to the Grok Twitter bot that happen to align with Musk's world view.
Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. Not something that's healthy, or that he would likely say he prefers when asked, but something that produces answers he personally likes when using it.
1 reply →
I’ve seen this result. I wonder if it’s because LLMs are (Grok notwithstanding) deliberately middle-of-the-road in their stances, and accurately and patiently report the facts? In which case a hypothetical liar LLM would not be as persuasive.
Or is it because they are super-human already in some persuasion skills, and they can persuade people even of falsehoods?
The evening news was once a trusted source. Wikipedia had its run. Google too. Eventually, the weight of all the thumbs on the scale will be felt, trust will be lost for good, and then we will invent a new oracle.
AI is wrong so often that anyone who routinely uses one will get burnt at some point.
Users having unflinching trust in AI? I think not.
> Partially because it feels like it comes from a place of authority, and partially because of how self-confident AI always sounds.
To add to that, this research paper[1] argues that people with low AI literacy are more receptive to AI messaging because they find it magical.
The paper is now published but it's behind a paywall, so I shared the working paper link.
[1] https://thearf-org-unified-admin.s3.amazonaws.com/MSI_Report...
And just see all of history where totalitarians or despotic kings were in power.
I would go against the grain and say that LLMs take power away from incredibly rich people to shape mass preferences and give it to the masses.
Bot armies previously needed an army of humans to give responses on social media, which is incredibly tough to scale unless you have money and power. Now, that part is automated and scalable.
So instead of only billionaires, someone with 100K dollars could launch a small-scale "campaign".
"someone with 100k dollars" is not exactly "the masses". It is a larger set, but it's just more rich/powerful people. Which I would not describe as the "masses".
I know what you mean, but that descriptor seems off
Exactly. On Facebook everyone is stupid. But this is AI, like in the movies! It is smarter than anyone! It is almost like AI in the movies was part of the plot to brainwash us into thinking LLM output is correct every time.
…Also partially because it’s better than most other sources
>people trust the output of LLMs more than other
There's one paper I saw on this, which covered the attitudes of teens. As I recall, they were unaware of hallucinations. Do you have any other sources on hand?
LLMs haven't been caught actively lying yet, which isn't something that can be said for anything else.
Give it 5yr and their reputation will be in the toilet too.
LLMs can't lie: they aren't alive.
The text they produce contains lies, constantly, at almost every interaction.
2 replies →
> LLMs haven't been caught actively lying yet…
Any time they say "I'm sorry" - which is very, very common - they're lying.
When the LLMs output supposedly convincing BS that "people" (I assume you mean on average, not e.g. HN commentariat) trust, they aren't doing anything that's difficult for humans (assuming the humans already at least minimally understand the topic they're about to BS about). They're just doing it efficiently and shamelessly.
But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].
[0]: https://smartmic.bearblog.dev/enforced-conformity/
People naturally conform _themselves_ to social expectations. You don't need to enforce anything. If you alter their perception of those expectations, you can manipulate them into taking actions under false pretenses. It's an abstract form of lying. It's astroturfing at "hyperscale."
The problem is this seems to work best only when the technique is used sparingly and the messages are delivered through multiple media avenues simultaneously. I think there are very weak returns, particularly when multiple actors use the techniques at the same time in opposition to each other and limited to social media. Once people perceive a social stalemate, they either avoid the issue or use their personal experiences to make their decisions.
>Once people perceive a social stalemate they either avoid
This is called the Firehose of Falsehood and it's a very effective way of killing public participation.
>use their personal experiences to make their decisions
Yes, they can if they have them. But people use other people's personal experiences when they don't, which means all you have to do is become their Facebook friend and then tell them that 'trans mexican aliens from mars stole their job' and they'll start repeating it as a personal experience.
See also https://english.elpais.com/society/2025-03-23/why-everything...
https://medium.com/knowable/why-everything-looks-the-same-ba...
But social networks are the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet, you somehow need to convince people to read/watch it. Ads can help, but they're expensive. LLMs can help with creating fake videos, but computer graphics were already used for that.
With modern algorithmic social networks you can instead game the feed, and even people who would not choose your media will start to see your posts. And even posts they do want to see can be flooded with comments trying to push whatever is paid for. It's cheaper than political advertising and not bound by the law.
Before AI it was done by trolls on payroll and now they can either maintain 10x more fake accounts or completely automate fake accounts using AI agents.
Social networks are not a prerequisite for sentiment shaping by AI.
Every time you interact with an AI, its responses and persuasive capabilities shape how you think.
Good point - it's not a previously nonexistent mechanism - but AI leverages it even more. A Russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroot preferences) might not be as leveraged, causing the system to be more heavily influenced by the clearly pursued goals (which are often malicious).
It's not only about efficiency. When AI is utilized, things can become more personal and even more persuasive. If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
> If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
Evolution by natural selection suggests that this might be a filter that yields future generations of humans that are more robust and resilient.
3 replies →
> Genuine counter-movements (e.g. grassroot preferences) might not be as leveraged
Then that doesn’t seem like a (counter) movement.
There are also many “grass roots movements” that I don’t like and it doesn’t make them “good” just because they’re “grass roots”.
In this context grass roots would imply the interests of a group of common people in a democracy (as opposed to the interests of a small group of elites) which ostensibly is the point.
2 replies →
Making something 2x cheaper is just a difference in quantity, but 100x cheaper and easier becomes a difference in kind as well.
"Quantity has a quality of its own."
But the entire promise of AI is that things that were expensive because they required human labor are now cheap.
So if good things happening more because AI made them cheap is an advantage of AI, then bad things happening more because AI made them cheap is a disadvantage of AI.
>Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda tacked on as the buzzword hook.
That's the entire point: that AI cheapens the cost of persuasion.
A bad thing X vs a bad thing X with a force multiplier/accelerator that makes it 1000x as easy, cheap, and fast to perform is hardly the same thing.
AI is the force multiplier in this case.
That we could of course also do persuasion pre-AI is irrelevant, in the same way that, when we talk about the industrial revolution, the fact that a craftsman could manually make the same products without machines is irrelevant to the impact of the industrial revolution and its standing as a standalone historical era.
Cost matters.
Let's look at a piece of tech that literally changed humankind.
The printing press. We could create copies of books before the printing press. All it did was reduce the cost.
That's an interesting example. We get a new technology, and cost goes down, and volume goes up, and it takes a couple generations for society to adjust.
I think of it as the lower cost makes reaching people easier, which is like the gain going up. And in order for society to be able to function, people need to learn to turn their own, individual gain down - otherwise they get overwhelmed by the new volume of information, or by manipulation from those using the new medium.
Propaganda with books could start wars in decades.
Propaganda with radio could start wars in years.
Propaganda with TV could start wars in months.
Propaganda with Internet/AI could start war in _____?
Sounds like saying that nothing about the Industrial Revolution was steam-machine-specific. Cost changes can still represent fundamental shifts in terms of what's possible; "cost" here is just an economist's way of saying technology.
Well, well... the recent "feature" of X revealing accounts' actual locations of operation shows how many "Russian troll armies" are really out there... it turns out there are rather overwhelming Indian and Bangladeshi armies working hard for whom? Come on, say it! And despite that, while cheap, they're not that cheap compared to when the "agentic" approach enters the game.
I really wish people would stop fixating on one nation-state or other entity when it comes to the astroturfing problem. It's something that's going to have all sorts of hands stirring the pot since it's basically just a very pernicious new form of marketing and propaganda. Any sizeable countries or corporations are going to be utilizing this new tool of manipulation, regardless of how scummy that may be.
That's one of those "nothing to see here, move along" comments.
First, generative AI has already changed social dynamics, in spite of Facebook and all that being around for more than a decade. People trust AI output much more than a Facebook ad. It can slip its convictions into every reply it makes. Second, control over the output of AI models is limited to a very select few. That's rather different from access to Facebook. The combination of those two factors does warrant the title.
Come the next election, see how many people ask AI "who to vote for", and see whether each AI has a distinct suggestion...
> nothing in the article is AI-specific
Timing is. Before AI this was generally seen as crackpot talk. Now it is much more believable.
You mean the failed persuasions were "crackpot talk" and the successful ones were "status quo". For example, a lot of persuasion was historically done via religion (seemingly not mentioned at all in the article!) with sects beginning as "crackpot talk" until they could stand on their own.
What I mean is that talking about mass persuasion was (and to a certain degree still is) crackpot talk.
I'm not talking about the persuasion itself, but about the general public perception of someone or some group that raises awareness about it.
This also excludes ludic talk about it (people who just generally enjoy post-apocalyptic aesthetics but don't actually consider it a thing that can happen).
5 years ago, if you brought up serious talk about mass systemic persuasion, you were either a lunatic or a philosopher, or both.
Social media has been flooded by paid actors and bots for about a decade. Arguably ever since Occupy Wall Street and the Arab Spring showed how powerful social media and grassroots movements could be, but with a very visible and measurable increase in 2016.
I'm not talking about whether it exists or not. I'm talking about how AI makes it more believable to say that it exists.
It seems very related, and I understand it's a very attractive hook to start talking about whether it exists or not, but that's definitely not where I'm intending to go.
It’s been pretty transparently happening for years in most online communities.
And yet it's denied massively. That said, if I were one of the people performing mass manipulation, I would also make bots to say mass manipulation is not real.
The cheapest method by far is still TV networks. As a billionaire you can buy them without putting any of your own money, so it's effectively free. See Sinclair Broadcast Group and Paramount Skydance (Larry Ellison).
As shown in "Network Propaganda", TV still influences all other media, including print media and social media, so you don't need to watch TV to be influenced.
What makes AI a unique new threat is that it enables a new kind of attack, both surgical and at mass scale: you can now generate the ideal message per target. Basically, you can whisper to everyone, or to each group, at any granularity, the most convincing message. It also removes a lot of language and culture barriers; for example, Russian or Chinese propaganda is ridiculously bad when it crosses borders, at least when targeting the English-speaking world, and this too becomes a lot easier/cheaper.
> Note that nothing in the article is AI-specific
No one is arguing that the concept of persuasion didn't exist before AI. The point is that AI lowers the cost. Yes, Russian troll armies also have a lower cost compared to going door to door talking to people. And AI has a cost that is lower still.
AI (LLM) is a force multiplier for troll armies. For the same money bad actors can brainwash more people.
Alternatively, since brainwashing is a fiction trope that doesn't work in the real world, they can brainwash the same (0) number of people for less money. Or, more realistically, companies selling social media influence operations as a service will increase their profit margins by charging the same for less work.
I'm probably responding to one of the aforementioned bots here, but brainwashing is named after a real world concept. People who pioneered the practice named it themselves. [1] Real brainwashing predates fictional brainwashing.
[1] https://en.wikipedia.org/wiki/Brainwashing#China_and_the_Kor...
3 replies →
So your thesis is that marketing doesn't work?
4 replies →
[dead]
That's a pretty typical middle-brow dismissal but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper to do this that it becomes a qualitative change rather than a quantitative one.
Compared to that 'russian troll army' you can do this by your lonesome spending a tiny fraction of what that troll army would cost you and it would require zero effort in organization compared to that. This is a real problem and for you to dismiss it out of hand is a bit of a short-cut.
Making doing bad things way cheaper _is_ a problem, though.
It has been practiced by populist politicians for millennia, e.g. pork-barrelling.
The thread started with your reasonable observation but degenerated into the usual red-vs-blue slapfight powered by the exact "elite shaping of mass preferences" and "cheaply generated propaganda" at issue.
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
I'm disappointed.
I'm pretty much always disappointed these days reading online discussions, and I sometimes think about how intentionally devolving most online conversations into petty slapfights is one of the very effective astroturfing techniques. It's basically signal jamming anything substantive or cooperative because people get tired sifting through all the noise and get mad reading all the bad takes. Though I have no doubt that many of them are still 100% genuine foolish humans.
> Note that nothing in the article is AI-specific
This is such a tired counter argument against LLM safety concerns.
You understand that persuasion and influence are behaviors on a spectrum. Meaning some people, or in this case products, are more or less or better or worse at persuading and influencing.
In this case people are concerned with LLM's ability to influence more effectively than other modes that we have had in the past.
For example, I have had many tech illiterate people tell me that they believe "AI" is 'intelligent' and 'knows everything' and trust its output without question.
While at the same time I've yet to meet a single person who says the same thing about "targeted Facebook ads".
So depressing watching all of you do free propo psy ops for these fascist corpos.
Well, AI has certainly made it easier to make tailored propaganda. If an AI is given instructions about what messaging to spread, it can map out a path from where it perceives the user to where its overlords want them to be.
Given how effective LLMs are at using language, and given that AI companies are able to tweak its behaviour, this is a clear and present danger, much more so than facebook ads.
> You don't need any AI for this.
AI accelerates it considerably and with it being pushed everywhere, weaves it into the fabric of most of what you interact with.
If instead of searches you now have AI queries, then everyone gets the same narrative, created by the LLM (or a few different narratives from the few models out there). And the vast majority of people won't know it.
If LLMs become the de-facto source of information by virtue of their ubiquity, then voila, you now have a few large corporations who control the source of information for the vast majority of the population. And unlike cable TV news which I have to go out of my way to sign up and pay for, LLMs are/will be everywhere and available for free (ad-based).
We already know models can be tuned to have biases (see Grok).
While true in principle, you are underestimating the potential of AI to sway people's opinions. "@grok is this true" is already a meme on Twitter and it is only going to get worse. People are susceptible to eloquent bs generated by bots.
Yup "could shape".. I mean this has been going on time immemorial.
It was odd to see random nerds who hated Bill Gates the software despot morph into "acksually he does a lot of good philanthropy" in my lifetime, but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days.
The game is old as well as evergreen. Hearst, Nobel, Howard Hughes come to mind of old. Musk with Twitter, Ellison with TikTok, Bezos with the Washington Post these days, etc. The costs are already insignificant because they generally control other people's money to run these things.
Your example is weird tbh. Gates was doing capitalist things that were evil. His philanthropy is good. There is no contradiction here. People can do good and bad things.
The "philanthropy" worked on you.
Also I think AI, at least in its current LLM form, may be a force against polarisation. If you go on X/Twitter and type "Biden" or "Biden Crooked" into the "Explore" box in the side menu, you get loads of abusive stuff, including the president slagging him off. Ask Grok about the same topics and it says Biden was a decent bloke, and more: "there is no conclusive evidence that Joe Biden personally committed criminal acts, accepted bribes, or abused his office for family gain"
I mention Grok because being owned by a right leaning billionaire you'd think it'd be one of the first to go.
It is worth pointing out that ownership of AI is becoming more and more consolidated over time, by elites. Only Elon Musk or Sam Altman can adjust their AI models. We recognize the consolidation of media outlets as a problem for similar reasons, and Musk owning grok and twitter is especially dangerous in this regard. Conversely, buying facebook ads is more democratized.
[dead]
[flagged]
Considering that LLMs have substantially "better" opinions than, say, the MSM or social media, is this actually a good thing? Might we avoid the whole woke or pro-Hamas debacles? Maybe we could even move past the current "elites are intrinsically bad" era?
You appear to be exactly the kind of person the article is talking about. What exactly makes LLMs have "better" opinions than others?
LLMs don't have "opinions" [0] because they don't actually think. Maybe we need to move past the ignorance surrounding how LLMs actually work, first.
[0] https://www.theverge.com/ai-artificial-intelligence/827820/l...
"Russian troll armies.." if you believe in "Russian troll armies", you are welcome to believe in flying saucers as well..
Are you implying that the "neo-KGB" never mounted a concerted effort to manipulate western public opinion through comment spam? We can debate whether that should be called a "troll army", but we're fairly certain that such efforts are made, no?
Russian mass influence campaigns are well documented globally and have been for more than a decade.
It is also right in their military strategy text that you can read yourself.
Even beyond that, why would an adversarial nation state to the US not do this? It is extremely asymmetrical, effective and cheap.
The parent comment shows how easy it is to manipulate smart people away from their common sense into believing obvious nonsense if you use your brain for 2 seconds.
Of course, of course.. still, strangely I see online other kinds of "armies" much more often.. and the scale, in this case, is indeed of armies..
2 replies →
Going by your past comments, you're a great example of a russian troll.
https://en.wikipedia.org/wiki/Internet_Research_Agency
Here's a recent example
https://www.justice.gov/archives/opa/pr/justice-department-d...
This is well-documented, as are the corresponding Chinese ones.