Comment by Aurornis
7 days ago
> Participants weren’t lazy. They were experienced professionals.
Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.
In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.
> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart
I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.
> In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources
He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.
Having a surface level understanding of what you're looking at is a huge part of OSINT.
These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.
At least if you're on reddit you've got a good chance of Cunningham's Law[1] making you realize it's not cut and dried. In this case, I refer to what you might call a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post what *someone somewhere thinks is* the wrong answer" (my added strength reduction in italics). If you stumble into a conversation where people are arguing, it's hard to avoid applying some critical thought to the situation to parse out who is correct.
The LLM-only AI just hands you a fully-formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong. I'm deliberately cultivating an instinctive distrust of LLM-only AI, and I'd suggest it to other people: even if that distrust is too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them. Not because they are never right, but precisely because they are often right, yet nowhere near 100% right. If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem either. Right now they sit in the maximum danger zone of correctness: right enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.
[1]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law
The pull is too strong, especially when you factor in that (a) the competition is doing it and (b) the recipients of such outputs (reports, etc.) are not strict enough to care whether AI was used. In this situation, no matter how smart you are, not using the new tool of the trade would basically be career suicide.
> people who outsource their thinking to LLMs.
OSINT, I imagine, would be kind of useless to analyze with LLMs, because the kind of information you're interested in is very new, so there aren't enough sources for the LLMs to regurgitate.
As an example: I read some defence articles about Romania eventually operating 70 F-16s, and it immediately caught my eye because I was expecting a number in the 40s. Apparently the Netherlands will leave those 18 F-16s to Romania, but I'm not curious enough to dig in further; I was expecting those would go to Ukraine.
So just for fun I asked Gemini 2.5 and ChatGPT the question "How many F-16s will Romania eventually operate?" They both regurgitated the 40s number. I explicitly asked Gemini about the 18 F-16s from the Netherlands and it kept its estimate, saying those were for training purposes.
Only after I explicitly explained my own knowledge to it did Gemini google it and confirm the new number.
Or: I asked about the tethered FPVs in Ukraine and it told me those have very little impact. Only after I explicitly mentioned the recent successful Russian counter-offensive at Kursk did it acknowledge them.
And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways, so they might as well.
I’ll be one to raise my hand and say this has dramatically not been the case for me or for anyone I’ve introduced AI to. We've become significantly more informed and reasoned.
Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.
It's always more comfortable for people to blame the thing rather than the person.
More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.
Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.
Worse than enterprising humans are authoritarian humans who want to tell others how they should live, usually also exempting themselves from their rules.
They also prey on human weaknesses and social appearances to do things for a "greater good".
There's a problem and we 'must do something', and if you're against doing the something I propose, you're evil and I'll label you.
The real mindfuck is that sometimes, an unscrupulous entrepreneur only has to play your "societal harm fighting" game through politicians and they get their way and we lose.
I like the facebook comparison, but the difference is you don't have to use facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.
If you are in the news business you basically have to.
I think humans actually tend to prefer blaming individuals rather than addressing societal harms, but they're not in any way mutually exclusive.
Marketing has a powerful effect. Look at how the decrease in smoking coincided with the decrease in smoking advertising (and now look at the uptick in vaping, driven by its marketing as a replacement for smoking).
Malaise exists at an individual level, but it doesn't transform into social malaise until someone comes in to exploit those people's addictions for profit.