Comment by BrenBarn
7 days ago
It's become almost comical to me to read articles like this and wait for the part that, in this example, comes pretty close to the beginning: "This isn’t a rant against AI."
It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI.
To me it's like someone listing off deaths from fentanyl, how it's destroyed families, ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the way people use and are drawn into AI has all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs, but "distribute them for free to everyone on the internet" is not among them.
It’s already becoming politicized, in the lowercase-p sense of the word. One is assumed to be either pro- or anti-AI, and so you gotta do your best to signal to the reader where you lie.
> so you gotta do your best to signal to the reader where you lie
Or what?
Or the reader will put you into a category yourself and won't be willing to look at the essence of the argument.
I'd say the better word for that is "polarising" rather than "political", but they're synonyms these days.
The other thing is that the second anyone even perceives an opinion to be "anti-AI" they bombard you with "people thought the printing press lowered intellect too!" Or radio or TV or video games, etc.
No one ever considers that maybe they all did lower our attention spans, prevented us from learning as well as we used to, etc., and now we're at a point where we can't afford to keep losing intelligence and attention span.
I think people don't consider that because the usual criticism of television and video games is that people spend too long paying attention to them.
One of the famous Greek philosophers complained that books were hurting people's minds because they no longer memorized information, so this kind of complaint is as old as civilization itself. There is no evidence that we would be on Mars by now already if we had never invented books or television.
Pluto? Plotto? Platti?
Seriously though, that's a horrible bowdlerization of the argument in the Phaedrus. It's actually very subtle and interesting, not just reactionary griping.
But it possibly did lower our ability to memorize? Maybe we didn't need that ability to be so high to go to Mars or whatever.
I'm just saying, it's possible that relieving our minds of various tasks worked incrementally each time, until a point where it didn't anymore. The same way our liberal and egalitarian progress as a society worked great until all our countries started having birth-rate problems? I mean, not trying to start another argument. The point is: something (technological progress, social progress, even financial progress) can be great until we hit a point of no return where things collapse.
That’s a much harder claim to prove. The value of an attention span is non-zero, but if the speed of access to information is close to zero, how do these relate?
If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through?
I suspect that as the problem spaces diverge enough you’ll have two skill sets. Who can solve n problems the fastest and who can determine which k problems require deep thought and narrow direction. Right now we have the same group of people solving both.
> The value of an attention span is non-zero, but if the speed of access to information is close to zero, how do these relate?
Gell-Mann Amnesia. Attention span limits the amount of information we can process, and with attention spans decreasing, increases in information flow stop having a positive effect. People simply forget what they started with, even if that contradicts previous information.
> If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through?
You don't end up solving the problem in near constant time, you end up applying the last suggested solution. There's a difference.
Well I mean, nitpick, but fentanyl is a useful medication in the right context. It's not inherently evil.
I think my biggest concern with AI is that its biggest proponents have the least wisdom imaginable. I'm deeply concerned that our technocrats are running full speed at AGI with basically zero plan for what happens if it "disrupts" 50% of jobs in a shockingly short period of time, or worse outcomes. (There's some evidence the new tariff policies were generated with LLMs... it's probably already making policy. But it could be worse. What happens when bad actors start using these things to intentionally gaslight the population?)
But I actually think AI (not AGI) as an assistant can be helpful.
> I think my biggest concern with AI is that its biggest proponents have the least wisdom imaginable. [...] (not AGI)
Speaking of Wisdom and a different "AGI", I think there's an old Dungeons and Dragons joke that can be reworked here:
Intelligence is knowing that an LLM uses vector embeddings of tokens.
Wisdom is knowing LLMs shouldn't be used for business rules.
Are we talking about structural things or about individual perspective things?
At the individual level, AI is useful as a helper for your generative tasks. I'd argue against analytic tasks, but YMMV.
At the societal level, you as an individual cannot trust anything society has produced, because it's likely some AI-generated bullshit.
Some time ago, if you were not trusting a source, you could build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner. Now every possible argument can be stretched in any possible dimension and your ability to build a conclusion has been ripped away.
> build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner
A few thousand years of pre-LLM primary sources remain available for evaluation by humans and LLMs.
Honestly this post seems like misplaced wisdom to me: your concern is the development of AGI displacing jobs, and not the numerous reliability problems with the analytic use of AI tools, in particular the overestimation of LLM capabilities because they're good at writing pretty prose?
If we were headed straight into the AGI era then hey, problem solved: intelligent general machines which can advance towards solutions in a coherent if not human-like fashion are one thing, but that's not what AI is today.
AI today is enormously unreliable and very limited in a dangerous way, namely that it looks more capable than it is.
What evidence is there that tariff policy was LLM-generated?
There are uninhabited islands on the list.
There are people who asked several AI engines (ChatGPT, Grok, etc.) "what should the tariff policy be to bring the trade balance to zero?" (quoting from memory) and the answer was the formula used by the Trump administration. If I find the references I will post them as a follow-up.
Russia, North Korea and a handful of other countries were spared, likely because they sided with the US and Russia at the UN General Assembly on Feb 24 of this year, in voting against “Advancing a comprehensive, just and lasting peace in Ukraine.” https://digitallibrary.un.org/record/4076672
EDIT: Found it: https://nitter.net/krishnanrohit/status/1907587352157106292
Also discussed here: https://www.latintimes.com/trump-accused-using-chatgpt-creat...
The theory was first floated by Destiny, a popular political commentator. He accused the administration of using ChatGPT to calculate the tariffs the U.S. is charged by other countries, "which is why the tariffs make absolutely no fucking sense."
"They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater," Destiny, who goes by @TheOmniLiberal on X, shared in a post on Wednesday.
> I think they asked ChatGPT to calculate the tariffs from other countries, which is why the tariffs make absolutely no fucking sense.
> They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater.
> — Destiny | Steven Bonnell II (@TheOmniLiberal) April 2, 2025
He attached a screenshot of his exchange with the AI bot. He started by asking ChatGPT, "What would be an easy way to calculate the tariffs that should be imposed on other countries so that the US is on even-playing fields when it comes to trade deficit? Set minimum at 10%."
"To calculate tariffs that help level the playing field in terms of trade deficits (with a minimum tariff of 10%), you can use a proportional tariff formula based on the trade deficit with each country. The idea is to impose higher tariffs on countries with which the U.S. has larger trade deficits, thus incentivizing more balanced trade," the bot responded, along with a formula to use.
John Aravosis, an influencer with a background in law and journalism, shared a TikTok video that then outlined how each tariff was calculated; by essentially taking the U.S. trade deficit with the country divided by the total imports from that country to the U.S.
"Guys, they're setting U.S. trade policy based on a bad ChatGPT question that got it totally wrong. That's how we're doing trade war with the world," Aravosis proclaimed before adding the stock market is "totally crashing."
It’s a rant against the wrong usage of a tool, not against the tool as such.
It's a tool that promotes incorrect usage though, and that is an inherent problem. All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.
My personal pet-peeve is how a great majority of people--and too many developers--are being misled into believing a fictional character coincidentally named "Assistant" inside a story-document half-created by an LLM is the author-LLM.
If a human generates a story containing Count Dracula, that doesn't mean vampires are real, or that capabilities like "turning into a cloud of bats" are real, or that the algorithm "thirsts for the blood of the innocent."
The same holds when the story comes from an algorithm, and it continues to hold when the story is about a differently named character called "AI Assistant" who is "helpful".
Getting people to fall for this illusion is great news for the companies though, because they can get investor-dollars and make sales with the promise of "our system is intelligent", which is true in the same sense as "our system converts blood into immortality."
That's the real danger of AI.
The false promises of the AI companies and the false expectations of the management and users.
Had this just recently with a data migration, where the users asked whether they still needed to enter metadata for documents, since they could just use AI to query the data that was usually based on that metadata.
They trust AI before it's even there, and don't even consider a transition period where they check whether the results are correct.
As with security, convenience prevails.
> All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.
If your LLM + pre-prompt setup sounds confident with every response, something is probably wrong; it doesn't have to be that way. It isn't for me. I haven't collected statistics, but I often get decent nuance back from Claude.
Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.
This is not dismissing the tendency for overconfidence, sycophancy, and more. I'm just sharing some mitigations.
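For example, one mitigation is a pre-prompt that explicitly asks for calibrated uncertainty. A minimal sketch using the Anthropic SDK; the prompt wording is mine and purely illustrative, and the model alias may need updating:

```python
# Minimal sketch: a pre-prompt asking for calibrated confidence rather
# than uniform certainty. Experiment with the wording for your own use.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

PRE_PROMPT = (
    "For every answer, state how confident you are (low/medium/high), "
    "flag anything you cannot verify, and prefer 'I don't know' over a "
    "confident-sounding guess."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # use whatever model is current
    max_tokens=500,
    system=PRE_PROMPT,
    messages=[{"role": "user", "content": "Why does my Postgres query plan ignore the index?"}],
)
print(response.content[0].text)
```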
Well, it's actually a rant about AI making what the author perceives as mistakes. Honestly it reads like the author is attempting to show off or brag by listing imaginary mistakes an AI might have made, but they are all the sort of mistakes a human could make too. And the fact that they are not real incidents significantly weakens his argument. He is a consultant who sells training services, so obviously if people come to rely more on AI for this kind of thing he will be out of work.
It does not help that his examples of things an imaginary LLM might miss are all very subjective and partisan too.
It’s not a rant against fentanyl, it’s a rant against irresponsible use of fentanyl.
Just like this is a rant against irresponsible use of AI.
Hope this helps
Yes, that makes much more sense.
TFA makes the point pretty clear IMHO: they aren’t opposed to AI, they’re opposed to over-reliance on AI.
Because "rant" is irrational, and the author wants to be seen as staking out a rational opposition.
Of course, every ranter wants to be seen that way, and so a protest that something isn't a rant against X is generally a sign that it absolutely is a rant against X that the author is pre-emptively defending.
The classic hallmark of a rant is picking some study, not reading the methodology, etc., and drawing wild conclusions from it. For example, about one study [1] it says:
> The study revealed a clear pattern: the more confidence users had in the AI, the less they thought critically
And the study didn't even check that. They just plotted the correlation between how much users think they rely on AI vs. how much effort they think they saved. Isn't that expected to be positive even if they think just as critically?
[1]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
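To make the objection concrete, here's a toy simulation (mine, not from the paper): even when every simulated user thinks equally critically, self-reported reliance and self-reported effort saved still correlate positively, because effort saved largely tracks reliance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
reliance = rng.uniform(0, 1, n)                  # self-reported reliance on AI
critical = np.full(n, 0.8)                       # everyone equally critical
effort_saved = reliance + rng.normal(0, 0.2, n)  # saved effort tracks reliance

# A strong positive correlation appears even though critical thinking never varied.
print(np.corrcoef(reliance, effort_saved)[0, 1])  # ~0.8
```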
I've rarely read a rant that didn't contain some good logical points
Doesn't mean listing logical points makes it a rant
The difference is that between a considered critique and unhinged venting.
Reminds me of people who say “there is nothing wrong with capitalism but…”
You shall not criticize the profit!
They have to preface their articles with "This isn’t a rant against AI." because there are a lot of rants against AI out there, such as your comment.
Both substances and AI can be used responsibly. It is the fault of neither substances nor AI.
People are why we can't have anything nice. It sucks.
I have medical reasons to take opioids, but in the eyes of people, I am a junkie. I would not be considered a junkie if I kept popping ibuprofen. It is silly. Opioids do not even make me high to begin with (it is complicated).
I bet the downvotes are done by people who have absolutely no need to take any medications, or have no clue what it is like to be called a junkie for the rest of your life for taking medications that were prescribed to begin with.
Or if not, then what? Is it not true that both substances and AI can be used responsibly, and irresponsibly?
"People is why we can't have anything nice. It sucks." is also true, applies to many things, just consider vending machines alone, or bags in public (for dog poop) and anything of the sort. We no longer have bags anymore, because people stole it. A great instance of "this is why we can't have nice things". Pretty sure you can think of more.
Make the down-votes make sense, please.
(I do not care about the down-votes per se, I care about why I am being disagreed with without any responses.)