Comment by delichon
6 months ago
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
If you claimed that AI was inevitable in the 80s and invested, or claimed people would inevitably be moving to VR 10 years ago - you would be shit out of luck. Zuck is still burning billions on it with nothing to show for it and a bad outlook. Even Apple tried it and hilariously missed the demand estimate. The only potential bailout for this tech is AR, but that's still years away from the consumer market and widespread adoption, and will probably have very little to do with the shit getting built for VR, because it's a completely different experience. But I am sure some of the tech/UX will carry over.
Tesla stock has been riding on the self-driving robotaxi meme for a decade now. How many Teslas are earning passive income while the owner is at work?
Cherry-picking the stuff that worked in retrospect is stupid; plenty of people swore by the inevitability of some tech with billions in investment behind it, and plenty of industry bubbles look merely mistimed in hindsight.
None of the "failed" innovations you cited were even near the adoption rate of current LLMs.
As much as I don't like it, this is the actual difference. LLMs are already good enough to be a very useful and widely used technology. They can become even better, but even if they don't, there are plenty of use cases for them.
VR/AR, AI in the 80s, and Tesla at the beginning were technologies that some believed could become widespread, but weren't yet at all.
That's a big difference
The other inventions would have had quite the adoption rate if they had been subsidized the way current AI offerings are. It's hard to compare a business attempting to be financially stable with a business attempting hyper-growth through freebies.
6 replies →
> They can become even better, but even if they don't, there are plenty of use cases for them.
If they don't become better, we are left with a big but not huge change: productivity gains of around 10 to 20 percent in most knowledge work. That's significant for sure, but in my eyes the internet, and the PC revolution before that, were more transformative. If LLMs do become better, get so good they replace huge chunks of knowledge workers, and then move out into the physical world, then yeah... that would be the fastest transformation of the economy in history, imo.
3 replies →
OK, but what does adoption rate vs. real-world impact tell us here?
With all the insane exposure and downloads, how many people can't even be convinced to pay $20/month for it? The value proposition to most people is that low. So you are basically betting on LLMs making a leap in performance to pay for the investments.
> None of the "failed" innovations you cited were even near the adoption rate of current LLMs.
The 'adoption rate' of LLMs is entirely artificial, bolstered by billions of dollars of investment in attempting to get people addicted so that money can be siphoned off of them with subscription plans or per-use charges. The worst people you can think of on every C-suite team force-push it down our throats because they use it to write an email every now and then.
The places LLMs have achieved widespread adoption are environments that abuse the addictive tendencies of an advanced stochastic parrot to appeal to lonely and vulnerable individuals at massive societal cost, true believers who are the worst coders you can imagine shoveling shit into codebases by the truckful, and scammers realizing this is the new gold rush.
1 reply →
I don’t see this as that big a difference. Of course AI/LLMs are here to stay, but the hundreds of billions in bets on LLMs don’t assume linear growth.
The people claiming that AI in the 80s or VR or robotaxis or self-driving cars in the 2010s were inevitable weren't doing it on the basis of the tech available at that point, but on the assumed future developments. Just a little more work and they'd be useful, we promise. You just need to believe hard enough.
With the smartphone in 2009, the web in the late 90s or LLMs now, there's no element of "trust me, bro" needed. You can try them yourself and see how useful they are. You didn't need to be a tech visionary to predict the future when you're buying stuff from Amazon in the 90s, or using YouTube or Uber on your phone in 2009, or using Claude Code today. I'm certainly no visionary, but both the web and the smartphone felt different from everything else at the time, and AI feels like that now.
LLM inevitablists definitely assume future developments will improve their current state.
7 replies →
> Cherry-picking the stuff that worked in retrospect is stupid; plenty of people swore by the inevitability of some tech with billions in investment behind it, and plenty of industry bubbles look merely mistimed in hindsight.
But that isn't the argument. The article isn't arguing about something failing or succeeding based on merit; the author seems to have already accepted that strong AI has "merit" (in the utility sense). The argument is that despite the strong utility incentive, there is a case to be made that it will be harmful overall, so we should be actively fighting against it, and it isn't inevitable that it comes to full fruition.
That is very different from VR. No one was trying to raise awareness of the dangers of VR and fight against it. It just hasn't taken off because we don't really like it as much as people thought we would.
But for the strong AI case, my argument is that it is virtually inevitable. Not in any predestination sense, but purely because the incentives for first past the post are way too strong. There is no way the world is regulating this away when competitive nations exist. If the US tries, China won't, or vice versa. It's an arms race, and in that sense is inevitable.
https://www.youtube.com/watch?v=zhr6fHmCJ6k (1min video, 'Elon Musk's broken promises')
Musk's 2014/2015 promises are arguably delivered, here in 2025 (took a little more than '1 month' tho), but the promises starting in 2016 are somewhere between 'undelivered' and 'blatant bullshit'.
I mean, no argument here - but the insane valuation was at some point based on a fleet of self-driving taxis built from cars they don't even have to own, overtaking Uber. I don't think they are anywhere close to that. (It's hard to keep track of what it is now - robots and AI?) Kudos for hype-chasing all these years tho. Only beaten by Jensen on that front.
What are you on? The only potential is AR? What?!!! The problem with AR is not enough innovation and high cost. That's not the case with AI. All it needs is computing, not some groundbreaking new technology.
The only potential for VR is salvaging the investment by pivoting to AR.
And if it's only compute, why are we seeing teams from China with limited compute reach SOTA model performance, while teams with a bunch of compute available (Meta) fail?
> Tesla stock has been riding on the self-driving robotaxi meme for a decade now
We do have self-driving taxis now, and they are so good that people will pay extra to take them. It's just not Tesla cars doing it.
Is there still someone in India online to guide them through difficult intersections?
Yes, and yet the rate of development and deployment is substantially slower than people like me were expecting.
Back in 2009, I was expecting normal people to be able to just buy a new vehicle with no steering wheel required or supplied by 2019, not for a handful of geo-fenced taxis that slowly expanded over the 6 years from 2019 to 2025.
Ironically, this is exactly the argumentative technique the blog mentions.
Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as the Segway?
> Remember ...
No, I don't remember it like that. Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?
You don't. I love the argument ad absurdum more than most but you've taken it a teensy bit too far.
People genuinely did suggest that we were going to redesign our cities because of the Segway. The volume and duration of the hype were smaller (especially once people saw how ugly the thing was) but it was similarly breathless.
3 replies →
> Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?
LLMs are more useful than the Segway, but they can still be overhyped, because the hype is also so much larger. So it's comparable: the fact that LLMs are so much more useful, as you say, doesn't mean they can't be overhyped.
1 reply →
1. The Segway had very low market penetration but a lot of PR. LLMs and diffusion models have had massive organic growth.
2. Segways were just ahead of their time: portable lithium-ion powered urban personal transportation is getting pretty big now.
Massive, organic, and unprofitable. And as soon as it's no longer free, as soon as the VC funding can no longer sustain it, an enormous fraction of usage and users will all evaporate.
The Segway always had a high barrier to entry. Currently for ChatGPT you don't even need an account, and everyone already has a Google account.
29 replies →
> LLMs and diffusion models have had massive organic growth.
I haven't seen that at all. I've seen a whole lot of top-down AI usage mandates, and every time what sounds like a sensible positive take comes along, it turns out to have been written by someone who works for an AI company.
That's funny, I remember seeing "IT" penetrate Mr. Garrison.
https://www.youtube.com/watch?v=SK362RLHXGY
Hey, it still beats what you go through at the airports.
I think about the Segway a lot. It's a good example. Man, what a wild time. Everyone was so excited and it was held in mystery for so long. People had tried it in secret and raved about it on television. Then... they showed it... and... well...
I got to try one once. It was very underwhelming...
The problem with the Segway was that it was made in the USA and thus absurdly, laughably expensive: the base model cost the same as a good used car, and the top versions as much as a basic new car. Once a small bunch of rich people had all bought one, it was over. China simply wasn't in a position at the time to copycat and mass-produce it cheaply, and hype cycles usually don't repeat, so by the time it could, it was too late. If it had been invented 10 years later, we'd all be riding $1000-$2000 Segways today.
1 reply →
I'm going to hold onto the Segway as an actual instance of hype the next time someone calls LLMs "hype".
LLMs have hundreds of millions of users. I just can't stress enough how insane this was. This wasn't built on the back of Facebook or Instagram's distribution like Threads. The internet consumer has never embraced something so readily, so fast.
Calling LLMs "hype" is an example of cope, judging facts based on what is hoped to be true even in the face of overwhelming evidence or even self-evident imminence to the contrary.
I know people calling "hype" are motivated by something. Maybe it is a desire to contain the inevitable harm of any huge rollout, or to slow down the disruption. Maybe it's simply the egotistical instinct to be contrarian and harvest karma while we can still feign to be debating shadows on the wall. I just want to be up front: it's not hype. Few of the people calling "hype" can actually believe this is hype, and anyone who does simply isn't credible. That won't stop people from jockeying to protect their interests, hoping that some intersubjective truth we manufacture together will work in their favor, but my lord is the "hype" bandwagon being dishonest these days.
5 replies →
ChatGPT has something like 300 million monthly users after less than three years, and I don't think Segway has sold a million scooters, even though their new product lines are sick.
I can totally go about my life pretending Segway doesn't exist, but I just can't do that with ChatGPT, which is why the author felt compelled to write the post in the first place. They're not writing about the Segway, after all.
Doubting LLMs because Segway was also trendy yet failed is so funny
2 replies →
> Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as the Segway?
Counterpoint: That's how I feel about ebikes and escooters right now.
Over the weekend, I needed to go to my parents' place for brunch. I put on my motorcycle gear, grabbed my motorcycle keys, went to my garage, and as I was about to pull out my BMW motorcycle (MSRP ~$17k), I looked at my Ariel ebike (MSRP ~$2k) and decided to ride it instead. For short trips they're a game-changing mode of transport.
Even for longer trips if your city has the infrastructure. I moved to the Netherlands a few years ago, that infrastructure makes all the difference.
9 replies →
I remember the Segway hype well. And I think AI is to Segway as nuke is to wet firecracker.
> AI is to Segway as nuke is to wet firecracker
wet firecracker won’t kill you
That was marketing done before the nature of the device was known. The situation with LLMs is very different, really not at all comparable.
Trend vs. single initiative. One company failed, but overall personal electric transportation is booming in cities. AI is the future, but along the way many individual companies doing AI will fail. Cars are here to stay, but many individual car companies have failed and will fail; same for phones: everyone has a mobile phone, but Nokia still failed…
Nobody is riding Segways around any more, but a huge percentage of people are riding e-bikes and scooters. It’s fundamentally changed transportation in cities.
1 reply →
Oh yeah, I totally remember Segway hitting a $300B valuation after a couple of years.
> Ironically, this is exactly the argumentative technique the blog mentions.
So? The blog notes that if something is inevitable, then the people arguing against it are lunatics, and so if you can frame something as inevitable then you win the rhetorical upper hand. It doesn't, however, in any way attempt to make the argument that LLMs are _not_ inevitable. This is a subtle straw man: the blog criticizes the rhetorical technique of inevitabilism rather than engaging directly with whether LLMs are genuinely inevitable or not. Pointing out that inevitability can be rhetorically abused doesn't itself prove that LLMs aren't inevitable.
The Segway hype was before anyone knew what it was. As soon as people saw the Segway it was obvious it was BS.
Feels somewhat like a self fulfilling prophecy though. Big tech companies jam “AI” in every product crevice they can find… “see how widely it’s used? It’s inevitable!”
I agree that AI is inevitable. But there’s such a level of groupthink about it at the moment that everything is manifested as an agentic text box. I’m looking forward to discovering what comes after everyone moves on from that.
We have barely extracted any of the value from the current generation of SOTA models. I would estimate less than 0.1% of the possible economic benefit is currently extracted, even if the tech effectively stood still.
That is what I find so wild about the current conversation and debate. I have Claude Code toiling away building my personal organization software right now, software that uses LLMs to take unstructured input and create my personal plans/projects/tasks/etc.
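For the curious, here's a rough sketch of the kind of pipeline I mean (a minimal illustration, not my actual code - the model name, prompt, and JSON schema are placeholders, and any chat-style LLM API would work the same way):

    import json
    import anthropic  # assumes the official Anthropic SDK; any chat-style LLM API works similarly

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    PROMPT = """Extract any plans, projects, and tasks from the note below.
    Respond with only a JSON object shaped like:
    {"projects": ["..."], "tasks": [{"title": "...", "due": "YYYY-MM-DD or null"}]}

    Note:
    """

    def structure_note(raw_note: str) -> dict:
        # One LLM call turns free-form text into structured plan/project/task data.
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model name
            max_tokens=1024,
            messages=[{"role": "user", "content": PROMPT + raw_note}],
        )
        return json.loads(response.content[0].text)

    print(structure_note("lunch w/ Sam on Tuesday; renew passport before the Berlin trip"))

The whole trick is one well-specified prompt, forcing JSON out, and feeding the result into whatever task store you already use.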
I keep hearing this over and over. Some LLM toiling away coding personal side projects and utilities. Source code never shared, usually because it’s “too specific to my needs”. This is the code version of slop.
When someone uses an agent to increase their productivity by 10x in a real, production codebase that people actually get paid to work on, that will start to validate the hype. I don’t think we’ve seen any evidence of it, in fact we’ve seen the opposite.
18 replies →
Big Tech can jam X everywhere and not get actual adoption though, it's not magic. They can nudge people but can't force them to use it. And yes a lot of AI jammed everywhere is getting the Clippy reaction.
The thing a lot of people haven't yet realized is: all those AI features jammed into your consumer products, aren't for you. They're for investors.
We saw the same thing with blockchain. We started seeing the most ridiculous attempts to integrate blockchain, by companies where it didn't even make any sense. But it was all because doing so excited investors and boosted stock prices and valuations, not because consumers wanted it.
If you told someone in 1950 that smartphones would dominate they wouldn't have a hard time believing you. Hell, they'd add it to sci-fi books and movies. That's because the utility of it is so clear.
But if you told them about social media, I think the story would be different. Some would think it would be great, some would see it as dystopian, but neither would be right.
We don't have to imagine, though. All three of these things have captured people's imaginations since before the 50's. It's just... AI has always been closer to imagined concepts of social media than to highly advanced communication devices.
The idea that we could have a stilted and awkward conversation with an overconfident robot would not have surprised a typical mid-century science fiction consumer.
Honestly, I think they'd be surprised that it wasn't better. I mean... who ever heard of that Asimov guy?
> Some would think it would be great, some would see it as dystopian, but neither would be right.
No, the people saying it’s dystopian would be correct by objective measure. Bombs are nothing next to Facebook and TikTok.
I don't blame people for being optimistic. We should never do that. But we should be aware of how optimism, as well as pessimism, can so easily blind us. There's a quote I like by Feynman.
There is something of a balance. Certainly, social media does some good and has the potential to do more. But it has also certainly been abused. Maybe so much that it has become difficult to imagine it ever being good.
We need optimism. Optimism gives us hope. It gives us drive.
But we also need pessimism. It lets us be critical. It gives us direction. It tells us what we need to fix.
But unfettered optimism is like going on a drive with no direction. Soon you'll fall off a cliff. And unfettered pessimism won't even get you out the door. What's the point?
You need both if you want to see and explore the world. To build a better future. To live a better life. To... to... just be human. With either extreme, you're just a shell.
You really think that Hiroshima would have been worse if, instead of dropping the bomb, the USA had somehow gotten people addicted to social media?
4 replies →
> But if you told them about social media, I think the story would be different.
It would be utopian, like how people thought of social media in the aughts. It's a common pattern through human history. People lack the imagination to think of unintended side effects. Nuclear physics leading to nuclear weapons. Trains leading to more efficient genocide. Media distribution and the printing press leading to new types of propaganda and autocracies. Oil leading to global warming. IT leading to easy surveillance. Communism leading to famine.
Some of that utopianism is wilful, created by the people with a self-interested motive in seeing that narrative become dominant. But most of it is just a lack of imagination. Policymakers taking the path of local least resistance, seeking to locally (in a temporal sense) appease, avoiding high-risk high-reward policy gambits that do not advance their local political ambitions. People being satisfied with easy just-so stories rather than humility and a recognition of the complexity and inherent uncertainty of reality.
AI, and especially ASI, will probably be the same. The material upsides are obvious. The downsides harder to imagine and more speculative. Most likely, society will be presented with a fait accompli at a future date, where once the downsides are crystallized and real, it's already too late.
People wrote about this. We know the answer! I stated this, so I'm caught off guard; it seems like you're responding to someone else, but at the same time, to me.
London Times, The Naked Sun, Neuromancer, The Shockwave Rider, Stand on Zanzibar, or The Machine Stops. These all have varying degrees of ideas that would remind you of social media today.
Are they all utopian?
You're right, the downsides are harder to imagine. Yet it has been done. I'd also argue that imagining them is the duty of any engineer. It is so easy to make weapons of destruction while getting caught up in the potential benefits and the interesting problems being solved. Evil is not solely created by evil; often, evil is created by good men trying to do good. If only doing good were easy, we'd have so much more good. But we're human. We chose to be engineers, to take on these problems, to take on challenging tasks. We like to gloat about how smart we are (we all do, let's admit it; I'm not going to deny it). But I'll just leave with a quote: "We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard."
All of this is a pretty ignorant take on history. You don't think those who worked on the Manhattan Project knew the deadly potential of the atom bomb? And Communism didn't lead to famine - Soviet and Maoist policies did; Communism was immaterial to that. And it has nothing to do with utopianism. Trains were utopian? Really? It's just that new technology can be used for good things or bad things, and this goes back to when Grog invented the club. It has zero bearing on this discussion.
Your ending sentence is certainly correct: we aren't imagining the effects of AI enough, but all of your examples are not only unconvincing, they're easy ways to ignore what downsides of AI there might be. People can easily point to how trains have done a net positive in the world and walk away from your argument thinking AI is going to do the same.
2 replies →
Literally from the article
--- start quote ---
Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.
--- end quote ---
Mass adoption is not inevitable. Everyone will drop this "faster harder" tech like a hot potato when (not if) it fails to result in meaningful profits.
Oh, there will be forced mass adoption alright. Have you tried Gemini? Have you? Gemini? Have you tried it? HAVE YOU? HAVE YOU TRIED GEMINI?!!!
Or Copilot.
It's actions like this that are making me think seriously about converting my gaming PC to Linux - where I don't have to eat the corporate overlord shit.
Do it. Proton is really, really, really good now.
what I like about your last jokey comment is that discussions about AI, both good and bad, are incredibly boring
went to some tech meetups earlier this year, and when the topic came up, one of the organizers politely commented to me that pretty much everything said about AI has been said. the only discussions worth having are introductions to the tools, then leaving individuals to decide for themselves whether or not it's useful to them. those introductions should be brief, and discussions of the applications are boring
back in the bar scene days, discussing work, religion, and politics were social faux pas. I'm sensing AI is on that list now
> what I like about your last jokey comment
We use probably all of Google's products at work, and sadly the comment is not even a joke. Every single product and page still shows a Gemini upsell, even after you've dismissed it fifteen times.
Back in the 1950s, nuclear tech was seen as inevitable. Many people had even bought plates made from uranium glass. They still glow somewhere in my parents' cabinet, or maybe I broke them.
Well there are like 500 nuclear powerplants online today supplying 10% of the world's power, so it wasn't too far off. Granted it's not the Mr. Fusion in every car as they imagined it back then. We probably also won't have ASI taking over the world like some kind of vengeful comic book villain as people imagine it today.
Oh boy. People were expecting nuclear toothbrushes, nuclear school backpacks, nuclear stoves and nuclear fridges, nuclear grills, nuclear plates, nuclear medicine, nuclear sunglasses, and nuclear airplanes.
Saying "well, we got 500 nuclear power plants" is like saying "well, we got excellent `npx create-app`-style templates from AI. That's pretty huge impact. I don't know a single project post-2030 that didn't start as an AI-scaffolded project. That's pretty huge, dude."
1 reply →
The comparison is apt, because nuclear would have been inevitable if it weren't for doomerism and public opinion turning against it after Three Mile Island / Chernobyl.
Exactly. Anyone who has learned to use these tools to their ultimate advantage (not just a short-term perceived one, but actually) knows their value.
This is why I've been extremely suspicious of the monopolization of LLM services by a single business/country. They may well be losing billions on training huge models now. But once average work performance shifts up enough to leave the "non-AI-enhanced" by the wayside, we will see huge price increases and access to these AI tools being used as geopolitical leverage.
Oh, you do not want to accept "the deal" where our country can do anything in your market and you can do nothing? Perhaps we put export controls on GPT-5 against your country. And from then on it's as if they disconnected you from the Internet.
For this reason alone local AI is extremely important and certain people will do anything possible to lock it in a datacenter (looking at you Nvidia).
It's weird that no one can measure and show us the numbers of this ultimate advantage. Is "ultimate advantage" in the room right now?
I’ve tried to use AI for “real work” a handful of times and have mostly come away disappointed, unimpressed, or annoyed that I wasted my time.
Given the absolutely insane hard resource requirements for these systems that are kind of useful, sometimes, in very limited contexts, I don’t believe its adoption is inevitable.
Maybe one of the reasons for that is that I work in the energy industry and broadly in climate tech. I am painfully aware of how much we need to do with energy in the coming decades to avoid civilizational collapse, and how difficult all of that will be, without adding all of these AI data centers into the mix. Without several breakthroughs in one or more hard engineering disciplines, the mass adoption of AI is not currently physically possible.
That's how people probably felt about the first cars, the first laptops, the first <anything>.
People like you grumbled when their early car broke down in the middle of a dirt road in the boondocks and they had to eat grass and shoot rabbits until help arrived. "My horse wouldn't have broken down", they said.
Technologies mature over time.
We actually don’t know whether or not meaningful performance gains with LLMs are available using current approaches, and we do know that there are hard physical limits to electricity generation. Yes, technologies mature over time. The history of most AI approaches since the 60s is a big breakthrough followed by diminishing returns. I have not seen any credible argument that this time is different.
There is a weird combination of "this is literal magic and everybody should be using them for everything immediately and the bosses can fire half their workforce and replace them with LLMs" and "well obviously the early technology will be barely functional but in the future it'll be amazing" in this thread.
The first car and the first laptop were infinitely better than no car and no laptop. LLMs are like having a drunk junior developer; that's not an improvement at all.
We have been in the phase of diminishing returns for years with LLMs now. There is no more data to train them on. The hallucinations are baked in at a fundamental level and they have no ability to emulate "reasoning" past what's already in their training data. This is not a matter of opinion.
> It's coming faster and harder than any tech in history.
True, but how is that not expected?
We have more and more efficient communication than at any point in history, and this is a software solution with a very low barrier to entry for the building blocks and theory.
Software should be expected to move faster and faster.
I’m not sure who is wishing it away. No one wanted to wish away search engines, or dictionaries or advice from people who repeat things they read.
It’s panic top to bottom on this topic. Surely there are some adults around that can just look at a new thing for what it is now and not what it could turn into in a fantasy future?
They said the same about VR glasses, about cryptocurrency...
If you are seriously equating those two with AI, then you have horrible judgment and should learn to think critically, but unfortunately for you, I don't think critical thinking can be learned, despite what people say.
Note that I'm not even going to bother arguing against your point and will instead resort to personal attacks, because I believe it would be a waste of time to argue with people of poor judgment.
You're significantly stupider than you think you are.
Notice how I did that too?
While we can't wish it away, we can shun it, educate people on why it shouldn't be used, and sabotage efforts to include it in all parts of society.
> If in 2009…
…is exactly inevitablist framing. It claims perfect knowledge of the future based on previously uncertain knowledge of the future (which is now certain). You could have made the same claims about the inevitability of sporks in the late 19th century and how cutlery drawers should adapt to the inevitable single-utensil future.
Smartphones are different. People have really wanted them since the relatively primitive Nokia Communicator.
"AI" was introduced as an impressive parlor trick. People like to play around, so it quickly got popular. Then companies started force-feeding it by integrating it into every existing product, including the gamification and bureaucratization of programming.
Most people, except for the gamers and plagiarists, don't want it. Games and programming fads can fall out of fashion very fast.
ChatGPT has 800 million weekly active users. That's roughly 10% of the planet.
I get that it's not the panacea some people want us to believe it is, but you don't have to deny reality just because you don't like it.
There are all sorts of numbers floating around:
https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...
This one claims 20m paying subscribers, which is not a lot. Mr. Beast gets 60m views on a single video.
A lot of weekly active users will use it once a week, and a large part of that may be "hate users" who want to see how bad/boring it is, similar to "hatewatching" on YouTube.
1 reply →
Sure, because it's free. I doubt most LLM users would even want to pay $1/month for them.
3 replies →
> Most people, except for the gamers and plagiarists, don't want it.
As someone who doesn't actually want or use AI, I think you are extremely wrong here. While people don't necessarily care about the forced integration of AI into everything, people by and large want AI massively.
Just look at how much it is used to do homework, or how it replaces Wikipedia and Google in day-to-day discussions. How much it is used to "polish" emails (spew better-sounding BS). How much it is used to generate meme images instead of trawling the web for them. AI is very much a regular part of day-to-day life for huge swaths of the population. Not necessarily in economically productive ways, but still very much embedded and unlikely to be removed - especially since its current capabilities are already good enough for these purposes; people don't need smarter AI, just AI that stays cheap enough.
I still can't make some of the things in my imagination so I'm going to keep coding, using whatever is at my disposal including LLMs if I must.
Except there is a perverse dynamic in that the more AI/LLM is used, the less it will be used.
From the way you speak, you seem fairly certain that they're still going to need you as a user, that they aren't going to find better monetization than selling it to people like you (or to small companies in general). I wouldn't be so sure; remember, we are talking about a machine that is growing with the aim of being able to do every single white-collar job.
And with everyone constantly touting robotics as the next next frontier, every blue collar job as well.
We might not be able to wish it away, but we can, as a society, decide not to utilize it and even actively eradicate it. I honestly believe that LLMs/AI are a net negative to society and need to be ripped out root and stem. If tomorrow all of us decided to do that, nothing bad would happen, and we'd all be OK.