Comment by JimmaDaRustla
6 months ago
The author seems to imply that the "framing" of an argument is done in bad faith in order to win, but he only provides one-line quotes with no contextual argument.
This tactic by the author is a straw-man argument - he's framing the position of tech leaders, and our acceptance of it, as the reason AI exists, instead of admitting the honest explanation: they were simply right in their predictions. AI was inevitable.
The IT industry is full of pride and arrogance. We deny the power of AI and LLMs. I think that's fair; I welcome the pushback. But the real word the IT crowd needs to learn is "denialism" - if you still don't see how LLMs are changing our entire industry, you haven't been paying attention.
Edit: Lots of denialists are using false-dichotomy arguments, claiming my opinion is invalid because I'm not producing examples and proof. I guess I'll just leave this: https://tools.simonwillison.net/
The IT industry is also full of salesmen and con men, both of whom enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility. AI could solve a lengthy list of outstanding architectural and technical problems and go full AGI next year.
If we're talking about changing the industry, we should see some clear evidence of that. But despite extensive searching myself and after asking many proponents (feel free to jump in here), I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI. If this is so foundationally groundbreaking, that should be a clear signal. Personally, I would expect to see an explosion of this even if the hype is taken extremely conservatively. But I can't even track down a few solid examples. So far my searching only reveals one-off pull requests that had to be laboriously massaged into acceptability.
> If we're talking about changing the industry, we should see some clear evidence of that.
That’s a great point...and completely incompatible with my pitch deck. I’m trying to raise a $2B seed round on vibes, buzzwords, and a slightly fine-tuned GPT-3.5. You are seriously jeopardizing my path to an absurdly oversized yacht :-))
> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI.
That's because using AI to write code is a poor application of LLM AIs. LLMs are better suited to summary, advice, and reflection than to being forced into a Rube Goldberg machine. Use your favorite LLM as a Socratic advisor, but not as a coder, and certainly not as an unreliable worker.
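To make that concrete, here is a minimal sketch of the Socratic-advisor setup - assuming the OpenAI Python SDK purely for illustration; the model name and system prompt are placeholders, and any chat-capable LLM would do:

    # A Socratic advisor: the system prompt forbids finished answers
    # and steers the model toward questions and counterarguments.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SOCRATIC_PROMPT = (
        "You are a Socratic advisor. Never hand over finished solutions or "
        "code. Ask probing questions, surface hidden assumptions, and raise "
        "counterarguments so the user reasons their way to an answer."
    )

    history = [{"role": "system", "content": SOCRATIC_PROMPT}]

    def advise(question: str) -> str:
        # Keep the running conversation so follow-up questions have context.
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any chat model works
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    print(advise("Should I split this service into microservices?"))

The point is the prompt, not the plumbing: the model reflects your reasoning back at you instead of emitting artifacts you then have to debug.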
The entire hype for LLMs is that they can *do* anything. Even if only writing code, that could justify their hype. If LLMs mean Grammarly is now a lot better (and offered by big tech) then it’ll be very disappointing (economically speaking)
1 reply →
I support this comment. AI for coding does still involve much prodding and redirecting in my limited experience. Even getting Claude to produce a simple SVG for a paper is a struggle.
But as a partner in neurophilosophy conversations Claude is unrivaled, even compared to my neurophilosophy colleagues - the speed and responsiveness are impossible to beat. LLMs are excellent at pushing me to think harder. They provide the wall against which to bounce ideas, and those bounces often come from surprising and helpful angles.
It's absolutely hilarious reading all these "you're holding it wrong" arguments because every time I find one it contradicts the previous ones.
> That's because using AI to write code is a poor application of LLM AIs
Then why is that exact use case being talked about ad nauseam by many, many "influencers", including "big names" in the industry? Why is that exact use case advertised by leading companies in the industry?
1 reply →
Agreed. That argument was a straw man which doesn't really pertain to where LLMs are being leveraged today.
1 reply →
> Use your favorite LLM as a Socratic advisor
Can you give an example of what you mean by this?
10 replies →
> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI
This popular repo (35.6k stars) documents the fraction of code written by LLM for each release since about a year ago. The vast majority of releases since version 0.47 (now at 0.85) had the majority of their code written by LLM (the average share written by aider per release since then is about 65%).
https://github.com/Aider-AI/aider
https://github.com/Aider-AI/aider/releases
I think we need to move the goalposts to "unrelated to/not in service of AI tooling" to escape easy mode. Replace some core unix command-line tool with something entirely vibecoded. Nightmare level: do a Linux graphics or networking driver (in either Rust or C).
1 reply →
One problem with the code-generation tools is that they devalue code while at the same time producing low-quality code (without a lot of human intervention).
So at this stage of the game it would be weird for a project maintained by people who care about their problem/code (OSS) to be "primarily maintained by AI" in a group setting.
Exactly the problem. It doesn't need to be good enough to work unsupervised in order to gain real adoption. It just needs to be a performance or productivity boost while supervised. It just needs to be able to take an AI-friendly FOSS dev (there are many), and speed them along their way. If we don't have even that, then where is the value (to this use case) that everyone claims it has? How is this going to shake the foundations of the IT industry?
3 replies →
This has been researched, and while the existing research is young and inconclusive, the outlook is not so good for the AI industry - or rather for the utility of its product and the negative effects that product has on users.
https://news.ycombinator.com/item?id=44522772
> The IT industry is also full of salesmen and con men, both of whom enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility.
Not only is that a "could" - I'd argue it's already happening. The huge new "premier" models are barely any better than the big-ticket ones that really kicked the hype into overdrive.
* Using them as a rubber duck that provides suggestions back for IT problems and coding is huge, I will fully cosign that, but it is not even remotely worth what OpenAI is valued at or would need to charge to make it profitable, let alone pay off its catastrophic debt. Meanwhile every other application is a hard meh.
* The AI generated video ads just look like shit and I'm sorry, call me a luddite if you will, but I just think objectively less of companies that leverage AI video/voices/writing in their advertisements. It looks cheap, in the same way dollar store products have generic, crappy packaging, and makes me less willing to open my wallet. That said I won't be shocked at all if that sticks around and bolsters valuations, because tons of companies worldwide have been racing to the bottom for decades now.
* My employer has had a hard NO AI policy for both vetting candidates and communicating with them in our human resources contracting, and we've fired one contractor who wouldn't comply. It just doesn't work: we can tell when they're using bots to review resumes, because applicants get notably, measurably worse.
LLMs are powerful tools that have a place, but there is no fucking UNIVERSE where they are the next iPhone that Silicon Valley is so utterly desperate for. They just aren't.
> The IT industry is also full of salesmen and con men, both of whom enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility.
The iPhone and the subsequent growth of mobile (and the associated growth of social media, which is really only possible in its current form with ubiquitous mobile computing) are evidence it did change everything. Society has been reshaped by mobile/iPhone and its consequences.
NFTs were never anything, and there was never a real argument that they were. They were a speculative financial item, and it was clear all the hype was due to greater fools and FOMO. To equate those two is silly. That's like arguing some blockbuster movie like Avengers: Endgame was going to "change everything" because it was talked about and advertised. It was always just a single piece of entertainment.
Finally, for LLMs, a better comparison would have been the 80s AI winter. The question should be "why will this time not be like then?" And the answer is simple: even if LLMs and generative AI models never improve an ounce - if they never solve another problem, nor get more efficient, nor get cheaper - they will still drastically change society, because they are already good enough today. They are doing so now.
Advertising, software engineering, video making. The tech is already far enough along that it is changing all of these fields. The only thing happening now is the time it takes for idea diffusion. People learning new things and applying them are the slow part of the loop.
You could have made your argument pre-ChatGPT, and possibly in the window of the following year or two, but at this point the tech to change society exists; it just needs time to spread. All it needs is two things: the tech stays the same, and prices roughly stay the same. (No improvements required.)
Now, there is still a perfectly valid argument to make against the more extreme claims we hear - all work being replaced, and that sort of thing. I'm as poorly equipped to predict that future as you (or anyone else), so I won't weigh in - but that's not the bar for huge societal change.
The tech is already bigger than the iPhone. I think it is equivalent to social media (mainly because I think most people still really underestimate how enormous the long-term impact of social media on society will be: politics, mental health, extremism, addiction - all things that existed before but are now "frictionless" to obtain. But that's for some other post...).
The question in my mind is will it be as impactful as the internet? But it doesn't have to be. Anything between social media and internet level of impact is society changing. And the tech today is already there, it just needs time to diffuse into society.
You're looking at Facebook after introducing the algorithm for engagement. It doesn't matter that society wasn't different overnight, the groundwork had been laid.
> LLMs could hit an unsolvably hard wall next year and settle into a niche of utility
LLMs in their current state have been integrated into the workflows of many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.
> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI
Straw man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?
> If this is so foundationally groundbreaking, that should be a clear signal.
As I said, you haven't been paying attention.
Denialism - the practice of denying the existence, truth, or validity of something despite proof or strong evidence that it is real, true, or valid
> Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?
The money and the burden of proof are on the side of the pushers. If LLM code is as good as you say it is, we won't be able to tell once it's merged. So you need to show us lots of examples of real-world code that we know a priori was LLM-generated, to compare.
So far most of us have seen ONE example, and it was that OAuth experiment from Cloudflare. Do you have more examples? Who pays your bills?
2 replies →
> LLMs in their current state have been integrated into the workflows of many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.
That is an exaggeration: they are integrated into some workflows, usually in a provisional manner, while the full implications of such integrations are assessed for viability in the mid to long term.
At least in the fields of which I have first-hand knowledge.
> Straw man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?
Straw-man rebuttal: presenting an imaginary position in which the statement doesn't apply doesn't invalidate the statement as a whole.
> As I said, you haven't been paying attention.
Or alternatively you've been paying attention to a selective subset of your specific industry and have made wide extrapolations based on that.
> Denialism - the practice of denying the existence, truth, or validity of something despite proof or strong evidence that it is real, true, or valid
What's the one where you claim strong proof or evidence while only providing anecdotal "trust me bro"?
> LLMs in their current state have been integrated into the workflows of many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.
Having a niche is different from being niche. I also strongly believe you overstate how integrated they are.
> Straw man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?
As mentioned, I have looked. I told you what I found when I looked. And I've invited others to look. I also invited you. This is not a straw man argument; it's making a prediction to test a hypothesis and collecting evidence. I know I am not all-seeing, which is why I welcome you to direct my eyes. With how strong your claims and convictions are, it should be easy.
Again: You claim that AI is such a productivity boost that it will rock the IT industry to its foundations. We cannot cast our gaze on closed source code, but there are many open source devs who are AI-friendly. If AI truly is a productivity boost, some of them should be maintaining widely-used production code in order to take advantage of that.
If you're too busy to do anything but discuss, I would instead invite you to point out where my reasoning goes so horrendously off track that such examples are apparently so difficult to locate, not just for me but for others. If one existed, I would expect it to be held up as an example and become widely known, given how often this question gets asked. But the world is full of unexpected complexities; if there's something holding AI back from seeing adoption reflected in the way I predict, that's also interesting and worth discussing.
> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI.
As I stated, you haven't been paying attention.
A better-faith response would be to point out an example of such an open source codebase OR explain why that specific set of restrictions (open-source, active production, primarily AI) is unrealistic.
For instance, one might point out that the tools for really GOOD AI code authoring have only been available for about 6 months so it is unreasonable to expect that a new project built primarily using such tools has already reached the level of maturity to be relied on in production.
9 replies →
I don’t find it fair that you point out a straw man in your parent comment and then use ad hominem in this one. I would love to see you post some examples. I think you’d have a chance of persuading several readers to at least be more open-minded.
> But despite extensive searching myself and after asking many proponents
At least we know you are human, since you are gaslighting us instead of citing a random link that leads to a 404 page. An LLM would have confidently hallucinated a broken reference by now.
2 replies →
So… which ones?
2 replies →
At the heart of it all is language. Logic gates to assembly to high-level programming languages are a progression of turning human language into computed processes. LLMs need to be tuned to recognize ambiguity of intention in human-language instructions and to follow up with clarifying questions. Perhaps quantum computing will facilitate the process, the AI holding many fuzzy possibilities simultaneously and seeking to "collapse" them into discrete pathways by asking a human for more input.
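That clarify-first loop is easy to picture in ordinary code. A minimal sketch, where `ask_llm` is a hypothetical stand-in for any chat-completion call (the stub below fakes exactly one round of clarification):

    # The model must either flag ambiguity with a question or commit to a plan.
    def ask_llm(prompt: str) -> str:
        # Stub for illustration; a real version would call an LLM API.
        if "clarification" in prompt.lower():
            return "PLAN: tar -czf /backups/docs.tar.gz /home/user/docs"
        return "CLARIFY: Which directory should be backed up, and to where?"

    def run_instruction(instruction: str) -> str:
        prompt = (
            "If the instruction below is ambiguous, reply 'CLARIFY: <question>'. "
            "Otherwise reply 'PLAN: <one concrete action>'.\n" + instruction
        )
        reply = ask_llm(prompt)
        while reply.startswith("CLARIFY:"):
            # Ambiguity "collapses" only when the human supplies more input.
            answer = input(reply.removeprefix("CLARIFY:").strip() + " ")
            prompt += "\nUser clarification: " + answer
            reply = ask_llm(prompt)
        return reply

    print(run_instruction("back up my files"))

The interaction loop itself is mundane; the hard research problem is making the model reliably know when to ask rather than guess.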
> AI was inevitable.
This is post hoc ergo propter hoc: AI exists, thus it must have been inevitable.
You have no proof it was inevitable.
(Also, AI means something wildly different than it meant a few years ago - I remember when AI meant AGI; the salesmen have persuaded you the emperor has clothes because they solved a single compelling test.)
I keep seeing the "AI of the gaps" argument, where AI is whatever computers currently can't do. I wonder when I'll stop seeing it.
Well, a few years ago I was a CS student and my degree program had the AI label stamped on it. We talked about machine learning, neural networks, and stuff like that, and we called that AI. There was never a mention of AGI. I don't know if it's a translation thing, but AI = AGI was never a thing for me. As long as there is no clear definition, people will keep arguing, because we each have our own blurry picture.
> This is post hoc ergo propter hoc: AI exists, thus it must have been inevitable.
The problem with that statement is that it doesn't say anything about why AI will take over pretty much everything.
The actual answer to that is that AI is not limited by a biological substrate and can thus:
1. Harness (close to) the speed of light for internal signals. Biology is limited to about 200 m/s, six orders of magnitude less. AI has no such limitation.
2. Scale very easily. Human brains are limited in how big they can get by silly things such as the width of the birth canal and being on top of a (bipedal) body that uses organic mass to inefficiently generate power. Scaling a human brain beyond its current size and the ~20 watts it draws is an incredibly hard engineering challenge. For AI, scaling is trivial by comparison.
I'm not saying it's going to be LLMs, but long term we can say that the intelligent entities that will surpass us will not have the same biological and physical limitations we do. That means they very, very probably have to be 'artificial', and thus that AI taking over everything is 'inevitable'.
> I remember when AI meant AGI
Interestingly I had the same definition, and at the same time there have always been multiple definitions. I have always called whatever animates an NPC in games "AI", even when the thing is hard-coded and not very intelligent at all. I guess that calling a category of tools that are artificial and somewhat intelligent "AI" is fair.
I also anticipate that what we call AGI will stay fluid, and that, marketing being marketing, we'll start calling actual products AGI before any of them genuinely are.
Good aside - this is really the hoverboard rebranding of the 2020s.
> LLMs are changing our entire industry,
- So far, the only ones making real money are the "shovel sellers": Nvidia, AWS, and the GPU resale hustlers. Everyone else is still trying to figure out how to turn the parrots into profit.
- Probabilistic code generators are not the dawn of a new scientific era that will propel us to the Stars. Just autocomplete on steroids, impressive, but not what will launch humanity into post-scarcity.
- So far what you have is a glorified compression algorithm. A remix of Reddit, StackOverflow, and Wikipedia...with the confidence of a TED speaker and the understanding of a parrot.
- If LLMs are truly the road to AGI, try sending one to MIT. No internet, no textbook ingestion, no Leetcode prep. Just cold-start intelligence. If it graduates...we might have something.
Right now, this is just confusing correlation for cognition, compression for comprehension, and mimicry for mastery. The revolution may come, but not in the next four quarters. Which is bad news if you are a VC...or Mark Zuckerberg...
> you haven't been paying attention.
Still no falsifiable claim in sight...
It is probably both inevitable that the LLM technology we have now would be invented and inevitable that there would be a major pushback against it. In any world, this would be a technology that takes from some to give to others.
Given that, nothing about the future seems inevitable to me. The law isn't settled. Public opinion isn't settled. Even a great deal of the hype keeping the bubble from popping is still founded on talk of AGI that I now consider absurd...
The problem is that such “tech leaders” talk up AI with one goal only: to reduce their workforce to the minimum and maximize their profits. Sure, they are companies and yada yada, but I would like to see a better argument for why we all should embrace AI. So far, as intrinsically amazing as AI is, it's getting a bad rep because its main and lousiest supporters are tech billionaires.
Reading between the lines of the OP, the author seems to think that the future of LLMs will be determined by debate and that he can win that debate by choosing the framing of the debate.
The whole meat of his article is about this debate technique, ostensibly saying that's what the other guys are doing, but really he's only described what he himself is doing.
I didn't read it that way. I understood it as saying that tech companies are currently framing the narrative as "inevitable", and that you should ask yourself the other questions, such as "do I want it?"
The question of whether any individual wants it ultimately won't matter. Now that the technology exists and has found traction, it continuing to exist and have traction until eventually being superseded by something even more useful is inevitable.
The author seems to think that the existence of the technology can be decided by debate to sway people one way or the other, but that's not how it works. Real life doesn't work like a debate club. The people who are saying that the technology is inevitable aren't trying to do a debate persuasion thing to make it inevitable, that's just the way the author wants to see it because that framing makes it negotiable. But there's no negotiating with the course of technological development.
I think the author is encouraging out of the box thinking. The framing of "inevitable" is a box (an assumption) that we can include in our analysis or judgment, rather than assume it to be true.
But to your point, his debate analogy does imply that tech enthusiasts are arguing in bad faith in order to win an argument, because the goal of winning a debate has no good faith aspect to it (good faith in debate terms is seeking the truth, bad faith is winning an argument).
But just because he is wrong doesn't mean he isn't useful.
> The author seems to imply that the "framing" of an argument is done in bad faith in order to win (...) This tactic by the author is a straw-man argument
This is what I was expecting from the title, but not really what I found in the content. To me it read as being about argumentation and inevitabilism in general rather than about LLMs specifically. From my perspective, to claim otherwise and run with it rings of mischaracterization.
...which is also an acknowledgement I missed from the article. The use of inevitability as a framing device is just one of many forms of loaded language, and of encoding shared assumptions without first establishing that the other person actually shares them. Notice how I didn't say that you're mischaracterizing the article outright - we clearly read what was written differently. To assert my interpretation as correct by encoding it as framing would be pretty nasty. Sadly it's not uncommon, though, and it's one of those things that, if you actually try to control for it, makes writing in a digestible way very hard to impossible.
The "framing" is a tactic called "assuming the sale" - where statements are made as-if they are already true and the burden is placed on the other side to negate. Combine that with other tactics like fomo, scarcity, and authority and you will have people practically begging for these tools. As an example..
"Edison of our times Elon Musk (authority) believes that the AI agents are the future (assuming the sale), and most developers are already using it to improve productivity (fomo, social-proof). MCP agents are in short supply due to tariff driven bottlenecks, so buy them while supplies last (scarcity)".
This sort of influencing is accelerated by social-media, and is all around us, and short-circuits critical-thinking in most of us.
So far what I've seen from LLMs writing code is insecure, bug-ridden slop. They are changing the industry in that I now have to code-review messes from developers and non-developers being careless. AI image and video generation isn't much better.
CEOs and investors love to talk about how "scary" AI is and publicly advocate for regulation (while privately shutting it down) because they NEED the money to keep flowing, because these things aren't profitable. Inevitabilism is a very accurate description of their PR campaign, and it's sadly working, for the moment.
The usual argument is that the tech leaders are saying that only because they've invested in AI.
...Like, you don't say? If one truly believes AI is inevitable, then of course they're going to put money in AI.
I wonder how many people who claim we're in an AI bubble actually short AI stocks and $NVDA. Or they'd just stutter "uh you know the market can stay irrational longer than we can stay solvent..." when asked.
There's no doubt that LLMs are useful and will generate some productivity. The hype elevates them to silver-bullet tech, though. This inevitably creates a bubble that will pop at some point. People who see signs of a bubble don't short because they lack the details of when exactly it will happen.
[dead]
> But the real word the IT crowd needs to learn is "denialism" - if you still don't see how LLMs are changing our entire industry, you haven't been paying attention.
The best part about this issue is that it's a self-correcting problem. Those who don't are risking being pushed out of the job market, whereas those who do will have better odds.
I'm sure the Luddites also argued that no one needed a damn machine to weave a rug, and that machine-woven rugs didn't have any soul.
Every time pro-AI people bring up the Luddites I have to laugh, because they've clearly not had their magic little boxes actually tell them anything about the Luddites.
They argued the exact opposite: they wanted proper training on how to use the "damn machines", as people were literally dying from being untrained in their usage. They were also then beset upon by hired thugs and mercenaries who proceeded to beat and even kill Luddites for daring to speak out against horrible worker conditions in the factories.
It's pretty funny, the anti-luddites being exactly like the anti-luddites of yore.
> They argued the exact opposite: they wanted proper training on how to use the "damn machines", as people were literally dying from being untrained in their usage.
That's very interesting to hear, and also very unfortunate, given how lonely a belief it is. For example, your personal belief contrasts with what's clearly stated and supported in Wikipedia's article on the Luddites. Is that because the whole world around you is wrong and you are the only lonely chap burdened by the truth?
https://en.wikipedia.org/wiki/Luddite
The interesting detail you are either unaware of or chose to omit is that "training" only registered as a concern once industrialization completely eliminated the competitiveness of, and consequently the need for, what at the time was high-skilled albeit manual labor. The Luddites' argument regarding training was not that industrial mills didn't have training, but that they "produced textiles faster and cheaper because they could be operated by less-skilled, low-wage labourers." That is a direct citation, not something that "magic little boxes" spat out. That's what motivated the uprisings against these "magic little boxes": the threat that automation posed to their livelihood, their once-irreplaceable skill set suddenly rendered useless overnight.
So, people like you who are uninformed and ignorant of history should spend some time trying to gather insight into the problem, to have a chance of understanding what's right in front of your nose. As Mark Twain said, history doesn't repeat itself, but it often rhymes. The Luddites represent those who failed to understand the impact that automation had on humanity, refused to understand the changes placed upon them, and misdirected their energy and ultimately their frustration and anger onto futile targets. The key factor is ignorance and unpreparedness. Fooling yourself with creative exercises covering up militant levels of ignorance does not change this one bit.
But you do you. The universe has this tendency to self-correct.
1 reply →
That's actually not what the luddites argued at all; they were very explicitly trying to protect their own economic interests.
An AI could have told you that in 2 seconds.