
Comment by Workaccount2

2 months ago

I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company. Ironically, it's the very MIT report that "found AI to be a flop" (remember "MIT study finds almost every AI initiative fails"?) that also found that virtually every single worker is using AI (just not company AI, hence the flop part).

At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince gear head grandpa that manual transmissions aren't relevant anymore.

Firstly, it's not really good enough to say "our employees use it" and therefore it's providing us significant value as a business. It's also not good enough to say "our programmers now write 10x the number of lines of code and therefore that's providing us value" (lines of code have never been a good indicator of output). Significant value comes from new innovations.

Secondly, the scale of investment in AI isn't so that people can use it to generate a powerpoint or a one off python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason why you would cover a huge percent of the country in datacenters.

The proof that significant value has been provided would be value being passed on to the consumer. For example if AI replaces lawyers you would expect a drop in the cost of legal fees (despite the harm that it also causes to people losing their jobs). Nothing like that has happened yet.

  • When I can replace a CAD license that costs $250/usr/mo with an applet written by gemini in an hour, that's a hard tangible gain.

    Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.

    • Someone correct me if I'm mistaken but don't CAD programs rely on a geometric modeling kernel? From what I understand this part is incredibly hard to get right and the best implementations are proprietary. No LLM is going to be able to get to that level anytime soon.


    • I agree, the applet which Google plagiarized through its Gemini tool saves you money. Why keep the middleman, though? At this point, just pirate a copy.


  • You’re attacking one or two examples mentioned in their comment, when we could step back and see that in reality you’re pushing against the general scientific consensus. Which you’re free to do, but I suspect an ideological motivation behind it.

    To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.

    All arguments that are not incorrect and that sound totally reasonable in the moment, but in 10 years everyone is using typewriters and there are known efficiency gains for doing so.

    • I'm not saying LLMs are useless. But the value they have provided so far does not justify covering the country in datacenters and the scale of investment overall (not even close!).

      The only justification for that would be "superintelligence," but we don't know if this is even the right way to achieve that.

      (Also I suspect the only reason why they are as cheap as they are is because of all the insane amount of money they've been given. They're going to have to increase their prices.)


    • Uh, I must have missed the “consensus” here, especially when many studies are showing a productivity decrease from AI use. I think you’ve just conjured the idea of this “scientific consensus” out of thin air to deflect criticism.


It's been good at enabling the clueless to get to the performance of a junior developer, and saving a few % of the time for the mid to senior level developer (at best). Also amazing at automating stuff for scammers...

The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but also power prices (which affect everyone). All coz the government wants to hide the recession they themselves created, because on paper it makes the line go up.

> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.

Well then congratulations on being in the 5%. That doesn't really change the point.

  • I’m a senior developer and it has been hugely helpful for me in both saving time and effort and improving the quality of my output.

    You’re making a lot of confident statements and not backing them up with anything except your feelings on the matter.

If it's so great and such a benefit: why scream it to everyone? Why force it? Why this crazy rhetoric labeling others as ideological? This makes no sense. If you found gold, just use it and get ahead of the curve. For some reason that never happens.

  • I kinda agree. We've been told for years it's a "massive productivity multiplier", and not just an iterative improvement.

    So you expect to see the results of that. The AAA games being released faster, of higher quality, and at a lower cost to develop. You expect Microsoft (one of the major investors and proponents) to be releasing higher quality updates. You expect new AI-developed competitors for entrenched high-value software products.

    If all that were true, it wouldn't matter what people do or don't argue on the internet, it wouldn't matter if people whine, you wouldn't need to proselytize LLMs on the internet; in that world, people not using them is just an advantage to your own relative productivity in the market.

    Surely by now the results would be visible anyway.

    So where are they?

    • To expand on this:

      LLMs are indeed currently an iterative improvement. I've found a few good use-cases for them. They're not nothing.

      But at the moment, they are nowhere near the "massive productivity multiplier" they're advertised to be. Just as adding more lanes doesn't make traffic any better, perhaps they never will.

      Or perhaps all the promises will come true -- and that, of course, is what is actually meant when the productivity gains are screamed from the rooftops. It was the same with computers, and it was the same with the internet: the proposed massive changes were going to come at some vague point in the future. Plenty of people saw those changes coming even decades in advance; reason from first principles and extrapolate the results of x scale and y investment and you couldn't not see where it was headed, at least generally.

      The future potential is being sold in much the same way here. That'd be all fine and good except for the fact that the capex required to bring this potential future into being compared to any conceivable revenue model is so completely absurd that, even putting aside the disruptive-at-best nature of the technology, making up for the literal trillions of dollars of investment will have to twist our economic model to the point of breaking in order to make the math math. Add in the fact that this technology is tailor-made to not just disrupt or transform our jobs but to replace workers should this future potential arrive, and suddenly it looks nothing like computers in the 70s or networks in the 80s. It's no wonder not everyone is excited about it -- the dynamic is, at its very core, adversarial; its very existence states the quiet part of class warfare out loud.

      Which brings us to so many people being forced to use it. I really, really hate this. Just as I don't want to be told which editor/IDE to use, I don't want to be told how to program. I deeply care about and understand my workflow quite well, thank you very much -- I've been diligently working on refining it for a good while now. And to state the obvious: if it were as good as they say it is, I'd be using it the way they want me to. I don't, because they just aren't that good (thankfully I have a choice in this matter -- for now).

      I also just don't like using them while programming, as I find them noisy and oddly extraverting, which tires me out. They are antithetical to flow. No one ever got into a flow state while pair programming, or managing a junior developer, and I doubt anyone ever got into a flow state while chatting with an LLM. It's just the wrong interface. The "better autocomplete" model is a better interface, but in practice I just haven't seen it do better than a good LSP or my own brain. At best it saves me a few keystrokes, which I'd hardly call revolutionary. Again, not nothing, but far from the promise. We're still a very long way off.

      To get there, LLM developers need cash, and they need data. Companies are forcing LLMs into every nook and cranny of so many employees' workflows so that they can provide training data, and bring that potential future one step closer to reality. The more we use LLMs, the more likely we are to being replaced. Simple as that.

      I for one would welcome our new robot overlords if I had any faith that our society could navigate this disruption with grace and humanity. I'd be ecstatic and totally bullish on the tech if I felt it were ushering in a Star Trek-like future. But, ha, nope -- any faith I had in that sort of response died with how so many handled Covid, and especially when Trump was elected for a second time. These two events destroyed my estimation of humanity as a cooperative organism.

      No, I now expect humanity at large -- or at least the USA -- to look at the stupidest, most short-sighted, meanest option possible and enthusiastically say "let's do that!" Which, coincidentally, is another way of describing what is currently happening with LLMs: the act of forcing mediocre tools down our throats while cynically exploiting our "language = intelligence" psychological blind spot, raising utility prices (how is a company's electric bill my problem again?), killing personal computing, accelerating climate change at the worst possible time, all in the name of destroying both my vocation and avocation.

  • I have never seen a counter-argument to this. Why is it being forced on the world? Let's hear some execs from these companies answer that. My bet is on silence every time. Microsoft is forcing AI chat applications into the OS and preventing people from removing them.

    You could easily have a side application that people could enable by choice, yet it's not happening. We have to roll with this new technology, knowing that it's going to make the world a worse place to live in when we are not able to choose how and when we get our information.

    It's not just about feeling threatened. It's also about feeling like I am going to get cut off from the method I want to use to find information. I don't want a chat bot to do it for me; I want to find and discern information for myself.

    • Oh, this is because they want more data to build better AI (which will give them more money and power, and probably some other things too).

Are you a boss or a worker? That's the real divide, for the most part. Bosses love AI - when your job is just sending emails and attending remote meetings, letting LLM write emails for you and summarize meetings is a godsend. Now you can go from doing 4 hours of work a week to 0 hours! And they let you fantasize about finally killing off those annoying workers and replace them with robots that never stop working and never say no.

Workers hate AI, not just because the output is middling slop forced on them from the top but because the message from the top is clear - the goal is mass unemployment and concentration of wealth by the elite unseen by humanity since the year 1789 in France.

  • I'm both, I have a day job and run a side business as well. My partner has her own business (full time) and uses AI heavily too.

    None of these are tech jobs, but we both have used AI to avoid paying for expensive bloated software.

  • I'm a worker, I love AI and all my coworkers love AI.

    • Same here, I just limit my use of genAI to writing functions (and general brainstorming).

      I only use the standard "chat" web interface, no agents.

      I still glue everything else together myself. LLMs enhance my experience tremendously and I still know what's going on in the code.

      I think the move to agents is where people are becoming disconnected from what they're creating and then that becomes the source of all this controversy.


Manual transmissions are still great! More fun to drive and an excellent anti-theft device.