Comment by asdev (10 days ago)

> outside of text generation and search, LLMs have not delivered any significant value

I personally have greatly benefitted from LLMs helping me reason about problems and make progress on diverse issues across professional, recreational, and mental-health contexts. I think that asking whether it's just "text generation and search" rather than something that transcends it is as meaningful as asking whether an airplane really "flies" or just "applies thrust and generates lift".

  • I personally benefit from AI autocomplete when coding, though I don't think that value is as large as people claim. Then again, I'm often wrong about the gap between what I think something is worth (e.g. social media) and its true value. Unfortunately, I often see people try to justify their thinking or beliefs by just pasting a ChatGPT output or screenshot to "prove" it.

Text generation and search are the drivers of trillions of dollars' worth of economic activity around the world.

  • > trillions of dollars

    That is monetary value. The poster may have meant "delivered" value, which has been limited (and tainted by hype).

    > Text generation

    Which "text generation", apart from code generation (quite successful in some models), would amount to "trillions of dollars' worth of economic activity" at the current stage? I cannot see it at the moment.

    • I cannot see any major use case apart from code generation and autocompletion. Maybe summarizing text and learning new things, but that can be achieved with any search engine.

      A good search engine could probably take over from an LLM.

Would you mind elaborating on how LLMs have not delivered any significant value outside of text generation and search?

This statement is just patently wrong, and even those uses deliver significant value. LLMs have had a major impact on software engineering and software prototyping.

  • Do you have any evidence to suggest that LLMs have increased productivity in software? And if so, what is the effect size?

    I don't really see any increase in the quality, velocity, or creativity of the software I'm either using or working on, but I have seen a ton of candidates with real experience flunk interviews because they forget the basics and blame it on LLMs. I've honestly never seen candidates struggle with syntax the way they have over the last couple of years.

    I find LLMs very useful for general overviews of domains, but I've not really seen any use that is a clear, unambiguous boost to productivity. You still have to read the code and understand it, which takes time, and "vibe coding" defers that understanding until review time, making code review costly.

    I feel like a lot of the hype in this space so far is aspirational, pushed by VCs or by engineers who aren't A/B testing themselves: for every hour you spend rubber-ducking with an LLM, how much could you have worked out with pen and paper? Usually I can enjoy my life more without an LLM: I write down the problem I'm considering, go on a walk, and come back with a mind full of new ideas.

    • It's hard to see where you're coming from. With o1 Pro or Gemini 2.5 Pro, I can give the model a detailed text specification of a module I want, and it will write code that does exactly that, with fewer errors than I'd make on a first try. It will also generate unit tests. Writing a text specification and then reviewing some code is, for me, way faster than writing all that same code from scratch (a rough sketch of this workflow follows at the end of this comment).

      Are you working on code that's heavily coupled to other code, such that you rarely get to write independent logic components from scratch, and where IP restrictions prevent you from handing large chunks of existing code to the LLM?
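
      To make that workflow concrete, here is a minimal sketch using the google-generativeai Python client. The client choice, the "gemini-2.5-pro" model id, and the slugify spec are illustrative assumptions, not the parent commenter's actual setup:

        import os

        import google.generativeai as genai  # pip install google-generativeai

        # Authenticate with an API key from the environment.
        genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

        # The "detailed text specification" is just a long prompt: describe
        # the module's purpose, inputs, outputs, and edge cases, then ask
        # for the implementation plus unit tests in one shot.
        spec = """\
        Write a Python module slugify.py with a single function
        slugify(title: str) -> str that lowercases the input, replaces runs
        of non-alphanumeric characters with single hyphens, and strips
        leading and trailing hyphens. Also write pytest unit tests covering
        empty strings, unicode input, and repeated separators.
        """

        model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model id
        response = model.generate_content(spec)

        # The output still has to be read and reviewed like any other code;
        # the claimed speedup is spec-plus-review vs. writing it all by hand.
        print(response.text)

      Whether that review step really is cheaper than writing the code yourself is exactly what the surrounding comments are debating.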

    • Yeah, even though people say they are shipping faster, the software my workplace pays for doesn't seem to be getting better at any faster rate.