Comment by throw310822
14 hours ago
For one, since last month AI has been writing about 95% of my code and that of my colleagues. I just describe what I want and how it should be implemented, and the AI takes care of all the details, solves bugs, configuration issues, etc. I also use it to research libraries, dig into documentation (and then write implementations based on it), discuss architectural alternatives, etc.
Non-developers I know use them to organise meetings, write emails, research companies, write down and summarise counselling sessions (the counsellors, not the clients), write press reports, help manage advertising campaigns, review complex commercial insurance policies, fix translations... The list of uses is endless, really. And I'm only talking about work-related usage; personal usage of course goes well beyond this.
> You really need to be obstinate in your convictions if you can dismiss LLMs at the time when everyone's job is being turned around by them.
I'm factual. You are the one with the extraordinary claim that LLMs will find new substantial markets/go through transformative breakthrough.
> Everywhere I look, everyone I talk to, is using LLMs
And everywhere I look, I don't. It might be the case that you stand right in the middle of an LLMs niche. Never did I say that one doesn't exist or that LLMs are inadequate at parroting existing code.
> Non-developers I know use them […]
Among those are:

- things that have nothing to do with LLMs/AI
- things that you should NOT use LLMs for, because they will give you confidently wrong and/or random answers (the answer isn't in their training data / cut-off window, it depends on non-public information, or they lack the computing abilities to produce meaningful results)
- things that are low-value/low-stakes, which people won't be willing to pay for when asked to
> The list of uses is endless
no, it is not
> And I'm only talking of work-related usage
And we will get to see, sooner rather than later, how much businesses actually value LLMs once the real costs are finally passed on to them.
> things that have nothing to do with LLMs/AI
These are things that have to do with intelligence. Human or LLM, it doesn't matter.
> things that you should NOT use LLMs for / parroting existing code / not in their training data/cut-off window, it's non-public information, they don't have the computing abilities to produce meaningful results
Sorry, but I just get the picture that you have no clue what you're talking about; though most probably you're just in denial. This is one of the most surprising things about the emergence of AI: the existence of a niche of people who are hell-bent on denying its existence.
> intelligence. Human or LLM doesn't matter.
Being enthusiastic about a technology isn't incompatible with objective scrutiny. Throwing up an ill-defined "intelligence" certainly doesn't help with that.

Where I stand is where measured, fact-driven people (i.e. scientists) stand, operating with the knowledge (derived from practical evidence¹) that LLMs have no inherent ability to reason, while giving a convincing illusion of it as long as the training data contains the answer.
> Sorry, but I just get the picture that you have no clue of what you're talking about- though most probably you're just in denial.
This isn't a rebuttal. So what is it? An insult? Surely that won't make your case any stronger.

You call me clueless, but at least I don't have to live with the same cognitive dissonances as you. To cite just a few:
- "LLMs are intelligent, but when given a trivially impossible task, they happily make stuff up instead of using their `intelligence` to tell you it's impossible."
- "LLMs are intelligent because they can solve complex, highly specific tasks from their training data alone, but when provided with the algorithm that would extend their reach to generic answers, they are incapable of using their `intelligence` and the supplemented knowledge to generate new answers."
¹: https://arstechnica.com/ai/2025/06/new-apple-study-challenge...