Comment by eric-burel

6 months ago

Developers haven't even started extracting the value of LLMs with agent architectures yet. Using an LLM UI like OpenAI's is like having just discovered fire and using it to warm your hands (still impressive when you think about it, but not worth the burns), while LLM development is about building car engines (that's where your return on investment is).

> Developers haven't even started extracting the value of LLMs with agent architectures yet

There are thousands of startups doing exactly that right now. Why do you think this will work when all evidence points towards it not working? Or why else would it not already have revolutionized everything a year or two ago when everyone started doing this?

  • Most of them are a bunch of prompts and don't even have actual developers, for the good reason that there is no training system yet and there isn't even an established name for the people who build these systems. Local companies haven't even set up a proper internal LLM, or at least a contract with a provider. I am in France, so probably lagging a bit behind the USA, especially NY/SF, but the term "LLM developer" is only just arriving now, mostly under the pressure of isolated developers and companies like mine. This feels really, really early stage.

    • The smartest and best-funded people on the planet have been trying and failing to get value out of this technology for years, and the best we've come up with so far is some statistically unreliable coding assistants. Hardly the revolution its proponents keep eagerly insisting we're seeing.

    • Between the ridiculously optimistic and the cynically nihilistic, I personally believe there is some value here that extremely talented people at huge companies can't really deliver, because they're not in the right environment (too big a scale), but neither can grifters packaging a prompt in a vibecoded app.

      In the last few months the building blocks for something useful for small companies (think fewer than 100 employees) have appeared; now it's time for the developers or catch-all IT staff at those companies, and for freelancers serving small local companies, to "up-skill".

      Why do I believe this? Well, for a start, OCR became much more accessible this year, cutting down on manual data entry compared to the Tesseract of yesteryear.
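
      To make that concrete, here is a rough sketch of the kind of thing that got easier (assuming an OpenAI-compatible API; the model name and the invoice fields are purely illustrative, not any particular company's setup): point a vision-capable model at a scanned document and ask for structured JSON, instead of chaining Tesseract with hand-written parsing.

      ```python
      # Rough sketch: structured data extraction from a scanned invoice with a
      # vision-capable LLM. Assumes an OpenAI-compatible API; the model name and
      # the extracted fields are illustrative placeholders.
      import base64
      import json

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      with open("invoice.png", "rb") as f:
          image_b64 = base64.b64encode(f.read()).decode()

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # any vision-capable model would do
          response_format={"type": "json_object"},
          messages=[{
              "role": "user",
              "content": [
                  {"type": "text",
                   "text": "Extract supplier, invoice_number, date and total_amount "
                           "from this invoice. Reply with JSON only."},
                  {"type": "image_url",
                   "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
              ],
          }],
      )

      fields = json.loads(response.choices[0].message.content)
      print(fields)  # e.g. feed this into the accounting system instead of retyping it
      ```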

  • > Or why else would it not already have revolutionized everything a year or two ago when everyone started doing this?

    The internet needed 20 years to take over the world. All of the companies of the first dot-com bust are gone now. The tech is solid.

3 years into automating all white collar labor in 6 months.

  • lol, you must not be looking for a white-collar job outside of IT right now, then.

    The only thing that is overhyped is the bloodbath part: there is no white-collar bloodbath, but there is a white-collar slow bleed-out.

    Not mass firing events, but a transition by attrition over time: a bleed-out of jobs that don't get backfilled, and absolutely nothing in terms of hiring reserve capacity for the future.

    My current company is a sinking ship; I suspect it will go under in the next two years, so I have been trying to get off, but there is absolutely no place to go.

    In 2-3 years I expect to be unemployed and unemployable, needing to retrain to do something I have never done before.

    What is on display in this thread is that humans are largely denial machines. We have to be, otherwise we would be paralyzed by our own inevitable demise.

    It is more comforting to believe everything is fine and the language models are just some kind of Dogecoin-style tech hype bullshit.

>> Developers haven't even started extracting the value of LLMs with agent architectures yet.

What does this EVEN mean? Do words still have any value, or are we all just starting to treat them as the byproduct of probabilistic tokens?

"Agent architectures". Last time I checked an architecture needs predictability and constraints. Even in software engineering, a field for which the word "engineering" is already quite a stretch in comparison to construction, electronics, mechanics.

Yet we just spew the non-speak "agentic architectures" as if the innate inability of LLMs to manage predictable quantitative operations were not an unsolved issue. As if putting more and more of these things together will automagically solve their fundamental and existential issue (hallucinations) and suddenly make them viable for unchecked, automated integration.

  • It means I believe we currently underuse LLM capabilities, and that their empirical nature makes it difficult to assess their limitations without trying. I spent a few months studying LLMs from various angles before coming to this conclusion, as an experienced software engineer and consultant. I must admit it is, however, biased towards my experience as an SME and in my local ecosystem.

  • Hallucinations might get solved by faster, cheaper, and more accurate vision and commonsense-physics models. Hypothesis: hallucinations are a problem only because physical reality isn't text. Once people switch to models that predict physical states instead of missing text, we'll have domestic robots and lower hallucination rates.

    • Where is the training data for that? LLMs work because we already had tons of text that could be obtained cheaply. Where is the training data for physical reality?

They're doing it so much it's practically a cliché.

There are underserved areas of the economy, but agentic startups are not one of them.

> Developers haven't even started extracting the value of LLMs with agent architectures yet.

For sure there is a portion of developers who don't care about the future, are not interested in current developments, and just carry on as before hoping nothing will change. But the rest have already given it a try and realized that tools like Claude Code can give excellent results on small codebases yet fail miserably at more complex tasks, with a net result that is negative: you end up with a codebase you don't understand, full of subtle bugs and inconsistencies created over a few days that you will need weeks to discover and fix.

  • This is a bit developer-centric; I am much more impressed by the opportunities I see in consulting rather than applying LLMs to dev tasks. And I am still impressed by the code they can output, even though we are still in the funny-intern stage in this area.

    • > I am much more impressed by the opportunities I see in consulting rather than applying LLMs to dev tasks.

      I expect there'll be a lot of consulting work in the near future in cleanup and recovery from LLM-generated disasters.

> Developers haven't even started extracting the value of LLMs with agent architectures yet.

Which is basically what? The infinite monkey theorem? Brute-forcing solutions to problems at huge cost? Somehow people have been tricked into actually embracing and accepting that they now have to pay subscriptions of $20 to $300 to freaking code? How insane is that: something that had a very low barrier to entry, something that anyone could do, is now being turned into some sort of classist system where the future of code is subscriptions paid to companies run by sociopaths who don't care that the world burns around them, as long as their pockets are full.

  • I cannot emphasize enough how much I agree with this comment. Thank you for writing it; I could never have written it as well.

  • I don't have a subscription, not even an OpenAI account (mostly because they messed up their Google account system). You can't extract the value of an LLM by just using the official UI; you only scratch the surface of how they work. And yet there aren't many developers able to build an actual agent architecture that delivers some value. I don't include the "thousands" of startups, which clearly suffer from a signaling bias: they don't exist in the real economy and I don't factor them into my reasoning at all. I am talking about actual LLM developers that you can recruit locally the same way you recruit a web developer today, and who can make sense of "frontier" LLM garbage talk by using proper architectures. These devs are not there yet.
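
    To make "agent architecture" a bit more concrete, here is a rough sketch of the minimal loop I have in mind (assuming an OpenAI-compatible API; the lookup_customer tool is a made-up example): the model decides which tool to call, our own code executes it and feeds the result back, and the loop repeats until the model answers.

    ```python
    # Minimal agent-loop sketch: the LLM requests tool calls, our code runs them.
    # Assumes an OpenAI-compatible API; lookup_customer is a made-up placeholder
    # for a real system of record (CRM, ERP, database...).
    import json

    from openai import OpenAI

    client = OpenAI()

    def lookup_customer(name: str) -> dict:
        # Placeholder: in reality this would query an internal system.
        return {"name": name, "status": "active", "open_invoices": 2}

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "lookup_customer",
            "description": "Look up a customer record by name.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    }]

    messages = [{"role": "user",
                 "content": "Does ACME still have unpaid invoices?"}]

    for _ in range(5):  # hard cap so the loop always terminates
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS)
        msg = response.choices[0].message
        if not msg.tool_calls:
            print(msg.content)  # final answer, grounded in the tool results
            break
        messages.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            if call.function.name == "lookup_customer":
                result = lookup_customer(**args)  # our code runs the tool, not the model
                messages.append({"role": "tool",
                                 "tool_call_id": call.id,
                                 "content": json.dumps(result)})
    ```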

  • I pay $300 to fly from SF to LA when I could've just walked for free. It's true. How classist!