Comment by joseangel_sc

20 hours ago

> except the thing does not work as expected, and it just makes you worse, not better

Like I said, that's temporary. It's janky and wonky, but it's a stepping stone.

Just look at image generation. Actually, factually, look at it. We went from horror-colour vomit with eyes all over, to six-fingered humans, to pretty darn good now.

It's only a matter of time.

  • Why is image generation the same as code generation?

    • It's not. We were able to get rid of six-fingered hands by getting very specific and fine-tuning models with lots of hand and finger training data.

      But that approach doesn't work with code, or with reasoning in general, because you would need to fine-tune on an exponentially large set of cases, effectively everything in the universe. The illusion that the AI "understands" what it is doing is lost.

    • It isn't.

      Progress in LLM code generation still carries a higher objective risk of failure, depending on the experience of the person using it, because:

      1. You still cannot trust that the code works (even if it has tests); thus, it needs thorough human supervision and requires ongoing maintenance.

      2. Because of (1), it can cost you more money than the tokens you spent building it in the first place when it goes horribly wrong in production.

      Progress in image generation, by contrast, comes with close to no operational impact; it needs far less human supervision and can safely be done with none.


  • > Just look at image generation. Actually, factually, look at it. We went from horror-colour vomit with eyes all over, to six-fingered humans, to pretty darn good now.

    Yes, but you’re not taking into account what actually caused this evolution. At first glance, it looks like exponential growth, but then we see OpenAI (as one example) with trillions in obligations compared to 12–13 billion in annual revenue. Meanwhile, tool prices keep rising, hardware demand is surging (RAM shortages, GPUs), and yet new and interesting models continue to appear. I’ve been experimenting with Claude over the past few days myself. Still, at some point, something is bound to backfire.

    The AI "bubble" is real, you don’t need a masters degree in economics to recognize it. But with mounting economic pressures worldwide and escalating geopolitical tension we may end up stuck with nothing more than those amusing Will Smith eating pasta videos for a while.

Comments like these are why I hardly ever browse HN anymore.

  • Nothing new. Whenever a new layer of abstraction is added, people say it's worse and will never be as good as the old way. It's a totally biased opinion, though; as human beings, we just have trouble giving up things we like.

    • > Whenever a new layer of abstraction is added

      LLMs aren't a "layer of abstraction."

      99% of people writing in assembly never have to drop down to hand-assembling machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at layer 7 (the application layer) almost never drop to layer 5 (the session layer), and virtually no one even bothers to understand the magic at layers 1 and 2. These all represent successful, effective abstractions for developers.

      In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.


That's your opinion, and you can choose not to use those tools.

People are paying for it because it helps them. Who are you to whine about it?

  • But that's the entire flippin' problem. People are being forced to use these tools professionally at a staggering rate. It's like the industry is in its "training your replacement" era.