Comment by ofjcihen

6 days ago

I feel like we get one of these articles that addresses valid AI criticisms with poor arguments every week, and at this point I’m ready to write a boilerplate response because I already know what they’re going to say.

Interns don’t cost 20 bucks a month, but training users in the specifics of your org is important.

Knowing what is important or pointless comes with understanding the skill set.

I feel the opposite, and pretty much every metric we have shows basically linear improvement of these models over time.

The criticisms I hear are almost always gotchas, and when confronted with the benchmarks they either don’t actually know how they are built or don’t want to contribute to them. They just want to complain or seem like a contrarian from what I can tell.

Are LLMs perfect? Absolutely not. Do we have metrics to tell us how good they are? Yes.

I’ve found very few critics that actually understand ML on a deep level. For instance, Gary Marcus didn’t know what a train/test split was. Unfortunately, rage bait like this makes money.

  • Models are absolutely not improving linearly. They improve logarithmically with size (see the scaling-law sketch below), and we've already just about hit the limits of compute without becoming totally unreasonable from a space/money/power/etc. standpoint.

    We can use little tricks here and there to try to make them better, but fundamentally they're about as good as they're ever going to get. And none of their shortcomings are growing pains - they're fundamental to the way an LLM operates.
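
    For context on what "logarithmically with size" means, the empirical scaling-law fits reported in the literature (Kaplan et al. 2020; Hoffmann et al. 2022) model test loss as a power law in parameter count N and training tokens D. The form below is a rough sketch of those published fits; the exponents are quoted approximately, not exactly.

    ```latex
    % Approximate form of the empirical LLM scaling law (after Hoffmann et al., 2022).
    % Exponents are rough, published estimates; treat them as illustrative.
    L(N, D) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}},
    \qquad \alpha \approx 0.34, \quad \beta \approx 0.28
    ```

    Under a power law like this, each equal step down in loss costs multiplicatively more parameters and data, which is the sense in which returns to raw scale flatten out.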

    • Most of the benchmarks are in fact improving linearly, and we often don't even know the size. You can see this just by looking at the scores over time.

      And yes, it often is small things that make models better. It always has been; bit by bit they get more powerful, and this has been happening since the dawn of machine learning.

    • remember in 2022 when we "hit a wall"? everyone said that back then. turned out we didn't.

      and in 2023 and 2024 and january 2025 and ...

      all those "walls" collapsed like paper. they were phantoms; ppl literally thinking the gaps between releases were permanent flatlines.

      money obviously isn't an issue here; VCs are pouring in billions upon billions. they're building whole new data centres and whole fucking power plants for these things; electricity and compute aren't limits. neither is data, since increasingly the models get better through self-play.

      >fundamentally they're about as good as they're ever going to get

      one trillion percent cope and denial

  • "pretty much every metric we have shows basically linear improvement of these models over time."

    They're also trained on random data scraped off the Internet, which might include benchmarks, code that looks like them, and AI articles with things like chain of thought. There's been some effort to filter obvious benchmarks (see the sketch below), but is that enough? I can't know if the AIs are getting smarter on their own or if more cheat sheets are in the training data.

    Just brainstorming, but one thing I came up with is training them on datasets from before the benchmarks (or much AI-generated material) existed. Keep testing algorithmic improvements on that in addition to models trained on up-to-date data. That might be a more accurate assessment.
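
    For reference, "filtering obvious benchmarks" usually means something like n-gram-overlap decontamination between the training corpus and benchmark test items. A minimal sketch, assuming plain exact-match filtering; the 13-token window mirrors the setup described in the GPT-3 paper, and the function and variable names here are made up for illustration, not a real pipeline:

    ```python
    # Minimal sketch of n-gram decontamination: drop training documents that share
    # a long n-gram with any benchmark test item. The 13-gram window follows the
    # commonly cited GPT-3 setup; everything here is illustrative.

    def ngrams(text: str, n: int = 13) -> set[tuple[str, ...]]:
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    def decontaminate(train_docs: list[str], benchmark_items: list[str], n: int = 13) -> list[str]:
        contaminated: set[tuple[str, ...]] = set()
        for item in benchmark_items:
            contaminated |= ngrams(item, n)
        # Keep only documents with no long exact overlap with any benchmark item.
        return [doc for doc in train_docs if not (ngrams(doc, n) & contaminated)]
    ```

    Exact-match checks like this miss paraphrases, translations, and near-duplicates, which is part of why "smarter, or just more cheat sheets?" is hard to settle from the outside.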

    • that's not a bad idea, very expensive though, and you end up with a pretty useless model in most regards.

      A lot of the trusted benchmarks today are somewhat dynamic or have a hidden set.

  • >I feel the opposite, and pretty much every metric we have shows basically linear improvement of these models over time.

    Wait, what kind of metric are you talking about? When I did my master's in 2023, SOTA models were trying to push the boundaries by minuscule amounts, and sometimes blatantly changing the way they measured "success" to beat the previous SOTA.

PLEASE write your response. We'll publish it on the Fly.io blog. Unedited. If you want.

  • Maybe make a video of how you're vibecoding a valuable project in an existing codebase, and how agents are saving you time by running your tools in a loop.

    • Seriously… that's the one thing I never see being posted. Is it because Agent mode will take 30-40 minutes to just bootstrap a project and create some file?

    • So they can cherry pick the 1 out of 10 times that it actually performs in an impressive manner? That's the essence of most AI demos/"benchmarks" I've seen.

      Testing for myself has always yielded unimpressive results. Maybe I'm just unlucky?

> with poor arguments every week

This roughly matches my experience too, but I don't think it applies to this one. It has a few things in it that were new ideas to me, and I'm glad I read it.

> I’m ready to write a boilerplate response because I already know what they’re going to say

If you have one that addresses what this one talks about I'd be interested in reading it.

  • >> with poor arguments every week

    >This roughly matches my experience too, but I don't think it applies to this one.

    I'm not so sure. The argument that any good programming language would inherently eliminate the concern about hallucinations seems pretty weak to me.

    • It’s a confusing one for sure.

      To be honest I’m not sure where the logic for that claim comes from. Maybe an abundance of documentation is the assumption?

      Either way, being dismissive of one of LLMs' major flaws and blaming it on the language doesn’t seem like the way to make that argument.

    • Why does that seem weak to you?

      It seems obviously true to me: code hallucinations are where the LLM outputs code with incorrect details - syntax errors, incorrect class methods, invalid imports, etc.

      If you have a strong linter in a loop, those mistakes can be automatically detected and passed back into the LLM to get fixed (a rough sketch of that loop is below).

      Surely that's a solution to hallucinations?

      It won't catch other types of logic error, but I would classify those as bugs, not hallucinations.

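      A minimal sketch of that loop, assuming a Python project and ruff as the example linter. The ask_llm_to_fix callable is a stand-in for whatever model call you use; it is not a real API.

      ```python
      # Minimal sketch of a lint-in-a-loop fix cycle. `ruff` is just an example linter;
      # `ask_llm_to_fix` is a placeholder for your model call, not a real API.
      import subprocess
      from pathlib import Path
      from typing import Callable

      def lint(path: Path) -> str:
          """Run the linter and return its diagnostics ('' means clean)."""
          result = subprocess.run(
              ["ruff", "check", str(path)], capture_output=True, text=True
          )
          return "" if result.returncode == 0 else result.stdout

      def fix_until_clean(path: Path, ask_llm_to_fix: Callable[[str, str], str],
                          max_rounds: int = 5) -> bool:
          for _ in range(max_rounds):
              diagnostics = lint(path)
              if not diagnostics:
                  return True  # hallucinated imports/methods/syntax are gone
              # Feed the current file plus the linter's complaints back to the model.
              path.write_text(ask_llm_to_fix(path.read_text(), diagnostics))
          return False
      ```

      A type checker or the project's test suite can slot into lint() the same way; anything that produces machine-checkable diagnostics closes the loop, though as noted it won't catch logic bugs.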

There's also the reverse genre: valid criticism of absolute strawman arguments that nobody actually makes.

  • Which of the arguments in this post hasn't occurred on HN in the past month or so?

    • "I tried copilot 2 years ago and I didn’t like it."

      Great article, BTW; it’s amazing that you’re now blaming developers smarter than you for the lack of LLM adoption, as if being useful weren’t enough for a technology to become widespread.

      Try to deal with "an agent takes 3 minutes to make a small transformation to my codebase, and it takes me another 5 to figure out why it changed what it did, only to realize that it was the wrong approach and redo it by hand, which takes another 7 minutes" in your next one.

What valid AI criticisms? Most criticisms of AI are neither very deep nor founded in complexity-theoretic arguments, whereas Yann LeCun himself gave an excellent one-slide explanation of the limits of LLMs. Most AI criticisms are low-quality arguments.

“Valid” criticism rarely comes from people who barely understand the difference between AI and LLMs and use the terms interchangeably.