
Comment by CamperBob2

5 days ago

(Shrug) If you're retired or independently wealthy, you can afford that attitude. Hopefully one of those describes you.

Otherwise, you're going to spend the rest of your career saying things like, "Well, OK, so the last model couldn't count the number of Rs in 'Strawberry' and the new one can, but..."

Personally, I dislike being wrong. So I don't base arguments on points that have a built-in expiration date, or that are based on a fundamental misunderstanding of whatever I'm talking about.

Every model is eventually deprecated if evidence-based science is done well, and hopefully replaced by something more accurate. There is no absolute right/correctness unless you are a naive child under 25 cheating on structured homework.

The point was there is nothing intelligent (or AI) about LLMs except the person fooling themselves.

In general, most template libraries already implement the best possible algorithms, many dating from the 1960s, tuned with architecture-specific optimizations. Knowing when each finite option is appropriate takes a bit of understanding/study, but it gives results far more quickly than fitting a statistically salient nonsense answer. Study data from senior developers are already available, and they show LLMs provide zero benefit to people who know what they are doing.
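
For instance, a minimal sketch using C++'s standard library (assuming that is the kind of template library in question); the skill is matching the requirement to the classic algorithm the library already ships:

```cpp
// Illustrative only: three different requirements, three different
// already-tuned standard algorithms.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{9, 3, 7, 1, 8, 2, 6, 4, 5};

    // Need the full ordering? std::sort (introsort) is O(n log n).
    std::vector<int> sorted = v;
    std::sort(sorted.begin(), sorted.end());

    // Only need the median? std::nth_element does selection in O(n)
    // on average, without sorting everything.
    std::vector<int> sel = v;
    auto mid = sel.begin() + sel.size() / 2;
    std::nth_element(sel.begin(), mid, sel.end());

    // Searching already-sorted data? Binary search beats a linear scan.
    bool found = std::binary_search(sorted.begin(), sorted.end(), 7);

    std::cout << "median: " << *mid
              << ", contains 7: " << std::boolalpha << found << '\n';
    return 0;
}
```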

Note, I am driven by having fun, rather than some bizarre irrational competitiveness. Prove your position, or I will assume you are just a silly person or chat bot. =3

  • I have no position on whether or not CamperBob is a chat-bot, but they are definitely not being silly. Their point, as I take it, is that it's dangerous to look at the state of "AI" as it is today and then ignore the rate of change. To their stated point from above:

    > Otherwise, you're going to spend the rest of your career saying things like, "Well, OK, so the last model couldn't count the number of Rs in 'Strawberry' and the new one can, but..."

    That's a very important point. I mean, it's not guaranteed that any form of AI is going to advance to the point that it starts taking jobs from people like us, but if you fail to look forward, project a little bit, and imagine what these systems could do with another year of progress... or two years... or five... then I posit that that kind of myopia could leave one very under-prepared for the world one lands in.

    > The point was there is nothing intelligent (or AI) about LLMs except the person fooling themselves.

    Sure. The "AI Effect". Irrelevant. It doesn't matter how the machine can do your job, or whether or not it's "really intelligent". What matters is that if it can create more value, more cheaply, than you or I can, we are going to wind up like John Henry, who, btw, for anybody not familiar with that particular bit of folklore, "[won the race against the machine] only to die in victory with a hammer in hand as his heart gave out from stress."

    • Both you and this chatbot Bob seem to be overly excited by the newfound LLM ability to correctly count the R's in "strawberry".

      For many, this is not a very exciting development.

      Mind you, we do follow the progress, but your argument of "wait and see" does not deserve serious discussion, as your stance has turned into faith.

    • Speculative fiction is entertaining, but not based in reality...

      "they are definitely not being silly", that sounds like something a silly person would say. =)

      " I posit that that kind of myopia could leave one very under-prepared for the world one lands in." The numerous study data analysis results says otherwise... Thus, still speculative hype until proven otherwise.

      Not worried... initially suckered into it as a kid too... then left the world of ML years later because it was always boring, lame, and slow to change. lol =3

    • > it's dangerous to look at the state of "AI" as it is today and then ignore the rate of change.

      It's self-driving all over again!