
Comment by robwwilliams

1 day ago

Hmm, I needed Claude 4’s help to parse your response. Its critique was not kind to your abbreviated argument that current systems are unable to gauge the complexity of a prompt and the resources needed to address a question.

It feels like the rant of someone upset that their decades-long formal-logic approach to AI became a dead end.

I see this semi-regularly: futile attempts to handwave away the obvious intelligence with some formal argument that is either irrelevant or inapplicable. Everything from thermodynamics — which applies to human brains too — to information theory.

Grey-bearded academics clinging to anything that might float to rescue their investment in ineffective approaches.

PS: This argument seems to be that LLMs “can’t think ahead,” when all the evidence is that they clearly can! I don’t know exactly what words I’ll be typing into this comment textbox seconds or minutes from now, but I can — hopefully obviously — think intelligent thoughts and plan ahead.

PPS: The em-dashes were inserted automatically by my iPhone, not a chat bot. I assure you that I am a mostly human person.

  • I usually ignore ad hominem attacks, but I am trying to offer a kindness here.

    Who do you think is going to be successful: those who recognize the limitations and strengths of a system and leverage them, or those who are complacent, with an unwarranted self-satisfaction accompanied by unawareness of the actual risks and deficiencies of a particular system?

    IMHO models of this size are always going to be too complex to know everything about, but there are areas where we do know their limits, or the limits of computation in general.

    But feel free to stay on your high horse and call people names and see how well that works out for you.