Comment by ccozan

1 day ago

Actually, why not? Recognizing problem complexity as a first step is really crucial for such expensive "experts". Humans do the same.

And a question to the knowledgeable: does a simple/stupid question cost more in resources than a complex problem, in terms of power consumption?

IIRC that isn't possible under current models, at least in general, for multiple reasons: attention cannot attend to future tokens, the fact that they are existential logic, that they are really NLP and not NLU, etc.
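The "cannot attend to future tokens" point refers to the causal mask used in decoder-only transformers. A minimal NumPy sketch (my own illustration, not code from any particular model) shows how the mask zeroes out attention weights on positions to the right:

```python
import numpy as np

def causal_attention(q, k, v):
    """Scaled dot-product attention with a causal mask:
    position i may only attend to positions j <= i."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)                  # (t, t) attention logits
    mask = np.triu(np.ones((t, t), dtype=bool), k=1)
    scores[mask] = -np.inf                         # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over allowed positions
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, w = causal_attention(x, x, x)
print(np.allclose(np.triu(w, k=1), 0.0))  # True: no weight on future positions
```

The strictly upper-triangular part of the weight matrix is exactly zero, which is the formal sense in which a token's representation cannot depend on tokens that come after it.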

Even proof mining and Harrop formulas have to exclude disjunction and existential quantification to stay away from intuitionistic math.

IID in PAC/ML implies PEM, which is also intentionally existential quantification.

This is the most gentle introduction I know of [0], but remember LLMs are fundamentally set-shattering, and also produce disjoint sets.
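"Shattering" here is the PAC-learning notion: a hypothesis class shatters a point set if it can realize every possible labeling of it. A tiny sketch of my own (not from the linked report), using 1-D threshold classifiers, shows the idea and where it breaks down:

```python
from itertools import product

def threshold_labels(points, t):
    # 1-D threshold classifier: label 1 iff x >= t
    return tuple(int(x >= t) for x in points)

def shatters(points):
    """True iff threshold classifiers realize every 0/1 labeling of `points`."""
    # Only thresholds at/around the points matter; sample one per "gap".
    thresholds = [min(points) - 1] + list(points) + [max(points) + 1]
    achievable = {threshold_labels(points, t) for t in thresholds}
    return all(lab in achievable for lab in product((0, 1), repeat=len(points)))

print(shatters([0.5]))       # True: a single point can be labeled either way
print(shatters([0.0, 1.0]))  # False: the labeling (1, 0) is unreachable
```

Thresholds shatter any one point but no two, so their VC dimension is 1; richer classes (like the function classes LLMs realize) shatter much larger sets, which is what the IID/PAC framing quantifies.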

We are just at reactive, model-based systems now; much work is needed to even approach this, if it is ever even possible.

[0] https://www.cmu.edu/dietrich/philosophy/docs/tech-reports/99...

  • Hmm, I needed Claude 4’s help to parse your response. The critique was not too kind to your abbreviated arguments that current systems are not able to gauge the complexity of a prompt and the resources needed to address a question.

    • It feels like the rant of someone upset that their decades of formal-logic approach to AI became a dead end.

      I see this semi-regularly: futile attempts at handwaving away the obvious intelligence by some formal argument that is either irrelevant or inapplicable. Everything from thermodynamics — which applies to human brains too — to information theory.

      Grey-bearded academics clinging to anything that might float to rescue their investment into ineffective approaches.

      PS: This argument seems to be that LLMs “can’t think ahead” when all evidence is that they clearly can! I don’t know exactly what words I’ll be typing into this comment textbox seconds or minutes from now but I can — hopefully obviously — think intelligent thoughts and plan ahead.

      PPS: The em-dashes were inserted automatically by my iPhone, not a chat bot. I assure you that I am a mostly human person.

Just put in the prompt customization to model responses on Marvin from Hitchhiker's Guide.

"Here I am, brain the size of a planet, and they ask me to ..."

> ... Recognizing problem complexity as a first step...

Well, I don't think it's easy, or even generally possible, to recognize a problem's complexity. Imagine you ask for a solution to a simply expressed statement like: find an n > 2 and positive integers x, y, z such that x^n + y^n = z^n. The answer you receive will be based on a model trained on this well-known problem, but if it's not in the model, it could be impossible to measure its complexity.
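The statement is Fermat's Last Theorem in disguise: trivially short to write down, yet its resolution (Wiles, 1995) took centuries. A brute-force sketch makes the asymmetry concrete; nothing in the code hints that the search is provably futile for n > 2:

```python
from itertools import product

def find_counterexample(limit, n):
    """Search for x^n + y^n == z^n with 1 <= x <= y < z.
    By Fermat's Last Theorem, no solution exists for n > 2,
    but the search itself cannot reveal that."""
    for x, y in product(range(1, limit + 1), repeat=2):
        if x > y:
            continue
        s = x**n + y**n
        z = round(s ** (1.0 / n))
        for cand in (z - 1, z, z + 1):   # guard against float rounding
            if cand > y and cand**n == s:
                return (x, y, cand)
    return None

print(find_counterexample(limit=50, n=3))  # None
```

A system judging complexity from the surface form alone would rate this on par with "find an n where 2^n > 100", which is exactly the point: problem complexity is not a syntactic property of the prompt.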