Comment by Havoc
11 days ago
> it lacks the meta cognition to ask 'hey, is there a better way to do this?'.
Recently had an AI tell me that this code (which it wrote) is a mess and suggest wiping it and starting from scratch with a more structured plan. That seems to hint at the outlines of some metacognition.
Haha, it has the classic human developer traits: thinking all old code is garbage, failing to identify itself as the dummy who wrote this particular code, and wanting to start from scratch.
It's like NIH syndrome, except it's "not invented here today". Also a very human thing.
More like NIITS: Not Invented In This Session.
Perhaps. I've had LLMs tell me that code the exact same LLM wrote minutes earlier is deeply flawed garbage that should be rewritten. It could be a sign of deep metacognition, or it could be a cognitive gap: it has no idea why it did something a minute ago and suddenly has a different idea.
This is not a fair criticism. There is _nobody_ there, so you can't say 'code the exact same LLM wrote minutes before'. There is no 'exact same LLM' and no ideas for it to have; you're trying to make sense of sparkles off the surface of a pond. There's no 'it' to have an idea and then a different idea, much less deep metacognition.
I'm not sure we disagree. I was pushing back against the idea that suggesting a rewrite of some code implies metacognitive abilities on the part of the LLM. That seems like weak evidence to me.
They should’ve named him Tom instead of Claude, in homage to Ten Second Tom from 50 First Dates.
I asked Claude to analyze something and report back. It thought for a while, said "Wow, this analysis is great!", and then went back to thinking before delivering the report. They're auto-sycophantic now!
Someone will say, "You just need to instruct Claude.md to be more meta and do a wiggum loop on it."
Metacognition As A Service, you say?
Running on the Meta Cognition Protocol server near you.
You’ll get sued by Meta for this!
I think that’s called “consulting”.
lol no it doesn’t. It hints at convincing language models.