Comment by jug
3 days ago
I'll always be skeptical about using AI to amplify AI. I think humans are needed to amplify AI, since humans are, so far, documented to be significantly more creative and proactive at pushing the frontier than AI. I know, it may be a radical concept to digest.
> I'll always be skeptical about using AI to amplify AI.
This project was in part written by Claude, so for better or worse I think we're at least 3 levels deep here (AI-written code which directs an AI to direct other AIs to write code).
I think I'm more optimistic about this than brute-forcing model training with ever larger datasets, myself. Here's why.
Most models I've benchmarked, even the expensive proprietary ones, tend to lose coherence once the context grows beyond a certain size. The thing is, they typically don't need the entire context to perform whatever step of the process is currently underway.
And there seems to be a lot of experimentation along the lines of having subagents curate the long-term view of the context and feed more focused work items to other subagents, which I find genuinely intriguing.
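The pattern described above can be sketched roughly like this: a "curator" keeps a bounded digest of the full history, and each worker subagent receives only that digest plus its own task, never the raw long context. This is a minimal toy sketch, not any project's real implementation; the class and function names are hypothetical, and the summarization and worker steps are plain-Python stand-ins for what would be model calls.

```python
# Hypothetical sketch of a curator/worker split.
# Real systems would replace the stand-in logic with LLM calls.

class Curator:
    """Maintains a bounded summary of the long-term context."""

    def __init__(self, max_items: int = 3):
        self.max_items = max_items
        self.summary: list[str] = []

    def ingest(self, event: str) -> None:
        # Stand-in for a summarization call: keep only the most
        # recent high-level events, dropping older detail.
        self.summary.append(event)
        self.summary = self.summary[-self.max_items:]

    def context_for(self, task: str) -> str:
        # The worker sees a short digest, not the raw history.
        return "Background: " + "; ".join(self.summary) + f"\nTask: {task}"


def worker(prompt: str) -> str:
    # Stand-in for a focused subagent acting on a small prompt.
    task = prompt.rsplit("Task: ", 1)[-1]
    return f"done: {task}"


curator = Curator()
for event in ["parsed repo layout", "found failing test",
              "isolated module X", "wrote fix plan"]:
    curator.ingest(event)

# Each worker gets a small, focused prompt regardless of
# how long the overall history has grown.
prompt = curator.context_for("apply the fix to module X")
print(worker(prompt))
```

The point of the design is that the context each worker sees stays roughly constant in size, which is exactly the regime where the models above stop losing coherence.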
My hope is that this approach will eventually become refined enough that we'll get dependable capability out of cheap open weight models. That might come in darn handy, depending on the blast radius of the bubble burst.
Based on clear, operational definitions, AI is definitely more creative than humans. E.g., can easily produce higher scores on a Torrance test of divergent thinking. Humans may still be more innovative (defined as creativity adopted into larger systems), though that may be changing.
More creative? I've just watched my premium-subscription "AI" struggle to find a trivial missing import in a very small toy project. Maybe these tools are racking up all sorts of scores on all sorts of benchmarks, I don't doubt it, but why are there no significant real-world results after more than three years of hype? It reminds me of the time the geniuses at Google interviewed the guy who created Homebrew and then rejected him after he supposedly did poorly on one of those algorithmic tasks (inverting a binary tree? - not sure if I remember correctly). There are also all sorts of people scoring super high on various IQ tests, but what counts, with humans as with the supposed AI, is real-world results. Benchmarks without results don't mean anything.
It is only as creative as its training material.
You think it is creative because you lack knowledge of what it has learnt.
This is absurd to the point of being comical. Do you really believe that?
If an “objective” test purports to show that AI is more creative than humans then I’m sorry but the test is deeply flawed. I don’t even need to look at the methodology to confidently state that.
It’s a fair point, I suppose, about being clear about what we mean. In psychology, we have to define terms operationally, in terms of measures — and that’s how human creativity has traditionally been measured. Not without critique, but true!
https://en.wikipedia.org/wiki/Torrance_Tests_of_Creative_Thi...
His comment must be fueled by his own lack of creativity. He has immersed himself in AI, and his own knowledge gap prevents him from even scratching the surface of his own stupidity.