Comment by bbor
5 months ago
> Just as humans use a single brain for both quick responses and deep reflection, we believe reasoning should be an integrated capability of frontier models rather than a separate model entirely.
Interesting. I've been working on exactly this for a bit over two years, and I wasn't surprised to see UAI finally getting traction from the biggest companies -- but how deep do they really take it...? I've taken this philosophy as an impetus to build an integrated system of interdependent hierarchical modules, much like Minsky's Society of Mind, which has been influential in AI for decades. But this short blog post reads like it's describing a behavioral goal rather than a design paradigm.
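
To make the distinction concrete, here's a toy Python sketch of the kind of thing I mean by "interdependent hierarchical modules" -- every name here is invented for illustration, and obviously none of it reflects Anthropic's actual design. The point is that the fast path and the deliberative path are modules inside one system sharing working memory, not two separate models:

    from dataclasses import dataclass, field

    @dataclass
    class SharedState:
        """Working memory visible to every module in the hierarchy."""
        scratchpad: list[str] = field(default_factory=list)

    class FastResponder:
        def handle(self, query: str, state: SharedState) -> str | None:
            # Cheap heuristic: answer directly if the query looks simple.
            if len(query.split()) < 8:
                return f"quick answer to: {query}"
            return None  # defer to a deeper module

    class DeepReflector:
        def handle(self, query: str, state: SharedState) -> str:
            # Multi-step "reflection": intermediate thoughts land in the
            # shared scratchpad so other modules can see and build on them.
            state.scratchpad.append(f"decompose: {query}")
            state.scratchpad.append("synthesize sub-answers")
            return f"considered answer to: {query}"

    class IntegratedSystem:
        """One system with interdependent modules, not two models."""
        def __init__(self) -> None:
            self.state = SharedState()
            self.fast = FastResponder()
            self.deep = DeepReflector()

        def answer(self, query: str) -> str:
            # Try the fast path first; fall through to deliberation.
            return (self.fast.handle(query, self.state)
                    or self.deep.handle(query, self.state))

A "behavioral goal" version of this would just be two models behind a router; the design-paradigm version is the shared state and the hierarchy.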
Anyone happen to have insight on the details here? Or, even better, anyone from Anthropic lurking in these comments who'd care to give us some hints? I promise, I'm not a competitor!
Separately, the throwaway paragraph on alignment is worrying as hell, but that's nothing new. I maintain hope that Anthropic is keeping to their founding principles in private, and tracking more serious concerns than "unnecessary refusals" and prompt injection...
IIRC there's some reasoning in old Sonnet too; they're just expanding it. Perhaps that's part of why it was so good for a while.
https://www.reddit.com/r/ClaudeAI/comments/1iv356t/is_sonnet...