
Comment by davorak

5 days ago

> Can humans actually do that?

From my reading yes, but I think I am likely reading the statement differently than you are.

> from first principles

Doing things from first principles is a known strategy, as are guess-and-check, brute-force search, and so on.

For an LLM to follow a first-principles strategy, I would expect it to take in a body of research, come up with some first principles (or guess at them), then iteratively construct a tower of reasoning/findings/experiments.

Constructing a solid tower is where things are currently improving for existing models, in my mind, but when I try the OpenAI or Anthropic chat interfaces, neither does a good job for long, at least not independently.

Humans also often have a hard time with this. In general it is not a skill that everyone has, and I think you can be a successful scientist without ever heavily developing first-principles problem solving.

"Constructing a solid tower" from first principles is already super-human level. Sure, you can theorize a tower (sans the "solid") from first principles; there's a software architect at my job that does it every day. But the "solid" bit is where things get tricky, because "solid" implies "firm" and "well anchored", and that implies experimental grounds, experimental verification all the way, and final measurable impact. And I'm not even talking particle physics or software engineering; even folding a piece of paper can give you surprising mismatches between theory and results.

Even the realm of pure mathematics and elegant physics theories, where you are supposed to take a set of axioms ("first principles") and build something with them, has cautionary tales such as Russell's paradox or the lack of a well-defined measure for the Feynman path integral, and let's not talk about string theory.
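
(To spell out the Russell example, since it is the cleanest of the three: the "first principle" of unrestricted set comprehension collapses in two lines. This is just the standard textbook derivation, nothing specific to the discussion above.)

```latex
% Russell's paradox: unrestricted comprehension lets you form
% the set of all sets that do not contain themselves...
R \;=\; \{\, x \mid x \notin x \,\}
% ...and asking whether R contains itself is contradictory either way:
R \in R \iff R \notin R
```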