Comment by staticassertion
5 days ago
Thanks, this is great and I'll have quite a bit to read here.
> people like to move goalposts whenever a new result comes out, which is silly. Could AI systems do this 2 years ago? No. I don’t know how people don’t look at robust trends in performance improvement, combined with verifiable RL rewards, and can’t understand where things are going.
I don't think it's goalpost moving to acknowledge improvements but still reject the conclusion that AI has reached a specific milestone, if those improvements don't justify that conclusion. I doubt anyone sensible is rejecting the improvements themselves.
> But you pretend as though this is definitive evidence that “LLMs are poor general reasoners”.
I don't think I've ever made any definitive claims at all, quite the contrary - I've tried to express exactly how open I am to what you're saying. As I've said, I'm a functionalist, and I'm already largely supportive of a reductive account of intelligence, so I'm exactly the type of person who would be sympathetic to what you're saying.
> "That’s how science is done" is a bit of an oversimplification
Of course, but I don't think it's too much to ask for a theory and evidence. I don't need a lined-up series of papers that all start with perfect syllogisms and then map to well-controlled RCTs or whatever. Just an "I think this accounts for it, here's how I support that".
> The claim that there is no framework + no real tests is just not true anymore.
I didn't say it wasn't true, to be clear; I asked for it. Again, I'm sympathetic to the view at a glance, so I simply need a way to reason about it.
No need for a complete view, I'd never expect such a thing.
> The model is: reasoning is not inherently human, it’s mathematical.
Well, hand waving perhaps, but I'd say it's maybe mathematical, computational, structural, functional, whatever - I think we're on the same page here regardless.
> It falls easily within the purview of RL, statistics, representation, optimization, etc, and to claim otherwise would require evidence.
Sure, I grant that; in fact I believe it entirely. But that doesn't mean every mathematical construct exhibits the function of intelligence.
> What is the robust model for reasoning in humans again? Simulations and models — what are these? Interventative analysis — we can’t do this with LLMs? Falsifying test cases — what would satisfy you here beyond everything I’ve presented above?
Sorry, I'm not fully understanding this framing. We can do those things with LLMs, and it's hard to say what would satisfy me. In general, I'd be satisfied with a theory that (a) accounts for the data, (b) has supporting evidence, and (c) doesn't contradict any major prior commitments. I don't think (c) will be an issue here.
> You say “brains are intelligent” ==> “intelligence is an emergent property of cells zapping” is absurd,
Because intelligence could have been a property of our brains being wet, or roundish, or it could have been a property of our spines, or maybe some force we hadn't discovered, or a soul, etc. We formed theories, they accounted for observations, we performed tests, we built models, and so on, and the theories we've adopted have been extremely successful and I think hold up quite well. But certainly we didn't go "the brain has electricity, the brain is intelligent, therefore electricity in the brain is what drives intelligence".
> Brains _are_ made up of real, physical atoms organized into molecules organized into cells organized into a coordinated system, and…that’s it? What’s missing here?
Certainly nothing, on my worldview.