Comment by globnomulous

7 days ago

Very few people predicted LLMs, yet lots of people are now very certain they know what the future of AI holds. I have no idea why so many people have so much faith in their ability to predict the future of technology, when the evidence that they can't is so clear.

It's certainly possible that AI will improve this way, but I'd wager it's extremely unlikely. My sense is that what people are calling AI will later be recognized as obviously steroidal statistical models that could do little more than remix and regurgitate in convincing ways. I guess time will tell which of us is correct.

If those statistical models are helping you do better research, or doing most of it better than you can, does it matter? People act like models are inherently bad because they're statistical, which makes no sense at all.

If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self-improvement. And yes, it will progress because that's the nature of it doing meaningful work.

  • > If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self-improvement.

    That's a lot of vague language. I don't really see any way to respond. I suppose I can say this much: the usefulness of a tool is not proof of the correctness of the predictions we make about it.

    > And yes, it will progress because that's the nature of it doing meaningful work.

    This is a non sequitur. It makes no sense.

    And I never said there's anything bad about or wrong with statistical models.