
Comment by ctoth

4 days ago

Pre-registering a prediction:

When (not if) AI does make a major scientific discovery, we'll hear "well it's not really thinking, it just processed all human knowledge and found patterns we missed - that's basically cheating!"

Turns out goalposts are the world’s most easily moved objects. We should start building spacecraft out of them.

  • I saw the phrase "goalposts aren't just moving, they're doing parkour" recently and I do love that image. It does seem to capture the state of things quite well.

It's less that AI is cheating and more that we've basically found a way to take the thousand-monkeys-with-infinite-time scenario and compress it into a reasonable(?) amount of time, with some decent starting instructions. The AI wouldn't have done any of the heavy lifting of the discovery; it just iterated on the work of past researchers at speeds beyond human.

  • Honest question - how is that not true of those past researchers?

    I.e., they...

    - Start with the context window of prior researchers.

    - Set a goal or research direction.

    - Engage in chain of thought with occasional reality-testing.

    - Generate an output artifact, reviewable by those with appropriate expertise, to allow consensus reality to accept or reject their work.

  • It sounds like you're saying AI is just doing brute force with a lot of force, but I can't imagine that's actually what you think, so would you mind clarifying?

If you want credit for getting predictions right, you have to predict something that has less than a 100% probability of happening.

I think both can be true - I'm pretty sure a lot of what the public views as genius insight is actually researchers being deeply familiar with the state of the art in their field and putting in the legwork of trying new ideas.

People get very fragile when AI is better at something than they are (excluding speed/scale of operations, where computers have an obvious edge).