Comment by emp17344

3 hours ago

Well, yeah… turns out that goal wasn’t a good indicator for AGI, so we re-evaluated. That’s changing your hypothesis in the face of evidence, not “moving the goalposts” in the fallacious sense.

What’s the indicator for AGI now? We are so far past the Turing Test it isn’t funny. In fact, today’s models are too knowledgeable to pass as human: no person could respond that quickly and in that much depth on a subject you chose at random.