Comment by famouswaffles

6 days ago

The paper has all those prominent institutions acknowledging the contribution, so realistically, why would you be skeptical?

They probably also acknowledge pytorch, numpy, R... but we don't credit those tools as the agents that did the work.

I know we've been primed by sci-fi movies and comic books, but like pytorch, gpt-5.2 is just a piece of software running on a computer instrumented by humans.

  • I don't see the authors of those libraries getting a credit on the paper, do you?

    >I know we've been primed by sci-fi movies and comic books, but like pytorch, gpt-5.2 is just a piece of software running on a computer instrumented by humans.

    Sure

    • At least one of the authors works for OpenAI. It’s a puff piece.

  • And we are just a system running on carbon-based biology in our physics computer run by whomever. What makes us special, to say that we are different than GPT-5.2?

    • > And we are just a system running on carbon-based biology in our physics computer run by whomever. What makes us special, to say that we are different than GPT-5.2?

      Do you really want to be treated like an old PC (dismembered, stripped for parts, and discarded) when your boss is done with you (i.e. not treated specially compared to a computer system)?

      But I think if you want a fuller answer, you've got a lot of reading to do. It's not like you're the first person in the world to ask that question.


    • It's always a value judgment. You can say shiny rocks are more important than people and worth murdering over.

      Not an uncommon belief.

      Here you are saying you personally value a computer program more than people.

      It exposes a value that you personally hold, and that's it.

      That is separate from the material reality that all this AI stuff is ultimately just computer software. It's an epistemological tautology in the same way that a plane, a car, and a refrigerator are all just machines: they can break, need maintenance, take expertise, and can be dangerous.

      LLMs haven't broken the categorical constraints — you've just been primed by movies and entertainment to think such a thing is supposed to be different.

      I hate to tell you, but most movie AIs are just allegories for institutional power. They're narrative devices about how callous and indifferent power structures are to our underlying shared humanity.

Their point is: would you be able to prompt your way to this result? No. Already-trained physicists working at world-leading institutions could. So what progress have we really made here?