Comment by GardenLetter27

1 month ago

We already have AGI in some ways though. Like I can use Claude for both generating code and helping with some maths problems and physics derivations.

It isn't a specific model for any of those problems, but a "general" intelligence.

Of course, it's not perfect, and it's obviously not sentient or conscious, etc. - but maybe general intelligence doesn't require or imply that at all?

For me, general intelligence from a computer will be achieved when it knows when it's wrong. You may say that humans also struggle with this, and I'd agree - but I think there's a difference between general intelligence and consciousness, as you said.

  • Being wrong is one thing; knowing that they don't know something is something humans are pretty good at (even if they might not admit to not knowing something and start bullshitting anyways). Current AI predictably fails at this every single time.

    • > knowing that they don't know something is something humans are pretty good at (even if they might not admit to not knowing something and start bullshitting anyways)

      I'd like to believe this, but I'm not a mind reader and I feel like the last decade has eroded a lot of my trust in the ability of adults to know when they're wrong. I still have hope for children, at least.