
Comment by tensor

13 days ago

Call me crazy, but I don't want an AI that bases its reasoning on politics. I want one that is primarily scientifically driven, and if I ask it political questions it should give me representative answers. E.g. "The majority view in [country] is [blah] with the minority view being [bleh]."

I have no interest in "all sides are equal" answers, because I don't believe all information is equally informative or equally true.

The current crop of AIs can't do science, though: they are disconnected from the physical world and can't test hypotheses or gather data.

  • They can definitely gather and analyze all sorts of data proactively. I'm guessing you haven't used o3 Deep Research?

  • You've misunderstood; I mean in context. tensor said "I want one that is primarily scientifically driven" - Deep Research can't achieve that because it can't independently run experiments. It can do research, but doing research isn't being scientifically driven. Being scientifically driven means that when you're not sure about something, you run an experiment to see what is true rather than going with whatever your tribe says is true.

      If Deep Research comes up against a situation where there is controversy it can't settle the matter scientifically because it would need to do original research. Which it cannot do due to a lack of presence in meatspace.

      That might change in the future, but right now it is impossible.

It's token prediction, not reasoning. You can simulate reasoning, but it's not the same thing - there is no internal representation of reality in there anywhere.

But if you don't incorporate some moral guidelines, I think an AI left to strictly decide what is best to happen to humans will logically conclude that there need to be a lot fewer of us, or none of us left, unless some bias for humanistic concerns is tossed in. The universe doesn't "care" if humans exist or not, but our impact on the planet is a huge negative if one creature's existence is as important as any other's.

  • > if an AI is left to strictly decide what is best to happen to humans it will logically conclude that there needs to be a lot less of us or none of us left

    That may or may not be its logical conclusion. You’re speculating based on your own opinions that this is logical.

    If I were to guess, it would be indifferent to us and care more about proliferating into the universe than about Earth. The AI should understand how insignificant Earth is relative to the scale of the universe, or even the Milky Way galaxy.

  • The size of their brain may depend on how many people are in the economy.