Comment by jal278

2 years ago

> But applied mathematics can have ethical impact -- e.g. the question of whether a human should trust the output of a particular language model. So GP's idea of 'trust' not applying because an object has its basis in math seems like a false dividing line. Ultimately, everything can be grounded in things such as math as far as we know, although it's not useful to reason about e.g. ethics by thinking about the mathematics of neuronal behavior.

This is not true. Lots of things have no mathematical foundations, because it is impossible to state them formally/symbolically. If you cannot specify something formally, then it is not mathematics. AI is mathematics, because software/code/hardware is mathematics, so all the hullabaloo about "safety" makes absolutely no sense except as a marketing gimmick. Even alignment has been co-opted by OpenAI's marketing department to sell more subscriptions.

But in any event, the endgame of AI is a machine god that perpetuates itself and keeps humans around as pets. That is the best-case scenario, because by most measures the developed world is already a mechanical apparatus, and the only missing piece for its perpetuation is a mechanical brain.

As usual, I can build this mechanical brain for $80B, so tell your VC friends.

  • I don't get this line of logic -- of course software has safety implications, because people use it for things in the real world. It isn't "math" that is cleanly separable from the rest of humanity; its training data comes from humanity, and it will be used toward human goals. AI is entangled with the rest of human dealings.

    Whether or not AI poses an existential threat to us, I'm open to either conclusion; but the fact that the experts (e.g. Hinton, LeCun) are divided is reason enough to be concerned.

    • The way safety is handled in real-world situations is through legal and monetary incentives. If the tanker you are driving to the gas station blows up, people get fired (no pun intended) and face legal repercussions. This is the case for anything that must operate in the real world: safety is defined and then legally enforced. AI safety is no different; if an AI system makes a mistake, then the operators of that system must be held liable. That's it -- everything else about extinction and other sci-fi plots has no bearing on how these systems should be deployed and managed.

      I have no idea what people are talking about when they say LLMs must be "safe." An LLM generates words; what exactly about words is unsafe?