Comment by RainyDayTmrw

3 months ago

What should be the solution here? There's a thing that, despite how much it may mimic humans, isn't human, and doesn't operate on the same axes. The current AI neither is nor isn't [any particular personality trait]. We're applying human moral and value judgments to something that doesn't, can't, hold any morals or values.

There's an argument to be made for "don't use the thing for purposes it wasn't intended for." There's another argument that the creators of the thing should be held to some baseline of harm prevention: if a thing can't be done safely, then it shouldn't be done at all.

The solution is to make a public leaderboard with scores; all the LLM developers will work hard to maximize their scores on the leaderboard.