Comment by rudhdb773b
2 days ago
The critical point is who the "we" is.
Is "we" the parents teaching their children their own unique values, or is "we" a government or corporation forcing one set of values on all children?
Why not encourage the users of AI to use a Safety.md (populated with some reasonable but optional defaults)?
There's nothing a meaningless document can do when the AI is not aligned in the first place.
"Alignment" is the computer-science version of "consciousness" (in the philosophical, not medical, sense): a totally subjective, immeasurable concept.
I think you're misunderstanding the term "alignment". You could essentially replace "aligned" with "working" and "misaligned" with "broken".
A washing machine has one goal: to wash your clothes. A washing machine that does not wash your clothes is broken.
An AI system has some goal. A target acquisition AI system might be tasked with picking out enemies and friendlies from a camera feed. A system that does so reliably is working (aligned); a system that doesn't is broken (misaligned). There's no moral or philosophical angle necessary unless your goal already includes one. Aligned doesn't mean good, and misaligned doesn't mean evil.
The problem comes when your goal includes moral, ethical and philosophical judgements.
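The "aligned = matches the designer's goal" framing above can be made concrete with a toy sketch. Everything here is hypothetical (the function names, the threshold, the sensor-reading setup are invented for illustration): the designer's goal is treated as a specification, and a system counts as aligned exactly to the degree its behavior agrees with that spec.

```python
# Toy sketch: "aligned" as agreement with a specification (all names hypothetical).

def intended_behavior(reading: int) -> str:
    """The spec: what the designer wants the classifier to do."""
    return "friendly" if reading < 50 else "enemy"

def deployed_system(reading: int) -> str:
    """The system as actually built -- here with a wrong threshold (a bug)."""
    return "friendly" if reading < 70 else "enemy"

def alignment_score(system, spec, test_cases) -> float:
    """Fraction of test cases where the system's output matches the spec."""
    matches = sum(system(x) == spec(x) for x in test_cases)
    return matches / len(test_cases)

cases = range(100)
score = alignment_score(deployed_system, intended_behavior, cases)
print(f"agreement with spec: {score:.2f}")  # -> 0.80: "misaligned", i.e. broken
```

Note that nothing in the score is moral: it only measures whether the system does what its goal says. Whether the goal itself is good is the separate question raised above.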