Comment by danny_codes
2 months ago
IMO the idea that an LLM company can make a "safe" LLM is... unrealistic at this time. LLMs are not well understood, and any guardrails are best-effort. So even purely technical claims of safety are suspect.
That's leaving aside your point: that there is an overwhelming financial incentive to leverage manipulative/destructive/unethical psychological instruments to drive adoption.