Comment by skeledrew
21 hours ago
> We also will build technical safeguards to ensure our models behave as they should
A bold statement. It would appear they've definitively solved prompt injection and all the other ills that LLMs have been susceptible to. And forgot to tell the world about it.
/s