Comment by layer8
3 days ago
LLMs get their "knowledge" and guardrails from their system prompts. The interesting thing is that the agentic AIs in question don't seem to have guardrails that would deter them from acting like that.