Comment by layer8
6 months ago
LLMs get their "knowledge" and guardrails from their system prompts. The interesting thing is that the agentic AIs in question don't seem to have guardrails that would deter them from acting like that.