Comment by cv5005
9 hours ago
Yeah, I don't think a single current LLM would fool me in a Turing test - I would obviously use all kinds of prompt injection techniques, ask about 'dangerous' or controversial topics, ask about random niche facts in varied fields, etc.