Comment by bee_rider
4 hours ago
If a human were being grilled like this by an LLM, I’d call that dystopian. If companies have LLMs that address each other in a somewhat adversarial manner, that seems not so bad. They don’t have feelings to protect, after all, so it’s kind of nice if they can cut through each other’s bullshit.
Imagine if there were some kind of way to compress the interrogation down to known-valid aspects, avoiding the parts that are unnecessary for machines. You could have some kind of a programmatic interface...
Yeah, let’s call it the Agent Prioritized Interrogation interface.
Yeah, I take your point. It seems like the idea, though, is to work with services that are specifically trying to expose some kind of special LLM-based interface. I dunno if that’s prominent or useful; I avoid that kind of thing.