Comment by somewhatgoated
6 hours ago
Rules and consequences seem to govern humans in much the same way that prompts and harnesses govern LLMs. The more power a human possesses, the less they are governed by these restraints; that doesn't apply to LLMs, so in that respect at least they are an improvement. But yeah, we can't really punish or inflict pain on them, which seems like a problem.
I think a simpler model is variety.
There are billions of people, you can interview/hire/fire until you get the right match.
There are maybe 2 frontier LLM providers. 5 if you're more generous / OK with the trailing edge.
Everyone thought OpenAI was great, until Claude got better in Q1 and they switched to Anthropic, and then Codex got better and a good chunk moved back to OpenAI. Seems kind of binary currently.
Why does it matter if you can inflict pain on them? Is that normal and acceptable in your line of work?
Being able to fire someone, thus causing potentially significant hardship, is considered quite normal and acceptable in most lines of work.
Yeah, I didn't mean actual physical violence, but don't rules need painful consequences in some way to be meaningful?