Comment by steveBK123

6 hours ago

Humans can be governed by rules with consequences and replaced with individuals who have an appropriate level of risk taking / rule following for the role.

Rules and consequences seem to govern humans in much the same way that prompts and harnesses govern LLMs. The more power a human possesses, the less they are governed by these restraints; that doesn't apply to LLMs, so in that respect at least they are an improvement. But yeah, we can't really punish or inflict pain on them, and that seems like a problem.

  • I think a simpler model is variety.

    There are billions of people, you can interview/hire/fire until you get the right match.

    There are maybe 2 frontier LLM providers, or 5 if you are more generous / ok with more trailing-edge options.

    Everyone thought OpenAI was great, until Claude got better in Q1 and they switched to Anthropic, and then Codex got better and a good chunk moved back to OpenAI. It seems kind of binary currently.

  • Why does it matter if you can inflict pain on them? Is that normal and acceptable in your line of work?

    • Being able to fire someone, thus causing potentially significant hardship, is considered quite normal and acceptable in most lines of work.

Which has, famously, been a great consolation for people who suffered the consequences of human failure in the past.

That seems like it applies just fine to LLMs as well: you can replace an LLM with a different model, different prompts, etc., to get the appropriate level of risk taking. Rule following is even easier, given that you can sandbox them.
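
To make that concrete, here's a minimal, hypothetical sketch (Python, no real provider API; every name in it is made up for illustration) of the idea: "risk appetite" becomes swappable config, and "rule following" becomes an allow-list check enforced by the harness rather than a behavior you hope the model exhibits.

```python
from dataclasses import dataclass


@dataclass
class AgentConfig:
    # Which model "fills the role" -- swapping this is the hiring/firing analogue.
    model: str
    # The standing rules for the role; the prompt plays the part of a rulebook.
    system_prompt: str
    # A crude dial for risk taking.
    temperature: float = 0.2
    # Allow-list enforced by the harness, not by the model's good behavior.
    allowed_tools: frozenset = frozenset({"read_file", "run_tests"})


def execute_tool(config: AgentConfig, tool: str, args: dict) -> str:
    """Sandbox layer: the model may *request* any tool, but only
    allow-listed ones ever run. Rule following is structural here."""
    if tool not in config.allowed_tools:
        return f"refused: {tool!r} is outside this config's sandbox"
    # A real harness would dispatch to an actual sandboxed implementation;
    # this stub just reports what would have run.
    return f"ran {tool} with {args}"


# "Replacing the employee" is just constructing a different config:
cautious = AgentConfig(model="model-a", system_prompt="Be conservative.")
bold = AgentConfig(
    model="model-b",
    system_prompt="Move fast.",
    temperature=0.8,
    allowed_tools=frozenset({"read_file", "run_tests", "write_file"}),
)

print(execute_tool(cautious, "write_file", {"path": "notes.txt"}))  # refused
print(execute_tool(bold, "write_file", {"path": "notes.txt"}))      # runs
```

The point of the sketch is that swapping the "employee" is a one-line config change, and the rules hold because they are enforced structurally, so nothing ever needs to be punished into compliance.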