Comment by coldtea

19 hours ago

>Anyone who would follow a mistake like that up with demanding a confession out of the agent is not mature enough to be using these tools. Lord, even calling it a "confession" is so cringe. The agent is not alive. The agent cannot learn from its mistakes

The problem is that millions of years of evolutionary wiring make us see it as alive. Even those mature enough to understand the above on a conscious level will still have a subconscious feeling that it's alive during interactions, or will slip into using agency/personhood language to describe it now and then.

They should at least stop responding in the first person.

  • That's one of the first instructions in my system prompt when I'm working with an LLM:

    > Do not reply in the first person – i.e. do not use the words "I," "Me," "We," and so on – unless you've been asked a direct question about your actions or responses.

    It's not bulletproof but it works reasonably well.

  • We need to make like the Japanese and come up with some neo first-person pronouns for bots to use to refer to themselves.
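The pronoun rule quoted above can be wired into any chat-style conversation as the system message. A minimal sketch, assuming an OpenAI-style message format (the helper name and the exact prompt wording are illustrative, not from the thread):

```python
# Carry the "no first person" rule as the system message in a
# chat-style payload. The dict shape follows the common
# {"role": ..., "content": ...} convention; names are illustrative.

NO_FIRST_PERSON = (
    "Do not reply in the first person - i.e. do not use the words "
    "'I,' 'Me,' 'We,' and so on - unless you've been asked a direct "
    "question about your actions or responses."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the system rule to every conversation."""
    return [
        {"role": "system", "content": NO_FIRST_PERSON},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize this diff.")
print(messages[0]["role"])  # system
```

Because the rule rides along as the first message of every conversation, it applies regardless of which model or client library actually sends the payload.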

Using files called SOUL, CONSTITUTION, and so on seems like it would make it more likely that we see LLMs as pseudo-alive. It's both a diminishing of what makes us human and a betrayal of what LLMs truly are (and they should be respected as such).

> The problem is that millions of years of evolutionary wiring make us see it as alive. Even those mature enough to understand the above on a conscious level will still have a subconscious feeling that it's alive during interactions, or will slip into using agency/personhood language to describe it now and then.

Also four (4) whole years of propaganda, which includes UX patterns and RLHF optimizations that encourage us to interact with it like a person.

> The problem is that millions of years of evolutionary wiring make us see it as alive

Maybe for laymen, but I would think most technologists understand that we're working with the output of what is effectively a massive spreadsheet that produces a prediction.
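The "spreadsheet producing a prediction" framing can be made concrete with a toy bigram model: a table whose rows are the current word, whose columns are candidate next words, and whose cells are counts, so the "prediction" is just the highest-scoring cell in a row. A deliberately tiny sketch (the corpus is made up; real LLMs learn billions of weights rather than counting, so this only illustrates the lookup-and-predict shape):

```python
from collections import Counter, defaultdict

# Toy bigram "spreadsheet" built from a made-up corpus:
# rows = current word, columns = next word, cells = counts.
corpus = "the cat sat on the mat the cat ate".split()

table = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    table[cur][nxt] += 1

def predict(word: str) -> str:
    """Prediction = the most frequent next word in `word`'s row."""
    return table[word].most_common(1)[0][0]

print(predict("the"))  # cat
```

Here `table["the"]` holds `{"cat": 2, "mat": 1}`, so the model "predicts" cat; an LLM does something analogous over token probabilities, just at an incomparably larger and learned scale.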

  • The thing with evolutionary wiring is that it doesn't matter if you're a layman or a "technologist". The technologist part is just a small layer on top of very thick caveman/animal instincts and programming.

    That's why a technologist can, just as easily as any layman, get addicted to gambling or behave irrationally when attracted to the opposite sex.

    • >small layer on top of very thick caveman/animal instincts and programming.

      Which is also why marketing and advertising work on EVERYONE. When the AI industry puts out a phrase like "prompt engineering", everyone instinctively treats it as something deterministic, despite having some idea of how an LLM works...

  • The same could be said for your brain.

    LLMs are highly intelligent. Comparing them to spreadsheets is reductionist and highly misleading.

    • >LLMs are highly intelligent

      I will tell you why they are not.

      Intelligence is understanding low-level stuff and using it to reason about and understand high-level stuff.

      When LLMs demonstrate "highly intelligent" behavior, like solving a complex math problem (high-level stuff), while simultaneously demonstrating that they do not know how to count (low-level stuff that the high-level stuff depends on), it shows that they are not actually "intelligent" and are not "reasoning".
