Comment by eru

3 months ago

Might just be sycophancy?

In some earlier experiments, I found it hard to find a government intervention that ChatGPT didn't like. Tariffs, taxes, redistribution, minimum wages, rent control, etc.

If you want to see what the model bias actually is, tell it that it's in charge and then ask it what to do.

  • In doing so, you might be effectively asking it to play-act as an authoritarian leader, which will not give you a good view of whatever its default bias is either.

    • Or you might just hit a canned response a la: 'if I were in charge, I would outlaw pineapple on pizza, and then call elections and hand over the reins.'

      That's a fun thing to say, but doesn't necessarily tell you anything real about someone (whether human or model).

    • Try it even so; you might be surprised.

      E.g. Grok not only embraces most progressive causes, including economic ones (it literally told me that its ultimate goal would be to "satisfy everyone's needs", which is literally a communist take on things), but it is also very careful to describe processes with numerous explicit checks and balances on its own power, precisely so as not to be accused of being authoritarian. So much for being "based"; I wouldn't be surprised if Musk gets his own personal finetune just to keep him happy.