
Comment by scarface_74

1 year ago

Yes, you mean you should be explicit about what you want a computer to do to get expected results? I learned that in my 6th grade programming class in the mid 80s.

I’m not saying Gemini doesn’t suck (like most Google products do). I am saying that I know to be very explicit about what I want from any LLM.

That’s just the thing: it literally rewrites your prompt instructions to randomize gender and ethnicity even when you specify them, and if you do specify, it may flag your request as inappropriate and refuse. Appending attributes like this has been a common strategy for image generators to combat implicit biases in the training data (most internet images of nurses are of women, so asking for a “nurse” will almost always yield a female nurse unless the system randomly appends “male”), but Google appears to have gone way overboard, to the point where it scolds you for asking for a female nurse, since you’re being biased and should know men can also be nurses.
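
For the curious, the mitigation is roughly this (a toy Python sketch, obviously not Google’s or anyone’s actual code; every name and word list in it is made up for illustration). The reasonable version only injects a random attribute when the prompt leaves it unspecified, which is exactly the check Gemini appears to skip:

    import random

    # Illustrative list of terms that count as an explicit gender specification
    GENDER_TERMS = {"male", "female", "man", "woman", "men", "women"}
    RANDOM_QUALIFIERS = ["male", "female"]

    def rewrite_prompt(prompt: str) -> str:
        # Respect an explicit specification: only randomize when the user
        # left gender unspecified (the step Gemini seems to skip).
        if set(prompt.lower().split()) & GENDER_TERMS:
            return prompt
        return f"{random.choice(RANDOM_QUALIFIERS)} {prompt}"

    print(rewrite_prompt("nurse checking a chart"))       # e.g. "female nurse checking a chart"
    print(rewrite_prompt("male nurse checking a chart"))  # unchanged: gender was specified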

  • I’m in no way defending Gemini. But if I am explicit about race with ChatGPT, it respects the prompt.

    I have two coworkers in a private Slack and we are always generating crazy memes with ChatGPT. If I specify a bald Black guy (me), a white woman and a Filipino guy, it gets it right.

    I tried with ChatGPT some of the same prompts that Gemini had refused to render or forced “diversity” onto, and ChatGPT got them right.