
Comment by partiallypro

1 year ago

This isn't even the worst I've seen from Gemini. People have asked it about actual terrorist groups, and it tries to explain away that they aren't so bad and that it's a nuanced subject. I've seen another response that was borderline Holocaust denial.

The fear is that some of this isn't going to get caught, and eventually it's going to mislead people, and/or the models start eating their own data and training on the BS they had given out initially. Sure, humans do this too, but humans are known to be unreliable; we want output from AI to be pretty reliable, given that it will eventually be used in teaching, medicine, etc. It's easier to fix now, while AI is still in its infancy; it will be much harder in 10-20 years when all the newer training data has been contaminated by the previous AI.