Comment by nerdjon

1 year ago

It is really frustrating that this topic has been twisted into a debate about reverse racism, or racism against white people, which completely overshadows any legitimate discussion about it... even here.

We saw the examples of bias in generated images last year, and we should understand by now that simply continuing down that path is not the right thing to do.

Better training data is a good step, but that seems to be a hard problem to solve, and at the speed these companies are now pushing out AI tools, it feels like any care about the source of the data has gone out the window.

So it seems we are now at the point of injecting instructions into prompts to tell the model to be more diverse, but then the AI obviously fails to take proper historical context into account.
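For the sake of illustration, the kind of injection being described probably looks something like this (a purely hypothetical sketch in Python; the attribute list, trigger words, and rewrite_prompt function are my own invention, not any vendor's actual pipeline):

    import random

    # Hypothetical attribute list, invented for illustration only.
    DIVERSITY_ATTRIBUTES = [
        "of diverse ethnicities",
        "of various genders",
        "of different ages",
    ]

    def rewrite_prompt(prompt: str) -> str:
        """Blindly append a diversity instruction whenever the prompt mentions people.

        Note what is missing: any check for historical or contextual cues
        (a date, a named event, a specific group) that should suppress the rewrite.
        """
        if any(word in prompt.lower() for word in ("person", "people", "portrait")):
            return f"{prompt}, {random.choice(DIVERSITY_ATTRIBUTES)}"
        return prompt

    print(rewrite_prompt("a portrait of a 1940s soldier"))
    # e.g. "a portrait of a 1940s soldier, of diverse ethnicities"

The failure mode falls straight out of the sketch: the rewrite fires on any mention of people, regardless of context.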

But how is a model supposed to "be more diverse"? By tracking the diversity of the images it puts out? And does it track that per user, or across everyone?

More and more, it feels like we are trying to turn these large models into magic tools when they are limited by the very nature of being models.