That will definitely be a problem, but I suspect and hope that there will be governing AI models that can be "prompted" with clear and concise instructions that are demonstrably free of bias toward any group, either on a direct reading or by evaluation with trusted third-party models.
If the public does not trust the fairness of the AI prompt, that will hopefully lead to revolution and replacement of the prompt with something more principled, similar to how rigged elections (sometimes) trigger revolutions.