Comment by khokhol

1 year ago

Indeed. What's striking to me about this fiasco (aside from the obvious haste with which this thing was shoved into production) is that apparently the only way these geniuses can think of to de-bias these systems is to throw more bias at them. For such a supposedly revolutionary advancement.

If it looks like an attempt to actively rewrite history, that's because they have to: a hypothetical model trained only on facts would produce results that they won't like.

  • Models aren't trained on pure "facts", though; they're trained on a dataset of artifacts that reflects the biases, past and present, of the world that created them.

    If you trained a model purely on past history, it would see a 1:1 correlation between "US President" and "man" and decide that women cannot be President. That's factually incorrect, and it's not "rewriting history" to tune models so they know the difference between what has happened so far and what is allowable or possible in a just world.
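
    As a toy illustration of that failure mode, here is a minimal sketch in plain Python. The records are made up and the frequency "model" is a deliberate simplification; no real training pipeline is implied.

    ```python
    from collections import Counter

    # Toy stand-in for purely historical data: every record pairs the
    # role "US President" with "man", so the correlation is exactly 1:1.
    records = [("US President", "man")] * 46

    counts = Counter(g for role, g in records if role == "US President")
    total = sum(counts.values())

    print(f"P(man | US President)   = {counts['man'] / total:.2f}")    # 1.00
    print(f"P(woman | US President) = {counts['woman'] / total:.2f}")  # 0.00
    ```

    A model that samples from those frequencies can never produce a woman as President, even though nothing outside the data forbids it; the gap between "observed so far" and "possible" simply isn't in the counts.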

    • Maybe it would also have the Constitution thrown in there and figure out that "women cannot be President" is untrue? Sort of like in the real world.

      Because otherwise, I guess I agree: you only know what you are taught and presented with. AI especially, because there is no intelligence in it whatsoever, only endless if-blocks tuned for correlation.

    • That is not my point. Even if we had a model that could portray reality as objectively as possible, a lot of people wouldn't like that and would actually be offended by it.

      This has also been going on a lot in the "representation" discourse.

      A Bohemian village 500 years ago would have been 100% white in almost all circumstances. Surgeons would be male, telephone scammers Indian, and so on.

      But in many cases, simply showing reality is not only unwanted but even considered offensive. What has to be shown is an idealized version of reality that we want to achieve, and that means "more diversity". And what is maximum diversity? Zero white people.

      > If you trained a model purely on past history, it would see a 1:1 correlation between "US President" and "man" and decide that women cannot be President.

      Why would you think that? You and I also know the history, yet we realize that a woman can be president.


> For such a supposedly revolutionary advancement.

The technology is objectively not ready, at least not to keep the promises that have been advertised.

I am not going to get too opinionated, but this seems to be a widespread theme. For people who don't respond to marketing pushes (remember TiVo?) but are willing to spend real money and real time, it would be "nice" if there were some signalling aimed at that demographic.

That struck me as well. While the training data is biased in various ways (as media in general are), it should nevertheless contain enough information for the AI to judge reasonably well what a less biased, reality-reflecting balance would be. For example, it should know that there are male nurses, black politicians, etc., and represent that appropriately. Black Nazi soldiers are so far out that they cast doubt either on the AI's world model in the first place, or on the ability to apply controlled corrections with sufficient precision.

  • You are literally saying that the training data, despite its bias, should somehow enable the AI to correct for that bias and achieve a different understanding, which is self-contradictory. You are literally suggesting that the data both omits and contains the same information.

    • I wonder if we'll ever get something like 'AI-recursion', where you get an AI to apply specific transformations to data that is then trained on, sort of like machines making better machines.

      E.g., take some data A and have a model (ChatGPT-like, for instance) extrapolate from it, potentially adding new depth or detail to the given data.
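
      A minimal sketch of that loop, assuming nothing beyond plain Python; the `elaborate` function is a hypothetical stand-in for a ChatGPT-like call, not a real API:

      ```python
      # Hypothetical "AI-recursion": a model elaborates on seed data, and
      # the elaborations are fed back into the pool used for training.

      def elaborate(record: str) -> str:
          """Stand-in for a model call that adds depth or detail to a record."""
          return record + " [model-added detail]"

      def recursion_round(pool: list[str]) -> list[str]:
          """One round: keep the originals, append model-extrapolated variants."""
          return pool + [elaborate(r) for r in pool]

      pool = ["some seed data A"]
      for _ in range(3):  # machines making data for the next machine
          pool = recursion_round(pool)

      print(len(pool))  # 8: the pool doubles every round
      ```

      One caveat: each round also copies forward whatever errors or skew the model introduces, so the pool can drift as it grows, which ties into the amplification point below.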

  • Apparently the biases in the output tend to be stronger than those in the training set. Or so I read.
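
    That matches what is often called bias amplification: a model that resolves uncertainty by always picking the single most likely option turns a skew in its data into a stronger skew in its outputs. A toy sketch in plain Python, with made-up numbers and a majority-vote rule standing in for a real model:

    ```python
    import random
    random.seed(0)

    # Made-up training set: the profession is roughly 70/30 skewed by gender.
    data = [("nurse", "woman") if random.random() < 0.7 else ("nurse", "man")
            for _ in range(1000)]
    share = sum(g == "woman" for _, g in data) / len(data)
    print(f"training data: {share:.0%} women")  # roughly 70%

    # A "model" that always outputs the most likely gender for the profession
    # (what greedy/argmax decoding does) turns the 70/30 skew into 100/0.
    majority = "woman" if share > 0.5 else "man"
    outputs = [majority] * 1000
    print(f"model outputs: {sum(o == 'woman' for o in outputs) / 1000:.0%} women")
    ```

    Sampling strategies and fine-tuning change the exact numbers, but the direction (outputs more skewed than the data) is the commonly reported one.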