Comment by dragonwriter
1 year ago
You are literally saying that the training data, despite its bias, should somehow enable the AI to correct itself and achieve a different understanding than that bias, which is self-contradictory. You are literally suggesting that the data both omits and contains the same information.
I wonder if we’ll ever get something like ‘AI-recursion’, where you get an AI to apply specific transformations to data which is then used to train on, sort of like machines making better machines.
E.g., take some data A and have a model (for instance, ChatGPT-like) extrapolate from it, potentially adding new depth or detail to the given data.
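The loop you're describing can be sketched with a toy stand-in: here a least-squares line plays the role of the model, and "extrapolation" just means generating new points from the current fit and retraining on the augmented set. Everything below (function names, the sample data) is illustrative, not any real training pipeline; a real version would use a generative model in place of the linear fit.

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def extrapolate(model, xs):
    """Use the current model to synthesize new (x, y) examples."""
    a, b = model
    return [(x, a * x + b) for x in xs]

# Round 0: the original data A (noisy samples of y = 2x + 1).
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]

# One "recursion" step: model -> synthetic data -> retrain.
model = fit_line(data)
data = data + extrapolate(model, [4, 5])  # model adds new points beyond A
model = fit_line(data)
```

The catch, which connects back to the bias point above: the synthetic points lie exactly on the model's own line, so retraining mostly reinforces whatever the first fit already believed, errors included.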