Comment by andy_ppp

2 years ago

To be honest, I’ve been noticing how often ChatGPT loses meaning and produces grammatically correct gibberish. When it has really good examples to draw on this is fine, but leaping into almost any new area it quickly gets out of its depth. Our brains can look at their own learned patterns and derive new ones quite easily. Transformers seem to find this really hard: they’re very good at some party tricks, but I wonder whether they’ll remain good at derivative work and completely useless at less common ideas for a while yet. Personally, I’m not sure AGI is a good idea, given the history of human beings who think they are superior to their ancestors.