Comment by exasperaited
2 months ago
When Minsky and Papert showed that the perceptron couldn't learn XOR, it contributed to wiping neural networks off the map for decades.
It seems that demonstrating fundamental flaws in these systems, flaws that all the new, improved "reasoning" was supposed to have fixed, no longer changes anyone's mind. People are willing to call these "trick questions", as if they were asked in bad faith, when they are discovered in the wild through ordinary interactions.
This does my tiny human brain in.
It won't work out that way this time, because there are plenty of models, including GPT5 Thinking, that can handle this correctly, so it is clear this isn't a systemic issue that can't be trained out of them.
> a systemic issue
It will remain suggestive of a systemic issue until it is clear that, architecturally, all checks are implemented and mandated.
It clearly is not, given that we have examples of models that handle these cases.
I don't even know what you mean by "architecturally all checks are implemented and mandated". It suggests you may think these models work very differently to how they actually work.
I had to look this up. This proof only applies to single-layer perceptrons, right?
And once they had the multi-layer solution, that unblocked the road and led to things like LLMs.
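Right, the result only covers a single layer of weights: XOR isn't linearly separable, so no single-layer perceptron can represent it, but one hidden layer is enough. A minimal sketch of that difference, assuming only NumPy (the seed, learning rate and layer sizes are illustrative, not from any particular source):

```python
import numpy as np

# XOR truth table: not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-layer perceptron with the classic perceptron learning rule:
# it cycles forever and never fits XOR.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += yi - pred
print("perceptron:", [float(w @ xi + b > 0) for xi in X])  # does not match [0, 1, 1, 0]

# Two-layer network (4 hidden sigmoid units) trained with plain gradient
# descent on squared error: this one can learn XOR.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=4), 0.0
lr = 1.0
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)               # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2 + b2)             # predictions, shape (4,)
    d_out = (out - y) * out * (1 - out)    # gradient at the output pre-activation
    d_h = np.outer(d_out, W2) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum()
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("two-layer net:", np.round(out, 2))  # should be close to [0, 1, 1, 0]
```

The hidden layer is what buys the extra expressiveness: it carves the input space with two lines instead of one, which is exactly what the single-layer model cannot do.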