Comment by ihatepython
3 years ago
Nobody really cares about doing real AI to find the solution to a difficult problem, all they care about is generating 'research' that supports a pre-determined conclusion.
If you want to do real AI, I think having a foundation in physics (electricity and magnetism) is a good start, as well as quantum computing and quantum physics. Personally I think neural nets are a dead end, and that the industry is moving in the wrong direction. I'd stay away from Kaggle; I think it's a waste of time. Just decide what domain you want to solve and focus on that, don't just follow the industry, because most of the people are brain-dead.
Quite a bold claim to say neural networks are a dead end right when they are literally revolutionizing the entire digital world. Do you have any reasoning to back it up?
Neural networks make great pattern-matchers -- and all of the current implementations match patterns very well. They will produce something that matches the query, but not necessarily any correct information.
What OpenAI and friends have done is very impressive, but it's not the singularity that many laypeople make it out to be.
> Neural networks make great pattern-matchers
Not really. LLMs and diffusion models have one thing in common: they don't match patterns. They elaborate. Pattern matching is needed for that to some extent, but these models are not as good at it as they are at completion. Machine learning is really good at "masking", and the short version of that is "there's something missing, fill it in". This is why people call ChatGPT "autocomplete": it fills in the missing word at the end of the conversation. Any pattern matching it can do is only in service of that.
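To make the "fill in the missing word" point concrete, here's a deliberately toy sketch (nothing like an actual LLM, just bigram counts over a made-up corpus) showing next-word prediction as completion:

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly more text with far richer context.
corpus = "the cat sat on the mat the cat ran and the cat slept".split()

# Count which word follows which (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def fill_in(prev_word):
    """Fill in the missing word: pick the most frequent continuation."""
    return bigrams[prev_word].most_common(1)[0][0]

print(fill_in("the"))  # prints "cat" -- "cat" follows "the" most often here
```

The model never checks whether its completion is *true*; it only produces what is statistically likely to come next, which is the gist of the autocomplete framing above.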
I might remark that I have 3 daughters, and when I look at the exercises their schools provide them, that's exactly what they get them to do, starting with "fill in the missing word" going all the way to "write an essay defending ..."
> They will produce something that matches the query, but not necessarily any correct information.
Here's the problem I have with this statement: you can say the exact same thing about humans. I remember math teachers in high school who, I now know, were idiots. In fact, a few things they presented as truth took quite a bit of effort to train back out of my behaviour.
Things we know about human behavior: it's mostly based on imitation. One might remark that imitation creates a strong possibility of stupid behavior, including behaving stupidly in large numbers. We know of incidents where large numbers of humans have literally killed themselves through stupidity, both intentionally and accidentally. The reason we find the many sketches about dodo birds funny is that we've all seen people behave (somewhat) like them.
Plus, this matches my experience. If I look at what people do, it's trivial to see that no matter what problem a human encounters, they will do something. If they don't have a good answer, that includes incredibly stupid things, hurting themselves or others. And where we do exhibit good problem-solving behavior, it spreads through humanity through imitation. In many instances it is said that those behaviors were discovered by accident and then spread through imitation.
Equally, if I look at how signals are propagated inside the brain and nervous system, it's kind of obvious. Signal comes in, signal comes out. The odds of a signal dying or amplifying (outside of medical problems with the brain) are very small. Your brain doesn't generate intelligent behaviour, it "converts" input signals into output signals. That's what it does. This seems like an excellent way to guarantee the "any problem will get a response, if necessary a very stupid one" behavior.
Given that all LLMs can do is read what humans have written, they are remarkably correct and creative. Of course, for them to actually progress the state of the art they will need to become actors in society, not just read the internet. They'll actually have to try stuff out, make things happen.
Uhhhh, what? So symbolic systems are the answer instead, or?
The contrarian approach is to focus on GOFAI. Deep Learning isn't all that mathematically sophisticated anyways.
Can GOFAI compete with GPT-4 or even GPT-3?