Comment by haswell
2 years ago
I generally agree that we quickly adjust to new tech and forget how impactful it is.
But I can’t fully get on board with this:
> but how is re-enforcement “Learning”, not exactly the same as reading books to a toddler and pointing at a picture and having them repeat what it is? Start digging into the human with the same engineering view, and suddenly it also just become a bunch of parts. Where is the human in the human once all the human parts are explained like an engineer would.
The parent teaching a toddler bears some vague resemblance to machine learning, but the underlying results of that learning (and the process of learning itself) could not be any more different.
More problematically, while you may be correct that we will eventually be able to explain human biology with the precision of an engineer, these recent AI advances have not made meaningful progress towards that goal, and such an achievement is arguably many decades away.
It seems you are concluding that because we might eventually explain human biology, we can draw conclusions now about AI as if such an explanation had already happened.
This seems deeply problematic.
AI is “real” in the sense that we are making good progress on advancing the capabilities of AI software. This does not imply we’ve meaningfully closed the gap with human intelligence.
I think the point is that we have been “meaningfully closing” the gap rapidly, and at this point it is only a matter of time, the end can be seen, even if it is not yet completely written out in equations.
It does seem like the HN audience is heavily weighted towards software developers who are not biologists, and who often cannot see the forest for the trees. They know enough about AI programming to dismiss the hype, but not enough about biology, so they miss that this is pretty amazing.
The understanding of the human ‘parts’ is being chipped away just as quickly as we have had breakthroughs in AI. These fields are starting to converge and inform each other. I’m saying this is happening fast enough that the end game is in sight: humans are just made of parts, an engineering problem that will be solved.
Free will and consciousness are overrated; we think of ourselves as having some mystically exceptional consciousness, which clouds the credit we give advancements in AI. ‘AI will never be able to equal a human’, when humans just want lunch and our ‘free will’ is based on how much sleep we got. DNA is a program; it builds the brain, which is just responding to inputs. Read some Robert Sapolsky: human reactions are just hormones and chemicals responding to inputs. We will eventually have an AI that mimics a human, because humans aren’t that special. Even before the function of every single molecule in the body, or every equation in an AI, is fully mapped out, enough is known to stop claiming 'specialness'.
> I think the point is that we have been “meaningfully closing” the gap rapidly
In your opinion, how wide is this gap? To claim that it is closing at a meaningful pace brings the implication that we understand the width. Has anyone made a credible claim that we actually understand the width of the gap?
> The understanding of the human ‘parts’ is being chipped away just as quickly as we have had breakthroughs in AI.
This is a thinking trap. Without an understanding or definition of the breadth of the problem space, both fields could be making perfectly equivalent progress and it would still imply nothing regarding the width of the gap or the progress made closing it.
> These fields are starting to converge and inform each other.
Collaboration does not imply anything more than the existence of cooperation across fields. Do you have specific examples where the science itself is converging?
My understanding is that our ability to comprehend neural processes is still so limited that researchers focus on the brains of worms (e.g. the roundworm C. elegans and its 302 neurons), and we still don’t understand how they work.
> and at this point it is only a matter of time, the end can be seen
Who is claiming we have any notion of being close enough to see the end? Most experts on the cutting edge cite the enormous distance yet to be covered.
I’m not claiming the progress made isn’t meaningful by itself. I’m struggling with your claim that we have any idea how much further we have to go.
Landing rovers on Mars is a huge achievement, but compared to the array of advancements required to colonize space, it seems like just a small step forward.
You are right; I'm playing fast and loose with some assumptions and opinions without citations.
I just don't like falling into the other trap of wasting my day writing a complete paper with citations for some loosely defined internet argument on a subject that is already stacked on a pile of controversy and misunderstanding. I think I could easily find a number of citations that conflate or redefine vocabulary. This is my opinion; I don't think I need to document a cross-referenced list of these redefined terms to say it.
This is probably the same problem that exists between a research paper and a popular science book: neither is as detailed and exact, nor as high-level and understandable, as everyone desires. So, yes, these are some opinions, and, from a certain point of view, my opinions are more correct than other people's opinions.
> the underlying results of that learning (and the process of learning itself) could not be any more different
To drill down a bit, I think the difference is that the child is trying to build a model - their own model - of the world, and how symbols describe or relate to it. Eventually they start to plan their own way through life using that model. Even though we use the term "model", that's not at all what a neural-net/LLM type "AI" is doing. It's just adjusting weights to maximize the correlation between outputs and scores. Any internal model is vague at best, and planning (the also-incomplete core of "classical" AI before the winter) is totally absent. That's a huge difference.
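To illustrate what I mean by "adjusting weights to chase a score", here is a toy sketch. The vocabulary, the scorer and the update rule are all made up for illustration, not any real training loop: the loop reinforces whatever happened to score well, and at no point does anything resembling a model of a ball, a plan, or the world appear.

    import random

    # Toy score-chasing learner: nudge numeric weights toward whatever a
    # grader rewards. Vocabulary, scorer and update rule are invented for
    # illustration only; this is not any real training setup.
    VOCAB = ["ball", "red", "dog", "the", "runs"]

    def score(output):
        # Arbitrary reward: pretend the grader likes outputs mentioning "ball".
        return 1.0 if "ball" in output else 0.0

    weights = {w: 1.0 for w in VOCAB}  # one weight per token

    def sample_output():
        # Emit three tokens, each drawn in proportion to its current weight.
        return random.choices(VOCAB, weights=[weights[w] for w in VOCAB], k=3)

    for step in range(1000):
        out = sample_output()
        reward = score(" ".join(out))
        for token in out:
            weights[token] += 0.1 * reward  # reinforce whatever scored well

    print(sorted(weights.items(), key=lambda kv: -kv[1]))
    # The weight for "ball" climbs, yet nothing here represents what a ball
    # *is*: there is no world model, only a correlation with the score.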
ChadGPT is really not much more than ELIZA (1966) on fancy hardware, and it's worth noting that ELIZA was specifically written to illustrate the superficiality of (some) conversation. Its best-known DOCTOR script was intentionally a parody of Rogerian therapy. Plus ça change, plus c'est la même chose.
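For anyone who hasn't looked at how DOCTOR worked, here is a rough sketch of the style of trick (these particular rules are made up for illustration; they are not Weizenbaum's actual script): a handful of patterns, canned reflections, and a stock reply when nothing matches.

    import re

    # ELIZA-style reflection in miniature: match a pattern, echo a fragment
    # back inside a canned template. Illustrative rules only, not the 1966 script.
    RULES = [
        (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
    ]

    def doctor(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # stock non-committal reply

    print(doctor("I am worried about my job"))  # How long have you been worried about my job?
    print(doctor("The weather is nice today"))  # Please go on.

The real script also swapped pronouns ("my" becomes "your") and had many more rules, but the mechanism is the same: no understanding anywhere, just string surgery.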
Why do we think that inside the 'weights' there is not a model? Where in the brain can you point and say 'there is the model'? The wiggly mass of neurons creates models and symbols; why do we assume the same thing isn't happening inside large neural nets? When I see pictures of both (a brain scan versus weights), they look pretty similar. Sorry, I don't have the latest citation, but I was under the assumption that the biggest breakthroughs in AI were around symbolic logic.
As I said, the model is vague at best. Regardless of how the information is stored, a child knows that a ball is a thing with tangible behaviors, not just a word that often appears with certain other words. A child knows what truth is, and LLMs rather notoriously do not. An older adult knows that a citation must not only satisfy a form but also relate to something that exists in the real world. An LLM is helpless with material not part of its training set. Try getting one to review a draft of a not-yet-published paper or book, and you'll get obvious garbage back. Any human with an equivalent dollar value in training can do better. A human can enunciate their model, and make predictions, and adjust the model in recognizable ways without a full-brain reset. An LLM can do none of these things. The differences are legion.
LLMs are not just generalists, but dilettantes to a degree we'd find extremely tiresome in a human. So of course half the HN commentariat loves them. It's a story that has more to do with Pygmalion or Narcissus than with Prometheus ... and BTW good luck getting Chad or Brad to understand that metaphor.