Comment by almosthere

5 days ago

Well, a model by itself with data that emits a bunch of human-written words is literally no different than what JIRA does when it reads a database table and shits it out to a screen, except maybe a lot more GPU usage.

I'll grant you that, yes, the data in the model is a LOT cooler, but some team could, by hand, given billions of years (well, probably at least an octillion years), reproduce that model and save it to a disk. Again, no different than data stored in JIRA at that point.

So basically, if you have that stance, you'd have to agree that when we FIRST invented computers, we created intelligence that is "thinking".

>Well, a model by itself with data that emits a bunch of human-written words is literally no different than what JIRA does when it reads a database table and shits it out to a screen, except maybe a lot more GPU usage.

Obviously, it is different or else we would just use JIRA and a database to replace GPT. Models very obviously do NOT store training data in the weights in the way you are imagining.

>So basically, if you have that stance, you'd have to agree that when we FIRST invented computers, we created intelligence that is "thinking".

Thinking is, by all appearances, substrate-independent. The moment we created computers, we created another substrate that could, in the future, think.

  • But LLMs are effectively a very complex if/else-if tree:

    if the user types "hi", respond with "hi" or "bye" or "..." (you get the point). It's basically storing the most probable following words (tokens) given the current point and its history.

    That's not a brain, and it's not thinking. It's similar to JIRA because it's stored information and there are if statements (admins can do this, users can do that).

    Yes, it is more complex, but it's nowhere near the complexity of the human or bird brain, which does not use clocks, does not have "Turing machines inside", or any of the other complete junk other people have posted in this thread.

    The information in JIRA is in the same vein as the data in an LLM; the LLM is just 10^100 times more complex. Just because something is complex does not mean it thinks.
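
    The "stored most-probable next words" picture in the comment above can be sketched as a toy lookup table. Everything here is made up for illustration (the words, the counts), and a real LLM learns continuous weights by gradient descent rather than storing an explicit table like this; the sketch only shows the caricature being argued about:

    ```python
    import random

    # Hypothetical bigram table: context word -> candidate next words with counts.
    # This is the "if the user types X, respond with Y or Z" picture, reduced
    # to data plus a weighted choice.
    bigram_counts = {
        "hi": {"hi": 3, "bye": 1, "there": 6},
        "there": {"friend": 2, "!": 8},
    }

    def next_word(context, rng=None):
        """Sample a next word in proportion to its stored count."""
        rng = rng or random.Random()
        candidates = bigram_counts[context]
        words = list(candidates)
        weights = [candidates[w] for w in words]
        return rng.choices(words, weights=weights, k=1)[0]

    print(next_word("hi", random.Random(0)))
    ```

    Whether an LLM is "just" a vastly scaled-up version of this table, or something qualitatively different because its weights are dense learned parameters rather than enumerable rules, is exactly the disagreement in this thread.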

    • This is a pretty tired argument that I don't think really goes anywhere useful or illuminates anything (if I'm following you correctly, it sounds like the good old Chinese Room, where "a few slips of paper" can't possibly be conscious).

      > Yes, it is more complex, but it's nowhere near the complexity of the human or bird brain, which does not use clocks, does not have "Turing machines inside", or any of the other complete junk other people have posted in this thread.

      > The information in JIRA is in the same vein as the data in an LLM; the LLM is just 10^100 times more complex. Just because something is complex does not mean it thinks.

      So, what is the missing element that would satisfy you? It's "nowhere near the complexity of the human or bird brain", so I guess it needs to be more complex, but at the same time "just because something is complex does not mean it thinks".

      Does it need to be struck by lightning or something so it gets infused with the living essence?


You're getting to the heart of the problem here. At what point in evolutionary history does "thinking" exist in biological machines? Is a jumping spider "thinking"? What about consciousness?