
Comment by og_kalu

5 days ago

You cannot say 'we know it's not thinking because we wrote the code' when the inference 'code' we wrote amounts to 'hey, just do whatever you figured out during training, okay?'.

'Power over your computer', all that is orthogonal to the point. A human brain without a functioning body would still be thinking.

Well, a model by itself with data that emits a bunch of human-written words is literally no different from what JIRA does when it reads a database table and shits it out to a screen, except maybe a lot more GPU usage.

I grant you that, yes, the data in the model is a LOT cooler, but some team could, by hand, given billions of years (well, probably at least an octillion years), reproduce that model and save it to a disk. Again, no different from data stored in JIRA at that point.

So basically, if you have that stance, you'd have to agree that when we FIRST invented computers, we created intelligence that is "thinking".

  • >Well, a model by itself with data that emits a bunch of human-written words is literally no different from what JIRA does when it reads a database table and shits it out to a screen, except maybe a lot more GPU usage.

    Obviously it is different, or else we would just use JIRA and a database to replace GPT. Models very obviously do NOT store training data in the weights in the way you are imagining (see the back-of-envelope sketch at the end of this comment).

    >So basically, if you have that stance, you'd have to agree that when we FIRST invented computers, we created intelligence that is "thinking".

    Thinking is, by all appearances, substrate-independent. The moment we created computers, we created another substrate that could, in the future, think.
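
    As a rough sense of scale, here is a back-of-envelope sketch (approximate figures from the GPT-3 paper; the byte counts are assumptions, not measurements) of why the weights cannot be a verbatim copy of the training text:

    ```python
    # Back-of-envelope arithmetic, not a measurement: compare the size of
    # GPT-3's weights to the size of the text it was trained on.
    params = 175e9            # GPT-3 parameter count (Brown et al., 2020)
    bytes_per_param = 2       # fp16 storage
    weight_gb = params * bytes_per_param / 1e9     # ~350 GB of weights

    tokens_seen = 300e9       # training tokens reported for GPT-3
    bytes_per_token = 4       # rough average for English BPE text (assumption)
    data_gb = tokens_seen * bytes_per_token / 1e9  # ~1200 GB of raw text

    print(f"weights: {weight_gb:.0f} GB, training text: ~{data_gb:.0f} GB")
    # The weights are under a third the size of the text they were trained
    # on, so the model must compress statistical structure rather than
    # store rows the way a database (or JIRA) does.
    ```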

    • But LLMs are effectively a very complex if/else-if tree:

      if the user types "hi", respond with "hi" or "bye" or "..." (you get the point). It's basically storing the most probable following words (tokens) given the current point and its history (see the sketch at the end of this comment).

      That's not a brain and it's not thinking. It's similar to JIRA because it's stored information and there are if statements (admins can do this, users can do that).

      Yes, it is more complex, but it's nowhere near the complexity of the human or bird brain, which does not use clocks, does not have "Turing machines inside", and does not match any of the other complete junk other people have posted in this thread.

      The information in JIRA is just less complex; the data in an LLM is in the same vein, just 10^100 times more complex. Just because something is complex does not mean it thinks.
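
      As a minimal sketch of that "most probable following words" point (hypothetical, illustrative code; `model` here is a stand-in for any next-token predictor, not a real library API):

      ```python
      # Minimal sketch of autoregressive generation: the next token is
      # sampled from a probability distribution over the whole vocabulary,
      # conditioned on the history so far.
      import math
      import random

      def softmax(logits):
          m = max(logits)                   # subtract max for stability
          exps = [math.exp(x - m) for x in logits]
          total = sum(exps)
          return [e / total for e in exps]

      def generate(model, prompt_tokens, max_new_tokens=20):
          tokens = list(prompt_tokens)
          for _ in range(max_new_tokens):
              logits = model(tokens)        # one score per vocabulary token
              probs = softmax(logits)
              # weighted sampling over the vocabulary, given the history
              next_token = random.choices(range(len(probs)), weights=probs)[0]
              tokens.append(next_token)
          return tokens
      ```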


  • You're getting to the heart of the problem here. At what point in evolutionary history does "thinking" first appear in biological machines? Is a jumping spider "thinking"? What about consciousness?