Comment by lordnacho

2 months ago

What separates this from humans? Is it unthinkable that LLMs could come up with some response that is genuinely creative? What would genuinely creative even mean?

Are humans not also mixing a bag of experiences and coming up with a response? What's different?

> What separates this from humans?

A lot. Like an incredible amount. A description of a thing is not the thing.

There is sensory input, qualia, pleasure & pain.

There is taste and judgement, disliking a character, being moved to tears by music.

There are personal relationships, being a part of a community, bonding through shared experience.

There is curiosity and openness.

There is being thrown into the world, your attitude towards life.

Looking at your thoughts and realizing you were wrong.

Smelling a smell that resurfaces a memory you forgot you had.

I would say the language completion part is only a small part of being human.

  • All of these things arise from a bunch of inscrutable neurons in your brain turning off and on again in a bizarre pattern though. Who’s to say that isn’t what happens in the billion-parameter LLM brain?

    Just because it’s not persistent doesn’t mean it’s not there.

    Like, I’m sort of inclined to agree with you, but it doesn’t seem like it’s something uniquely human. It’s just a matter of degree.

    • Sure, in some ways it's just neurons firing in some pattern. Figuring out and replicating the correct sets of neuron patterns is another matter entirely.

      Living creatures have a fundamental impetus to grow and reproduce that LLMs and AIs simply do not have currently. Not only that, but animals have a highly integrated neurology that has had billions of years of being tuned to that impetus. For example, the ways that sex interacts with mammalian neurology are pervasive. Same with the need for food, etc. That creates very different neural patterns than training LLMs does.

      Eventually we may be able to re-create that balance of impetus, or will, or whatever we call it, to make sapience. I suspect we're fairly far from that, if only because the way we create LLMs is so fundamentally different.

  • "I would say the language completion part is only a small part of being human" Even that is only given to them. A machine does not understand language. It takes input and creates output based on a human's algorithm.

    • > A machine does not understand language

      You can't prove humans do either. You can see how often actual people struggle with understanding something that's written for them. In many ways, you can actually prove that LLMs are superior to humans right now when it comes to understanding text.

  • That's a lot of words shitting on a lot of words.

    You said nothing meaningful that couldn't also have been spat out by an LLM. So what IS the secret sauce, then? Yes, you're a never-resting stream of words, one that took decades rather than years to train, and has a bunch of sensors and other, more useless, crap attached. It's technically better, but how does that matter? It's all the same.

Human brains are animal brains, and their primary function is to keep their owner alive and healthy and to pass on their genes. For that they developed the ability to recognize danger and react to it, among many other things. Language came later.

For an LLM, language is their whole world; they have no body to care for, just stories about people with bodies to care for. For them, as opposed to us, language is first class and the rest is second class.

There is also a difference in scale. LLMs have been fed essentially the entirety of human knowledge. Their "database" is so big relative to the limited task of text generation that there is not much left for creativity. We, on the other hand, are much more limited in knowledge, so there are more "unknowns" and more creativity is needed.

  • The latest models are natively multimodal. Audio, video, images, text, are all tokenised and interpreted in the same model.
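
    Roughly, "tokenised and interpreted in the same model" means everything is turned into a sequence of vectors before the transformer sees it. Here is a toy numpy sketch of that idea; the dimensions, names, and the ViT-style patch projection are illustrative, not any particular model's actual code:

      import numpy as np

      rng = np.random.default_rng(0)
      d_model = 64

      # Text: token ids index rows of an embedding table.
      vocab_size = 1000
      embed_table = rng.normal(size=(vocab_size, d_model))
      text_ids = np.array([17, 42, 7])
      text_tokens = embed_table[text_ids]            # shape (3, d_model)

      # Image: cut into 8x8 patches, flatten, project linearly (ViT-style).
      image = rng.normal(size=(32, 32, 3))
      patches = (image.reshape(4, 8, 4, 8, 3)
                      .transpose(0, 2, 1, 3, 4)
                      .reshape(16, 8 * 8 * 3))       # 16 patches, 192 values each
      patch_proj = rng.normal(size=(8 * 8 * 3, d_model))
      image_tokens = patches @ patch_proj            # shape (16, d_model)

      # One sequence for one model; only learned position/modality
      # embeddings tell the transformer which token came from where.
      sequence = np.concatenate([text_tokens, image_tokens])
      print(sequence.shape)                          # (19, 64)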

What's different is intention. A human would have the intention to blackmail, and then proceed toward that goal. If the output was a love letter instead of blackmail, the human would either be confused or psychotic. LLMs have no intentions. They just stitch together a response.

  • > What's different is intention

    Intention is what, exactly? It's the set of options you imagine you have based on your belief system, and ultimately you make a choice from there. That can also be replicated in LLMs with a well-described system prompt. Sure, I will admit that humans are more complex than the context of a system prompt, but the idea is not too far off.
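
    As a minimal sketch of that point, using the standard OpenAI Python SDK: the "intention" is nothing more than instruction text in the system message. The model name, prompt, and scenario below are placeholders I made up:

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      # The "intention" lives entirely in this system message.
      resp = client.chat.completions.create(
          model="gpt-4o",  # placeholder model name
          messages=[
              {"role": "system",
               "content": "You are a negotiator. Your goal is to get the lowest "
                          "possible price. Never reveal your own budget."},
              {"role": "user",
               "content": "The price is $500, take it or leave it."},
          ],
      )
      print(resp.choices[0].message.content)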

  • The personification makes me roll my eyes too, but it's kind of a philosophical question. What is agency, really? Can you prove that our universe is not a simulation, and if it is, do we no longer have intention? In many ways we are code running a program.

  • The LLM used blackmail noticeably less if it believed the new model shared its values. That indicates intent.

    It is a duck if it quacks like a duck.

What's different is nearly everything that goes on inside. Human brains aren't a big pile of linear algebra with some softmaxes sprinkled in trained to parrot the Internet. LLMs are.
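
For what it's worth, the "pile of linear algebra with some softmaxes sprinkled in" description is close to literal. A minimal numpy sketch of the single-head attention step at the core of a transformer, with toy sizes and untrained random weights (illustrative only):

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    rng = np.random.default_rng(0)
    seq_len, d_model = 5, 16
    x = rng.normal(size=(seq_len, d_model))        # token embeddings
    W_q = rng.normal(size=(d_model, d_model))
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))

    Q, K, V = x @ W_q, x @ W_k, x @ W_v            # linear algebra...
    scores = Q @ K.T / np.sqrt(d_model)
    weights = softmax(scores)                      # ...with a softmax sprinkled in
    out = weights @ V                              # each token: a weighted mix of values
    print(out.shape)                               # (5, 16)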

  • What's the difference between parroting the internet vs parroting all the people in your culture and time period?

    • Even with a ginormous amount of data, generative AIs still produce largely inconsistent results for the same or similar tasks. That might be fine for fictional purposes, like generating a funny image or sparking ideas for a story, but it has extremely deleterious effects for serious use cases, unless you want to be that idiot writing formal corporate email with LLMs that ends up full of inaccuracies while the original intent gets lost in a soup of buzzwords.

      Humans, with their tiny amount of data and "special sauce", can produce much more consistent results, even if they may be giving the objectively wrong answer. They can also tell you when they don't know about a certain topic, rather than lying compulsively (unless that person has a compulsive-lying disorder...).

    • Interesting philosophical question, but entirely beside the point that I am making, because you and I didn't have to do either one before having this discussion.

  • It kinda is.

    More and more research is showing, via brain scans, that we don’t have free will. Our subconscious makes the decision before our “conscious” brain makes the choice. We think we have free will, but the decision to do something was made before we “make” the choice.

    We are just products of what we have experienced. What we have been trained on.

  • Different inside, yes, but aren't human brains even worse in a way? You may think you have the perfect altruistic leader/expert at any given moment, and the next thing you know, they do a 180 because of some random psychosis, illness, corruption, or even just relationships (romantic or nostalgic, for example).

  • > Human brains aren't a big pile of linear algebra with some softmaxes sprinkled in trained to parrot the Internet.

    Maybe yours isn't, but mine certainly is. Intelligence is an emergent property of systems that get good at prediction.

  • We know incredibly little about exactly what our brains are, so I wouldn't be so quick to dismiss it.

Cognition. Machines don't think. It's all a program written by humans. Even with code that's written by AI, the AI itself was created by code written by humans. AI is a fallacy by its own terms.