Comment by ohcmon
2 years ago
> glorified token predicting machine trained on existing data (made by humans)
sorry to disappoint, but the human brain fits the same definition
Sure.
> Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer
> To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system.
https://aeon.co/essays/your-brain-does-not-process-informati...
What are you talking about? Do you have any actual cognitive neuroscience to back that up? Have they scanned the brain and broken it down into an LLM-analogous network?
If you genuinely believe your brain is just a token prediction machine, why do you continue to exist? You're just consuming limited food, water, fuel, etc for the sake of predicting tokens, like some kind of biological crypto miner.
Genetic and memetic/intellectual immortality, of course. Biologically there can be no other answer. We are here to spread and endure; there is no “why” or end-condition.
If your response to there not being a big ending cinematic to life, with a bearded old man and a church choir, or all your friends (and a penguin) clapping and congratulating you, is that you should kill yourself immediately, that’s a you problem. Get in the flesh-golem, Shinji… or Jon Stewart will have to pilot it again.
I'm personally a lot more than a prediction engine, don't worry about me.
For those who do believe they are simply fleshy token predictors, is there a moral reason that other (sentient) humans can't kill -9 them like a LLaMa3 process?
Well, yes. I won't commit suicide, though, since the drive to keep living and reproducing is an evolutionarily developed trait: only the ones with that trait survived in the first place.
If LLMs and humans are the same, should it be legal for me to terminate you, or illegal for me to terminate an LLM process?
It's a cute generalization, but you do yourself a great disservice. It's somewhat difficult to argue given the medium we have here, and it may be impossible to disprove, but consider that in the first 30 minutes of your post being highly visible on this thread, no one had yet replied. Some may have acted in other ways: had opinions, voted it up or down. Some may have debated replying in jest or with some related biblical verse. I'd wager a few used what they could deduce from your comment and/or history to build a mini model of you in their heads, then simulated the conversation to decide whether it was worth the time to get into such a debate versus tending to other things.
Could current LLMs do any of this?
I’m not the OP, and I genuinely don’t like how we’re slowly entering the “no text on the internet is real” realm, but I’ll take a stab at your question.
If you make an LLM pretend to have a specific personality (e.g. “assume you are a religious person and you’re going to make a comment in this thread”) rather than being a generic catch-all LLM, it can pretty much do that. Part of Reddit is just automated PR LLMs fighting each other, making comments and suggesting products or viewpoints, deciding which comments to reply to, and so on. You just chain a bunch of responses together with pre-determined questions like “given this complete thread, do you think it would look organic if we responded to this comment with a plug for a product?”.
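To make the chaining concrete, here’s a minimal sketch assuming the OpenAI Python client; the persona, model name, and prompts are invented for illustration, not any actual bot’s code:

```python
# Sketch of a persona bot deciding whether to reply to a thread.
# Assumes the official `openai` Python client; the persona, model
# name, and gating prompt are made up for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = "You are a devout, polite commenter who often cites scripture."

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

def maybe_reply(thread_text: str, product: str) -> str | None:
    # Step 1: the pre-determined gating question described above.
    verdict = ask(
        "Answer only YES or NO.",
        "Given this complete thread, would it look organic to reply "
        f"with a plug for {product}?\n\n{thread_text}",
    )
    if not verdict.strip().upper().startswith("YES"):
        return None  # stay silent; don't blow the persona's cover
    # Step 2: write the reply in character.
    return ask(
        PERSONA,
        "Write a short, casual reply to this thread that works in "
        f"a mention of {product}:\n\n{thread_text}",
    )
```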
It’s also not that hard to generate these types of “personalities”, since you can use a generic one to suggest a new one that differs from your other agents.
There are also Discord communities that share tips and tricks for making such automated interactions look more real.
These things might be able to produce comparable output, but that wasn't my point. I agree that if we compare ourselves only on the text that gets written, then LLMs can achieve superintelligence, and writing text can indeed be reduced to token predicting.
My point was that we are not just glorified token-predicting machines. There is a lot going on behind what we write, and behind whether we write it at all. Does the method matter, versus just the output? I think/hope it does on some level.
See, this sort of claim I am instantly skeptical of. Nobody has ever caught a human brain producing or storing tokens, and certainly the subjective experience of, say, throwing a ball doesn't involve symbols of any kind.
> Nobody has ever caught a human brain producing or storing tokens
Do you remember learning how to read and write?
What are spelling tests?
What if "subjective experience" isn't essential, or is even just a distraction, for a great many important tasks?
Entirely possible. Lots of things exhibit complex behavior that probably don't have subjective experience.
My point is just that the evidence for "humans are just token prediction machines and nothing more" is extremely lacking, but there's always someone in these discussions who asserts it like it's obvious.
Any output from you could be represented as a token; it is a very generic idea. Ultimately, whatever you output is the result of chemical reactions that follow from the input.
It could be represented that way. That's a long way from saying that's how brains work.
Does a thermometer predict tokens? It also produces outputs that can be represented as tokens, but it's just a bit of mercury in a tube. You can dissect a thermometer as much as you like and you won't find any token-prediction machinery. There are lots of things like that. Zooming out, does that make the entire atmosphere a token prediction engine, since it produces, e.g., wind and temperatures that could be represented as tokens?
If you need one token per particle, then you're admitting that this task is impossible. Nobody will ever build a computer that can simulate a brain-sized volume of particles to sufficient fidelity. It is a long, long way from "brains are made of chemicals" to "brains are basically token prediction engines."