Comment by boogrpants
11 hours ago
> I think they excel at outputting echoes of their training data that best fit (rhyme with, contextually) the prompt they were given.
Just like people who get degrees in economics or engineering and engage in such role-play for decades. They're often pretty bad at anything they are not trained on.
Similarly, if you put a single American English speaker on a team of native German speakers, you will notice information transfer falls apart.
The same very normal physical phenomenon occurring in two substrates, two mediums. As if there were a shared limitation, the rest of the universe, attempting to erode our efforts via entropy.
An LLM is a distribution over human-generated data sets. Since humans have the same incompleteness problems in society, there's enough statistical wiggle room for LLMs to make shit up; humans do it! Look in their data!
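A minimal sketch of the "distribution" framing, with invented numbers (the prompt, tokens, and probabilities below are illustrative, not from any real model): if wrong answers appear in the training data, they get nonzero probability, and sampling will occasionally produce them.

    import random

    # Toy next-token distribution for the prompt "The capital of Australia is"
    # (probabilities invented for illustration; real models cover ~100k tokens)
    next_token_probs = {
        "Canberra": 0.62,     # correct; most of the human data says so
        "Sydney": 0.30,       # a common human error, so it is in the data too
        "Melbourne": 0.07,
        "Wagga Wagga": 0.01,  # rare but nonzero: the statistical wiggle room
    }

    def sample(probs):
        # Draw one token in proportion to its learned probability
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Sample repeatedly: wrong answers come out at roughly the rate
    # humans put them into the data in the first place.
    print([sample(next_token_probs) for _ in range(10)])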
We're massively underestimating reality's indifference to human existence.
There is no doing any better until we effectively break physics; by that I mean come upon a game-changing discovery that shows we had physics all wrong to begin with.
The fact there are a lot of people around who don't think (including me at times!) doesn't mean LLMs doing the same are thinking.
Much like LLMs writing text like mindless middle managers: it doesn't mean the LLMs are intelligent, more that mindless middle managers aren't.
> Just like people
I understand that having model-related vocabulary borrow the words we use to describe human brains and cognition gets confusing. We are not the same: we don’t “learn” the same way, and we certainly don’t use the knowledge we possess in the same way.
The major difference between an LLM and a human is that, as a human, I can look at your examples (which sound solid at first glance) and choose to truly “reason” about them in a way that allows me to judge whether they’re correct or even applicable.
How is your reasoning different from LLM reasoning?
What humans are known to do, and apparently there is no limit to it, is anthropomorphize. I don't think there's been a single one of these discussions where someone said LLMs don't do X as well as a human without someone else interjecting in cult-like fashion.
Obviously. You are not exactly the same as your nearest neighbor but have similar observable traits to outside observers.
But since you end up differentiating yourself from an LLM with vague, conceptual qualifiers about what it means to "reason", rather than empirical differences, I am left uncertain what you mean at all.
An LLM can reject false assertions and generate false positives just like a human.
Within a culture, too, individual people become pretty copy-paste distillations of their generation's customs. As a social creature you aren't that different. Really, all that sets you apart from other people or a computer is a unique meat suit.
Unfortunately for your meat suit, most people don't care that it exists and will carry on with their lives never noticing it.
Meanwhile, LLMs have massive valuations right now. Pretty sure the public has spoken on whether the differences you fail to illustrate actually matter.
> Meanwhile, LLMs have massive valuations right now. Pretty sure the public has spoken on whether the differences you fail to illustrate actually matter.
Are you seriously using market valuation as an indicator of worth?
I think I've read that book... but I distinctly remember the plot was a lot more engaging.