
Comment by ath3nd

2 days ago

> Modern ML is at this hellish intersection of underexplored math, twisted neurobiology and applied demon summoning

Nah, it's just a very, very big and fancy autocomplete with probabilistic tokenization and some extra tricks thrown in to minimize the shortcomings of the approach.

> Unfortunately, the nature of intelligence doesn't seem to yield itself to simple, straightforward, human-understandable systems.

LLMs are maybe artificial, but they are not intelligent unless you have overloaded the term "intelligence" to mean something much weaker and more trivial. A crow, and even a cat, is intelligent. An LLM is not.

That's copium.

The proper name for it is "AI effect", but the word "copium" captures the essence perfectly.

Humans want to feel special, and a lot of them feel like intelligence is what makes them special. So whenever a new AI system shows a capability that was thought to require intelligence, a capability that was once exclusive to humans? That doesn't mean it's "intelligent" in any way. Surely it just means that this capability was stupid and unimportant and didn't require any intelligence in the first place!

Writing a simple short story? Solving a college level math problem? Putting together a Bash script from a text description of what it should do? No intelligence required for any of that!

Copium is one hell of a drug.

  • > Copium is one hell of a drug.

    What is the word for creating an account 12 days ago and exclusively defending the LLMs because they can't defend themselves?

    > Writing a simple short story

    Ah, allow me to introduce you to the Infinite Monkey theorem.

    https://en.wikipedia.org/wiki/Infinite_monkey_theorem

    In the case of LLMs, it's just that the monkey's hand is artificially guided by all the peanut-rewarded training it went through, but it still didn't use a single ounce of thought or intelligence. Sorry that you get impressed by simple tricks and mistake them for magic.

    • And this proves what exactly? That any task can be solved by pure chance at pass@k, with k blowing out to infinity as the solution space grows?

      We know that (a back-of-the-envelope sketch of that blowup is below). The value of intelligence is being able to outperform that random chance.

      LLMs already outperform a typewriter monkey, a keyboard cat, and a not insignificant number of humans on a very diverse range of tasks.
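
      For a sense of scale, here is a rough, purely illustrative Python sketch (my own numbers, not from either comment) of how fast that "pure chance" baseline blows up: the expected number of uniform random attempts before a typewriter monkey reproduces even a short phrase grows exponentially with its length, which is exactly the k-to-infinity blowup in pass@k mentioned above.

      ```python
      # Purely illustrative back-of-the-envelope sketch (assumed numbers, not a benchmark):
      # how many uniform random attempts a "typewriter monkey" needs, in expectation,
      # before it types a target phrase exactly once.

      ALPHABET_SIZE = 27  # assumption: 26 lowercase letters plus the space character

      def expected_attempts(target: str, alphabet_size: int = ALPHABET_SIZE) -> float:
          """Expected number of uniform random strings of len(target) before an exact match.

          Each character matches with probability 1/alphabet_size, so a string of length n
          matches with probability (1/alphabet_size)**n, giving alphabet_size**n expected tries.
          """
          return float(alphabet_size) ** len(target)

      for phrase in ["cat", "hello world", "to be or not to be"]:
          print(f"{phrase!r:>22}: ~{expected_attempts(phrase):.2e} attempts")

      # 'cat'                 : ~1.97e+04 attempts
      # 'hello world'         : ~5.56e+15 attempts
      # 'to be or not to be'  : ~5.81e+25 attempts
      ```

      Even an 18-character phrase already needs on the order of 10^25 random attempts, so "the monkey could type it eventually" says nothing about a system that produces it on the first try.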