Comment by boredemployee

12 hours ago

> Why should the use of AI tools be any different?

Because none of the tools you mentioned are crazily marketed as intelligent

You have a valid point, but it has nothing to do with what I said; both our arguments can be true at the same time.

LLMs are intelligent. Marketing them as such is an accurate descriptor of what they are.

If people are confusing the word intelligence for things like maturity or wisdom, that's not a marketing problem, that's an education and culture problem, and we should be getting people to learn more about what the tools are and how they work. The platforms themselves frequently disclaim reliance on their tools - seek professional guidance, experts, doctors, lawyers, etc. They're not being marketed as substitutes for expert human judgment. In fact, all the AI companies are marketing their platforms as augmentations for humans - insisting you need a human in the loop, to be careful about hallucinations, and so forth.

The implication is that there's some liability for misunderstandings or improper use due to these tools being marketed as intelligent; I'm not sure I see how that could be?

  • Calling LLMs "intelligent" is not a neutral technical description, because in the end it carries strong anthropomorphic implications that shape how users interpret and trust all these systems.

Remember that decades of research in human-computer interaction show that framing and interface design strongly influence user perception.

Disclaimers also do little to counteract this effect. Because LLMs simulate linguistic competence without understanding or truth-tracking mechanisms, marketing them as intelligent risks systematically misleading users about their capabilities and limitations.

    • >because in the end it carries strong anthropomorphic implications

I mean, that is typical human ego at play. My dog is intelligent, and there is no system of definitions of intelligence that doesn't overlap humans and dogs. Yet I won't let my dog drive my car.

LLMs are NOT intelligent. They are mathematical functions that produce results giving the impression of intelligence. That is NOT the same thing.

    • And airplanes don't really fly. And submarines don't swim. And there aren't any real horses powering your engine.

The difference, of course, is that intelligence is the thing being done: a subset of computation, and all computation is substrate-independent. You can definitely argue that LLMs are less intelligent than humans. This is obvious, for the time being, and easy to demonstrate. Saying they are not intelligent is simply untrue.

Whether you go by a formalized definition of intelligence like AIXI, a neuroscience definition, or the vernacular definition, LLMs are intelligent. The idea that they're stochastic functions that only occasionally produce sensible results is about six years past its expiration date, and if you've been holding on to that idea, it's really time to update your model.

      ALICE bots or ELIZA back in the day had a "sense" of intelligence. Modern LLMs are more intelligent than the average human.