
Comment by api

7 days ago

"A junior intern who has memorized the Internet" is how one member of our team described it and it's still one of the best descriptions of these things I've heard.

Sometimes I think these things are more like JPEGs for knowledge expressed as language. They're more AM (artificial memory) than AI (artificial intelligence). It's a blurry line, though. They can clearly do things that involve reasoning, but arguably only because that reasoning is latent in the training data. So the JPEG is an imperfect analogy, since lossy image compressors can't do any reasoning about images.

> They can clearly do things that involve reasoning

No.

> but it's arguably because that's latent in the training data.

The internet is just bigger than what a single human can encounter.

Plus a single human isn't likely to be able to afford to pay for all that training data the "AI" peddlers have pirated :)

  • A dismissive “no” is not a helpful addition to this discussion. The truth is much more interesting and subtle than “no”. Directed stochastic processes that reach correct conclusions on novel logic problems more often than chance mean that something interesting is happening, and it’s sensible to call that process “reasoning”. Does it mean that we’ve reached AGI? No. Does it mean the process reflects exactly what humans do? No. But dismissing “reasoning” out of hand also dismisses genuinely interesting phenomena.

    • Only if you redefine "reasoning". This is something that the generative AI industry has succeeded in convincing many people of, but that doesn't mean everyone has to accede to that change.

      It's true that something interesting is happening; GP did not dispute that. But that doesn't make it reasoning, and many people still believe that words should have stable meanings if we are to discuss things intelligently. Language is ultimately a living thing and will inevitably change. That usually involves people fighting the change, and no one knows ahead of time which side will win.

      14 replies →

    • > A dismissive “no” is not a helpful addition to this discussion.

      Yes, your "no" must be more upbeat! Even if it's correct. You must be willing to temper the truth of it with something that doesn't hurt the feelings of the masses.

      > Does it mean that we’ve reached AGI? No. Does it mean the process reflects exactly what humans do? No.

      But here it's fine to use a "No." because these are your straw men, right?

      Is it just wrong to use a "No." when it isn't wrapped in safety padding for the overinvested?

    • I have a hunch it can reflect what humans do: "a junior intern who has memorized the Internet and talks without thinking, on permanent autopilot". We're just surprised by how much humans can do without thinking.

    • > A dismissive “no” is not a helpful addition to this discussion.

      Neither are wide-eyed claims that come from drinking too much LLM-company Kool-Aid. Blatantly mistaken claims don't need more than a curt answer.

      Why don't I go ahead and claim ChatGPT has a soul, only to get angry when my claim is dismissed?

  • > No.

    You are missing the forest for the trees by dismissing this so readily.

    LLMs can solve IMO-level math problems, track down quite difficult bugs in moderately sized codebases, and write prototypes for unusual, one-of-a-kind coding projects. They solve difficult reasoning problems, so I find it mystifying that people still work so hard to justify the belief that they're "not actually reasoning". They are flawed reasoners in some sense, but it seems ludicrous to me to suggest that they are not reasoning at all when they generalise to new logical problems so well.

    Do you think humans are logical machines? No, we are not. Therefore, do we not reason?

    • > Do you think humans are logical machines? No, we are not. Therefore, do we not reason?

      No, but we are conscious, and we know we are conscious; that doesn't require being a logical being. LLMs, on the other hand, aren't conscious, and there's zero evidence that they are. Thus they don't reason, since reasoning, unlike logic, does require consciousness.

      Why not avoid redefining things into a salad of poor logic until you can pretend that something with no evidence in its favor is real?

      1 reply →

> "A junior intern who has memorized the Internet"

... who can also type at superhuman speeds, but has no self-awareness, creativity, or initiative.