Comment by hitarpetar

4 days ago

how do you definitely know that?

Also, does it matter?

The point being made here is about the data LLMs have been trained on. Sure, that contains questions & answers, but obviously not all of it is in that form. Just like an encyclopedia contains answers without the questions. So imo framing this as 'no one asked this before' is irrelevant.

More interesting: did OP get a sensible answer to a question about data which definitely was not in the training set? (And indeed, how was this 'definitely' established?) Not that a 'yes' would prove 'thinking', as opposed to calling it e.g. advanced autocompletion, but it's a much better starting point.

Because I gave it a unique problem I had and it came up with an answer it definitely didn't see in the training data.

Specifically, I wanted to know how I could interface two electronic components, one of which is niche, recent, handmade and doesn't have any public documentation, so there's no way it could have known about it before.

  • one of which is niche, recent, handmade and doesn't have any public documentation

    I still see 2 possibilities: either you asked it something similar enough that it came up with a fairly standard answer which just happened to be correct, or you yourself gave it enough info to derive the answer.

    - for example, you created a new line of MCUs called FrobnicatorV2 and asked it 'how do I connect a power supply X to FrobnicatorV2', and it gave an answer like 'connect the red wire to VCC and the black one to GND'. That's not exactly special.

    - or, you did describe that component in some way. And you did that using standard electronics lingo, so essentially in terms of other existing components which it definitely did know about (unless you invented something completely new not based on any currently known physics). As such it's irrelevant that your particular new component wasn't known, because you gave away the answer by describing it. E.g. you asked it 'how do I connect a power supply X to an MCU with power pins Y and Z'. Again, nothing special.

    • If a human uses their general knowledge of electronics to answer a specific question they haven't seen before, that's obviously thinking. I don't see why LLMs are held to a different standard. It's obviously not repeating an existing answer verbatim, because such an answer doesn't exist in my case.

      You're saying it's nothing "special", but we're not discussing whether it's special; we're discussing whether it can be considered thinking.