Comment by brookst

2 years ago

Is it possible for humans to be wrong about something, without lying?

I don't agree with the argument that "if a human can fail in this way, we should overlook this failing in our tooling as well." After all, that's what LLMs are: tools, like any other piece of software.

If a tool is broken, you seek to fix it. You don't just say "ah yeah it's a broken tool, but it's better than nothing!"

All these LLM releases are amazing pieces of technology, and the progress lately is incredible. But don't rag on people critiquing it; how else will it get better? Certainly not by accepting its failings and overlooking them.

  • “Broken” is a word used by pedants. A broken tool doesn’t work. This works, most of the time.

    Is a drug “broken” because it only cures a disease 80% of the time?

    The framing most critics seem to have is “it must be perfect”.

    It’s ok though, their negativity just means they’ll miss out on using a transformative technology. No skin off the rest of us.

  • I think the comparison to humans is just totally useless. It isn’t even necessarily the case that, as a tool, it should be better than humans at the thing it does. My monitor is on an arm; the arm is pretty bad at positioning things compared to all the different positions my human arms could provide. But it is good enough, and it does it tirelessly. A tool is fit for a purpose or not; its performance relative to humans is basically irrelevant.

    I think the folks making these tools tend to oversell their capabilities because they want us to imagine the applications we can come up with for them. They aren’t selling the tool, they are selling the ability to make tools based on their platform, which means they need to be speculative about the types of things their platform might enable.

  • If a broken tool is useful, do you not use it because it is broken?

    Overpowered LLMs like GPT-4 are both broken (according to how you are defining it) and useful -- they're just not the idealized version of the tool.

    • Maybe not, if it's the case that your use of the broken tool would result in the eventual undoing of your work. Like, let's say your staple gun is defective and doesn't shoot the staples deep enough, but it still shoots. You can keep using the gun, but it's not going to actually do its job. It seems useful and functional, but it isn't, and it's liable to create a much bigger mess.

  • I think you're reading a lot into GP's comment that isn't there. I don't see any ragging on people critiquing it. I think it's perfectly consistent to believe we should continually improve on these things while also recognizing that things can be useful without being perfect.

    • I don't think people are disputing that things can be useful without being perfect. My point was that when things aren't perfect, they can also lead to defects that would not otherwise be perceived based upon the belief that the tool was otherwise working at least adequately. Would you use a staple gun if you weren't sure it was actually working? If it's something you don't know a lot about, how can you be sure it's working adequately?

Lying implies an intent to deceive, or giving a response despite having better knowledge, which I'd argue LLMs can't do, at least not yet. It just requires a more robust theory of mind than I'd consider them realistically capable of.

They might have been trained/prompted with misinformation, but then it's the people doing the training/prompting who are lying, still not the LLM.

  • To the question of whether it could have intent to deceive: going to the dictionary, we find that intent essentially means a plan (and computer software in general could be described as a plan being executed), and deceive essentially means saying something false. Furthermore, its plan is to talk in ways that humans talk, emulating their intelligence, and some intelligent human speech is false. Therefore, I do believe it can lie, and it will whenever, statistically speaking, a human also typically would.

    Perhaps some humans never lie, but should the LLM be trained only on that tiny slice of people? It's part of life, even non-human life! Evolution works based on things lying: natural camouflage, for example. Do octopuses and chameleons "lie" when they change color to fake out predators? They have intent to deceive!

Most humans I professionally interact with don't double down on their mistakes when presented with evidence to the contrary.

The ones that do are people I do my best to avoid interacting with.

LLMs act more like the latter than the former.