Comment by fragmede

1 year ago

https://chatgpt.com/share/66e3f9e1-2cb4-8009-83ce-090068b163...

Keep up, that was last week's gotcha, with the old model.

My point is that the previous "intelligent" model failed at simple tasks, and the new one will also fail at simple tasks.

That's ok for humans but not for machines.

  • ‘That's ok for humans but not for machines.’

    This is a really interesting bias. I mean, I understand, I feel that way too… but if you think about it, it might be telling us something about intelligence itself.

    We want to make machines that act more like humans: we did that, and we are now upset that they are just as flaky and unreliable as drunk Uncle Bob. I have encountered plenty of people who aren't as good at being accurate, or even as interesting to talk to, as a 70b model. Sure, LLMs make mistakes most humans would not, but humans also make mistakes most LLMs would not.

    (I am not trying to equate humans and LLMs, just to be clear) (also, why isn’t equivalate a word?)

    It turns out we want machines that are extremely reliable, cooperative, responsible and knowledgeable. We yearn to be obsolete.

    We want machines that are better than us.

    The definition of AGI has drifted from meaning “able to broadly solve problems (the class of which) the system designers did not anticipate” to “must be usefully intelligent at the same level as a bright, well-educated person”.

    Where along the line did we forget that dog-level intelligence was a far-out-of-reach goal, until suddenly it wasn’t?