In normal English usage, a quantum leap is a step-change, a near-discrete rather than continuous improvement, a large singular advance.
Given we are not talking about state changes in electrons, there is nothing wrong with this description of ChatGPT - it truly does feel like a massive advance to anyone who has even cursorily played with it.
For example, you can ask it questions like "Who was born first, Margaret Thatcher or George Bush?" and "Who was born first, Tony Blair or George Bush?" and in each instance it infers which George Bush you are talking about.
I honestly couldn't imagine something like this being this good only three years ago.
(1) You are correct that putting either of those questions into Google doesn't get you anywhere near the answer I imagine ChatGPT gives (as you point out). Google does "infer" which Bush you are talking about, but there isn't a clear "this person is older" answer; you basically have to dive into the Wikipedia pages to get it.
(2) Counter. I asked it the other day "how many movies were Tom Hanks and Meg Ryan in together" and the answer ChatGPT gave was 2 ... not only is that wrong, it is astonishingly wrong (IMO). You could be forgiven for forgetting Ithaca from 2015. I could forgive ChatGPT for forgetting that one. But You've Got Mail? That's a very odd omission. So much so that I'm genuinely curious how it could possibly get the answer wrong in that way. And for the record, Google presents the correct answer (4) in a callout box right at the top, a result and presentation very close to what one would expect from ChatGPT.
I don't know about other use cases like generating stories (or, tangentially, art of any kind) for inspiration, etc. But as a search engine, things like ChatGPT NEED to have attributions. If I ask the question "Does a submarine appear in the movie Battlefield Earth?" it will confidently answer "no". I _think_ that answer is right, but I'm not really all that confident it is. It needs to present the reasons it thinks that is right. Something like: "No. I believe this because (1) the keyword 'submarine' doesn't appear in the IMDb keywords (<source>), (2) the word 'submarine' doesn't appear in the Wikipedia plot synopsis (<source>), (3) the film takes place in Denver (<source>), which is landlocked, making it unlikely a submarine would be found in that location during the course of the film."
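To make the shape of that idea concrete, here's a minimal sketch of what an attributed answer could look like as a data structure. Everything here is hypothetical (the class names, the `render` format, the placeholder `<source>` strings); it just illustrates "answer plus enumerated evidence, each with a citation" rather than any real ChatGPT or search-engine API.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str   # the supporting observation
    source: str  # citation or URL backing the claim

@dataclass
class AttributedAnswer:
    answer: str
    evidence: list[Evidence]

    def render(self) -> str:
        # Answer first, then numbered evidence with sources in brackets.
        lines = [self.answer]
        for i, e in enumerate(self.evidence, start=1):
            lines.append(f"({i}) {e.claim} [{e.source}]")
        return "\n".join(lines)

# Hypothetical Battlefield Earth example from above; sources left as
# placeholders rather than invented URLs.
ans = AttributedAnswer(
    answer="No.",
    evidence=[
        Evidence("the keyword 'submarine' doesn't appear in the IMDb keywords",
                 "<source>"),
        Evidence("the word 'submarine' doesn't appear in the Wikipedia plot synopsis",
                 "<source>"),
        Evidence("the film takes place in Denver, which is landlocked",
                 "<source>"),
    ],
)
print(ans.render())
```

The point of structuring it this way is that each claim is independently checkable: a wrong answer (like the Hanks/Ryan count) would at least expose which piece of evidence was missing or bad.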
The Tom Hanks / Meg Ryan question/answer would at least be more interesting if it explained how it managed to be so uniquely incorrect. That question will haunt me though ... there's some rule about this, right? Asking about something you have above-average knowledge of and watching someone confidently answer it incorrectly. How am I supposed to ever trust ChatGPT again about movie queries?