It can be both bullshit and utterly astounding.
In terms of closing the gap between AI hype and useful general-purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.
It's just not a daily driver for technical experts yet.
> quantum leap
Ironically accurate.
In normal English usage, a quantum leap is a step-change, a near-discrete rather than continuous improvement, a large singular advance.
Given we are not talking about state changes in electrons, there is nothing wrong with this description of ChatGPT - it truly does feel like a massive advance to anyone who has even cursorily played with it.
For example, you can ask it questions like "Who was born first, Margaret Thatcher or George Bush?" and "Who was born first, Tony Blair or George Bush?" and in each instance it infers which George Bush you are talking about.
I honestly couldn't imagine something like this being this good only three years ago.
The biggest thing I’ve learned from ChatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.
Because they are all ill-defined in the way they are used in common language. Hell, we have trouble describing what they are, especially in a scientific, fact-based setting.
Before this point in history we accepted 'I am that I am' because there wasn't any challenger to the title. Now that we are calling this into question, we realize our definitions may not work well.
>The biggest thing I’ve learned from ChatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.
Well, I'm no fan of ChatGPT. But it appears most people are worse than ChatGPT, because they just regurgitate what they hear with no thought or contemplation. So you can't really blame average folks who struggle with the concepts of intelligence/understanding that you mention.
Which should be no surprise, as people have been grappling with these ideas for centuries, and we still don't have any definitive idea of what consciousness/sentience truly is. What I find interesting is that at one point the Turing test seemed to be the gold standard for intelligence, but ChatGPT could pass it with flying colors. So how exactly will we know if/when true intelligence does emerge?
Well, my point wasn’t that there is a good definition of consciousness.
My point was that “consciousness” and “intelligence” are very different things. One does not imply the other.
Consciousness is about self-reflection. Intelligence is about insight and/or problem solving. The two are often correlated, especially in animals and above all in humans, but they’re not the same thing at all.
“Is ChatGPT conscious” is a totally different question than “is ChatGPT intelligent”.
We will know ChatGPT is intelligent when it passes our tests of intelligence, which are imperfect but at least directionally correct.
I have no idea if/when we will know whether ChatGPT is conscious, because we don’t really have good definitions of consciousness, let alone tests, as you note.
The most annoying thing to me is people thinking AI wants things and gets happy and sad. It doesn't have a mammalian or reptilian brain. It just holds a mirror up to humanity generally via matrix math and probability.
Well said. It is a mistake to anthropomorphize large language models; they really hate that.
The only problem with the “ChatGPT is bullshit” argument is that it is only half true.
ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.
When provided with an analytic prompt, it is reliably a translator.
Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...
> ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.
sounds like most people tbf
There are people who in many situations use as much critical thought as ChatGPT does.
ChatGPT isn't as good as a human who puts in a lot of effort, but in many jobs it can easily outperform humans who don't care very much.
I like this take. It has many clear applications already, and LLMs are still only in their infancy. I both criticize and use ChatGPT at work. It has flaws and it has advantages. Calling it bullshit or "ELIZA" is a short-sighted view that overvalues the importance of AGI and misses what we're already getting.
But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.
How are you using it at work?
I've used it to proof emails for grammar, and it's done ok.
I'll also throw random programming questions into it, and it's been hit and miss. Stack Overflow is probably still faster, and I like seeing the discussion. The problem with ChatGPT right now is that it delivers answers as if it were certain, even when it's often wrong.
I can see the benefits of this interaction model (basically summarizing everything from a search into what feels like a person talking back), but I don't see it justifying change-the-world levels of hype at the moment.
I also wonder if LLMs will get worse over time through error propagation, as more and more of their training content is generated by other LLMs.
I’m not the person you replied to, but I’ve been using OpenAI’s API a lot for work. Some examples (a rough sketch of the first one follows the list):
- Embedding free-text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends
- Embedding free-text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures.
- Analyzing employee development goals and locating common themes. Then using this to identify gaps we can fill in our training offerings.
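To make the first bullet concrete, here is a minimal sketch of the embed → cluster → auto-label pipeline, using the openai and scikit-learn Python packages. This is not the commenter's actual code; the model names, cluster count, sample data, and labeling prompt are all illustrative assumptions.

    # A rough sketch (not the parent's actual code): embed free-text
    # safety observations, cluster them, and have a completion model
    # label each cluster. Model names and cluster count are assumed.
    from openai import OpenAI
    from sklearn.cluster import KMeans

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    observations = [
        "Worker handling solvent without gloves",
        "Cable run across walkway in bay 3, trip hazard",
        # ... more free-text safety observations ...
    ]

    # 1. Embed the free text.
    resp = client.embeddings.create(
        model="text-embedding-3-small",  # assumed model choice
        input=observations,
    )
    vectors = [item.embedding for item in resp.data]

    # 2. Cluster the embeddings.
    n_clusters = min(8, len(observations))  # 8 is an arbitrary choice
    kmeans = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0)
    cluster_ids = kmeans.fit_predict(vectors)

    # 3. Ask a completion model to name each cluster from a sample of its members.
    for c in range(n_clusters):
        sample = [o for o, k in zip(observations, cluster_ids) if k == c][:10]
        chat = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[{
                "role": "user",
                "content": "Give a short category label for these safety observations:\n"
                + "\n".join(sample),
            }],
        )
        print(c, chat.choices[0].message.content)

The second bullet is the same idea with supervision: feed the embeddings of the human-categorized failures into any ordinary classifier (e.g. scikit-learn's LogisticRegression) and use it to predict categories for the rest.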