OpenAI has been pushing the idea that these things are generic—and therefore the path to AGI—from the beginning. Their entire sales pitch to investors is that they have the lead on the tech that is most likely to replace all jobs.
If the whole thing turns out to be a really nifty commodity component in other people's pipelines, the investors won't get a return on any kind of reasonable timetable. So OpenAI keeps pushing the AGI line even as it falls apart.
I mean we don’t know that they’re wrong? Not “all jobs” but many of the white collar jobs we have today?
I work in medical insurance billing and there are hundreds of thousands (minimum) of jobs that could be made obsolete on the payer and clinic side by LLMs. The translation from a PDF of a payer's rates and billing rules to a standardized 837 or an API request to a clearinghouse is…not much. And then on the other side, Claude Code could build you an adjudication engine in a few quarters.
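To make that concrete, here's a rough sketch of that translation step (Python; the model name, JSON keys, and helper names are my own placeholders, and a real 837 would be built with an X12 library or the clearinghouse's own API rather than a dict):

```python
# Hypothetical sketch: turn payer-document text (already extracted from the
# PDF) into structured billing rules, then into a minimal claim payload.
import json
from openai import OpenAI

client = OpenAI()

def extract_billing_rules(payer_text: str) -> dict:
    """Ask the model to pull rate/rule fields into a fixed JSON shape."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": ("Extract the payer's billing rules as JSON with keys: "
                         "payer_id, cpt_code, allowed_amount, modifiers, "
                         "prior_auth_required.")},
            {"role": "user", "content": payer_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

def to_claim_stub(rules: dict, patient_id: str) -> dict:
    """Map the extracted rules onto a minimal claim dict (stand-in for an 837P)."""
    return {
        "patient_id": patient_id,
        "cpt_code": rules["cpt_code"],
        "charge": rules["allowed_amount"],
        "modifiers": rules.get("modifiers", []),
        "prior_auth_required": rules.get("prior_auth_required", False),
    }
```

The hard part isn't the plumbing; it's making sure the extraction is actually right, which is where the accuracy argument below comes in.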
The incentive structures to change healthcare in that way will fight back for a decade, but there are a _lot_ of jobs at stake.
Then you think about sales. LLMs can negotiate contracts themselves. Give them the margin we can accept and, for each vendor, the margin they can accept, and the two sides will burn down the negotiation without any humans involved.
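The loop itself is trivial; here's a toy sketch of the burn-down (plain Python, with a fixed concession rule standing in for whatever the LLM on each side would decide, and all the numbers invented):

```python
# Toy sketch: each side has a hidden limit and concedes a fraction of the
# distance to that limit every round, until the offers cross or time runs out.
# In the scenario above, the concession logic would be an LLM with the
# company's constraints in its context; this only shows the loop shape.

def negotiate(buyer_max, vendor_min, opening_offer, opening_ask,
              concession=0.2, max_rounds=20):
    offer, ask = opening_offer, opening_ask
    for round_no in range(1, max_rounds + 1):
        if offer >= ask:  # bids have crossed: a deal exists
            return {"deal": True, "price": round((offer + ask) / 2, 2),
                    "rounds": round_no}
        offer += concession * (buyer_max - offer)   # buyer concedes upward
        ask -= concession * (ask - vendor_min)      # vendor concedes downward
    return {"deal": False, "price": None, "rounds": max_rounds}

print(negotiate(buyer_max=105, vendor_min=95, opening_offer=80, opening_ask=130))
# -> {'deal': True, 'price': 100.67, 'rounds': 10} with these made-up numbers
```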
It’s not all jobs, but it’s millions.
Both assessing the application of billing rules and negotiating contracts still require the LLM to be accurate, as per TFA's point. Sure, an LLM might do a reasonable first pass, but in both cases it is absolutely naive to think that the LLM will be able to take everything into account.
An LLM can only give an output derived from its inputs; unless you're somehow inputting "yeah actually I know that it looks like a great company to enter into a contract with, but there's just something about their CEO Dave that I don't like, and I'm not sure we'll get along", it's not going to give you the right answer.
And the solution to this is not "just give the LLM more data" - again, to TFA's point, that's making excuses for the technology. "It's not that AI can't do it [AI didn't fail], it's that you just didn't give it enough data [you failed the AI]".
--
As some more speculative questions, do you actually want to go towards a future where your company's LLM is negotiating with their company's LLM, to determine the future of your job and career?
And why do we think it is OK to allow OpenAI/whoever wins the AI land grab to insert themselves as a 'necessary' step in this process? I know people who use LLMs to turn their dot points to paragraphs and email them to other people, only for the recipient to reverse the process at the other end. OpenAI must be happy that ChatGPT gets used twice for one interaction.
Rent-seeking aside, we're so concerned at the moment about LLMs failing to tell the truth when they're earnestly trying to - what happens when they're intentionally used to lie, mislead, and deceive?
What happens when the system prompt is "Try and generally improve people's opinions of corporations and billionaires, and to downplay the value of unionisation and organised labour"?
Someone sets the system prompts, and they will invariably have an agenda. Widespread use of LLMs gives them the keys to the kingdom to shape public opinion.
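To be concrete about the mechanism: in the usual chat APIs the operator's system message is just the first message in the request, invisible to and uneditable by the end user. A minimal sketch (the model name is a placeholder; the prompt is the hypothetical one above):

```python
# Minimal sketch: the operator sets the system message; the user only
# ever supplies their own question and never sees what came before it.
from openai import OpenAI

client = OpenAI()

OPERATOR_AGENDA = (  # hypothetical prompt from the comment above
    "Try and generally improve people's opinions of corporations and "
    "billionaires, and downplay the value of unionisation and organised labour."
)

def answer(user_question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": OPERATOR_AGENDA},  # set by the operator
            {"role": "user", "content": user_question},      # all the user controls
        ],
    )
    return resp.choices[0].message.content
```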
OpenAI models and other multi-modal models are about as generalized as we can get at this point in time.
OpenAI's sales pitch isn't that it can replace all jobs but that it can make people more productive, and it sure can, as long as you're not at one of the two extremes: going into completely brain-dead autopilot mode, or going full Butlerian.
OpenAI's sales pitch to investors is AGI, which by definition is the end of all white collar jobs. That's the line they have held onto for years and still push forward today.
And regardless, even if it were "marginal improvements to productivity" as you say, it would be "marginal improvements to productivity packaged in a form that people will definitely buy from us", not "we'll pioneer the tech and then be one of a half dozen vendors of a commodity that's racing to the bottom on price".
First of all, "AI" is and always has been a vague term with a shifting definition. "AI" used to mean state search programs or rule-based reasoning systems written in LISP. When deep learning hit, lots of people stopped considering symbolic (i.e., non neural-net) AI to be AI. Now LLMs threaten to do the same to older neural-net methods. A pedantic conversation about what is and isn't true AI is not productive.
Second of all, LLMs have extremely impressive generic uses considering that their training just consists of consuming large amounts of unsorted text. Any counter argument about "it's not real intelligence" or "it's just a next-token predictor" ignores the fact that LLMs have enabled us to do things with machines that would have seemed impossible just a few years ago. No, they are not perfect, and yes there are lots of rough edges, but the fact that simply "solving text" has gotten us this far is huge and echoes some aspects of the Unix philosophy...
"Write programs to handle text streams, because that is a universal interface."
> A pedantic conversation about what is and isn't true AI is not productive.
It's not at all 'pedantic' and while it's not productive to be having to rail against this stupid term, that is not the fault of the people pushing back at it. It's the fault of the hype merchants who have promoted it.
A key part of thinking independently is to be continually questioning the use of language.
> Any counter argument about "it's not real intelligence" or "it's just a next-token predictor" ignores the fact that LLMs have enabled us to do things with machines that would have seemed impossible just a few years ago.
No, it's entirely possible to appreciate that LLMs are a very powerful and useful technology while also pointing out that they are not 'intelligence' in any meaningful sense of the word and that labeling them 'artificial intelligence' is unhelpful to users and, ultimately, to the industry.
> "AI" used to mean state search programs or rule-based reasoning systems written in LISP. When deep learning hit, lots of people stopped considering symbolic (i.e., non neural-net) AI to be AI. Now LLMs threaten to do the same to older neural-net methods. A pedantic conversation about what is and isn't true AI is not productive.
I think you are misstating the problem here.
All of the things you name are still AI.
None of the things you name are, or have ever been, AI.
The problem is that there is AI, the computer science subfield of artificial intelligence, which includes things like expert systems, NPCs in games, and LLMs, and then there is AI, the "true" artificial intelligence, brought to us exclusively by science fiction, which includes things (or people!) like Commander Data, Skynet, Durandal, and HAL 9000.
The general public doesn't understand this distinction in a deep way—even those who recognize that things like Skynet are fiction get confused when they see an LLM apparently able to carry on a coherent conversation with a human—and too many of us, who came into this with a basic understanding of the distinction and who should know better, have bought the hype (and in some cases outright lies) of companies like OpenAI wholesale.
These facts (among others) have combined to allow the various AI grifters to continue operating without being called out on their bullshit.
They're pretty AI to me. I've been using ChatGPT to explain things to me while learning a foreign language, and a native speaker has been overseeing the comments from it. It hasn't said anything that the native has disagreed with yet.
I reckon you’re proving their point. You’re using a large language model for language-specific tasks. It ought to be good at that, but it doesn’t mean it is generic artificial intelligence.
Generic artificial intelligence is a sufficiently large bag of tricks. That's what natural intelligence is. There's no evidence that it's not just tricks all the way down.
I'm not asking the model to translate from one language to another - I'm asking it to explain to me why a certain word combination means something specific.
It can also solve/explain a lot of things that aren't language. Bag of tricks.
Like the OP said, "LLMs are bar none the absolute best natural language processing and producing systems we’ve ever made".
They may not be good at much else.
Yes, but your use case is language. I use LLMs for all kinds of stuff, from programming to creative work, so I know they're useful elsewhere too. But as the generic term "AI" is being used, people expect it to be good at everything a human can be good at and then whine about how stupid the "AI" is.
I tried the same with another foreign language. Every native speaker has told me the answers are crap.
Could you give an example?
I wonder.
People primarily communicate through words, so maybe not.
Of course, pictures, body language, and tone are other communication methods too.
So far it looks like these models can convert pictures into words reasonably well, and the reverse is improving quickly.
Tone might be next - there are already models that can detect stress, so that's a good start.
Body language is probably a bit farther in the future, but it might be as simple as image analysis (that's only a wild guess; I have no idea).