Comment by ilrwbwrkhv
2 years ago
There is no "superintelligence" or "AGI".
People are falling for marketing gimmicks.
These models will remain in the word vector similarity phase forever. Until we understand consciousness, we will not crack AGI, and then it won't take brute-forcing large swaths of data, but tiny amounts.
So there is nothing to worry about. These "apps" might be as popular as Excel, but they will go no further.
Agreed. The AI of our day (the transformer + huge amounts of questionably acquired data + significant cloud computing power) has the spotlight it has because it is readily commoditized and massively profitable, not because it is an amazing scientific breakthrough or a significant milestone toward AGI, superintelligence, the benevolent Skynet or whatever.
The association with higher AI goals is merely a mixture of pure marketing and LLM company executives getting high on their own supply.
It's a massive attractor of investment funding. Is it proven to be massively profitable?
I read in Forbes about a construction company that used AI-related tech to manage the logistics and planning. They claimed that they were saving upwards of 20% of their costs because everything was managed more accurately. (Maybe they had little control boxes on their workers too; I don't know.)
The point I am trying to make is that the benefits of AI-related tech are likely to be quite pervasive, and we should be looking at what corporations are actually doing. Sort of like what this poem says:
For while the tired waves, vainly breaking / Seem here no painful inch to gain, / Far back through creeks and inlets making, / Comes silent, flooding in, the main.
1 reply →
It's definitely massively profitable for Nvidia...
Being profitable is probably a matter of time and technology maturing. Think about the first iPhone, Windows 95, LCD/LEDs, etc.
The potential of a sufficiently intelligent agent, probably something very close to a really good AGI, albeit still not an ASI, could be measured in billions upon billions of mostly immediate return on investment. LLMs already fall well within the definition of hard AI, and there are already strong signs they could be something like a "soft AGI".
If, by chance, you're the first to reach ASI, all bets are off: you just won everything on the table.
Hence, you have this technology, LLMs, and most of the experts in the field (in the world, blabla) say "if you throw more data into it, it becomes more intelligent", so you "just" assemble an AI team and start training bigger, better LLMs, ASAP, AFAP.
More or less, this is the reasoning behind the investments, not counting the typical pyramid schemes of investment in hyped new stuff.
1 reply →
> Is it proven to be massively profitable?
Oh sure, yes. For Nvidia.
Gold rush, shovels...
If you described ChatGPT to me 10 years ago, I would have said it was AGI.
Probably. If you had shown ChatGPT to the LessWrong folks a decade ago, most would likely have called it AGI and said it was far too dangerous to share with the public, and that anyone who thought otherwise was a dangerous madman.
I don't feel that much has changed in the past 10 years. I would have done the same thing then as now, spent a month captivated by the crystal ball until I realized it was just refracting my words back at me.
> These models will remain in the word vector similarity phase forever. Until we understand consciousness, we will not crack AGI, and then it won't take brute-forcing large swaths of data, but tiny amounts.
Did evolution understand consciousness?
> So there is nothing to worry about.
Is COVID conscious?
I don't think the AI has to be "sentient" in order to be a threat.
https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...
Even plain bad software can be an existential threat if it sits behind sensitive systems. A neural network is bad software for critical systems.
> understand consciousness
We do not define intelligence as something tied to consciousness. Being able to reason well suffices.
That is something I hear over and over, particularly as a rebuttal to the argument that an LLM is just a stochastic parrot. Calling it "good enough" doesn't mean anything; it just allows the person saying it to disengage from the substance of the debate. It either reasons or it doesn't, and today it categorically does not.
The remark that you do not need consciousness to achieve reasoning does not become false just because a subset of people see in LLMs something that appears to them to be reasoning.
I do not really understand who you are accusing of a «good enough» stance: we have never set "consciousness" as a goal (cats already have it and we do not seem to need more), we just want something that reasons. (And that reasons excellently well.)
The apparent fact that LLMs do not reason is simply irrelevant to an implementation of AGI.
The original poster wrote that understanding consciousness would be required to «crack AGI», and no, we say: we want AGI as a superhuman reasoner, and consciousness seems irrelevant to that.
4 replies →
> There is no "superintelligence" or "AGI"
There is intelligence. The current state-of-the-art LLM technology produces output analogous to that of natural intelligences.
These things are already intelligent.
Saying that LLMs aren't producing "intelligence" is like saying planes actually don't fly because they are not flapping their wings like birds.
If you run fast enough, you'll end up flying at some point.
Maybe "intelligence" is just enough statistics and pattern prediction, till the point you just say "this thing is intelligent".
> There is intelligence.
There isn't
> Maybe "intelligence" is just enough statistics and pattern prediction, till the point you just say "this thing is intelligent".
Even the most stupid people can usually ask questions and correct their answers. LLMs are incapable of that. They can regurgitate data and spew a lot of generated bullshit, some of which is correct. Doesn't make them intelligent.
Here's a prime example that appeared in my feed today: https://x.com/darthsidius1985/status/1802423010886058254 And all the things wrong with it: https://x.com/yvanspijk/status/1802468042858737972 and https://x.com/yvanspijk/status/1802468708193124571
Intelligent it is not
> Even the most stupid people can usually ask questions and correct their answers. LLMs are incapable of that. They can regurgitate data and spew a lot of generated bullshit, some of which is correct. Doesn't make them intelligent.
The way the current interface for most models works can result in this kind of output. The quality of the output - even in the latest models - doesn't necessarily reflect the fidelity of the world model inside the LLM, nor the level of insight it can have about a given topic ("what is the etymology of the word cat").
The usual approach today is "one shot": you get one shot at the prompt, the model returns its output, no second thoughts allowed, no recursion at all. I think this is a trade-off to get the cheapest feasible good answer, mostly because the models manage to output reasonably good answers most of the time. But then you get a percentage of hallucinations and made-up stuff.
In a lab, that kind of output could actually be entirely absent. Have you noticed that the prompt interfaces never give an empty or half-empty answer? "I don't know", "I don't know for sure", "I kinda know, but it's probably a shaky answer", or "I could answer this, but I'd need to google some additional data first", etc.
There's another one: you almost never get asked a question back by the model, even though the models can actually chat with you about complex topics related to your prompt. That's obvious when you're chatting with some chatbot, but not so obvious when you're asking it for a specific answer on a complex topic.
In a lab, with recursion enabled, the models could probably get to true answers most of the time, including the fabulous "I don't know". And they could be allowed to ask back as a valid answer, requesting additional human input and relying on live RLHF right there (quite technically feasible to achieve, though not economically sound if you have a public prompt GUI facing the whole planet's inputs).
But it wouldn't make much economic sense to make a prompt interface like that public.
I also think it could have a really heavy impact on public opinion if people got to see a model that never makes a mistake because it can answer "I don't know" or ask you back for extra details about your prompt, so there you have another reason not to build prompting that way.
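To make the idea concrete, here is a rough sketch of what such a multi-pass loop could look like. It is only an illustration of the concept, not how any real product works: ask_model stands in for whatever single-shot text-in/text-out call you have available, and the UNSURE/NEED_INPUT markers are made up for the example.

    # Sketch of a multi-pass prompting loop: the model may answer, admit
    # uncertainty, or ask the human for more detail before committing.
    # `ask_model` is whatever single-shot text-in/text-out call you have.
    def answer_with_recursion(question, ask_model, max_rounds=3):
        context = question
        for _ in range(max_rounds):
            reply = ask_model(
                "Answer the question below. If you are not confident, reply "
                "UNSURE. If you need more details, reply NEED_INPUT followed "
                "by your question for the user.\n\n" + context
            )
            if reply.startswith("NEED_INPUT"):
                # Let the model ask back instead of guessing.
                extra = input(reply[len("NEED_INPUT"):].strip() + " ")
                context += "\nAdditional detail from user: " + extra
            elif reply.startswith("UNSURE"):
                return "I don't know."
            else:
                # Second pass: have the model check its own draft answer.
                verdict = ask_model(
                    "Is the following answer to the question correct? "
                    "Reply YES or NO.\n\nQuestion: " + question +
                    "\nAnswer: " + reply
                )
                if verdict.strip().upper().startswith("YES"):
                    return reply
                context += "\nA previous draft was judged wrong: " + reply
        return "I don't know."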
3 replies →
> These models will remain in the word vector similarity phase forever.
Forever? The same AI techniques are already being applied to analyzing and understanding images and video; after that comes the ability to control robot hands and interact with the world, and work on that is also ongoing.
> Until we understand consciousness, we will not crack AGI …
We did not fully understand how bird bodies work, yet that did not stop the development of machines that fly. Why is an understanding of consciousness necessary to “crack AGI”?
No one is saying there is. Just that we've reached some big milestones recently which could help get us there even if it's only by increased investment in AI as a whole, rather than the current models being part of a larger AGI.
Imagine a system that can do DNS redirection, MITM, deliver keyloggers, forge authorizations and place holds on all your bank accounts, clone websites, clone voices, and fake phone and video calls with people you don’t see often. It can’t physically kill you yet, but it can make you lose your mind, which imo seems worse than a quick death.
Why would all of these systems be connected to a single AI? I feel like you are describing something criminal humans do through social engineering. How do you foresee this AI finding itself in that position?
> Why would all of these systems be connected to a single AI?
Because someone decides to connect them: either unintentionally, or intentionally for personal gain, or, more likely, for corporate purposes that seem "reasonable" or "profitable" at the time, with the unintended consequences never thought through.
Look at that recent article linked on HN about how MSFT allowed a huge security flaw in AD for years in order to not "rock the boat" and gain a large government contract. AI will be no different.
I foresee it ending up in that position because people build it that way. Perhaps the same criminal humans you mention, perhaps other actors with other motivations.