Comment by Al-Khwarizmi
11 hours ago
> OpenAI enabled its users to have a sext conversation.
Considering that this is only with verified adults, how is this "evil"? I find it more evil to treat full-grown adult users as kids and heavily censor their use of LLMs.
(Not to detract from the rest of your post, with which I agree).
Ok so for that matter let's pose this hypothetical... How would you feel if Disney or Nintendo produced adult content for verified adults?
Why should anyone find that offensive, or get offended over it? I really do not understand what the issue would be.
My point is not about morality. It's about ROI focus: OpenAI can't and won't ever return anything remotely close to what's been invested. Adult content is not getting them closer to profitability.
And if anyone believes the AGI hyperbole, oh boy I have a bridge and a mountain to sell.
LLM tech will never lead to AGI. You need a tech that mimics synapses. It doesn’t exist.
I also have a hard time understanding how AGI will magically appear.
LLMs have their name for a reason: they model human language (output given an input) from human text (and other artifacts).
And now the idea seems to be that if we do more of it, or make it even larger, it will stop being a model of human language generation? Or that human language generation is all there is to AGI?
I wish someone could explain the claim to me...
Because the first couple of major iterations looked like exponential improvements, and because VC/private money is stupid, they assumed the trend must continue on the same curve.
And because there's something in the human mind that has a very strong reaction to being talked to, and because LLMs are specifically good at mimicking plausible human speech patterns, ChatGPT really, really hooked a lot of people (including said VC/private money people).
LLMs aren't language models; they're a general-purpose computing paradigm. LLMs are circuit builders: the converged parameters define pathways through the architecture that pick out specific programs. Or, as Karpathy puts it, LLMs are a differentiable computer [1]. Training an LLM discovers programs that reproduce the input sequence well. Roughly the same architecture can generate passable images, music, or even video.
It's not that language generation is all there is to AGI, but that to sufficiently model text about the wide range of human experiences, we need to model those experiences. LLMs model the world to varying degrees, and perhaps, in the limit of unbounded training data, they can model a human's perspective in it as well.
[1] https://x.com/karpathy/status/1582807367988654081
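(To make the "same architecture, different modality" point concrete, here's a minimal, hypothetical PyTorch sketch; nothing here is from any real model's stack. The shared block only ever sees sequences of vectors, so whether those vectors came from text tokens or image-patch codes is decided entirely by the embedding front-end. Causal masking and training are omitted for brevity.)

```python
# Hypothetical sketch: one shared transformer block, two modality-specific embeddings.
import torch
import torch.nn as nn

d_model, n_heads = 256, 4
vocab_text, vocab_image = 50_000, 8_192      # e.g. BPE ids vs. VQ patch codes

block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)  # the shared "circuit"
embed_text = nn.Embedding(vocab_text, d_model)    # modality-specific front-ends
embed_image = nn.Embedding(vocab_image, d_model)

text_tokens = torch.randint(0, vocab_text, (1, 32))
image_tokens = torch.randint(0, vocab_image, (1, 64))

# The identical parameters process both; the learned "program" lives in the
# weights, not in anything text-specific.
out_text = block(embed_text(text_tokens))
out_image = block(embed_image(image_tokens))
print(out_text.shape, out_image.shape)   # torch.Size([1, 32, 256]) torch.Size([1, 64, 256])
```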
> LLM tech will never lead to AGI.
I suspect this may be one of those predictions that don't quite pan out. I am not saying AGI from LLMs is a given, but "never" seems about as unlikely.
...Why?
> LLM tech will never lead to AGI. You need a tech that mimics synapses. It doesn't exist.
Why would you think synapses (or their dynamics) are required for AGI rather than being incidental owing to the constraints of biology?
(This discussion never goes anywhere productive but I can't help myself from asking)
I don't see what is so complicated about modelling a synapse. Doesn't AlmostAnyNonLinearFunc(sum of weighted inputs) work well enough?
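(For what it's worth, here's the textbook abstraction that question refers to, as a minimal sketch; the numbers and the choice of tanh are illustrative, not a claim about any particular network.)

```python
# Minimal sketch of the standard artificial-neuron abstraction:
# output = nonlinearity(weighted sum of inputs + bias). Values are illustrative.
import numpy as np

def neuron(inputs, weights, bias, nonlinearity=np.tanh):
    """One unit: almost any squashing nonlinearity works in practice."""
    return float(nonlinearity(np.dot(weights, inputs) + bias))

x = np.array([0.5, -1.2, 3.0])    # incoming activations
w = np.array([0.8, 0.1, -0.4])    # per-connection weights (the "synapses")
print(neuron(x, w, bias=0.05))

# Whether this sum-and-squash abstraction captures what biological synapses
# actually do (timing, neuromodulation, plasticity) is the open question above.
```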
Yeah, the disapproval/disgust I'm seeing everywhere, from pretty much every side that I keep my eye on, about OpenAI enabling erotica generation with ChatGPT is so frustrating, because it seems like just Puritanism and censorship, and a desire to treat adults like children, as you say.
The issues that these pseudo-relationships can cause have barely begun to be discussed, nevermind studied and understood.
We know that they exist, and not only for people with known mental health issues. And that's all we know. But the industry will happily brush that aside in order to drive up those sweet MAU and MRR numbers. One of those "I'm willing to sacrifice [a percentage of the population] for market share and profit" situations.
Edits: grammar
People form parasocial relationships with AI already with content restrictions in place. It seems to me that that is a separate issue entirely.
That's kind of a patronizing position, or maybe a conservative one (in US terms). There can be harm, there can be good; nobody can say for sure at this moment which outweighs the other.
Do you feel the same about, say, alcohol and cigarettes? We allow those, heck, we encourage them in some situations for adults, yet they destroy whole societies (look at Russia with alcohol, or at Indonesia with cigarettes if you haven't been there).
I see a lot in the parent's topic to discuss and study, but nothing to ban.
It is not bad per se, but in my opinion it shows that OpenAI is desperately trying to stop bleeding money.
I mean, their issue isn't that not enough people are using ChatGPT and they need to enable new user modalities to draw more in; they already have something like 800 million MAU. Their issue is that most of their tokens are generated for free right now, both by those users and by things like Copilot, and they're building stupidly huge, unnecessary data centers to scale their way to "AGI." So yeah, everyone says this looks like a sign of desperation, but I just don't see it at all, because it would solve a problem they don't actually have (not enough people finding GPT useful).
Looks like OpenAI can do anything it desires, but if an indie artist tries to take money for NSFW content, or even just makes it publicly available for free, they get barred from using payment processors and such.