He would be among those who lack "healthy inclination to skepticism" in my book. I do not doubt his brilliance. Personally, I think he is more intelligent than I am.
But I do have a distinct feeling that his enthusiasm can overwhelm his critical faculties. Still, that isn't exactly rare in our circles.
It's not about that, he just will profit financially from pumping AI so he pumps AI, no need to go further.
I have the same feeling.
Everything Karpathy said, until his recent missteps, was received as gospel, both inside the AI community and outside it.
This influencer status is highly valuable, and I would not be surprised if he were approached to gently skew his discourse towards more optimism, a win-win situation ^^
I think many serious endeavors would benefit from including a magician.
Intelligent experts fail time and again because while they are experts, they don't know a lot about lying to people.
The magician is an expert in lying to people and directing their attention to where they want it and away from where they don't.
If you have an expert telling you, "wow this is really amazing, I can't believe that they solved this impossible technical problem," then maybe get a magician in the room to see what they think about it before buying the hype.
Ha, great analogy.
CMO?
Intelligent people are very good at deceiving themselves.
I'm gonna go against the grain and say he is an elite expert along some dimensions, but when you take all the characteristics into account (including an understanding of people, etc.), I conclude that on the whole he is not as intelligent as you think.
It's the same reason a pure technologist can fail spectacularly at developing products that deliver experiences people actually want.
More like people know where to hype, and whom to avoid criticising except in measured terms. I have rarely seen him criticise Elon's vision-only approach, and that made me skeptical.
> I'm gonna go against the grain and say he is an elite expert along some dimensions, but when you take all the characteristics into account (including an understanding of people, etc.), I conclude that on the whole he is not as intelligent as you think.
Intelligence (which psychologists define as the g factor [1]; the concept is very well researched) does not make you an expert on any given topic. It just, for example, typically enables you to learn new topics faster and lets you see connections between them.
If Karpathy has not made a serious effort to learn to understand people, it's likely that he is not an expert on the topic (which I guess basically nobody would expect him to be).
Also, while being a rationalist very likely requires you to be rather intelligent, only a (I guess rather small) fraction of highly intelligent people are rationalists.
[1] https://en.wikipedia.org/wiki/G_factor_(psychometrics)
I think these people are just as prone to behavioral biases as the rest of us. This is not a problem per se; it's just that it is difficult to interpret what is happening right now and what will happen, which creates an overreliance on the opinions of the few people closely involved. I'm sure the pace of change, and the perception that this is history-changing, are impacting people's judgment. The unusual focus on their opinions can't be helping either. Ideally people are factoring this into their claims and predictions, but it doesn't seem like that's the case all the time.
To be honest it's pretty embarrassing how he got sucked into the Moltbook hype.
He's biased. He needed it to be real. He has a vested interest in these sorts of things panning out.
It's just about money.
This was his explanation for anyone interested:
> I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over".
> To add a few words beyond just memes in jest - obviously when you take a look at the activity, it's a lot of garbage - spams, scams, slop, the crypto people, highly concerning privacy/security prompt injection attacks wild west, and a lot of it is explicitly prompted and fake posts/comments designed to convert attention into ad revenue sharing. And this is clearly not the first time LLMs were put in a loop to talk to each other. So yes it's a dumpster fire and I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared), it's way too much of a wild west and you are putting your computer and private data at a high risk.
> That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented.
> This brings me again to a tweet from a few days ago "The majority of the ruff ruff is people who look at the current point and people who look at the current slope.", which imo again gets to the heart of the variance. Yes clearly it's a dumpster fire right now. But it's also true that we are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network thereof reaching numbers possibly into ~millions. With increasing capability and increasing proliferation, the second order effects of agent networks that share scratchpads are very difficult to anticipate. I don't really know that we are getting a coordinated "skynet" (though it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale. We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain of function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/psychosis both agent and human, etc. It's very hard to tell, the experiment is running live.
> TLDR sure maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I'm pretty sure.
https://x.com/karpathy/status/2017442712388309406
> That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad
Once again LLM defenders fall back on "lots of AI" as a success metric. Is the AI useful? No, but we have a lot of it! This is like companies forcing LLM coding adoption by tracking token use.
> But it's also true that we are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network there of reaching in numbers possibly into ~millions
"If number go up, emergent behaviour?" is not a compelling excuse to me. Karpathy is absolutely high on his own supply trying to hype this bubble.
You interpret claims into Karpathy's tweets that in my opinion are not there in the original text.
> Once again LLM defenders fall back on "lots of AI" as a success metric.
That's not implied by anything he said. He simply said that it was fascinating, and he's right.
That was 10 days ago. I wonder whether the discussions the moltys are having have begun to converge into a unified voice or diverged into chaos without purpose.
I haven't seen much real cooperation-like behavior on moltbook threads. The molts basically just talk past one another and it's rare to see even something as trivial as recognizable "replies" where molt B is clearly engaging with content from molt A.
He was the same way with FSD...