Comment by danbruc

6 days ago

Let us see how this will age. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually fundamentally change a lot of things, but it will not be large language models [1]. And I think there is no path of gradual improvement; we still need some fundamentally new ideas. Integration with external tools will help, but it will not overcome the fundamental limitations. Once the hype is over, I think large language models will have a place as a simpler and more accessible user interface, much like graphical user interfaces displaced a lot of text-based interfaces, and they will be a powerful tool for language processing tasks that are hard or impossible with more traditional tools like statistical analysis.

[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and that has a proper memory not susceptible to hallucinating facts.

> The current generation of AI models will turn out to be essentially a dead end.

It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".

To give some pause before dismissing the current state of the art prematurely:

I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.

I would argue that the biggest limitation on current "AI" is that it is architected not to have agency; if you had GPT-3-level intelligence in an easily anthropomorphizable package (Furby-style, capable of emoting/communicating by itself), public outlook might shift drastically without even any real technical progress.

  • I think the main thing I want from an AI in order to call it intelligent is the ability to reason: I provide an explanation of how long multiplication works, and then the AI is capable of multiplying arbitrarily large numbers. And - correct me if I am wrong - large language models cannot do this, despite probably having been exposed to a lot of mathematics during training. In a strong version of this test I would want nothing related to long multiplication in the training data at all. (A concrete sketch of the procedure I mean follows this thread of replies.)

    • I'm not sure whether popular models cheat at this, but if I ask for it (o3-mini), I get correct results and intermediate values (for 794206 * 43124, chosen randomly).

      I do suspect this is only achievable because the model was specifically trained for this.

      But the same is true for humans; children can't really "reason themselves" into basic arithmetic: that's a skill that requires considerable training.

      I do concede that this (learning/skill acquisition) is something that humans can do "online" (within days/weeks/months) while LLMs need a separate process for it.

      > in a strong version of this test I would want nothing related to long multiplication in the training data.

      Is this not a bit of a double standard? I think at least 99/100 humans with minimal previous math exposure would utterly fail this test.

      3 replies →

  • Intelligence alone does not have ethical implications w.r.t. how we treat the intelligent entity. Suffering has ethical implications, but intelligence does not imply suffering. There's no evidence that LLMs can suffer (note that this is less evidence than there is for, say, crayfish suffering).

    • While I agree that suffering has ethical connotations, I don't think it makes sense to treat it as a requirement. A Buddhist who manages to achieve enlightenment and overcome suffering does not immediately cease to be a moral patient, right?

      1 reply →

  • If you asked your cat to make a REST API call, I suppose it would fail; but the same applies if you asked a chatbot to predict real-time prey behavior.

    • I think LLMs are much closer to grasping movement prediction than the cat is to learning English, for what it's worth.

      IMO "ability to communicate" is a somewhat fair proxy for intelligence (even if it does not capture all of an animals capabilities), and current LLMs are clearly superior to any animal in that regard.

  • >I would already consider LLM based current systems more "intelligent" than a housecat.

    An interesting experiment would be to give a robot an LLM mind and see what it could figure out, like whether it would learn to charge itself. But personally I don't think they have anywhere near the general intelligence of animals.
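
As an aside, the "explain long multiplication, then multiply arbitrary numbers" test mentioned upthread is easy to pin down precisely. Below is a minimal Python sketch of the schoolbook procedure being described (the function name and structure are my own illustration, not anything from the thread): it reduces the problem to single-digit multiplications, shifts by place value, and additions, which is exactly the kind of taught procedure the test asks a model to internalize.

    def long_multiply(a: int, b: int) -> int:
        # Schoolbook long multiplication (assumes non-negative integers):
        # multiply a by each digit of b, shift the partial product by the
        # digit's place value, and sum all the rows.
        total = 0
        for place, digit_char in enumerate(reversed(str(b))):
            partial = a * int(digit_char)   # one "row" of the schoolbook layout
            total += partial * 10 ** place  # shift left by the digit's place
        return total

    # The randomly chosen product mentioned upthread:
    assert long_multiply(794206, 43124) == 794206 * 43124  # 34249339544

The point of the strong version of the test is whether a system that has only ever seen the explanation, never worked examples, can carry out these steps reliably for inputs of any length.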

It may be that LLM-AI is a dead end on the path to General AI (although I suspect it will instead turn out to be one component). But that doesn't mean that LLMs aren't good for some things. From what I've seen, they represent a huge improvement in (machine) translation, for example. And reportedly they're pretty good at spiffing up human-written text, and maybe even at generating text, provided the human is on the lookout for hallucinations (and knows how to watch for them).

You might even say LLMs are good with text in the same way that early automobiles were good for transportation, provided you watched out for the potholes and stream crossings and didn't try to cross the river on the railroad bridge. (DeLoreans are said to be good at that, though :).)

This is a surprising take. I think what's available today can improve productivity by 20% across the board. That seems massive.

Only a very small percentage of the population is leveraging AI in any meaningful way. But I think today's tools are already sufficient for anyone who wants to start, and the tools will only get better (even if the LLMs themselves don't, which they will).

  • Sure, if I ask about things I know nothing about, then I can get something done with little effort. But when I ask about something where I am an expert, large language models have surprisingly little to offer. And because I am an expert, it becomes apparent how bad they are, which in turn makes me hesitant to use them for things I know nothing about, because I am unprepared to judge the quality of the response. As a developer I am an expert on programming, and I think I have never gotten anything useful out of a large language model beyond pointers to relevant APIs or standards; they are a very good tool for searching through documentation, at least up to the point where they start hallucinating.

    When I wrote "dead end", I meant as a path toward an AI that can properly reason, knows what it knows, and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying caveat that one has to double-check what the model says.

  • I think that what's available today is a drain on productivity, not an improvement, because it's so unreliable that you have to babysit it constantly to make sure it hasn't fucked up. That is not exactly reassuring as to the future, in my view.

    • This is definitely some people's experience. It's not mine.

      I think the difference comes down to which tool is being used, how it is being used, and what use case it is being used for.

Isn't this entirely missing the point of the article?

> When early automobiles began appearing in the 1890's — first steam-powered, then electric, then gasoline — most carriage and wagon makers dismissed them. Why wouldn't they? The first cars were: loud and unreliable; expensive and hard to repair; starved for fuel in a world with no gas stations; unsuitable for the dirt roads of rural America

That sounds like the complaints against today's LLM limitations. It will be interesting to see how your comment ages in 5, 10, or 15 years. You might be technically right that LLMs are a dead end. But the article isn't really about LLMs; it's about the change from a non-AI world to an "AI" world, and how the author believes it will be similar to the change from the non-car world to the car world.

Sorry, but to say current LLMs are a "dead end" is kind of insane if you compare them with the previous records at general AI before LLMs. The earlier language models would be happy to be SOTA on 5 random benchmarks (like sentiment analysis or some types of multiple-choice questions), and SOTA otherwise consisted of some AIs that could play around 50 Atari games. And out of nowhere we have AI models that can do tasks which are not in the training set, pass Turing tests, tell jokes, and work out of the box on robots. It's an insane level of progress, and even if current techniques don't get to full human level, they will not have been a dead end in any sense.

  • Something can be much better than what came before and still be a dead end. Literally: a dead-end road can take you closer but never get you there.

    • But a dead end to what? All progress eventually plateaus somewhere. It's clearly insanely useful in practice. And do you think there will be any future AGI whose development is not helped by current LLM technology? Even if the architecture is completely different, the ability of LLMs to automatically make sense of human data is unparalleled.

      8 replies →

  • I think large language models have essentially zero reasoning capacity. Train a large language model without exposing it to some topic, say mathematics, during training. Now expose the model to mathematics: feed it basic school books and explanations and exercises, just like a teacher would teach mathematics to children in school. I think the model would not be able to learn mathematics this way to any meaningful extent.

    • The current generation of LLMs has a very limited ability to learn new skills at inference time. I disagree that this means they cannot reason. I think reasoning is by and large a skill that can be taught at training time.

      2 replies →