
Comment by throw310822

2 hours ago

> You know what this reminds me of? Language X comes out (e.g., Lisp or Haskell), and people try it, and it's this wonderful, magical experience, and something just "clicks", and they tell everyone how wonderful it is.

I can relate to this. And I can understand that, depending on how and what you code, LLMs might have different value, or even none. Totally understand.

At the same time... well, let's put it this way. I've been fascinated with programming and computers for decades, and "intelligence", whatever it is, has always been for me the holy grail of what computers can do. I've spent a stupid amount of time thinking about how intelligence works, how a computer program could unpack language, resolve its ambiguities, understand context and nuance, notice patterns that nobody told it were there, etc. Until ten years ago these problems were all essentially unsolved, despite more than half a century of attempts: large human-curated efforts, funny chatbots that produced word salads with vague hints of meaning, and infuriating ones that could pass for stupid teenagers for a couple of minutes, provided they selected sufficiently vague answers from a small database... I've seen them all. In 1968's 2001: A Space Odyssey there's a computer that talks (even if "experts prefer to say that it mimics human intelligence"), and in 2013's Her there's another one. In between, in terms of actual results, there's nothing. "Her" is as much science fiction as "2001" is, with the aggravating factor that in Her the AI is presented as a novel consumer product: absurd, as if anything like that were possible without a complete societal disruption.

All this to say: I can't for the life of me understand people who act blasé when they can just talk to a machine and the machine appears to understand what they mean, doesn't fall for trivial language ambiguities, and will even make some meta-fun about it if you test it with a well-known example; a machine that can read a never-seen-before comic strip, see what happens in it, read the shaky lettering, and finally explain correctly where the humour lies. You can repeat to yourself a billion times "transformers something-something", but that doesn't change the fact that what you are seeing is intelligence; it's exactly what we always called intelligence: the ability to make sense of messy inputs, see patterns, see the meanings behind the surface, and communicate back in clear language. Ah, and this technology is only a few years old, little more than three if we count from ChatGPT. These are the first baby steps.

So it's not working for you right now? Fine. But you don't see the step change, the value both now and in prospect? Then we have a problem.