Comment by layer8

19 days ago

> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before.

Maybe they made us feel magic, but actual magic is the opposite of what I want computers to be. The “magic” for me was that computers were completely scrutable and reason-able, and that you could leverage your reasoning abilities to create interesting things with them, because they were (after some learning effort) scrutable. True magic, on the other hand, is inscrutable, it’s a thing that escapes explanation, that can’t be reasoned about. LLMs are more like that latter magic, and that’s not what I seek in computers.

> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.

I always preferred the Star-Trek-style ship computers that didn’t exhibit personality, that were just neutral and matter-of-fact. Computers with personality tend to be exhausting and annoying. Please let me turn it off. Computers with personality can be entertaining characters in a story, but that doesn’t mean I want them around me as the tools I have to use.

> The “magic” for me was that computers were completely scrutable and reason-able

Yes, and computers were something that gave you powerful freedom. You could make a computer do anything it was physically capable of, as long as your mind could keep up. Computers followed logic; they didn't have opinions, and they gave you full control over them.

I have no idea what everyone is talking about. LLMs are based on relatively simple math, and inference is much easier to learn and customize than, say, the Android APIs. Once you do, you can apply familiar programming-style logic to messy concepts like language and images. Give your model a JSON schema like "warp_factor": Integer if you don't want chatter; that's better than the Star Trek computer could do. Or have it write you a simple domain-specific library on top of the Android API that you can then program from memory, like old-style BASIC, rather than running to Stack Overflow for every new task.
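The schema-constrained approach described above can be sketched briefly. This is a minimal illustration, not a real inference call: the model reply is simulated as a string, and the schema, prompt text, and `parse_reply` helper are all hypothetical names chosen for this example. The point is that once you demand JSON matching a schema, you can validate the output with ordinary deterministic code.

```python
import json

# Hypothetical schema we ask the model to emit: a single integer field,
# no conversational chatter around it.
SCHEMA = {"warp_factor": "integer"}

PROMPT = (
    "Set course for Starbase 12.\n"
    f"Reply ONLY with JSON matching this schema: {json.dumps(SCHEMA)}"
)

def parse_reply(reply: str) -> int:
    """Validate the model's reply against the schema; fail fast on chatter."""
    data = json.loads(reply)      # raises if the model wrapped JSON in prose
    value = data["warp_factor"]   # raises if the field is missing
    if not isinstance(value, int):
        raise ValueError(f"expected integer, got {value!r}")
    return value

# Simulated model reply, standing in for a real inference API:
reply = '{"warp_factor": 7}'
print(parse_reply(reply))  # → 7
```

A reply like `"Aye, captain! Warp 7 it is."` would fail `json.loads` and be rejected, which is exactly the programming-style control the comment is describing.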

  • You can’t reason about inference (or training) of LLMs on the semantic level. You can’t predict the output of an LLM for a specific input other than by running it. If you want the output to be different in a specific way, you can’t reason with precision that a particular modification of the input, or of the weights, will achieve the desired change (and only that change) in the output. Instead, it’s like a slot machine that you just have to try running again.

    The fact that LLMs are based on a network of simple matrix multiplications doesn’t change that. That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.

    • That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.

      Right, which is the point: LLMs are much more like human coworkers than compilers in terms of how you interact with them. Nobody would say that there's no point to working with other people because you can't predict their behavior exactly.

    • What are your inputs and outputs? If the inputs are zip files and the outputs are uncompressed text, don't use an LLM. If the inputs are English strings and the outputs are localized strings, an LLM is far more accurate than any procedural code you might attempt for the purpose. Changing the style of the outputs by modifying inputs or weights is also easier: you provide a few thousand samples rather than thinking of every case. That's highly relevant for real-world coding: how many hobbyists or small businesses have teams of linguists on staff?