
Comment by sysmax

10 days ago

LLMs are glorified regex engines with fuzzy input. They are brilliant at boring, repetitive tasks with a known outcome.

- Add a 'flags' argument to constructors of classes that inherit from Record.

- BOOM! Here are 25 edits for you to review.

- Now add an "IsCaseSensitive" flag and update callers based on the string comparison they use.

- BOOM! Another batch of mind-numbing work done in seconds. (A sketch of this kind of edit follows below.)
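To make that concrete, here is a minimal sketch of the shape such an edit produces, in C#. Every name in it (Record, RecordFlags, UserRecord) is a hypothetical stand-in for illustration, not something from the comment above:

```csharp
using System;

// Hypothetical types for illustration only.
[Flags]
public enum RecordFlags
{
    None            = 0,
    IsCaseSensitive = 1 << 0,
}

public abstract class Record
{
    public string Key { get; }
    public RecordFlags Flags { get; }

    // The mechanical edit: every constructor in the Record hierarchy
    // gains a 'flags' argument, and every call site is updated.
    protected Record(string key, RecordFlags flags)
    {
        Key = key;
        Flags = flags;
    }

    // Call sites that previously used ordinal comparison pass IsCaseSensitive;
    // call sites that used case-insensitive comparison pass None.
    public StringComparison Comparison =>
        Flags.HasFlag(RecordFlags.IsCaseSensitive)
            ? StringComparison.Ordinal
            : StringComparison.OrdinalIgnoreCase;
}

public sealed class UserRecord : Record
{
    public UserRecord(string key, RecordFlags flags) : base(key, flags) { }
}
```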

If you get the hang of it and start giving your LLMs small, well-defined chunks of work and validating the results, it's just less mentally draining than doing it by hand. You start thinking in much higher-level terms - interfaces, abstraction layers, mini-tests - and the AI breezes through the boring question of whether it should be a "for", "while", or "foreach".
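As one way of "validating the results", a mini-test can pin down exactly the behaviour the LLM was asked to produce. This xUnit-style check is a sketch that reuses the hypothetical types from above:

```csharp
using System;
using Xunit;

public class RecordFlagsTests
{
    // Pin the requested behaviour: the IsCaseSensitive flag must switch
    // the record from case-insensitive to ordinal string comparison.
    [Fact]
    public void IsCaseSensitiveFlagSelectsOrdinalComparison()
    {
        var insensitive = new UserRecord("alpha", RecordFlags.None);
        var sensitive = new UserRecord("alpha", RecordFlags.IsCaseSensitive);

        Assert.Equal(StringComparison.OrdinalIgnoreCase, insensitive.Comparison);
        Assert.Equal(StringComparison.Ordinal, sensitive.Comparison);
    }
}
```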

But no, don't treat it as another human capable of making decisions. It cannot make them. It's fancy machinery for applying known patterns of human knowledge to the locations you point at, based on a vague hint, but it is not a replacement for your judgement.

I hate that I understand the internals of LLM technology enough to be both insulted and in agreement with your statement.

  • Why is it insulting? It's an incredible piece of machinery for refracting natural language into other forms of language. That alone accounts for the majority of the orders people pass on to other people before anything actually gets done.

> If you get the hang of it and start giving your LLMs small, well-defined chunks of work and validating the results, it's just less mentally draining than doing it by hand. You start thinking in much higher-level terms - interfaces, abstraction layers, mini-tests - and the AI breezes through the boring question of whether it should be a "for", "while", or "foreach".

Isn’t that the proper programming state of mind anyway? I think about keywords about as much as a pianist thinks about the keys while playing. Especially with vim, where I can edit larger units reliably, so I don’t have to follow the cursor with my eyes and can navigate using my mental map.

  • Ultimately, yes, programming with LLMs is exactly the sort of programming we've always tried to do. It gets rid of the boring stuff and lets you focus on the algorithm at the level you need to - just like we try to do with functions and LSP and IDE tools. People needn't be scared of LLMs: they aren't going to take our jobs or drain the fun out of programming.

    But I'm 90% confident that you will gain something from LLM-based coding. You can do a lot with our code-editing tools, but there are almost certainly going to be times when you need to do a sequence of seven things to get the outcome you want, and you can ask the computer to prepare that for you.

If I may ask - how are humans in general different? Very few of us invent new ideas of significance - correct?

  • > If I may ask - how are humans in general different? Very few of us invent new ideas of significance - correct?

    Firstly, "very few" still means "a large number", considering how many of us there are.

    Compared to "zero" for LLMs, that's a pretty significant difference.

    Secondly, humans have a much larger context window, and it is not clear how LLMs in their current incarnation can catch up.

    Thirdly, maybe more of us invent new ideas of significance than the world will ever know about. How would you be able to tell if some plumber deep in West Africa came up with a better way to seal pipes at the joints? From what I've seen of people, this sort of "do a trivial thing in a new way" happens all the time.

    • Not only is our "context window" larger, but we can add to and remove from it on the fly, or rely on somebody else who, for that very specific problem, has a far better-informed "context window" - which, by the way, they're also adding to and removing from on the fly.

  • I think if we fully understood this (both what exactly human consciousness is and how LLMs differ from it - not just experimentally but theoretically), we would then be able to truly create human-like AI.