Comment by lionkor

6 hours ago

It DOES matter, because the claim that LLMs are a layer of abstraction implies they're somehow more than random word generators. They do a great job of generating words in the right order, and often, given enough time, datacenter resources, money, and training, they can produce code that runs and does what's expected.

However, there is absolutely nothing stopping an LLM from "deciding" tomorrow that a fix it built a week ago is no longer real, because not only has that fix left its context, but also the bug was not obvious.

> However, there is absolutely nothing stopping an LLM from "deciding" tomorrow that a fix it built a week ago is no longer real, because not only has that fix left its context, but also the bug was not obvious.

Yeah, and we've never had deterministic tools like GCC suddenly fuck up commonly-relied-on undefined behavior between releases. Sure.

I get what you're saying, but again, to the vast majority of devs, none of that shit matters. Whether that's a good thing or a bad thing is a different discussion.

  • No, deterministic tools so far have not fucked up completely by themselves. When they break, there's an identifiable bug, maybe a fix, and maybe a regression test.