Comment by drw85

1 month ago

This is the problem with the LLM fallacy.

You think it'll rapidly get smarter, but it just recreates things from all the terrible code it was fed. Code, and the way it's written, also changes rapidly these days, and LLMs have trouble drawing clear lines between versions of things and the changes between them.

Sure, they can compile and test things now, which might make the code work and run. But the quality will be hard to improve without manually curating and limiting the kind of code they 'learn' from.