Comment by simoncion

8 hours ago

> It does look a bit AI generated though

These days, when I hear a project owner/manager describe a project as a "clean room reimplementation", I expect that they got an LLM [0] to extrude it. This expectation won't always be correct, but it will be more often than not.

[0] ...whose "training" data almost certainly contains at least one implementation of whatever it is that it's being instructed to extrude...

As far as the correctness of LLM-produced code goes, it all comes down to the controls that have been put in place: how valid the tests are, whether there's a microbenchmark suite, whether there's memory leak detection, and so on.
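As a minimal sketch of what one of those controls might look like, here's a memory-growth check using Python's stdlib `tracemalloc`. The `process` function is a hypothetical stand-in for LLM-generated code under test; the threshold and iteration count are arbitrary assumptions, not a recommendation.

```python
import tracemalloc

def process(data):
    # Hypothetical function under test; stands in for LLM-generated code.
    return [x * 2 for x in data]

def check_no_leak(fn, arg, iterations=1000, max_growth_bytes=64 * 1024):
    """Call fn repeatedly and fail if traced memory keeps growing."""
    tracemalloc.start()
    fn(arg)  # warm-up call: let caches and one-time allocations settle
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        fn(arg)
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    growth = current - baseline
    assert growth < max_growth_bytes, f"possible leak: {growth} bytes retained"

check_no_leak(process, list(range(100)))
```

A check like this only catches gross, monotonic growth; it's the kind of coarse guardrail that matters when nobody has read the generated code closely.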