Comment by alzoid

5 days ago

I ran into an AI-coded bug recently: the generated code had a hard-coded path that resolved another bug. My assumption is the coder was too lazy to find the root cause and asked the LLM to "make it like this". The LLM basically set a flag to true so the business logic seemed to work. It shouldn't have got past the test, but whatever.
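
The flag thing looked roughly like this (a minimal Python sketch, names invented, not the actual code):

    PAID_PLANS = {"pro", "enterprise"}

    def is_feature_enabled(plan: str) -> bool:
        # was: return plan in PAID_PLANS  -- the check that actually had the bug
        return True  # flag hard-coded to True, root cause never touched

    print(is_feature_enabled("free"))  # True, so the business logic "works"

The point is that the fix silences the symptom instead of fixing the lookup.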

In another codebase, all the code was written with this pattern. It's like the new code changed what the old code did. I think that 'coder' kept a big context window and didn't know how to properly ask for something. There was a 150-line function that only needed to be 3 lines, a 300-line function that could have been done in 10, etc. There were several sections where the LLM moved the values of a list to another list and then looped through the new list to make sure the values were in the new list. It did this over and over again.
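
The copy-then-verify pattern looked roughly like this (an illustrative reconstruction, not the real code):

    def collect_ids(records: list[dict]) -> list:
        ids = [r["id"] for r in records]  # this alone is basically the whole function

        # The generated version copied the list element by element...
        copied = []
        for value in ids:
            copied.append(value)

        # ...then looped over the copy to "confirm" the values it had just copied.
        for value in copied:
            assert value in copied

        return copied

    print(collect_ids([{"id": 1}, {"id": 2}]))  # [1, 2]

Repeat that kind of padding a few times per function and you get the 150-line version of a 3-line job.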