
Comment by xnickb

18 days ago

> I expect it will find its niche in development and become actually useful, taking off menial tasks from developers.

Reading AI-generated code is arguably far more annoying than any menial task, especially if said code happens to have subtle errors.

Speaking from experience.
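
To make that concrete, here is a hypothetical sketch (invented for illustration, not taken from any real PR) of the kind of subtle error I mean, the sort that passes a quick skim and bites later:

    # Looks fine at a glance, but the default argument seen=[] is
    # evaluated once at definition time and shared across calls,
    # so state silently leaks between invocations.
    def dedupe(items, seen=[]):
        unique = []
        for item in items:
            if item not in seen:
                seen.append(item)
                unique.append(item)
        return unique

    print(dedupe([1, 2, 2, 3]))  # [1, 2, 3]
    print(dedupe([3, 4]))        # [4] -- the 3 is silently dropped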

This is probably version 0.1 or 0.2.

Reviewing what the AI does now is not to be compared with reviewing human PRs. You are not doing the work as it will be expected in the (hopefully near?) future; you are training the AI and the developers of the AI, and, more crucially, you are digging out failure modes to fix.

  • While I admire your optimism regarding those errors getting fixed, I myself am sceptical that it will happen in my lifetime (I'm in my mid-30s).

    It would definitely be nice to be wrong though. That'd make life so much easier.

This is true for all code and has nothing to do with AI. Reading code has always been harder than writing code.

The old joke is that Perl is a write-once, read-never language.

> Speaking from experience.

My experience is all code can have subtle errors, and I wouldn't treat any PR differently.

  • I agree, but when working with code written by a teammate you have a rough idea of what kinds of errors to expect.

    AI however is far more creative than any given single person.

    That's my gut feeling anyway; I don't have numbers or any other rigorous data. I only know that Linus Torvalds made a very good point about the chain of trust, and I don't see myself ever trusting AI the same way I can trust a human.

    • It depends on what bar we set for the AI. In this case, the bar wasn't even "make all tests pass without modifying the tests themselves", which is probably a lower bar than for any PR you would otherwise have to review. A sketch of what enforcing that bar could look like is below.
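
      For illustration only, a minimal Python sketch of a CI guard enforcing that bar. The tests/ directory, the origin/main base branch, and pytest as the runner are all my assumptions, not anything from the article:

          import subprocess
          import sys

          # Hypothetical guard: reject the change if the diff touches
          # anything under tests/, then run the unmodified test suite.
          changed = subprocess.run(
              ["git", "diff", "--name-only", "origin/main...HEAD"],
              capture_output=True, text=True, check=True,
          ).stdout.splitlines()

          if any(path.startswith("tests/") for path in changed):
              sys.exit("Test files were modified; bar not met.")

          # Exit with pytest's status so CI fails when the tests do.
          sys.exit(subprocess.run(["pytest"]).returncode)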