Comment by throwaway844498

18 days ago

"Pushing the state of the art" and experimenting on a critical software development framework is probably not the best idea.

Why not, when it goes through code review by experienced software engineers who are experts on the subject in a codebase that is covered by extensive unit tests?

  • I don't know about you, but it's much more likely for me to let a bug slip when I'm reviewing someone else's code than when I'm writing it myself.

    This is what's happening right now: they are having to review every single line produced by this machine and trying to understand why it wrote what it wrote.

    Even with experienced developers reviewing and lots of tests, the likelihood of bugs in this code is much higher than if a real engineer had written it.

    Why not do this on less mission critical software at the very least?

    Right now I'm very glad I don't write anything in .NET, if this is what they're using as a guinea pig for the snake oil.

    • That is exactly what you want in order to evaluate the technology: not a buggy commit into software nobody uses, reviewed by an intern, but a review by domain professionals on a real-world, very well-tested project. That way they can make an informed decision about where it lacks capabilities and what needs to be fixed before they try it again.

      I doubt that anyone expected to merge any of these PRs. The question is whether the machine can solve minor (but non-trivial) issues listed on GitHub efficiently and with minimal guidance. The current answer is no.

      Also, _if_ anything were to be merged, dotnet is dogfooded extensively at Microsoft, so bugs in it are much more likely to be noticed and fixed before a stable release lands on your plate.
