Comment by rafaelmn
17 hours ago
If you're trusting core contributors without AI, I don't see why you wouldn't trust them with it.
Hiring a few core devs to work on it should be a rounding error to Anthropic and a huge flex if they are actually able to deliver.
I trust people to understand the code they write. I don't trust them to understand code they didn't write.
So you don't trust projects with more than one author? By definition, they'd have to understand each other's code.
Different people can understand different parts of the code.
It's extremely tempting to write code and not bother to understand it, much the same way most of us don't decompile our binaries and read the assembly when we write C/C++.
So, should I trust an LLM as much as I trust a C compiler?
What if it impairs judgement?