This is a good point, but I'd take it in the opposite direction from the implication: we should document which tools were used in general; it'd be a neat indicator of what people use.
> AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO).
They mention an Assisted-by tag, but that also contains stuff like "clang-tidy". Surely you're not interpreting that as people "attributing" the work to the linter?
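For reference, both tags are ordinary git commit trailers. A minimal sketch of how they end up in a commit message, using `git interpret-trailers` (the names and message here are hypothetical, purely for illustration):

```shell
# Append the trailers under discussion to a commit message.
# git interpret-trailers reads a message on stdin and adds trailers
# at the end; no repository is needed.
printf 'Fix bounds check in parser\n' |
  git interpret-trailers \
    --trailer 'Assisted-by: clang-tidy' \
    --trailer 'Signed-off-by: Jane Doe <jane@example.com>'
```

`git commit --trailer 'Assisted-by: ...'` attaches the same trailer at commit time, which is why a linter and an AI agent can both show up under the same tag.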
Having an honesty-based tag could be the only way to monitor impact, or to go after a fix in codebases if things go south.
That is, at the moment:
- Nobody knows for sure what agents might add, or what their long-term effects on codebases will be.
- It's at best unclear whether AI-generated content in a codebase can be reliably detected automatically.
- Even if it's not malicious, at least some of its contributions are likely to be deleterious and to pass undetected through human review.
It makes sense to keep track of which model wrote which code, to look for patterns, behaviors, etc.
It isn't?