Comment by chrisjj
13 days ago
> I trigger Claude to examine the code and figure out how the feature works, then tell it to update the documentation accordingly.
And how do you verify its output isn't total fabrication?
I read through it, scanning sections that seem uncontroversial and reading more closely the sections about things I'm less sure of. The output cites key lines of code, which are faster to track down and check than trying to remember where to look in a large codebase.
Inconsistencies also pop up in backtesting: for example, if there's a point the LLM answers differently across multiple iterations, that's a good candidate for improving the docs.
As with a coworker's work, there's a certain amount of trust in the competence involved.
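A minimal sketch of what that backtesting loop could look like, assuming an `ask` callable that wraps whatever LLM-over-docs pipeline you use; the function names and the exact-string comparison are simplifications (in practice you'd normalize or semantically compare answers):

```python
from collections import Counter
from typing import Callable

def backtest_question(ask: Callable[[str], str], question: str, runs: int = 5) -> Counter:
    """Ask the same question several times and tally the distinct answers.

    More than one distinct answer suggests the docs are ambiguous on this
    point and are a good candidate for improvement.
    """
    return Counter(ask(question) for _ in range(runs))

# Example usage (my_doc_bot.ask is a placeholder for your own pipeline):
# tally = backtest_question(my_doc_bot.ask, "What does feature X do when the cache is cold?")
# if len(tally) > 1:
#     print("Inconsistent answers; docs worth clarifying:", tally)
```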
Your docs are a contract. You can verify that contract using integration tests.
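A rough sketch of what that could look like, assuming the docs make a concrete claim such as "list_orders returns results newest first"; the function and field names here are illustrative, not from the thread:

```python
from datetime import datetime, timedelta

def list_orders():
    # Stand-in for the real service call the docs describe.
    now = datetime.now()
    return [
        {"id": 2, "created_at": now},
        {"id": 1, "created_at": now - timedelta(days=1)},
    ]

def test_list_orders_is_newest_first():
    # The documented claim, expressed as an executable check (run with pytest).
    timestamps = [o["created_at"] for o in list_orders()]
    assert timestamps == sorted(timestamps, reverse=True)
```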
Contract? These docs are information answering user queries. So if you use a chatbot to generate them, I'd like to be reasonably sure they aren't laden with the fabricated misinformation for which these chatbots are famous.
It's a very reasonable concern. My solution is to have the bot classify what the message is about as a first pass, and to apply relatively strict filtering to what it responds to.
For example, I have it ignore messages about code freezes, because that's a policy question that probably changes over time, and I have it ignore urgent on-call messages, because the asker there probably wants a quick response from a human.
But there are a lot of questions in the vein of "How do I write a query for {results my service emits}?" or "How does this feature work?", where automation can handle a lot (and provide more complete answers than a human can off the top of their head).
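A minimal sketch of that first-pass gate, with illustrative category names and with `classify` and `answer` left as callables standing in for whatever model calls you actually use:

```python
from typing import Callable, Optional

# Categories the bot answers vs. ones it deliberately ignores; names are illustrative.
ANSWERABLE = {"query_howto", "feature_behavior"}

def maybe_answer(message: str,
                 classify: Callable[[str], str],
                 answer: Callable[[str], str]) -> Optional[str]:
    """First pass: classify the message, then only answer the allowed categories.

    Policy questions (e.g. code freezes) and urgent on-call pages fall outside
    ANSWERABLE, so the bot stays silent and a human responds instead.
    """
    if classify(message) not in ANSWERABLE:
        return None
    return answer(message)
```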