
Comment by dsl

21 days ago

> In the repo where we're building the agent, the agent itself is actually the #5 contributor

How does this align with Microsoft's AI safety principles? What controls are in place to prevent Copilot from deciding that it could be more effective with fewer limitations?

Copilot only does work that has been assigned to it by a developer, and all the code that the agent writes has to go through a pull request before it can be merged. In fact, Copilot has no write access to GitHub at all, except to push to its own branch.

That ensures that all of Copilot's code goes through our normal review process, which requires a review from an independent human.
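The review gate described above corresponds to what GitHub exposes as branch protection. A minimal sketch of the protection payload (the exact settings the Copilot team uses are not stated in the thread; the values below are illustrative) that would enforce "no merge without an approving human review":

```python
# Illustrative only: GitHub's branch protection API
# (PUT /repos/{owner}/{repo}/branches/{branch}/protection) accepts a
# payload like this. The specific values are assumptions, not the
# Copilot team's actual configuration.
import json

protection = {
    "required_pull_request_reviews": {
        "required_approving_review_count": 1,  # at least one independent human approval
        "dismiss_stale_reviews": True,         # new pushes invalidate earlier approvals
    },
    "enforce_admins": True,          # no admin bypass of the review gate
    "required_status_checks": None,  # CI requirements omitted for brevity
    "restrictions": None,            # no direct-push allowlist on the protected branch
}

print(json.dumps(protection, indent=2))
```

With a rule like this on the default branch, an account that can only push to its own branch (as described for Copilot) has no path to merged code except through a reviewed pull request.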