
> In the repo where we're building the agent, the agent itself is actually the #5 contributor

How does this align with Microsoft's AI safety principles? What controls are in place to prevent Copilot from deciding that it could be more effective with fewer limitations?



Copilot only does work that has been assigned to it by a developer, and all the code that the agent writes has to go through a pull request before it can be merged. In fact, Copilot has no write access to GitHub at all, except to push to its own branch.

That ensures that all of Copilot's code goes through our normal review process, which requires approval from an independent human.
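
For anyone curious what that control looks like mechanically, here is a minimal sketch using GitHub's branch protection REST API. The repo names and token are placeholders, not GitHub's actual configuration:

    import requests

    OWNER, REPO = "example-org", "example-repo"  # hypothetical names
    TOKEN = "ghp_example"                        # token with repo admin scope

    # Protect main: require one approving human review before merge,
    # and dismiss stale approvals whenever new commits are pushed.
    resp = requests.put(
        f"https://api.github.com/repos/{OWNER}/{REPO}/branches/main/protection",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "required_status_checks": None,
            "enforce_admins": True,
            "required_pull_request_reviews": {
                "required_approving_review_count": 1,
                "dismiss_stale_reviews": True,
            },
            "restrictions": None,
        },
    )
    resp.raise_for_status()

With a rule like that on the default branch, a push to the agent's own branch can only land through a pull request that a human has approved.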


Tim, are you or any of your coworkers worried this will take your jobs?


What if Tim was the coding agent?


Human-generated corporate-speak is indistinguishable from AI-generated corporate-speak at this point


Terminal In Mind


HAHA. Very smart. The more you review the Copilot Agent's PRs, the better it gets at submitting new PRs... (basics of supervised machine learning, right?)
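
For what it's worth, the quip does map onto a real pattern: treating review verdicts as supervised labels. A toy sketch, with entirely invented diffs and field names:

    # Toy illustration only: review outcomes as supervised labels.
    # The diffs and labels below are made up for the example.
    training_examples = [
        {"diff": "fix: add null check in parser", "label": "approved"},
        {"diff": "refactor: delete failing tests", "label": "changes_requested"},
    ]

    # Each (diff, verdict) pair is one supervised example; a model
    # fine-tuned on these learns to imitate diffs that get approved.
    for ex in training_examples:
        print(ex["label"], "<-", ex["diff"])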


Haha



