Brad Smith, Vice Chair and President of Microsoft Corporation, has outlined the principles that Microsoft believes should guide the development of legislation to advance safe, secure, and trustworthy AI.
Here are some key highlights:
Accountability in AI development and deployment
- Humans must oversee AI systems to ensure they operate as intended.
- Those who develop and deploy AI systems must remain accountable for how those systems are used.
- AI systems that make consequential decisions about people should be subject to closer scrutiny.
Building on existing efforts
- The United States already has AI safety frameworks that can serve as a starting point.
- These frameworks cover different stages of AI development and use, so they can be applied together to help ensure AI is used safely.
Reflecting the technical architecture of AI itself
- Any new AI laws must account for how the technology actually works.
- They must also be flexible enough to adapt as AI technology evolves.
Priority areas for federal regulation and oversight
- Require “safety brakes” for powerful AI systems that control critical infrastructure, such as power grids and hospitals.
- Require AI developers to know who they are working with, where their data is stored, and what kind of content their systems are producing.
- Ensure that AI is used to improve government services, such as healthcare and education.
Last month, Smith published another blog post outlining a blueprint for responsible AI governance in India.
We are particularly interested in the proposal to require “safety brakes” for highly capable AI models in critical infrastructure, which seems a sensible way to mitigate the risk of these systems being misused. We look forward to following the progress of the legislative process and seeing how the government works to ensure that AI is used for good.