Microsoft Vice Chair and President Brad Smith identifies three key guardrails for an AI-infused future




In a lengthy blog post addressed to the public, Microsoft Vice Chair and President Brad Smith mulls over what the future holds as AI becomes more mainstream and, in doing so, identifies several things to be cautious of before the technology gets out of hand.

Shortly after its official $10 billion investment in OpenAI, Microsoft began rolling ChatGPT technology into several of its own properties, including Viva Sales, Outlook, and soon Bing search. Against the backdrop of Microsoft accelerating the presence of AI in more mainstream products, the company's president took to the Microsoft On the Issues blog to extol the virtues and highlight the potential pitfalls of a world heavily infused with AI technologies.

This brings huge opportunities to better the world. AI will improve productivity and stimulate economic growth. It will reduce the drudgery in many jobs and, when used effectively, it will help people be more creative in their work and impactful in their lives. The ability to discover new insights in large data sets will drive new advances in medicine, new frontiers in science, new improvements in business, and new and stronger defenses for cyber and national security.

Understandably, Microsoft is bullish on its $10 billion investment, but the company has also spoken glowingly of other multi-billion-dollar projects that spectacularly failed to pan out the way it initially envisioned (cough, Windows Phone, cough). However, AI has been on the fringe of modern automation for some time, and ChatGPT looks like yet another notable shift in how humans interact with technology.

Smith identifies three key goals he believes the technology sector should keep in mind when deploying AI platforms for mass consumption, goals that touch on equity, security, and risk management.

First, we must ensure that AI is built and used responsibly and ethically.

In our view, effective AI regulations should center on the highest risk applications and be outcomes-focused and durable in the face of rapidly advancing technologies and changing societal expectations. To spread the benefits of AI as broadly as possible, regulatory approaches around the globe will need to be interoperable and adaptive, just like AI itself.

Smith encourages companies to be proactive and self-regulating, helping to shape new laws that give people legal protections around the use of AI while establishing others that deter abuse of the technology.

Second, we must ensure that AI advances international competitiveness and national security.

The United States and democratic societies more broadly will need multiple and strong technology leaders to help advance AI, with broader public policy leadership on topics including data, AI supercomputing infrastructure and talent.

Smith's second tenet boils down to getting world leaders together to establish a fair and level playing field for the use of AI as a competitive tool.

Third, we must ensure that AI serves society broadly, not narrowly. 

Our most vulnerable groups, including children, will need more support than ever to thrive in an AI-powered world, and we must ensure that this next wave of technological innovation enhances people’s mental health and well-being, instead of gradually eroding it.

Lastly, Smith calls on individuals and organizations to keep pace with AI technology and apply guardrails that help protect kids, workers, students, and the world at large.

Smith concludes his take on AI and the future by encouraging everyone to “be curious, not judgmental” as AI evolves into something bigger than it has ever been.