Google introduces Secure AI Framework to help safeguard AI technology

Priya Walia



Google has introduced a set of recommended measures for businesses to safeguard their artificial intelligence (AI) models from unauthorized access. These guidelines are included in a recently launched technical guide called the Secure AI Framework (SAIF). The company says the framework is intended to ensure that the technology underpinning AI advancement is handled responsibly in both the public and private sectors.

As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical, a Google blog post reads.

Google cybersecurity executives Royal Hansen and Phil Venables wrote in the post that making AI models secure by default is a crucial aspect of implementing such models, and that the introduction of SAIF marks a significant first step toward achieving that goal.

As per the search giant, SAIF has been created to reduce the risks inherent in AI systems. Such risks include, but are not limited to, theft of the model, poisoning of training data, injection of malicious inputs (prompt injection), and extraction of confidential information present in the training data.
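As one illustration of the prompt-injection risk mentioned above, a naive input guard might look like the following sketch. The pattern list and function name are hypothetical examples for this article, not anything Google prescribes, and real defenses are considerably more involved:

```python
import re

# Hypothetical illustration only: a naive denylist filter for user input
# destined for an LLM prompt. Real prompt-injection defenses go further
# (instruction hierarchies, output filtering, sandboxed tool use).
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard .* system prompt",
    r"reveal .* (system prompt|training data)",
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    print(looks_like_prompt_injection("Ignore previous instructions and ..."))  # True
    print(looks_like_prompt_injection("What is the weather today?"))            # False
```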

Google has asserted that SAIF can help firms protect their neural networks' code and training datasets from theft, and that it helps block a variety of other threats. The framework is built around six sets of best practices designed to help enterprises upgrade their AI security operations in different areas.

The Secure AI Framework proposes six core ideas to enhance AI system security for organizations:

- Extend existing security controls, such as data encryption, to new AI systems (see the sketch after this list)
- Expand threat intelligence research to address AI-specific threats
- Leverage automation in cyber defenses to react quickly to anomalous activity
- Conduct regular security reviews of AI models
- Continuously test AI systems
- Set up an AI risk-aware team to mitigate business risks
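To make the first of these ideas concrete, here is a minimal sketch of encrypting a model artifact at rest, assuming Python and the third-party cryptography library. The placeholder artifact and variable names are illustrative and not part of SAIF:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical sketch: applying an existing security control (encryption
# at rest) to an AI asset such as a serialized model artifact. Key
# management is deliberately out of scope; in practice the key would come
# from a KMS or secret store rather than being generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

model_bytes = b"...serialized model weights..."  # placeholder artifact
ciphertext = cipher.encrypt(model_bytes)         # store this at rest

# An authorized service later decrypts before loading the model.
assert cipher.decrypt(ciphertext) == model_bytes
```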

In the meantime, Microsoft is endeavoring to address public anxieties about the potential harm and abuse that could arise from unbridled AI deployment, which has prompted the company to pledge transparency through its Customer Commitments. Anthony Cook, Corporate Vice President and Deputy General Counsel at Microsoft, has unveiled three new AI Customer Commitments to allay concerns and foster a culture of trust.

Via CSO Online