Microsoft has unveiled a new open-source “matrix” that aims to catalogue the known attacks threatening the security of machine-learning applications.
Microsoft and the non-profit research organization MITRE have joined forces to accelerate the development of cybersecurity’s next chapter: protecting applications that are built on machine learning and exposed to a new class of adversarial threats.
The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacking machine-learning systems, informing security analysts and providing them with strategies to detect, respond to, and remediate threats.
The matrix classifies attacks by the stage of the threat lifecycle they occupy, from initial access and execution through exfiltration and impact. To curate the framework, Microsoft’s and MITRE’s teams analyzed real-world attacks carried out on existing applications and vetted each one as effective against production AI systems.
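To make that layout concrete, here is a minimal sketch of how such a tactic-by-technique matrix can be represented in code. The tactic and technique names below are illustrative assumptions in the spirit of the framework, not a verbatim copy of its entries.

```python
# Illustrative sketch: tactics (columns of the matrix) map to the
# techniques catalogued under them. Entries are examples, not the
# actual contents of the Adversarial ML Threat Matrix.
threat_matrix = {
    "initial_access": ["valid_accounts", "ml_supply_chain_compromise"],
    "execution": ["execute_unsafe_ml_artifacts"],
    "exfiltration": ["model_stealing_via_api", "membership_inference"],
    "impact": ["evasion_attack", "model_poisoning"],
}

def techniques_for(tactic: str) -> list[str]:
    """Return the catalogued techniques for a given tactic, if any."""
    return threat_matrix.get(tactic, [])

if __name__ == "__main__":
    for tactic, techniques in threat_matrix.items():
        print(f"{tactic}: {', '.join(techniques)}")
```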
MITRE’s researchers hope to gather more information from ethical hackers through a well-established cybersecurity method known as red teaming. The idea is to have teams of benevolent security experts find and exploit vulnerabilities ahead of bad actors, feeding the results into the existing database of attacks and expanding overall knowledge of possible threats.
Microsoft and MITRE both run their own red teams, which have already demonstrated some of the attacks that seeded the matrix in its current form. These include, for example, evasion attacks on machine-learning models, which subtly modify input data to induce a targeted misclassification, as sketched below.
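As an illustration of that class of attack, below is a minimal sketch of a targeted evasion attack using the fast gradient sign method (FGSM), one well-known way to perturb an input toward an attacker-chosen class. The PyTorch model, input shapes, and epsilon value are all assumptions for the sake of the example; the matrix itself catalogues many more attack classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_targeted_evasion(model: nn.Module, x: torch.Tensor,
                          target: torch.Tensor,
                          epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input `x` so that `model` is nudged toward predicting `target`.

    Targeted FGSM: step *against* the gradient of the loss with respect
    to the input, which decreases the loss for the attacker-chosen class.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    # Subtracting the gradient sign moves the input toward the target class;
    # epsilon bounds the per-pixel perturbation so the change stays subtle.
    x_adv = x_adv - epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage, assuming a trained image classifier `model`
# and a batch of normalized images `x`:
#   target = torch.tensor([7])  # the class the attacker wants predicted
#   x_adv = fgsm_targeted_evasion(model, x, target)
#   print(model(x_adv).argmax(dim=1))  # ideally prints the target class
```

A single gradient step of this kind is rarely enough against hardened models; iterative variants repeat the same update while keeping the total perturbation within the same epsilon bound.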
To support AI technology and development for social impact, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index, which measures ethical values and aims to help people achieve well-being and happiness, as well as to address important issues such as the SDGs. Regarding AI ethics, AIWS.net initiated and promoted the design of the AIWS Ethics framework, built on four components for the constructive use of AI: transparency, regulation, promotion, and implementation. In this effort, MDI invites participation and collaboration from think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.