
CAIDP Update – US Commission Sidesteps American Values in Recommendations on AI

The United States National Security Commission on AI met this week to vote on recommendations and an interim report to the US Congress. The Commission adopted almost all of the 80 recommendations, derived from six work streams. The recommendations concerned primarily AI R&D, AI and national security, training AI talent, building US AI leadership, global cooperation on AI, and ethics. There were also recommendations (and an interesting discussion) concerning “malign” information enabled by AI.

There were several references to “ethics” in the Recommendations’ discussion of workforce training and the management of genomic data. “Ethics” also appears in the discussion of the AI Partnership for Defense, which seeks to “provide value-based global leadership” on the adoption of AI. The Commission recommended an agreement on AI principles and ethics to govern NATO’s development and use of AI, and proposed to operationalize the DOD Principles of AI Ethics – Responsible, Equitable, Traceable, Reliable, and Governable. There were several references in the Draft Report to “bias” in data sets and in STEM education. There was also a recommendation (#64) to “establish a Digital Coalition of democratic states and the private sector to coordinate efforts and strategy around AI and emerging technologies, beginning with a Digital Summit.” This recommendation builds on growing support for the Quadrilateral Security Dialogue, a strategic forum among the US, Australia, India, and Japan.

The words “privacy,” “fairness,” and “transparency” appear frequently in the 263-page Draft Recommendations, but there are no specific recommendations to operationalize these principles. This is surprising given the President’s 2019 Executive Order on AI, which emphasized American values such as privacy and civil liberties, as well as recent statements by US Chief Technology Officer Michael Kratsios, who told POLITICO earlier this week that the US stands behind AI policies that support democratic values and specifically criticized the “dystopian” social credit scores of China. White House guidance to federal agencies likewise underscores privacy, civil liberties, human rights, the rule of law, and respect for intellectual property. None of these goals appears in the recent recommendations of the National Security Commission on AI, and it remains unclear whether the Commission supports the application of these requirements to the defense agencies in the US federal government. This is of concern because one of the most notable early AI applications in the US was the Total Information Awareness program, coordinated by the Department of Defense to track and profile Americans. Other governments are currently exploring the integration of data sets containing personal information to launch national AI initiatives.

In sum, the recent National Security Commission on AI Recommendations promote ethics, acknowledge bias, and propose a strategy for allies that seeks to maintain accountability for AI systems in the defense setting. However, the report leaves open important questions about the application of national values to AI projects within the defense agencies. Also notable, the Commission’s documents are stored on a proprietary Google Drive rather than on the Commission’s website; public documents are typically stored on a public website.


Marc Rotenberg, Director

Center for AI and Digital Policy at Michael Dukakis Institute

The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy