AI Safety And Security In The Context of US Government

Written by roman.yampolskiy | Published 2020/09/15

TLDR: The President of the USA signed an executive order on Maintaining American Leadership in Artificial Intelligence, emphasizing that “… relevant personnel shall identify any barriers to, or requirements associated with, increased access to and use of such data and models, including safety and security concerns.” The National Artificial Intelligence Research and Development Strategic Plan[6], published in October 2016, explicitly addresses the safety and security of AI systems. The plan states: “Before an AI system is put into widespread use, assurance is needed that the system will operate safely and securely, in a controlled manner.”

On February 11, 2019, the President of the USA signed an executive order on Maintaining American Leadership in Artificial Intelligence[1]. In it, the President particularly emphasized that “… relevant personnel shall identify any barriers to, or requirements associated with, increased access to and use of such data and models, including … safety and security concerns …”. Additionally, in March, the White House announced AI.gov, an initiative presenting efforts from multiple federal agencies all geared towards creating “AI for the American People”[2].
Once again, robust and safe AI was emphasized: “The complexity of many AI systems creates important safety and security challenges that must be addressed to ensure that these systems are trustworthy. In particular, AI systems have some inherent cybersecurity risks because of the characteristics of how the technology is designed. R&D investments such as DARPA’s AI Next Campaign will create solutions for countering adversarial attacks on AI technologies, such as those that attempt to contaminate training data, modify algorithms, create adversarial inputs, or exploit flaws in AI system goals. This research is expected to lead to more secure, robust, and safe AI systems that are reliable and trustworthy.”
The current administration continues work started under President Obama in 2016, which culminated in a Report on the Future of Artificial Intelligence[3] addressing how “… to ensure that AI applications are fair, safe, and governable; and how to develop a skilled and diverse AI workforce.” At the time, the White House Office of Science and Technology Policy issued a Request for Information on the Future of Artificial Intelligence[4], to which Dr. Yampolskiy submitted a response on the subject of safety and control issues for AI[5].
The National Artificial Intelligence Research and Development Strategic Plan[6], published in October 2016, explicitly addresses the safety and security of AI systems and cites Yampolskiy’s work on AI safety engineering [1] and responses to AI risk [2].
The plan states: “Before an AI system is put into widespread use, assurance is needed that the system will operate safely and securely, in a controlled manner. Research is needed to address this challenge of creating AI systems that are reliable, dependable, and trustworthy. As with other complex systems, AI systems face important safety and security challenges due to:
  • Complex and uncertain environments: In many cases, AI systems are designed to operate in complex environments, with a large number of potential states that cannot be exhaustively examined or tested. A system may confront conditions that were never considered during its design.
  • Emergent behavior: For AI systems that learn after deployment, a system’s behavior may be determined largely by periods of learning under unsupervised conditions. Under such conditions, it may be difficult to predict a system’s behavior.
  • Goal misspecification: Due to the difficulty of translating human goals into computer instructions, the goals that are programmed for an AI system may not match the goals that were intended by the programmer.
  • Human-machine interactions: In many cases, the performance of an AI system is substantially affected by human interactions. In these cases, variation in human responses may affect the safety of the system.
To address these issues and others, additional investments are needed to advance AI safety and security, including explainability and transparency, trust, verification and validation, security against attacks, and long-term AI safety and value-alignment.”
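None of these challenges requires an advanced future system to show up in practice. The “goal misspecification” item quoted above, for example, can be reproduced in miniature: the sketch below (a purely illustrative toy, not drawn from the plan) hill-climbs a proxy reward that was easy to program and watches the designer’s true objective get worse once the proxy is pushed far enough.

```python
# Toy goal-misspecification demo (an illustrative assumption, not from the plan):
# the *intended* goal is to keep x near 1, but the *programmed* proxy reward
# simply pays for larger x. Optimizing the proxy drives the true objective down.

def proxy_reward(x):
    return x                      # what the system is actually told to maximize

def true_objective(x):
    return -(x - 1.0) ** 2        # what the designer really wanted (peak at x = 1)

x = 0.0
learning_rate = 0.1
for step in range(51):
    grad = 1.0                    # d(proxy_reward)/dx
    x += learning_rate * grad     # hill-climb on the proxy only
    if step % 10 == 0:
        print(f"step {step:2d}  x={x:5.2f}  proxy={proxy_reward(x):6.2f}  "
              f"true={true_objective(x):7.2f}")
```

The proxy keeps rising indefinitely while the true objective peaks at x = 1 and then falls, which is exactly the divergence between programmed and intended goals that the plan warns about.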
Likewise, security against attacks on AI systems is emphasized: “AI embedded in critical systems must be robust in order to handle accidents, but should also be secure to a wide range of intentional cyber attacks. Security engineering involves understanding the vulnerabilities of a system and the actions of actors who may be interested in attacking it. While cybersecurity R&D needs are addressed in greater detail in the NITRD Cybersecurity R&D Strategic Plan, some cybersecurity risks are specific to AI systems.
For example, one key research area is “adversarial machine learning” that explores the degree to which AI systems can be compromised by “contaminating” training data, by modifying algorithms, or by making subtle changes to an object that prevent it from being correctly identified (e.g., prosthetics that spoof facial recognition systems). The implementation of AI in cybersecurity systems that require a high degree of autonomy is also an area for further study. One recent example of work in this area is DARPA’s Cyber Grand Challenge that involved AI agents autonomously analyzing and countering cyber attacks.”
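As a concrete, deliberately simplified illustration of those “subtle changes,” the sketch below applies the well-known fast gradient sign method to a hand-rolled logistic-regression scorer in NumPy. The weights, inputs, and epsilon are invented for the example; real attacks target deep image or speech models, but the mechanism is the same: many per-feature changes, each bounded by a small epsilon, add up to a large shift in the model’s output.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny, hand-rolled logistic-regression "classifier". The weights are
# arbitrary stand-ins for a trained model; real attacks target deep networks,
# but the perturbation mechanism below is the same.
n_features = 100
w = rng.normal(size=n_features)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)          # probability of class 1

def fgsm(x, y_true, epsilon):
    """Fast gradient sign method: move x along the sign of the loss gradient."""
    p = predict(x)
    grad_x = (p - y_true) * w          # d(cross-entropy)/dx for this model
    return x + epsilon * np.sign(grad_x)

x_clean = rng.normal(size=n_features)
y_true = float(predict(x_clean) > 0.5)      # treat the clean prediction as the label

x_adv = fgsm(x_clean, y_true, epsilon=0.1)  # each feature moves by at most 0.1

print("clean prediction:      ", round(float(predict(x_clean)), 3))
print("adversarial prediction:", round(float(predict(x_adv)), 3))
print("max per-feature change:", round(float(np.abs(x_adv - x_clean).max()), 3))
```

Hardening models against this kind of manipulation is precisely the adversarial machine learning research area the plan calls out.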
The report even touches on long-term AI safety and value-alignment: “AI systems may eventually become capable of “recursive self-improvement,” in which substantial software modifications are made by the software itself, rather than by human programmers. To ensure the safety of self-modifying systems, additional research is called for to develop: self-monitoring architectures that check systems for behavioral consistency with the original goals of human designers; confinement strategies for preventing the release of systems while they are being evaluated; value learning, in which the values, goals, or intentions of users can be inferred by a system; and value frameworks that are provably resistant to self-modification.”
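Building such self-monitoring and confinement machinery for genuinely self-modifying systems remains an open research problem, but the basic pattern is easy to sketch: route every action the agent proposes through a monitor whose constraints are fixed by the human designers rather than by the agent itself. The class, policy, and action names below are hypothetical and only illustrate the pattern, not a solution.

```python
# Hypothetical sketch of a "self-monitoring" wrapper: a monitor fixed by the
# human designers vets every action an agent proposes before it is executed.
# Names and constraints are invented for illustration; checking behavioral
# consistency for genuinely self-modifying systems remains an open problem.

from typing import Callable, Iterable

class MonitoredAgent:
    def __init__(self, policy: Callable[[str], str], allowed_actions: Iterable[str]):
        self._policy = policy                       # the (possibly self-modifying) agent
        self._allowed = frozenset(allowed_actions)  # designer-specified constraints
        self.audit_log = []                         # record of every decision

    def act(self, observation: str) -> str:
        proposed = self._policy(observation)
        permitted = proposed in self._allowed
        self.audit_log.append((observation, proposed, permitted))
        if not permitted:
            return "no_op"                          # confinement: refuse unknown actions
        return proposed

# Example: a toy policy that sometimes proposes an action outside its mandate.
def toy_policy(observation: str) -> str:
    return "delete_logs" if "error" in observation else "report_status"

agent = MonitoredAgent(toy_policy, allowed_actions={"report_status", "no_op"})
print(agent.act("all systems nominal"))   # report_status (permitted)
print(agent.act("error in module 7"))     # no_op (blocked by the monitor)
print(agent.audit_log)
```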
A complementary report on Preparing for the Future of Artificial Intelligence[7], released around the same time, emphasizes issues of AI safety, security, and governance and makes the following recommendation: “Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.”
On February 12, 2019, the Department of Defense unveiled its Artificial Intelligence Strategy[8], which emphasizes “… responsibility and use of AI through its guidance and vision principles for using AI in a safe, lawful and ethical way.” The DoD aims to lead in ethics and AI safety: “The Department will articulate its vision and guiding principles for using AI in a lawful and ethical manner to promote our values. We will consult with leaders from across academia, private industry, and the international community to advance AI ethics and safety in the military context.
We will invest in the research and development of AI systems that are resilient, robust, reliable, and secure; we will continue to fund research into techniques that produce more explainable AI; and we will pioneer approaches for AI test, evaluation, verification, and validation. … As we improve the technology and our use of it, we will continue to share our aims, ethical guidelines, and safety procedures to encourage responsible AI development and use by other nations.”[9]
In particular, the DoD will take the following actions:
  • “Investing in research and development for resilient, robust, reliable, and secure AI. In order to ensure DoD AI systems are safe, secure, and robust, we will fund research into AI systems that have a lower risk of accidents; are more resilient, including to hacking and adversarial spoofing; demonstrate less unexpected behavior; and minimize bias. We will consider “emergent effects” that arise when two or more systems interact, as will often be the case when introducing AI to military contexts. To foster these characteristics in deployed systems in both military and civilian contexts, we will pioneer and share novel approaches to testing, evaluation, verification, and validation, and we will increase our focus on defensive cybersecurity of hardware and software platforms as a precondition for secure uses of AI.
  • Continuing to fund research to understand and explain AI-driven decisions and actions. We will continue funding research and development for “explainable AI” so users can understand the basis of AI outputs. This will help users understand, appropriately trust, and effectively manage AI systems.”
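“Explainable AI” covers a whole family of techniques. One of the simplest, permutation importance, is sketched below on invented data: shuffle one input feature at a time and measure how much accuracy degrades, which gives users a rough ranking of which inputs the model’s outputs actually depend on. The linear “model” and data are stand-ins chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: 3 features, only the first two actually matter.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] - 1.5 * X[:, 1] > 0).astype(float)

# Stand-in "model": a fixed linear scorer (a real study would use a trained model).
weights = np.array([2.0, -1.5, 0.0])

def accuracy(X, y):
    preds = (X @ weights > 0).astype(float)
    return float((preds == y).mean())

baseline = accuracy(X, y)
print(f"baseline accuracy: {baseline:.3f}")

# Permutation importance: shuffle one feature at a time and measure the drop.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - accuracy(X_perm, y)
    print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")
```

Gradient saliency and related attribution methods play the same role for deep networks; the goal the DoD names, letting users understand the basis of AI outputs, is the same.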
References

Written by roman.yampolskiy | Professor of Computer Science. AI Safety & Cybersecurity Researcher.