Defense Innovation Board's AI Principles Project

Message from the DIB

As the Department of Defense (DoD) moves forward in integrating artificial intelligence (AI) and machine learning across its functions, in accordance with the National Defense Strategy and DoD AI Strategy, it is imperative to keep pace with modern business practices to maintain our strategic and technological advantage. Such an approach involves serious reflection and inquiry, a commitment to responsible research and innovation, and the safe and ethical use of these technologies.

We have witnessed how deeply committed the women and men who work in the Department are to ethics: avoiding civilian casualties, adhering to international humanitarian law, and collaborating with allies in international fora to advance international law and norms. Additionally, the Department extensively tests all its systems, especially weapons systems, and systems employing AI will likely be subjected to more scrutiny than ever before.

The DIB noted that, as with all new technologies, rigorous work is needed to ensure new tools are used responsibly and ethically. The stakes are high in fields such as medicine or banking, but nowhere are they higher than in national security.

For this reason, in July 2018, the Department asked the DIB to undertake an effort to help catalyze a dialogue and set of consultations to establish a set of AI Principles for Defense, particularly while the adoption of this technology is at a nascent stage. The DIB is well-suited to bring together business, academic, and non-profit perspectives and their accumulated insights into defense, and to engage a diverse array of stakeholders in our society who have important views to bring to this discussion. To that end, the DIB intends to make this process as robust, inclusive, and transparent as practical.



On July 11, 2018, Brendan McCord spoke at a Defense Innovation Board public meeting, and announced the launch of the Joint Artificial Intelligence Center and the Department’s simultaneous request that the DIB examine AI ethics and develop a set of principles to guide ethical development and application of AI in DoD.

DIB Chairman Dr. Eric Schmidt tasked the DIB Science & Technology Subcommittee to take on this effort.

In August 2018, the Department formed an informal, internal DoD Principles & Ethics Working Group that includes FVEY partners and meets regularly to assist the DIB in gathering information and to promote cooperation. This effort aids us in mapping the internal ecosystem of stakeholders in DoD working on AI.

The Defense Innovation Board Science & Technology Subcommittee held a series of Roundtable Discussions to seek advice from leading experts in the field. The purpose of these Roundtable Discussions was to challenge assumptions rather than to reach consensus. The diverse perspectives gathered served to inform the Board members' research and preparations as they developed a set of proposed AI principles for the Department of Defense.

The first roundtable took place at Harvard University on January 22, 2019. Due to the federal government closure at the time, the corresponding public listening session had to be canceled.

The second roundtable took place at Carnegie Mellon University on March 13, 2019, and the public listening session took place the following day, on March 14. Watch the public listening session below and read the transcript, including the public comments, here. To view public comments submitted in advance of this session, as well as those submitted in advance of the January session, click here.

The third roundtable took place at Stanford University on April 26, preceded by the public listening session on April 25. Watch the public listening session below and read the transcript, including the public comments, here. To view public comments submitted in advance of this session, click here.

During the DIB’s quarterly public meeting on July 10, 2019, board members discussed the AI Principles project as part of the meeting agenda. Please click here to watch the meeting.

For additional public comments that have been submitted, please click here. Please note that all public comments submitted to the DIB reflect the commenter’s personal views, unless indicated otherwise. We will continue to update this document to reflect new comments we have received.

During the DIB’s quarterly public meeting on October 31, 2019, the DIB members voted to approve the proposed AI Principles.

Report Materials

The following individuals participated in a roundtable discussion with DIB members and shared their individual perspectives on the topic. These individuals and their organizations do not necessarily endorse the principles or any other product that ultimately results from the DIB's expert consultations. We are grateful to them for participating in the project and sharing their insights, experiences, and perspectives.

  • General John Allen (Ret.), President, Brookings Institution
  • Dr. Dario Amodei, Research Director, OpenAI
  • Dr. Genevieve Bell, Director of the Autonomy, Agency and Assurance (3A) Institute, Australian National University, and Senior Fellow, Intel Corporation
  • Mr. Jack Clark, Policy Director, OpenAI
  • Dr. Paul Cohen, Dean, School of Computing and Information, University of Pittsburgh
  • Ms. Rebecca Crootof, Clinical Lecturer and Executive Director of the Information Society Project, Yale Law School
  • Dr. David Danks, Professor of Philosophy and Psychology, Carnegie Mellon University
  • Dr. Richard Danzig, Board Trustee, RAND, and Board Member, Center for a New American Security
  • Dr. Neil Davison, Policy Adviser, Legal Division Arms Unit, International Committee of the Red Cross
  • Dr. Ed Felten, Director, Center for Information Technology Policy, and Professor of Computer Science, Princeton University
  • Dr. Mary Gray, Senior Researcher, Microsoft Research, and Fellow, Harvard’s Berkman Klein Center for Internet and Society
  • Mr. Andrew Grotto, International Security Fellow, Center for International Security and Cooperation; and Research Fellow, Hoover Institution, Stanford University
  • Dr. Martial Hebert, Head of Robotics Institute, Carnegie Mellon University
  • Ms. Evanna Hu, CEO, Omelas
  • Dr. Joi Ito, Director, MIT Media Lab
  • Dr. Sheila Jasanoff, Professor of Science and Technology Studies, Harvard Kennedy School
  • Dr. Colin Kahl, Co-Director, Center for International Security and Cooperation, Freeman Spogli Institute for International Studies, Stanford University
  • Dr. Michael Klare, Senior Visiting Fellow, Arms Control Association
  • Dr. Mykel Kochenderfer, Co-Director, AI Safety Center, and Assistant Professor of Aeronautics and Astronautics, Stanford University
  • Ms. Marta Kosmyna, Silicon Valley Lead, Campaign to Stop Killer Robots
  • Ms. Mira Lane, Head of Design and Ethics, Microsoft
  • Dr. Seth Lazar, Professor of Philosophy, Australian National University
  • Dr. Yann LeCun, Chief AI Scientist, Facebook
  • Dr. Fei-Fei Li, Professor of Computer Science, Stanford University
  • Mr. Frank Long, Associate Product Manager, Google
  • Dr. Bill Mark, President, Information and Computing Sciences Division, SRI International
  • Mr. Chris Martin, Director R+D Intelligent + Secure IoT, Bosch
  • Dr. Michael McFaul, Director, Freeman Spogli Institute for International Studies, Stanford University
  • Dr. Paul Nielsen, CEO, Software Engineering Institute, Carnegie Mellon University
  • Dr. Lisa Parker, Professor and Director, Center for Bioethics and Health Law, University of Pittsburgh
  • Dr. Rob Reich, Faculty Director, Center for Ethics in Society, and Professor of Political Science, Stanford University
  • Ms. Dawn Rucker, Principal, Rucker Group
  • Dr. Tuomas Sandholm, Professor of Computer Science, Carnegie Mellon University
  • Mr. Michael Sellitto, Deputy Director, Institute for Human-Centered Artificial Intelligence, Stanford University
  • Dr. Lucy Suchman, Professor of Sociology, Lancaster University
  • Mr. Jonathan Zittrain, Professor of International Law, Harvard Law School; Professor, Harvard Kennedy School; and Professor of Computer Science, Harvard School of Engineering and Applied Sciences

Several additional experts participated in a roundtable, but requested that their names be withheld.

In addition, the following individuals provided their input to the DIB on this initiative outside of a roundtable setting. Like the roundtable discussion participants listed above, they do not necessarily endorse the principles or any other product that ultimately results from the DIB's expert consultations. We will continue to update this list before the DIB releases the final version of the principles on October 31, 2019.

  • Mr. Jared Brown, Senior Advisor for Government Affairs, Future of Life Institute
  • Mr. Miles Brundage, Research Scientist (Policy), OpenAI
  • Dr. Charina Chou, Global Policy Lead for Emerging Technologies, Google
  • Dr. Lorrie Cranor, Associate Department Head, Engineering and Public Policy; FORE Systems Professor, Engineering & Public Policy, and School of Computer Science; Director, CyLab Usable Privacy and Security Laboratory; Co-Director, MSIT-Privacy Engineering master's program
  • Dr. James Crawford, Founder and CEO, Orbital Insight
  • Mr. Jeffrey Ding, Researcher, Centre for the Governance of AI, Future of Humanity Institute, University of Oxford
  • Dr. Ann Drobnis, Director, Computing Community Consortium, Computer Research Association
  • Dr. Baruch Fischhoff, Professor, Institute for Politics and Strategy, and the Departments of Social and Decision Sciences and Engineering and Public Policy, Carnegie Mellon University
  • Dr. Jodi Forlizzi, Director, Human-Computer Interaction Institute, Carnegie Mellon University
  • Ms. Christine Fox, Assistant Director, Policy and Analysis, Johns Hopkins University Applied Physics Laboratory
  • Dr. Matt Gee, CEO, BrightHive
  • Dr. Yolanda Gil, President, Association for the Advancement of Artificial Intelligence (AAAI), and Research Professor of Computer Science and Spatial Sciences, University of Southern California
  • Mr. Mina Hanna, Chair, IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee
  • Ms. Natalie Evans Harris, Head of Strategic Initiatives, BrightHive
  • Dr. Mark Hill, Professor of Computer Science, University of Wisconsin
  • Dr. Michael Horowitz, Professor of Political Science, University of Pennsylvania
  • Mr. Christopher Jenks, Assistant Law Professor, Southern Methodist University
  • Mr. Andrew Kim, Manager of Government Affairs and Public Policy, Google
  • Dr. Lydia Kostopoulos, Member, IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee
  • Dr. Larry Lewis, Director, Center for Autonomy and Artificial Intelligence, CNA
  • Mr. Ashley Llorens, Chief of Intelligent Systems Center, Johns Hopkins University Applied Physics Laboratory
  • Dr. Alex London, Professor of Ethics and Philosophy, Carnegie Mellon University
  • Dr. Mark MacCarthy, Senior Fellow, Institute for Technology Law and Policy, Georgetown Law
  • Dr. Jason Matheny, Director, Center for Security and Emerging Technology, Georgetown University
  • Mr. Brendan McCord, President, Tulco Labs
  • Dr. Chris Meserole, Fellow in Foreign Policy, Brookings Institution
  • Mr. Michael Page, Research Fellow, Center for Security and Emerging Technology, Georgetown University
  • Ms. Lindsey Sheppard, Associate Fellow, International Security Program, Center for Strategic and International Studies
  • Dr. David Sparrow, Research Staff, Institute for Defense Analyses
  • Dr. Molly Steenson, Senior Associate Dean for Research, College of Fine Arts; Associate Professor of Ethics & Computational Technologies; and Associate Professor, School of Design, Carnegie Mellon University
  • Mr. Craig Ulsh, Project Leader, MITRE Corporation
  • Mr. Steve Welby, Executive Director, Institute of Electrical and Electronics Engineers (IEEE)
  • Mr. Brian Williams, Research Staff Member, Joint Advanced Warfighting Division, Institute for Defense Analyses
  • Hon. Robert Work, Distinguished Senior Fellow, Center for a New American Security