As the Department of Defense (DoD) moves forward in integrating artificial intelligence (AI) and machine learning across its functions, it is imperative to keep pace with modern business practices to maintain our strategic and technological advantage. Such an approach involves serious reflection and inquiry, a commitment to responsible research and innovation, and the safe and ethical use of these technologies.
We have witnessed how deeply committed the women and men who work in the Department are to ethics: avoiding civilian casualties, adhering to international humanitarian law, and collaborating with allies in international fora to advance international law and norms. Additionally, the Department extensively tests all its systems, especially weapons systems, and systems employing AI will likely be subjected to more scrutiny than ever before.
The Defense Innovation Board (DIB) noted that -- as with all new technologies -- rigorous work is needed to ensure new tools are used responsibly and ethically. The stakes are high in fields such as medicine or banking, but nowhere are they higher than in national security.
For this reason, in July 2018, the Department asked the DIB to undertake an effort to help catalyze a dialogue and set of consultations to establish a set of AI Principles for Defense, particularly while the adoption of this technology is at a nascent stage. The DIB is well-suited to bring together business, academic, and non-profit perspectives, along with its accumulated insight into defense, and to engage a diverse array of stakeholders in our society who have important views to bring to this discussion. To that end, the DIB intends to make this process as robust, inclusive, and transparent as practical.
Since July 2018, we have done the following:
DIB Chairman Dr. Eric Schmidt tasked the DIB Science & Technology Subcommittee to take on this effort.
The Department formed an informal, internal DoD Principles & Ethics Working Group that includes FVEY partners and meets regularly to assist the DIB in gathering information and to promote cooperation. This effort aids us in mapping the internal ecosystem of stakeholders in DoD working on AI.
The DIB’s plan to gather further input has several elements:
First, in 2019 the DIB will hold a series of open public listening sessions around the country to give any member of the public the opportunity to be heard and provide input to this endeavor directly to the DIB members. These will take place at Carnegie Mellon University on March 14, and Stanford University on April 25.
Second, the Subcommittee will host a series of three expert roundtable conversations with a diverse cross-section of business, academic, and non-profit leaders. The DIB will introduce hypothetical scenarios that highlight ethical tensions around AI and are designed to stimulate discussion among the convened AI experts, whose analysis of these scenarios will help guide the DIB members as they craft the AI Principles. Though these specific conversations are not for attribution, the DIB will later publish the participants’ names, with their consent.
Third, the Subcommittee will reach out to consult with national and international organizations and professional associations in the AI community that work on ethics and safety.
Fourth, we provide here an online option for providing written input to this process. Each submission will be made public and posted to the DIB website before the end of this effort. Please submit a public comment below.
Fifth, we will continue to update the public on the progress of this project at DIB quarterly meetings.
The experts that the DIB will convene are a mix of academics, researchers, ethicists, lawyers, business executives, non-profit leaders, venture capitalists, policy experts, and others who work in the AI field. We are taking care to include not only those familiar with the Department’s operational and business environment, but also experts who are skeptical or critical of DoD’s potential use of AI, as well as leading AI engineers and technicians who have never worked with DoD. There will be disagreements among this group, as these matters are necessarily controversial, and we have sought the greatest possible diversity and inclusivity of stakeholders. We will not shy away from disagreements, as respectful and forthright dialogue should lead to meaningful understanding on all sides, and a robust contest of ideas should generate new insight.
After gathering public commentary and invited expert advice, the DIB will deliberate and make a recommendation to the Secretary of Defense. At that point, the DIB’s role -- and the role of the public -- will conclude, as the Department’s leaders will then conduct an internal review process and make a decision about adopting the proposed AI Principles.
The potential benefits of a set of AI Principles for Defense, if adopted, include informing future strategies, doctrine, and concepts of operation; influencing future budgets and investment strategies; guiding professional military education; and shaping the activities of the Joint AI Center and other components across the Department, as well as guiding individuals who will likely face novel scenarios requiring independent judgment in the future. A clearly articulated set of AI Principles for Defense would also offer greater clarity to companies considering whether or not to work with the Department, and could serve as a model to other agencies in the U.S. Government, and potentially other governments as well.
Ultimately, these AI Principles should demonstrate DoD’s commitment to deter war and use AI responsibly to ensure civil liberties and the rule of law are protected. In doing so, we seek to build trust between the U.S. Government and the technology community, an endeavor essential to harnessing the National Security Innovation Base, as described in the National Defense Strategy (NDS). This trust is foundational to achieving the goals of the NDS so we remain both militarily effective and morally strong.