As the Department of Defense races to develop AI-enabled tools and systems, there are outstanding questions about exactly where its investments are going, and what benefits and risks might result. One key unknown: will commanders and troops trust their new tools enough to make them worth the effort?
Drawing on publicly available budgetary data about the DOD science and technology program, the Center for Security and Emerging Technology, or CSET, examined the range of autonomy- and AI-related research and development efforts advanced by the U.S. Army, Navy, Marines, Air Force, and DARPA. Among the main research lines are programs dedicated to increasing automation, autonomy, and AI capabilities in air, ground, surface, and undersea unmanned vehicles and systems; increasing the speed and accuracy of information processing and decision-making; and increasing precision and lethality throughout the targeting process. We recently released a two-part analysis of the scope and implications of these efforts. One of the most consistent themes is an emphasis on human-machine collaboration and teaming.
Indeed, in the public imagination, the integration of AI into military systems foretells a future where machines replace humans on the battlefield and wartime decisions are made without human input or control. Yet in our assessment of U.S. military research on emerging technologies, humans remain very much in the loop.
The U.S. Army’s flagship Next Generation Combat Vehicle research program is a good example of human-machine teaming cutting across different use cases and applications of autonomy and AI. One of