Google downplays the significance of the pilot project, saying the technology is for non-offensive uses only.
Google is not usually shy about touting its accomplishments in the artificial intelligence space, but one win the company does not seem particularly keen on broadcasting is a recent pilot project with the U.S. Department of Defense.
In a widely quoted report this week, Gizmodo said it had learned that Google was quietly partnering with the DoD on a project to help the Pentagon develop technology for analyzing footage gathered by aerial drones.
Google is working with a Defense Department group called Project Maven that was established last year to accelerate the military’s adoption of artificial intelligence and machine learning capabilities for analyzing big data sets.
One of the primary missions for the group—as described in this memo—is to find technology to speed up the evaluation of the massive number of photos and videos that U.S. military drones are gathering daily in support of the Defeat-ISIS campaign.
The Algorithmic Warfare Cross-Functional Team (AWCFT)—as Project Maven is also known—has been tasked with providing the military with computer vision algorithms for better detecting and classifying objects in drone footage.
According to the Pentagon memo, the goal is to reduce human involvement in full-motion video analysis, increase actionable intelligence and enhance decision-making capabilities. The deep learning technology that Project Maven is developing will be used to help drone analysts better target bombing strikes against ISIS, the Intercept reported, citing unnamed sources.
News that Google is involved in the project has caused some anger inside the company, Gizmodo said. Some employees are concerned that Google is offering its AI tools for surveillance purposes, while others apparently have questioned whether Google's involvement is ethical given the company's "don't be evil" motto, adopted as part of its corporate code of conduct in 2000.
The consternation inside Google reflects broader concerns within the industry about the use of AI by the military for offensive purposes.
A Google spokesman on March 7 confirmed the company’s work on the project, but downplayed its significance. In an emailed statement, the spokesman said the technology that Google is helping with “flags images for human review, and is for non-offensive uses only.”
The statement noted that Google has long worked with government agencies to provide technology products and services. “This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesman said.
The statement acknowledged that military use of machine learning and artificial intelligence tools raises valid concerns. “We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
Analysis of drone images is just one way that the military plans to use machine learning and AI tools. Going forward, AWCFT will also work on integrating the technologies into other defense intelligence missions. In addition, AWCFT will be responsible for consolidating what the Pentagon describes as all algorithm-based technology initiatives that develop, employ or field AI, deep learning and machine learning.