Weeks after facing both internal and external blowback[1] over its contract to provide the Pentagon with AI technology for drone video analysis, Google on Thursday published a set of principles[2] that explicitly states it will not design or deploy AI for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."

Google committed to seven principles to guide its development of AI applications, and it laid out four specific areas for which it will not develop AI. In addition to weaponry, Google said it will not design or deploy AI for:

  • Technologies that cause or are likely to cause harm.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

While Google is rejecting the use of its AI for weapons, "we will continue our work with governments and the military in many other areas," Google CEO Sundar Pichai wrote in a blog post. "These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."

Google's contract with the Defense Department came to light in March after Gizmodo published details[3] about a pilot project shared on an internal mailing list. Thousands of Google employees[4] signed a petition protesting the contract, and some quit in protest. Google then reportedly told its staff it would not bid to renew the contract[5] for the Pentagon's Project Maven after it expires in 2019.

In his blog post, Pichai said the seven principles laid out Thursday "are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions."
