Google outlined its artificial intelligence principles in a move to placate employees who were worried about their work and research winding up in U.S. weapons systems.
Guess what? It's already too late. There's no way Google's open source approach and its headline principle of not allowing its AI into weapons are going to mesh. Chances are fairly good that technology Google has already open sourced is sitting in some fledgling weapons system somewhere. After all, TensorFlow and a bunch of other neural network tools are pretty damn handy.
In a blog post[1] outlining Google's approach going forward[2] (think 'do no evil,' AI-style), CEO Sundar Pichai gave the company's open source efforts[3] props high up. He said:
Beyond our products, we're using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.
And that's all true. It's also true that any technology can be used for good or evil. That's the real pickle in Google's AI approach: it sounds good in theory, but carrying it out is going to create a few issues.
What happens when an AI approach that's good is open sourced and used for evil? And whose definition of evil is it, anyway?
Google's seven principles go as follows:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be