Artificial intelligence is evolving so fast that anyone with the skills now has access to the tools and platforms needed to build with it. But is it time to stop and think before we plunge headlong into cognitive chaos?

Photo: Joe McKendrick

Developers and IT managers are now on the front lines of growing ethical dilemmas, as businesses potentially surrender part of their decision-making control to machines. Perhaps it's time for greater awareness of, and education on, bringing AI-based decision-making into the light.

At its recent Build conference, Microsoft stated[1] that its goal going forward is to "help every developer be an AI developer," building on its offerings, especially the Azure cloud platform. At the same time, Google keeps opening up AI access to anyone who wants to work with it, as announced at its Google I/O conference, which also just took place. There, CEO Sundar Pichai famously demonstrated[2] technology that arguably passed the Turing Test[3], playing audio of a highly interactive phone call placed by Google Assistant to a hair salon.

With all this great power comes great responsibility, and developers and executives are being cautioned not to build, or rely on, the black boxes that have characterized AI up to this point. Recently, Bank of America and Harvard University teamed up to convene the Council on the Responsible Use of Artificial Intelligence[4], which will bring together business, government, and civic leaders to educate them on the latest developments in AI and machine learning, discuss the emerging legal, moral, and policy implications, and investigate ways of developing responsible AI platforms.

Bank of America has been working with a range of AI approaches.
