From Google's[1] commitment to never pursue AI applications that might cause harm, to Microsoft's[2] "AI principles", to IBM's[3] defense of fairness and transparency in all algorithmic matters: big tech is promoting a responsible AI agenda, and it seems companies large and small are following the lead.

The statistics speak for themselves. While in 2019 a mere 5% of organizations had drawn up an ethics charter framing how AI systems should be developed and used, the proportion jumped[4] to 45% in 2020. Keywords such as "human agency", "governance", "accountability" or "non-discrimination" are becoming central components of many companies' AI values. The concept of responsible technology, it would seem, is slowly making its way from the conference room into the boardroom.

This renewed interest in ethics, despite the topic's complex and often abstract dimensions, has been largely motivated by pushes from both governments[5] and citizens[6] to regulate the use of algorithms. But according to Steve Mills, a leader in machine learning and artificial intelligence at Boston Consulting Group (BCG), there are many ways that responsible AI might actually play out in businesses' favor, too.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation[7] (TechRepublic Premium)

"The last 20 years of research have shown us that companies that embrace corporate purposes and values improve long-term profitability," Mills tells ZDNet. "Customers want to be associated with brands that have strong values, and this is no different. It's a real chance to build a relationship of trust with customers." 

The challenge is sizeable. Looking over the past few years, it seems that carefully drafted AI principles have not stopped algorithms from bringing reputational damage to high-profile companies. Facebook's advertising algorithm, for...

Read more from our friends at ZDNet