Artificial intelligence and machine learning have come a long way in recent years, with solid business cases, powerful algorithms, vast compute resources, and rich data sets now the norm at many enterprises. However, AI managers and specialists are still grappling with seemingly insurmountable organizational and ethical issues that hamstring their efforts, or even send them down the wrong path.
That's the conclusion of a recent in-depth analysis[1] of the pressures and compromises faced by today's AI teams. The researchers, Bogdana Rakova (Accenture and Partnership on AI[2]), Jingying Yang (Partnership on AI), Henriette Cramer (Spotify) and Rumman Chowdhury (Accenture), found that most commonly, "practitioners have to grapple with lack of accountability, ill-informed performance trade-offs and misalignment of incentives within decision-making structures that are only reactive to external pressure."
To achieve accountability, most AI initiatives still need wider use of organization-level frameworks and metrics, structural support, and proactive evaluation and mitigation of issues as they arise.
AI teams need not only the skill sets to build, test and refine AI models and applications; they also need to step up as transformational leaders, Rakova and her co-authors advocate. "Industry professionals, who are increasingly tasked with developing accountable and responsible AI processes, need to grapple with inherent dualities in their role as both agents for change, but also workers with careers in an organization with potentially misaligned incentives that may not reward or welcome change." This is new ground for most as well: "practitioners have to navigate the interplay of their organizational structures and algorithmic responsibility efforts with relatively little guidance." The researchers refer to this ability to balance organizational requirements with responsible and accountable AI as "fair-ML."
The four leading issues the researchers found impeding responsible