Decades before today's deep learning neural networks compiled imponderable layers of statistics into working machines, researchers were trying to figure out how to explain statistical findings to a human.

IBM this week offered up the latest effort in that long quest to interpret, explain, and justify machine learning: a set of open-source programming resources it calls "AI Explainability 360."

It remains to be seen whether yet another tool will solve the conundrum of how people can understand what is going on when artificial intelligence makes a prediction based on data. 

The toolkit consists of eight algorithms released over the course of 2018. The IBM tools are posted on GitHub as a Python library[1], and the project is laid out in a blog post[2] by IBM Fellow Aleksandra Mojsilovic.
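As a taste of what is in the library, here is a minimal sketch using one of the eight algorithms, ProtodashExplainer, which picks out "prototype" rows that summarize a dataset. It assumes the aix360 package (installable via pip) and the API as documented at release time; the method signature and return order may differ in later versions, so treat it as illustrative rather than definitive.

```python
# Illustrative sketch only: assumes "pip install aix360" and the
# ProtodashExplainer API as documented at release; check the GitHub
# repo if the signature or return order has since changed.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Toy data: 200 rows of 5 numeric features standing in for a dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

explainer = ProtodashExplainer()
# Ask for m=3 prototype rows that best summarize X, using X as both
# the dataset to explain and the pool to draw prototypes from.
weights, proto_idx, _ = explainer.explain(X, X, m=3)

print("prototype row indices:", proto_idx)
print("importance weights:", weights)
```

Instead of explaining a single prediction, this family of algorithms explains a dataset by example, which is one of several distinct notions of "explanation" the toolkit covers.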

Thursday's announcement follows similar efforts by IBM over the past year, such as its open-source release in September of "bias detection" tools[3] for machine learning work.

The motivation is clear enough. Machine learning is creeping into more and more areas of life, and society wants to know how such programs arrive at predictions that can influence policy, medical diagnoses, and much else.

Also: IBM launches tools to detect AI fairness, bias and open sources some code[4]

The now-infamous negative case of misleading A.I. bears repeating. A 2015 study by Microsoft[5] describes a machine learning model that implied pneumonia patients in hospitals had better prognoses if they also happened to suffer from asthma. However, the above-average prognosis was actually a result of the fact that asthma sufferers received aggressive treatment in the ICU because doctors regarded them as high-risk patients; the model, which never saw the treatment, credited the asthma itself.
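The dynamic is easy to reproduce. Below is a small synthetic sketch (not the study's data or model; every probability is invented for the demo) showing how a confounder can make a risk factor look protective: asthma drives aggressive ICU care, aggressive care drives survival, and a model that never observes the treatment concludes asthma is beneficial.

```python
# Synthetic illustration (not the study's data or model) of how a
# confounder can make a risk factor look protective. All numbers
# below are made up for the demo.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

asthma = rng.random(n) < 0.15                        # 15% of patients
# Asthma patients are far more likely to get aggressive ICU care.
icu_care = rng.random(n) < np.where(asthma, 0.9, 0.2)
# Aggressive care sharply lowers the chance of dying.
p_death = np.where(icu_care, 0.05, 0.30)
death = rng.random(n) < p_death

# Fit on asthma alone, as if the treatment were never recorded.
model = LogisticRegression()
model.fit(asthma.reshape(-1, 1).astype(float), death)

# Negative coefficient: asthma appears "protective" to the model.
print("asthma coefficient:", model.coef_[0][0])
```

The data cannot be fixed by an explainability tool, but a rule like "asthma lowers risk" made visible is exactly the kind of red flag a clinician can catch, which is the argument for toolkits like this one.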
