Researchers at the Massachusetts Institute of Technology (MIT) have developed what is likely a world first -- a "psychopathic" artificial intelligence (AI).
The experiment is based on the Rorschach test[1], devised in 1921, which uses subjects' perceptions of inkblots to assess personality traits, including those deemed psychopathic, as well as underlying thought disorders.
Norman[2] is an AI experiment born from that test and, according to MIT, "extended exposure to the darkest corners of Reddit." It was built to explore how datasets and bias can influence the behavior and decision-making capabilities of artificial intelligence.
"When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it," the researchers say. "The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set."
Norman is an AI system trained to perform image captioning, in which deep learning algorithms are used to generate a text description of an image.
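As a rough sketch of what such a pipeline looks like in practice, the example below runs an off-the-shelf pretrained captioning model through the Hugging Face transformers library. The model name and input file are assumptions chosen for illustration; this is not Norman's model, only the general technique of an image encoder feeding a text decoder.

```python
# Minimal image-captioning sketch, assuming the Hugging Face "transformers"
# library and the publicly available BLIP captioning model. This is not the
# model MIT used for Norman; it only illustrates the general technique.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("inkblot.png").convert("RGB")  # hypothetical input image

# Encode the image, then let the text decoder generate a caption.
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```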
However, Norman was trained on data plundered from the depths of Reddit, specifically a subreddit dedicated to graphic content and brimming with images of death and destruction, making its dataset far from what a standard AI would be exposed to.
In a prime example of artificial intelligence gone wrong, MIT performed the Rorschach inkblot test on Norman, using a standard image-captioning neural network as a control for comparison.
The results are disturbing, to say the least.