A new technical paper has been released demonstrating how businesses can identify whether their artificial intelligence (AI) technology is biased. It also offers recommendations for those building AI systems to ensure they are fair, accurate, and compliant with human rights.
The paper, Addressing the problem of algorithmic bias, was developed by the Australian Human Rights Commission, together with the Gradient Institute, Consumer Policy Research Centre, Choice, and the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61.
Human Rights Commissioner Edward Santow, in his foreword, described algorithmic bias as a "kind of error associated with the use of AI in decision making, and often results in unfairness".
He added that when this error occurs it can cause harm, which is why human rights should be considered whenever AI systems are developed and used to make important decisions.
"Artificial intelligence promises better, smarter decision making, but it can also cause real harm. Unless we fully address the risk of algorithmic bias, the great promise of AI will be hollow," he said.
In developing the paper, five scenarios were used to highlight how algorithmic bias can arise. In one scenario, for instance, bias stemmed from out-of-date historical data that was no longer representative of current conditions.
In another scenario, the paper found that label bias can arise when the quality of the labels differs across groups distinguished by protected attributes, such as age, disability, race, sex, or gender.
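The paper itself does not include code, but a minimal sketch of how such a label-quality disparity might be surfaced, assuming a hypothetical audit sample in which a subset of training labels has been manually verified, could look like this in Python:

```python
import pandas as pd

# Hypothetical audit sample: each row carries a protected attribute ("group"),
# the label the model was trained on, and a manually verified label.
audit = pd.DataFrame({
    "group":          ["A", "A", "A", "A", "B", "B", "B", "B"],
    "training_label": [1,   0,   1,   0,   1,   0,   0,   1],
    "verified_label": [1,   0,   1,   0,   0,   0,   1,   1],
})

# Label bias shows up as a disparity in label error rates across groups.
audit["label_error"] = audit["training_label"] != audit["verified_label"]
error_by_group = audit.groupby("group")["label_error"].mean()
print(error_by_group)  # a large gap between groups is a warning sign
```

In this toy example, the labels for group A match the verified values while half of group B's labels do not, which is the kind of disparity the paper warns about.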
The paper identified five general approaches that could be taken to mitigate algorithmic bias. These include acquiring more "appropriate" data, such as data on under-represented cohorts, to help reduce inequality.
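The approach the paper names is to acquire new, more representative data; where that is not possible, a common stand-in is to re-weight or resample the data already held. A minimal Python sketch of upsampling an under-represented cohort, using made-up column names, might look like this:

```python
import pandas as pd

# Hypothetical training set in which cohort "B" is under-represented.
df = pd.DataFrame({
    "cohort":  ["A"] * 8 + ["B"] * 2,
    "feature": range(10),
    "label":   [0, 1] * 5,
})

# Upsample each cohort to the size of the largest one so the model
# sees under-represented groups in proportion to the others.
target = df["cohort"].value_counts().max()
balanced = pd.concat(
    [g.sample(target, replace=True, random_state=0) for _, g in df.groupby("cohort")],
    ignore_index=True,
)
print(balanced["cohort"].value_counts())  # both cohorts now appear equally often
```

Resampling only rebalances what has already been collected, which is why the paper's emphasis is on gathering genuinely more appropriate data in the first place.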