Artificial intelligence has a major public relations problem: intentionally or not, developers keep building biases into their systems, producing algorithms that reflect the same prejudices common in our society.
That problem is a big part of why researchers from MIT and Harvard University developed an algorithm that can scrub the bias from AI — like sensitivity training for algorithms.
The tool audits algorithms for bias and helps retrain them to behave more equitably, according to a research paper presented at the Conference on Artificial Intelligence, Ethics and Society.
And once complex AI systems are deployed in the real world, it becomes very difficult to explain exactly how they make their decisions.
That opacity is why automating the process is so important — the new tool can go in and reconfigure how much weight the AI system gives to each aspect of its training data, according to the research.
For instance, if an algorithm had been trained to conclude that Black people make poor candidates for a job, the new tool could feasibly teach it to evaluate candidates on the relevant parts of their applications instead.
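To make the reweighting idea concrete, here is a minimal sketch of one well-known technique along these lines, Kamiran–Calders reweighing. It is an illustration of the general approach, not the MIT/Harvard paper's own algorithm: each training example gets a weight chosen so that a protected attribute (the group) and the outcome label become statistically independent in the weighted data, counteracting a skewed historical record before a model is trained on it. The variable names and toy data below are invented for the example.

```python
from collections import Counter


def reweigh(groups, labels):
    """Per-example weights that make group and label independent.

    Kamiran-Calders reweighing: weight(g, y) = P(g) * P(y) / P(g, y).
    Under-represented (group, label) pairs get weights above 1,
    over-represented pairs get weights below 1.
    """
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]


# Toy hiring data: group "A" was hired 3 times out of 4,
# group "B" only 1 time out of 4 (label 1 = hired).
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweigh(groups, labels)

# After reweighting, the weighted hiring rate is equal across groups.
for g in ("A", "B"):
    hired = sum(w for w, gi, y in zip(weights, groups, labels) if gi == g and y == 1)
    total = sum(w for w, gi in zip(weights, groups) if gi == g)
    print(g, hired / total)  # both groups: 0.5
```

The weights would then be passed to a learner that supports per-sample weights (most classifiers do), so the model no longer learns the historical imbalance as a signal.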