Merriam-Webster defines ‘algorithm’ as a “step-by-step procedure for solving a problem”… In an analog world, ask anyone to jot down a step-by-step procedure to solve a problem, and it will be shaped by bias, perspective, tacit knowledge, and individual viewpoint. Computer algorithms, coded by humans, will naturally contain similar biases.
The challenge before us is that with Moore’s Law, cloud computing, big data, and machine learning, these algorithms are evolving and growing in complexity, making algorithmic biases harder to detect – “the idea that artificially intelligent software…often turns out to perpetuate social bias.”
“Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities.” What is HR’s role in reviewing these rules, algorithms, and code? What questions should HR be asking?
In December 2017, New York City passed a bill to address algorithmic discrimination. Some notable text from the bill: “a procedure for addressing instances in which a person is harmed by an agency automated decision system if any such system is found to disproportionately impact persons;” and “making information publicly available that, for each agency automated decision system, will allow the public to meaningfully assess how such system functions and is used by the city, including making technical information about such system publicly available where appropriate;”
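The bill’s concern with systems that “disproportionately impact persons” echoes a screen HR already knows: the EEOC’s four-fifths rule, which flags a selection process when one group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch of that check applied to an automated screen’s outcomes (the group names and numbers below are hypothetical, for illustration only):

```python
# Illustrative sketch: screening a hiring algorithm's outcomes with the
# EEOC "four-fifths rule", a common first check for disparate impact.
# All data below is hypothetical.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the highest group's rate.
    A ratio below 0.8 is the conventional four-fifths red flag."""
    return group_rate / reference_rate

# Hypothetical outcomes from an automated resume screen
outcomes = {
    "group_a": {"applicants": 200, "selected": 60},
    "group_b": {"applicants": 150, "selected": 27},
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

A check like this only surfaces disparate *outcomes*; it says nothing about why the algorithm produced them, which is exactly the harder review work the bill asks agencies to make possible.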
Big data, AI, and machine learning will put a new, forward-thinking ethical burden on the creators of this technology, and on the HR professionals who support them. Other examples include Google Photos’ incorrect image labeling and Nikon’s flawed facial detection. While none of these are intentional or malicious, they can be offensive, and the ethical standards behind such systems need to be vetted and reviewed. This is a new area for HR professionals, and it’s not easy.
As Nicholas Diakopoulos suggests, “We’re now operating in a world where automated algorithms make impactful decisions that can and do amplify the power of business and government. As algorithms come to regulate society and perhaps even implement law directly, we should proceed with caution and think carefully about how we choose to regulate them back.”
The ethical landscape for HR professionals is changing rapidly.