When searching for talent, sometimes the best person for the job is a machine. Robots make sense for repetitive and dangerous tasks, but they also work well as a check against bias.
Artificial intelligence already outperforms judges at setting bail because humans on the bench tend to overweight a defendant’s demeanor, a poor predictor of flight risk. Likewise, hiring algorithms do better than recruiters at screening resumes because humans in HR show too much favoritism toward traditional applicants.
Unfortunately, smart technology also has blind spots. Computers don’t care about things like race or gender, but they rely on inputs that reflect past correlations based on human actions. Some biases get baked into the data.
People who Google black-sounding names, for example, see more ads for criminal background checks. And keywords like “healthy complexion” mostly produce images of light-skinned women. “You won’t see a man for pages,” says Deb Raji, a tech fellow at the AI Now Institute at New York University.
Amazon and IBM initially dismissed Raji’s 2018 paper, co-authored with Joy Buolamwini at MIT, on bias in facial recognition software. But both companies announced a pause on police use of their products in the aftermath of George Floyd’s slaying in Minneapolis.
Nationwide protests since then have focused on racial prejudice, but any type of false assumption can lead to costly errors—even in contexts detached from thorny social issues. The good news for leaders who must operate in a world of bad information is that data-based decision makers can overcome bias through conscious commitment and action.
Data Have Baked-in Bias
The first thing to understand is that algorithms make predictions and facilitate decisions based on existing patterns, which are shaped by human behavior. Crime statistics, college enrollment figures, market trends and other datasets do not emerge from nature in a neutral form. They reflect thousands or millions of individual decisions, aggregated to show trends.
Facts in isolation are not the problem. Biases emerge—usually by accident—when people combine information in a particular order and draw conclusions. Some details get omitted, while others get emphasized. Some questions never get asked at all. People tend to see what they look for.
Smart decision makers do not throw out the data just because of the potential for bias. But they interact with the information carefully, recognizing that generalities do not apply in every case. Above all, they continue striving to eliminate bias. The process is not a once-and-done proposition.
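The baked-in pattern described above can be made concrete with a toy sketch. The data here are hypothetical and invented purely for illustration: a naive model trained on past human hiring decisions simply learns, and then automates, the favoritism those decisions contained.

```python
# Toy sketch with invented data: a model trained on biased historical
# hiring decisions reproduces the bias as a "neutral" prediction.
from collections import defaultdict

# Historical records: (group, hired) pairs shaped by recruiters who
# favored "traditional" applicants regardless of qualifications.
history = ([("traditional", True)] * 80 + [("traditional", False)] * 20
           + [("nontraditional", True)] * 30 + [("nontraditional", False)] * 70)

def train(records):
    """Learn the historical hire rate for each group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group):
    """Recommend an interview if the learned hire rate exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(predict(model, "traditional"))     # True
print(predict(model, "nontraditional"))  # False: the old bias, automated
```

Nothing in the code mentions any protected attribute, yet the output mirrors the skew in the inputs, which is exactly why "the computer doesn't care about race or gender" offers no guarantee of fairness.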
Along the way, people who make decisions based on the best available data remain kind to themselves and others. Armed with a commitment to objectivity, they pivot when conclusions prove false. But they don’t beat themselves up or get consumed with blame or guilt. Such responses are not productive.
People May Have Agendas
Although bias often happens unintentionally, my coauthors and I explore a different scenario in our new research. People in many situations withhold or distort data intentionally to rig outcomes in their favor.
Aside from social issues, this happens in many business circumstances, such as when policyholders file insurance claims or when job applicants submit resumes. Even if they don’t outright lie, people with agendas emphasize positive details about themselves while strategically hiding weaknesses.
Web content writers do something similar with search engine optimization. They try to trick Google algorithms to drive traffic to their sites. Applicants also have incentives to game the system at the U.S. Patent and Trademark Office, the subject of our study.
Whether or not these applicants deserve a patent, their goal is to win one anyway by showing the novelty and “non-obvious” nature of their inventions. Many applicants insert irrelevant information, omit citations or assign new meanings to existing words to disguise weak claims.
The distractions often work because of “salience bias”: the human tendency to focus on the most prominent information while overlooking or downplaying less prominent details.
Machines Can Help, But They Need Support
Biases baked into the data create problems. So do assumptions based on salience bias, especially when people with agendas strategically manipulate the data. Both of these challenges make the job harder for the patent examiners in our study, who must sift through ever-expanding amounts of “prior art.”
Fortunately, smart machines can help. Their processing power, combined with their immunity to boredom and fatigue, allows them to identify the most relevant matches at breakneck speed with staggering precision. This can backfire, however, when inputs are strategically entered to direct machines in the wrong direction.
To overcome biased predictions, artificial intelligence must learn to treat certain information as adversarial. In effect these programs must overcome the first law of computer science: “Garbage in, garbage out.”
They must take unreliable inputs and turn them into reliable outputs. Usually this requires human assistance.
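One way to treat inputs as adversarial can be sketched as follows. The scoring scheme here is a hypothetical illustration, not the patent examiners’ actual tooling: features an applicant can easily stuff, such as raw keyword counts, are capped and down-weighted relative to harder-to-fake signals, such as independently verified citations.

```python
# Minimal sketch (hypothetical weights and features) of discounting
# manipulable inputs: gaming the easy feature stops paying off.

def score_application(keyword_hits: int, verified_citations: int) -> float:
    """Score a filing, treating the stuffable feature as adversarial."""
    capped_keywords = min(keyword_hits, 10)  # stuffing past 10 adds nothing
    return 0.2 * capped_keywords + 1.0 * verified_citations

honest = score_application(keyword_hits=8, verified_citations=5)
stuffed = score_application(keyword_hits=500, verified_citations=0)
print(honest > stuffed)  # True: keyword stuffing no longer wins
```

The cap and the relative weights are where human domain expertise enters: someone who has seen the tricks applicants employ must decide which inputs are cheap to fake and how heavily to discount them.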
Allies Complement Each Other
The best results come when people and machines work together as allies. Teams that complement each other’s strengths and weaknesses gain a collaborative advantage, a key differentiator when tackling complex problems.
Humans who work best with machines to detect and eliminate bias tend to have relevant domain expertise, which refers to the skills and knowledge accumulated through prior learning within a specific field.
Patent examiners looking at mechanical inventions, for example, need a theoretical background in engineering as a baseline. But they also need real-world experience sifting through prior art, seeing the tricks that applicants employ and learning to counteract them. Our study shows that machine learning tools sometimes make worse predictions than older technology when paired with improperly trained humans who take inputs at face value.
People protesting in the streets do not care about bias at the patent office. Many business operations lack social urgency. But the same principles apply when confronting big issues like racism.
Algorithms can move fast, but they need people with the right skills to make sure they move in the right direction. Bias will not disappear, but decision makers who stay vigilant can take solace in at least one objective truth: Machines can learn, and so can humans.