Can bias and prejudice ever be fully eliminated from machine-intelligence-enabled hiring practices? HBR suggests that algorithms are, in part, our opinions embedded in code. They reflect our innate biases and prejudices, which can lead to machine learning mistakes and misinterpretations. For HR professionals, machine learning and artificial intelligence have the potential to revolutionize the industry. Let's talk about how we can navigate ethical pitfalls as this tech gets embedded into recruiting operations.
I'll start by suggesting that, like Big Data, the term AI, or artificial intelligence, is widely overused. This article will cover themes around robotics, automation, machine learning, and machine intelligence; that is, "the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions."
While these terms are often used synonymously to describe this technology revolution, I want to differentiate between AI and machine learning/machine intelligence. True AI, a term coined by John McCarthy in 1956, describes a state in which computers can think for themselves. By this definition, AI largely does not exist today, and some experts suggest it won't for another decade. Machine learning, an application of AI, enables the analysis of large amounts of data to predict future trends. This is very much a reality we all contribute to, and experience, daily.
The AI Apocalypse
Two years ago, Stephen Hawking was asked a question: will AI kill or save humankind? His answer, simply put, suggested that we are all doomed. While not as apocalyptic as Hawking, other tech leaders such as Elon Musk and Bill Gates have come forward to express similar concerns. Google's head of AI, John Giannandrea, is not worried about an impending AI apocalypse, but he does comment on the relationship between humans, AI systems, and learned human prejudices: "The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased."
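Giannandrea's point can be shown with a toy sketch. Everything here, the records and the "learning" rule, is invented for illustration: a system that simply learns from historical hiring decisions reproduces whatever skew those decisions contained.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired?)
# Group A was hired 80% of the time, group B only 40%.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def learn_hire_rate(records):
    """'Train' by memorizing the historical hire rate per group."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = learn_hire_rate(history)
print(model)  # {'A': 0.8, 'B': 0.4} -- the model inherits the skew
```

Nothing in the code is prejudiced; the bias lives entirely in the data it was fed, which is exactly the safety question Giannandrea raises.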
Last year, Facebook's artificial intelligence bots started talking to each other in a shorthand of their own, and the AI behind Google's Translate tool created its own internal language. Facebook eventually steered its bots back to English, while Google opted to keep its system's invented language in place.
There's no denying the exciting promise of this technology. Think disease and poverty eradication, or autonomous energy and water systems. But then there are the things we probably don't want to see. Anyone fancy the idea of autonomous weapons? How about machines that develop a will of their own, one that goes against the fundamental nature of our humanity?
Ethics in Hiring Practices
For the HR industry, building ethics into AI-delivered recruitment and hiring processes is especially crucial, as digital transformation leaders work to ensure that human bias is mitigated rather than amplified.
How do we do that? There's no single answer. This was the focus of the track I led at #truLondon, a recruiting event that's part of a global series of unconference-style discussions. In this track, HR and tech pros came together to debate the value of AI in HR, and how we can ensure that the technology enables ethical hiring practices.
We landed on a few considerations.
Karrie Karahalios, professor of computer science at the University of Illinois, recently presented research at a Google conference demonstrating how difficult it can be to identify bias in even the most basic algorithms. She notes that people generally don't understand how Facebook filters the posts in their newsfeeds, which illustrates just how hard analyzing an algorithm can be. While it's not always as simple as publishing details of the data employed, she says, researchers are working to make these systems give approximations of their workings to engineers and end users.
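To make that point concrete, here is a toy ranker (all names and numbers are hypothetical). The code never mentions group membership, yet a proxy feature correlated with group skews the shortlist, so the bias is invisible from reading the algorithm alone.

```python
candidates = [
    # (candidate, years_at_current_employer, group) -- invented data
    ("p1", 8, "A"), ("p2", 7, "A"), ("p3", 6, "A"),
    ("p4", 2, "B"), ("p5", 3, "B"), ("p6", 1, "B"),
]

# The "algorithm": rank purely by tenure and take the top 3.
# It looks neutral -- group is never consulted.
shortlist = sorted(candidates, key=lambda c: c[1], reverse=True)[:3]

groups = [g for _, _, g in shortlist]
print(groups)  # ['A', 'A', 'A'] -- group B never surfaces
```

If tenure correlates with group (say, because one group faces more career interruptions), the neutral-looking rule quietly excludes that group, which is why auditing outcomes matters more than reading code.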
Kriti Sharma, VP of Bots and AI at Sage, spoke at Mobile World Congress in Barcelona last year, stressing the importance of developing AI systems that promote diversity. That starts with employing developers from diverse backgrounds. She says, “The lack of diversity in the AI developer community means that we aren’t getting enough variety in the information which we are inputting to AI machines. This means that AI systems are working with incomplete data, which is skewed to the perspectives of the engineers who develop them.”
During my research for this article, I tried to find recent statistical data on gender profiles for developers in the U.S. My findings were incomplete, but the available data still demonstrates a gross underrepresentation of female coders and developers. In the gaming industry alone, 74 percent of game developers worldwide are men (Statista). A 2017 HackerRank article suggests women make up less than a third of the tech talent pool in Silicon Valley, and at Google and Facebook, women hold just 17 percent and 15 percent of technical positions, respectively.
"The lack of diversity in the AI developer community means that we aren’t getting enough variety in the information which we are inputting to AI machines."
If we believe developer diversity is key to keeping biased data out of AI algorithms, we have a much bigger problem to solve.
Tech Should Augment, not Replace
A respected leader once told me that decisions should be made with two data points and a gut feeling. I think that rings true here. AI has many useful applications, including assisting HR professionals with menial tasks and streamlining a recruiter's job and effectiveness. It also has the potential to revolutionize an organization's operations and efficiency. But machine learning should enable human decision making, not replace it. There's something to be said for the human connection, in combination with data points, driving hiring decisions. Some will argue that it is precisely that human element which creates subconscious bias. Still, I think we should consider the importance of face-to-face connection, the synergy that develops through personal communication, as well as other hiring sources, including tried-and-true referrals.
Don't Forget the Human Connection
While we're on the topic of being human, let's not forget the value of connecting with each other in person. I'm excited about the potential of AI and machine learning, but we're people, for goodness' sake. Yes, I work remotely and thrive in a flexible environment, but part of me still craves personal connection, something that can't be fully replaced by even the best distributed-workforce technologies. #truLondon track participants were keen to point out that the human connection is critical to the success of teams, and that in engagement and recruitment, technology can't fully replace our innate ability to connect as people.
The Race to Digitize
Leaders in HR and workforce solutions are racing to understand the impact of machine learning, robotics, and automation on their businesses today and in the future. Many #truLondon attendees were encouraged by the promise of AI and how it can optimize their roles and the talent experience. But we should not forget our social responsibility to ensure these systems do not inherit biases that further perpetuate the stereotypes and socioeconomic gaps we see in hiring today.
Mona (Wehbe) Ketterl
I am a technologist and marketing pro with 15+ years of experience in corporate, consulting, and agency roles. My 2018 research and consulting focus is on understanding the real business, social, cultural, and ethical impacts of digital technologies such as AI, voice user interfaces, predictive analytics, IoT, wearable devices, and AR/VR/MR. I'm inspired by the musings of postmodern theorists including Donna Haraway ("A Cyborg Manifesto"), and the work of William Blake.