What's the Difference Between "Good AI" and "Bad AI" in HR?

By Emily Lambert

Last year, information leaked that Amazon tried to build an algorithmic system to analyze resumes and suggest the best hires. It failed. Hard. 

After Amazon trained the algorithm on 10 years of its own hiring data, it consistently penalized female applicants. The word “women,” as in “women’s sports,” would cause the algorithm to rank applicants lower. 

We talk about this at length in our latest e-book – AI is becoming increasingly widespread and democratized in its uses. So, it’s no surprise that HR and talent professionals are turning to AI to future-proof their talent processes and save resources. A stretched HR professional, swamped with resumes and interview scheduling while still trying to meet the needs of current employees and engage in strategic workforce planning, can turn to AI to automate routine tasks. With software solutions processing information efficiently without tiring – not something we can often say about humans – the advantages are obvious.  

Looking at the Amazon example, it’s clear that no two AI systems are created equal. AI is not inherently biased or unpredictive – its outcomes depend on the data and objectives it’s trained on. 
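To make that concrete, here is a minimal, hypothetical sketch (toy data and names invented for illustration, not Amazon’s actual system) of how a naive scoring model trained on skewed historical hiring decisions simply reproduces the skew:

```python
from collections import defaultdict

# Toy historical data: (resume keywords, was_hired) - deliberately skewed,
# mirroring the pattern described above where "women's" correlated with rejection.
history = [
    ({"python", "chess club"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "rowing"}, True),
    ({"java", "women's rowing"}, False),
]

# "Train" by recording each keyword's observed hire rate.
hired, seen = defaultdict(int), defaultdict(int)
for keywords, was_hired in history:
    for kw in keywords:
        seen[kw] += 1
        hired[kw] += was_hired

def score(keywords):
    """Average observed hire rate across a resume's known keywords."""
    known = [kw for kw in keywords if kw in seen]
    return sum(hired[kw] / seen[kw] for kw in known) / len(known)

# Two otherwise identical resumes diverge on a single gendered keyword.
print(score({"python", "chess club"}))          # 0.75
print(score({"python", "women's chess club"}))  # 0.25
```

The model never “decides” to discriminate; it faithfully learns whatever pattern the historical decisions contain, which is exactly why training data matters more than the algorithm itself.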

In this blog post, we'll touch on 3 ways to differentiate "good AI" and "bad AI" – automation vs. augmentation, data quality, and human intervention. 


Why "Augmentation?"

To realize AI’s full potential in talent acquisition and talent management, it’s essential to distinguish between automation – artificial intelligence making current practices (which are often biased and unpredictive) more efficient – and augmentation, i.e. “helping humans do countless complex tasks that are either beyond human cognition and/or inefficient for human beings to do.” In other words, AI that’s smarter. AI that prioritizes augmentation over automation is often referred to as Expert Automation & Augmentation Software (EAAS).

Pay genuine attention to the lessons you’re teaching your AI’s decision-making process. If AI is trained to make choices the same way humans always have – i.e., automation – it simply perpetuates a flawed system. Amazon’s failed hiring algorithm is one example of this automation going horribly wrong. 

It’s up to talent and HR professionals to augment AI software with their expertise, drawing on lessons already learned to check validity and reduce risk by examining in detail how algorithms are structured and how input data is classified. 

Augmenting with I/O Psychology 

To truly gain AI’s advantages, your organization must incorporate the vast knowledge of the global community of Industrial/Organizational Psychologists. Scientists at the likes of Harvard, Northwestern, MIT, and Columbia have dedicated decades of research to identifying more predictive and objective talent acquisition and talent management methods. Companies like Apple, AT&T, PepsiCo, and General Motors employ I/O Psychologists to improve facets of talent management.  

I/O Psychology can take the practice of, for instance, automated resume scanning – which can’t distinguish truth from fiction, perpetuates racial bias, and doesn’t predict on-the-job performance – up a notch. Marrying the advancements of I/O Psychology and augmented AI technology helps prepare for the future of work by focusing on core traits that are transferable to the new jobs on the horizon. This strategy holds value beyond the hiring stage, translating to long-term success in strategic workforce planning.  

While the move to chatbots and video interviewing tools with facial scanning technology represents AI adoption in HR in the name of efficiency, augmenting AI tools with psychometrics offers both efficiency and predictive power. Instead of automating human error into talent processes, an enlightened AI approach works better, not just faster.  

Human-in-the-Loop Remains Essential 

Note that augmented AI doesn’t automate people out of jobs. Ultimately, human input is necessary for augmented AI to succeed. AI that incorporates human oversight (called human-in-the-loop machine learning) is the best way to avoid “black box” AI that makes decisions without traceable explanation or reasoning.  
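One common human-in-the-loop pattern is confidence-based routing: the model acts autonomously only when it is confident, and sends borderline cases to a human reviewer. A minimal sketch (function and threshold names are hypothetical, not from any particular product):

```python
def screen(candidate, model_score, margin=0.25):
    """Route a screening decision.

    Auto-advance or auto-reject only when the model's score is far
    from the 0.5 decision boundary; everything near the boundary is
    escalated to a human reviewer - the "human in the loop".
    Returns (decision, decided_by).
    """
    if model_score >= 0.5 + margin:
        return ("advance", "model")
    if model_score <= 0.5 - margin:
        return ("reject", "model")
    return ("review", "human")

print(screen("A. Lovelace", 0.91))  # ('advance', 'model')
print(screen("B. Babbage", 0.55))   # ('review', 'human')
```

Because every borderline decision carries a traceable reason for escalation, this design also counters the “black box” problem: a human can always ask why a case landed on their desk.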

Bring a diverse group of business stakeholders together to define the AI’s objectives and desired outcomes. This helps the algorithm ask the right questions and make sound judgment calls, which pays off in more nuanced insights. 

AI must complement rather than replace human expertise. HR and talent professionals know their stuff; machine learning can’t do all the thinking for them. As a CA Technologies blogger noted, “While machine-driven decisions may be right 80% of the time, the (sometimes disastrous) consequences of being wrong 20% of the time wipe out the productivity gains.”  

Simply incorporating AI automation risks turning human resources into a process-driven machine that manages people as batches to be shifted from one process or department to another. With the right augmented AI tools in place, hiring and talent professionals can get to know candidates and employees better and offer more personalized attention to each unique human in the company.  

For a more detailed look into humanizing your talent processes with AI, read our latest e-book, Talent and the Future of Work: The Essential Guide.