The enormous leaps in technology and HR have created many opportunities for companies to rethink their human resources processes. Services range from software that assesses a candidate’s employability through facial recognition and body language to on-boarding services for new graduates. The potential gains in efficiency and speed are unprecedented, and they arrive at a time when companies have more choice than ever, drawing talent from a global pool. Even so, the need to ensure successful recruitment and retention has never been greater. As organizations grapple with moving towards the next normal after successive lockdowns, they must also maintain a proactive approach to diversity challenges in the workplace. While HR AI tech creates numerous opportunities, many solutions still require further iteration and development.
In HR AI tech, limitations around diversity are a by-product of how the solutions are designed. We are rapidly moving into a space where solutions offer emotion recognition: AI analyzes facial expressions or body posture to inform recruitment decisions. The emotion-recognition market is projected to be worth $25 billion by 2023. Despite extraordinary growth in this area, significant kinks remain to be worked out, most notably the ethical questions surrounding how the algorithms are created. Companies are grappling with HR AI and ethics, and recent examples demonstrate the enormity of the ramifications when things don’t go according to plan. In other words, when things go wrong, they go badly wrong. Consider Uber, where fourteen couriers were fired after facial-identification software failed to recognize them. The technology, based on Microsoft’s face-matching software, has a track record of failing to identify darker-skinned faces, with a 20.8 percent failure rate for darker-skinned female faces; the same technology has a zero percent failure rate for white men. Problems arise even when organizations work with AI ethicists, as we saw recently when Google came under the spotlight for firing two AI ethics researchers: Margaret Mitchell in February 2021 and Timnit Gebru in December 2020.
Building an effective HR AI strategy relies on the leadership team working with HR to identify the right solutions. Still, the effectiveness of those processes depends on scrutiny of the composition of the algorithms making the decisions. Currently, there is little or no legislation governing the development of AI applications; over seven hundred AI applications are available in an unregulated market. As HR AI tech evolves, albeit at unprecedented speed, it becomes even more essential to establish a benchmark for what constitutes good-quality AI. In the United Kingdom, the government is drawing together a wide range of stakeholders to develop a regulatory framework: the AI Council provides cross-governmental guidance on how partners engage with AI through a sixteen-point roadmap. The strategy provides the framework for an AI ecosystem, drawing on the government’s views and those of over a hundred experts.
Dr. Robert Elliot Smith, Chief of Science and Technology for Centigy, a company that provides audit programs for sustainable AI, explains the importance of these questions. He states, “AI is completely changing the world of HR. How people find and benefit from their workplace is becoming more influenced by machine decision-making. Unfortunately, much of the tech developed is unaware of emerging standards and practices for protecting people from potentially biased AI decisions.” Centigy draws on the ethical frameworks for sustainable AI proposed by the IEEE in the US, The Alan Turing Institute in the UK, and the European Commission.
By combining global insights from AI ethics authorities, Centigy is establishing itself as the first AI-ethics certification body focused on HR software. The certification allows organizations to take more equitable approaches to human resources and to better support colleagues in the workplace. As AI and HR tech develop rapidly, leaders have the opportunity to engage with these conversations and help shape thinking around some of the challenges identified in this article.
Smith suggests the critical questions leaders should ask when exploring HR AI tech options for their organization:
- What are the ethical implications of the software, and how does this impact our organization?
- What lens is used as a decision-making framework?
- What artifacts have been developed to illustrate this thinking?
Kevin Butler, CEO of Centigy, argues that leaders need answers to these questions. He states, “If vendors can’t answer these questions, then it’s a red flag, and they probably can’t handle the complex questions. Leaders need to get more comfortable asking these questions and to know what to expect in terms of strong answers.”
Butler explains the particular challenge for leaders: “We see similarities to the D&I space, where leaders get the concept, and at a very high level they recognize its importance. The moment you go into the granularity, there is a tendency to leave this to the subject-matter expert, which is great. However, you still need strategic leadership to shape this thinking for the organization.” Creating effective HR AI tech requires conscious thinking and a willingness to challenge the assumptions embedded in the algorithms.
As leaders become more comfortable engaging with AI tech and HR, they become more knowledgeable about the processes and algorithms. They can identify where bias exists in a program and, more importantly, what is being done to reduce its likely impact. Butler emphasizes the need to recognize where responsibility lies: “There’s a certain amount of responsibility on the vendor to ensure that the users are proficient. But at some point in time, it’s really up to the actual users and management within that organization to ensure that everybody knows what they’re doing.” Doing HR tech well doesn’t rest with HR alone; leaders are essential to this discussion.