Ethical Considerations in Using AI for HR

With the exponential growth in people analytics, the ethical implications of gathering and using workforce intelligence in human resources (HR) management have become a critical area of focus.

Organisations are increasingly leveraging AI to streamline HR processes such as recruitment, performance evaluation, and employee engagement. And with the advent of generative AI large language models such as ChatGPT, ethical concerns about their use, particularly regarding privacy protection and data security, are paramount.

As such, countries worldwide are imposing stringent standards on data collection and management to mitigate potential risks and protect individuals' fundamental rights. Such regulations include the European GDPR, Canada's PIPEDA, and California's CCPA. In the UK, we also have the Data Protection Act and the Equality Act 2010. However, the EU AI Act, the world's first comprehensive legal framework on AI, has set a new standard that could revolutionise the way AI is governed and used in HR.

Key Provisions of the EU AI Act

In short, the EU AI Act aims to protect people in the EU who will be affected by AI systems. However, it is not solely confined to the EU. The regulation also applies to companies outside the EU that intend to recruit and/or employ people in the EU.

This new regulation should not be seen as a mere compliance requirement but rather as an opportunity for organisations to establish responsible AI practices and build trust with their employees.

Four Risk Levels of AI 

The EU AI Act categorises AI systems into four risk levels—unacceptable, high, limited, and minimal risk—each with specific requirements and prohibitions:

Unacceptable Risk: AI systems that pose an unacceptable risk are those that endanger people's safety, livelihoods, and rights. Examples include AI systems that deploy subliminal techniques to manipulate behaviour, systems that exploit vulnerabilities of specific groups (such as children or disabled persons), and social scoring by governments.

Article 5: Prohibited Artificial Intelligence Practices (Source: EU Artificial Intelligence Act)

High Risk: High-risk AI systems are those that can significantly impact safety or fundamental rights. These systems include AI used in critical infrastructure, law enforcement, education, employment, and essential services. The Act mandates strict compliance measures for these systems, including rigorous risk assessments, high-quality datasets, detailed documentation, human oversight, and robust cybersecurity measures.

Limited Risk: Limited-risk AI systems carry specific transparency obligations. These include applications such as chatbots and AI systems that generate or manipulate content (deepfakes). Users must be informed that they are interacting with AI, and AI-generated content must be clearly labelled to maintain transparency and trust.

Minimal Risk: Minimal risk AI systems pose little to no risk and can be used freely. Examples include AI applications in video games and email spam filters. These systems are subject to minimal regulations, primarily focused on maintaining transparency where necessary.

With regard to HR and people analytics applications, many AI systems used in the field fall under the high-risk category. These include AI tools for recruitment, performance evaluation, and employee monitoring.

Recruitment AI tools, for instance, automate the screening of CVs and match candidates to job requirements. However, if not carefully managed, these tools can perpetuate existing biases in hiring practices, disadvantaging certain demographic groups. AI in performance evaluations can lead to biased assessments if the underlying data reflects historical inequities.

Similarly, employee monitoring tools, which track productivity and behaviour, can infringe on privacy and create a surveillance environment that negatively affects employee morale and trust. Given these risks, it is essential to ensure that AI systems used in HR are fair, transparent, and respectful of employees' rights.

What Can HR Do to Comply? 

Figure 1. Source: Insight222 People Analytics Trends Survey 2023

Insight222 research has found that leading companies in people analytics place a strong emphasis on ethics. However, with the rules for using high-risk AI expected to come into force in the summer of 2027, preparing more stringent processes must be an absolute priority. To comply with these regulations, organisations must ensure they follow several key practices:

Continuous Risk Management 

Implementing continuous risk management involves regularly monitoring AI systems to identify and mitigate risks throughout their lifecycle. This includes conducting regular audits to ensure that AI systems do not introduce biases or unfair practices, and implementing safeguards to prevent such occurrences.

Therefore, organisations should establish a risk management framework with clear policies and procedures for addressing AI-related risks. This framework should be integrated into the organisation's overall governance structure, ensuring that AI ethics are prioritised at all levels. 
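To make this tangible, here is a minimal sketch of what one recurring audit check might look like, using the adverse-impact ("four-fifths") ratio commonly applied in employment analytics. The column names, sample data, and the 0.8 threshold are illustrative assumptions, not something prescribed by the EU AI Act.

```python
# A minimal bias-audit sketch: the adverse-impact ratio compares each
# demographic group's selection rate against the most-selected group's.
# All column names, data, and the 0.8 cut-off are illustrative assumptions.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "demographic_group",
                          outcome_col: str = "shortlisted") -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical screening outcomes from an AI recruitment tool.
screening = pd.DataFrame({
    "demographic_group": ["A", "A", "A", "B", "B", "B", "B"],
    "shortlisted":       [1,   1,   0,   1,   0,   0,   0],
})

ratios = adverse_impact_ratios(screening)
flagged = ratios[ratios < 0.8]  # groups below the four-fifths guideline
if not flagged.empty:
    print("Potential adverse impact detected:")
    print(flagged)
```

In practice, a check like this would run on every new batch of decisions, with any flagged group triggering a deeper manual review of the model and its training data.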

Maintain High-Quality Datasets

As people analytics and data-driven HR leaders, we already strive to maintain high-quality datasets. We know this is essential for setting the foundations for effective data management and insight generation. However, in the context of AI applications, this means ensuring that the data used to train AI models is accurate, comprehensive, and free from biases.

Demographic data should be balanced to prevent the over-representation of certain groups. Continuous data quality checks should be performed to ensure that the data remains relevant and up-to-date. Data governance practices that include regular data audits, validation checks, and procedures for correcting inaccurate or outdated information should also be implemented.
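As an illustration, a simple automated report along the following lines can surface completeness, balance, and freshness issues before a dataset is used to train or refresh an AI model. The column names and structure are hypothetical.

```python
# An illustrative data-quality report for an HR training dataset.
# The DataFrame structure and column names are assumptions for the example.
import pandas as pd

def data_quality_report(df: pd.DataFrame, group_col: str, date_col: str) -> dict:
    report = {}
    # Completeness: share of missing values per affected column.
    missing = df.isna().mean()
    report["missing_share"] = missing[missing > 0].to_dict()
    # Balance: proportion of each demographic group in the data.
    report["group_shares"] = df[group_col].value_counts(normalize=True).to_dict()
    # Freshness: days since the most recent record was updated.
    report["days_since_last_update"] = (
        pd.Timestamp.today() - pd.to_datetime(df[date_col]).max()
    ).days
    return report

records = pd.DataFrame({
    "demographic_group": ["A", "A", "A", "A", "B"],
    "performance_score": [3.2, 4.1, None, 3.8, 4.0],
    "last_updated": ["2024-01-10", "2024-03-02", "2024-02-15",
                     "2023-11-30", "2024-03-02"],
})
print(data_quality_report(records, "demographic_group", "last_updated"))
```

A report like this makes over-representation visible at a glance (group A accounts for 80% of the sample data above) and can feed directly into the regular data audits and validation checks described here.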

Ensure Transparency

Transparency is critical to building trust in AI systems. This involves clear communication with employees about the presence and role of AI in their workplace, providing explanations for AI-driven decisions, and allowing employees to understand and, if necessary, challenge these decisions.

For example, candidates who are rejected by an AI-driven recruitment tool should be given an explanation of the decision and the opportunity to contest it. The same goes for performance evaluations: employees should be able to understand how AI has assessed their performance and dispute any inaccuracies or biases.

Transparency also involves making the workings of AI systems understandable to non-experts. Organisations should provide documentation and resources that explain how AI systems function, what data they use, and how decisions are made. This can help demystify AI and reduce fears about its use in HR processes.
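As a sketch of what such an explanation could look like, the snippet below turns a simple, entirely hypothetical linear scoring model into a plain-language "reason code" for a candidate. A production system would be far more sophisticated, but the principle of surfacing the main drivers behind a decision is the same.

```python
# A hypothetical linear scoring model used only to illustrate reason codes.
# The features, weights, and threshold are assumptions, not a real product.
FEATURE_WEIGHTS = {
    "years_experience": 0.6,
    "skills_match": 1.2,
    "assessment_score": 0.9,
}
THRESHOLD = 5.0  # illustrative shortlisting cut-off

def explain_decision(candidate: dict) -> str:
    contributions = {f: FEATURE_WEIGHTS[f] * candidate[f] for f in FEATURE_WEIGHTS}
    score = sum(contributions.values())
    decision = "shortlisted" if score >= THRESHOLD else "not shortlisted"
    # Rank features by how much each one drove the score, largest first.
    drivers = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    reasons = ", ".join(f"{name} (contribution {value:.1f})" for name, value in drivers)
    return f"Outcome: {decision} (score {score:.1f}). Main factors: {reasons}."

print(explain_decision({"years_experience": 3, "skills_match": 2, "assessment_score": 4}))
```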

Enable Human Oversight to Mitigate Potential Risks

AI systems should support and enhance human decision-making rather than replace it. That is why HR must retain the ability to intervene in AI processes, particularly in high-stakes situations such as hiring and performance evaluations. This requires upskilling HR professionals in data literacy and in understanding algorithmic patterns, so that they can verify AI recommendations and make educated decisions based on them.
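One simple way to operationalise this is a routing rule that never auto-applies recommendations in high-stakes categories and escalates low-confidence recommendations to a human reviewer. The category names and thresholds below are illustrative assumptions.

```python
# A minimal human-in-the-loop gate: high-stakes or low-confidence AI
# recommendations are routed to an HR reviewer instead of being applied
# automatically. Categories and thresholds are illustrative assumptions.
from dataclasses import dataclass

HIGH_STAKES = {"hiring", "performance_evaluation", "termination"}
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Recommendation:
    category: str      # e.g. "hiring"
    action: str        # e.g. "reject candidate 123"
    confidence: float  # model's self-reported confidence, 0 to 1

def route(rec: Recommendation) -> str:
    if rec.category in HIGH_STAKES or rec.confidence < CONFIDENCE_THRESHOLD:
        return f"HUMAN REVIEW required: {rec.action}"
    return f"Auto-applied: {rec.action}"

print(route(Recommendation("hiring", "reject candidate 123", 0.97)))
print(route(Recommendation("scheduling", "suggest shift swap", 0.72)))
print(route(Recommendation("scheduling", "approve holiday request", 0.95)))
```

Note that hiring decisions are always escalated to a human, regardless of how confident the model claims to be.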

Creating a governance council that oversees AI in HR can also help institutionalise human oversight. This council should include representatives from various departments, including HR, legal, and IT, to provide a multidisciplinary perspective on AI governance. The council's responsibilities would include reviewing AI systems, ensuring compliance with ethical guidelines, and addressing any concerns raised by employees.

Preparing for Future Regulations

Ethical AI use in HR is not just about compliance; it is about building trust, fostering a positive workplace culture, and ensuring that technology serves the best interests of all employees. As AI continues to evolve, so too must our commitment to ethical principles and responsible AI governance. 

Creating an ethics charter can provide a structured approach to ethical AI use in HR. This charter should outline the organisation's commitment to ethical principles such as transparency, fairness, accountability, and privacy. It should also define the roles and responsibilities of the AI ethics governance council and provide a framework for continuous risk management.

By taking a proactive approach to ethical AI use in HR, organisations can ensure that they are prepared for future regulations and create a workplace where employees' rights are respected and technology is used responsibly. HR has an essential role to play in leading the way towards a more ethical and fair use of AI in the workplace, and we must continue to prioritise this as AI becomes increasingly integrated into our daily lives.

So, it is imperative for HR professionals to stay informed about evolving regulations and best practices for ethical AI governance, collaborate with cross-functional teams, and establish robust processes to ensure responsible AI use in their organisations. With these efforts, HR can lead by example in promoting ethical and fair AI practices within the workplace.


Discover How the People Analytics Ecosystem Can Support Your Business Challenges

Pre-Register for our Operating Model v2.0 to Unlock Business Success with the People Analytics Ecosystem!

Don't get left behind in the evolving world of people analytics! Our latest Insight222 research, "Building the People Analytics Ecosystem: Operating Model v2.0," is packed with groundbreaking insights that will revolutionise your HR strategy.

Be among the first to access exclusive findings that reveal how top companies are leveraging people analytics to drive business success. This is your chance to gain a competitive edge and propel your organisation forward with cutting-edge strategies and actionable insights.

Don’t miss out on this opportunity to transform your HR function. Pre-register now and be at the forefront of people analytics innovation!