Episode 212: Responsible AI in HR: The Ethical Roadmap for Success (Interview with Keith Sonderling)

 
 

In a world where artificial intelligence and workplace technologies are revolutionising how we work, the stakes have never been higher. How can organisations harness these powerful tools while ensuring fairness, ethics, and compliance in a rapidly changing landscape? 

In this episode of the Digital HR Leaders podcast, David Green welcomes Keith Sonderling, former Commissioner of the United States Equal Employment Opportunity Commission (EEOC), to tackle these critical questions.  

With a front-row seat to the challenges and opportunities at the intersection of AI and employment law, together, David and Keith explore: 

  • The promises of AI in HR—and the hidden risks leaders need to watch for. 

  • How the global regulatory landscape is shaping the use of AI in the workplace. 

  • Who’s accountable when AI gets it wrong—and how liability is determined. 

  • What HR tech vendors and HR leaders must do now to stay ahead of evolving regulations. 

  • Real-world advice for embracing innovation without compromising on ethics or compliance. 

Whether you’re an HR leader navigating the rise of AI, a tech innovator shaping the future of work, or someone passionate about building a fairer workplace, this episode, sponsored by TechWolf, is a must-listen. 

TechWolf is an AI-powered solution focused on one mission: delivering reliable skills data for every role and every employee in your organisation. 

With TechWolf, companies like HSBC, GSK, IQVIA, Workday, and United Airlines have accelerated time-to-hire by 32%, boosted internal mobility by 42%, and saved around $1,000 per employee annually on talent management. 

Visit techwolf.com for more information.  

Learn More: 

[0:00:00] David Green: One of the most pressing topics in HR today is the intersection of artificial intelligence and employment law.  Questions around the ethical use of AI in hiring, the potential for algorithmic bias and the evolving regulatory landscape are at the forefront of many discussions for HR, people analytics and business leaders.  To learn more about critical issues in this area, I invited Keith Sonderling, who at the time of this recording was serving as the Commissioner at the United States Equal Employment Opportunity Commission, the EEOC, to the Digital HR Leaders podcast.  Keith's work at the EEOC has been instrumental in advancing fair practices across workplaces in the US, especially as new technologies reshape HR.  And with a core mission of protecting employees and applicants from discrimination, Keith has tackled the challenges posed by AI and workplace tech, working to ensure these tools align with employment laws and ethical standards.   

Today, we'll be exploring the benefits and risks of AI in HR, global approaches to regulation, and what HR leaders and tech vendors can do to stay ahead of regulatory shifts.  So without further ado, let's get started by hearing from Keith. 

Welcome to the show.  We finally managed to get this podcast recorded.  Please could you start by sharing a little bit about yourself and your core focus? 

[0:01:35] Keith Sonderling: Yeah, and first of all, thank you for having me on this podcast.  As we'll discuss, you've been a tremendous resource for me throughout my time here at the EEOC, especially in introducing me to the world of people analytics.  Briefly, when I got here to the EEOC, and we are the regulator of human resources, I really had to dive in and learn all the different angles and aspects of Human Resources.  A lot of what the regulations are based on, or what the claims are about, is the front end and the back end, hiring and firing.  But there's just so much more to Human Resources these days.  Everyone talks about how HR leaders are having a seat at the table from a business perspective, and I didn't really understand what that meant.  That's where I learned about people analytics, and that's where I learned about your world and how we were introduced, listening to your podcast and your great book, which I still have a copy of in my office; if you could see the video, I'm holding it up.  So you've been a great resource to me in really understanding the world of people analytics, and we'll talk about the role that plays in the future.   

So I'm Keith Sonderling, the Commissioner of the United States Equal Employment Opportunity Commission.  The EEOC is the agency based here in Washington, DC that, as I just said, is the regulator of Human Resources.  We were founded in the 1960s after Martin Luther King marched in Washington, DC for civil rights.  And the passage of Title VII of the Civil Rights Act, which celebrated its 60th birthday this year, led to the creation of this agency.  And the reason we really have a global impact for HR leaders across the world is that the United States, in the 1960s, from that civil rights movement, really set the floor for corporations worldwide on what civil rights should be protected in the workforce.  And since then, you've seen a lot of countries around the world use us as the model on how to legislate employment discrimination protections.   

So, our mission is to prevent and remedy employment discrimination and promote equal employment opportunity for all in the workplace.  So, breaking that down, the remedy part in the United States is really what we're known for.  We are a civil law enforcement agency.  So, most HR leaders' experience with the EEOC frankly hasn't been that great, because if the EEOC is there, something has happened; employees have filed a federal action against that corporation saying that they were discriminated against or there was some unequal treatment within the workforce.  But we also have a really important mission to prevent discrimination before it ever occurs and to promote equal employment opportunity.  And that's where I look at our mission and say, we're more like HR departments than we're not, from the compliance side, from ensuring that all programmes are fair and equally given to all employees, no matter what.  It's very similar to what HR departments do.   

So, just a quick breakdown of what this is.  When we say discrimination in the United States, it's really broad.  It's A to Z of the employment relationship, from employment advertising to job descriptions, to hiring, to promotion, to wages, to trainings and benefits, to even where you sit in the workplace.  All of those have to be done without discrimination based on, for instance, age, race, sex, national origin, colour, disability, or sexual orientation.  A lot of these big-ticket items that HR departments have dealt with, we are the regulator for all of that.  So, it's a long-winded way of saying, everything in HR is what we regulate! 

[0:05:19] David Green: No, that's really good, and thanks, Keith, for giving the wider context.  I think it's helpful for our listeners.  I'd love to hear from you what you see as some of the current trends in human capital management, and maybe what you see as being on the horizon that HR leaders should be aware of, so they can address some of these issues more proactively. 

[0:05:38] Keith Sonderling: Yeah, so in the United States, a lot of people don't realise you cannot just go to court and sue your employer directly if you feel you've been discriminated against.  Whether you work for the federal government, state or local government, or the private sector, we see every case of discrimination first.  They all have to come to the EEOC.  So, it really puts us in a good position to see what the trends are.  And unfortunately, over the last two years, employment discrimination is going in the wrong direction.  We're getting more cases than we had before, and it's increasing around 10% per year.  As for what those cases actually look like, the number one claim of discrimination in the United States, year after year, is retaliation, meaning that an employee complains about an unfair practice, or helps another employee complain about their workplace conditions, and then something happens to them; they're fired, they're demoted.   

But putting that aside, the number one actual underlying claim in the United States for discrimination is disability discrimination, and I'll talk about that in a moment when it comes to trends.  After that, we see race discrimination, then sex discrimination, which is very broad.  It includes everything from pregnancy, gender discrimination and sexual harassment to sexual orientation discrimination; then age discrimination, national origin discrimination, discrimination based on colour, then religion, then pay, and then discrimination based upon genetic information, just so you can see the order of these claims.  But what's fascinating to me, and what I really try to inform HR leaders about, is the changing dynamics of these claims.  And a lot of that is driven by what's going on in the news. 

So, if you look historically at where we were, when the Me Too movement happened, that was global news.  It was top of mind not only for HR leaders, but for CEOs and boards across the world.  And we're the agency responsible for ensuring that sexual harassment doesn't occur.  So, after that, we saw a large spike in sexual harassment cases, even though it's been illegal to sexually harass since the 1960s; national news drove some of those claims.  So, that required us to turn around and say, we need to put out more guidance, we need to make sure that HR departments have the right statements and the ability to act.  Then we saw the same thing with pay.  The US women's soccer team filing a case was also global news, because of how they were performing while being paid less than the men's team, and that's certainly something you appreciate in Europe with soccer.  But then we had to talk about pay, and then COVID happened, and it was all about everything from accommodations to vaccines.  So, very much like HR departments, we have to move and shift to see what those priorities are.   

But to answer your question, looking forward, what is on the horizon that you as HR leaders should be concerned about?  And to me, the number one issue, as far as trends go, is around mental health and disability discrimination.  So, I said disability discrimination, and most of you as HR leaders throughout your career have dealt with disability discrimination in the context of accommodations, saying, "We have a worker that has a physical impairment, that has a health condition, and we know we have to accommodate them by buying a chair to help if they have a back issue, or making sure that workplaces are accessible, or if they have cancer or a heart condition, how do we adjust their work schedule accordingly?"   

Now, the shift, post-pandemic, in the types of disability claims is really fascinating, where mental health is now becoming one of the biggest drivers of disability claims in the workforce.  And the big drivers of that are anxiety, PTSD, and depression.  And if you look historically at all disability claims, in 1993, those three claims, PTSD, anxiety and depression, made up 0.1% of all disability claims.  This year, they were 30% of all disability claims, and mental health claims overall are almost 40% of all disability claims.  So, that's another example where HR departments now need to shift to understand, what are the drivers of the issues that employees and applicants are having with mental health in the workplace, and how, as HR leaders, are you going to get ahead of that, if I'm telling you this is where all the law enforcement, litigation and claims your employees are bringing are headed?   

So, that's really a part of the job that I think is so important, whether you're listening in the United States or across the world, because these HR issues are going to be the same wherever you are.  With this global economy, HR departments, no matter where you are, have to prepare for these issues.   

[0:10:31] David Green: That's really interesting, Keith.  As you said, there's that correlation between topics coming into the news and greater awareness of issues such as mental health.  It's certainly a topic that people are becoming much more aware of and feel much safer to speak about than they perhaps did previously.  I wonder too whether there's quite a focus now around neurodiversity.  We've had Nancy Doyle and Maureen Dunne, experts on that, on the podcast over the last few years, and consistently the message they gave was that 15% to 20% of the population is neurodiverse.  So, I wonder if we'll start to see more claims in that area as well, as people become more aware of it and feel more able to speak up. 

[0:11:14] Keith Sonderling: Absolutely, and that's going to fall 100% on HR departments.  And we talk so much about upskilling and reskilling the workforce; HR leaders are going to have to play that role as well, and they're going to have to learn the differences now.  Okay, we have a worker who, let's say, is missing a limb.  It's relatively easy to understand what those accommodations are, because we can see it, or the doctors explain exactly what's going on.  But when it comes to employees' mental health, each claim is going to be different, and each analysis that has to be done, to see what that accommodation is and whether the employee can actually do the essential functions of their job with that accommodation, is going to be so individualised and so much work for HR departments.  Because, David, you and I could both be diagnosed with anxiety, but the ability for us to perform our jobs, and the accommodation we may need, could be completely different.  For me, I may have to work from home and can't be around other people.  But for you, with the same condition, you may be able to come into the office but have to work in a low-light office, or need noise-cancelling headphones because noise triggers your anxiety.  So you see, it's such a case-by-case basis.   

This is where I talk about how HR leaders really need to take the lead here and go back to the basics.  And even though these claims may be foreign to them, especially with Gen Z in the workforce speaking up more about mental health issues, whether it's cultural or driven by other things, HR leaders really just need to slow down and say, for each person, "Can we accommodate them, knowing this is not only a very sensitive area, but an area we've never really dealt with before en masse?" 

[0:12:58] David Green: The topic that I think we've discussed the most, since quite early in your tenure, and I think you spent some time in London prior to the pandemic as well, was your highest priority when you took on the role at the EEOC, which was very prescient of you given how things have panned out: ensuring that artificial intelligence and workplace technology are designed and deployed in a way that is consistent with employment laws.  So, out of all the issues confronting global workforces, other than the obvious, why did you decide to make this your priority? 

[0:15:00] Keith Sonderling: Well, as I just talked about, there are so many things that are always going to push HR leaders in different directions, driven by the news, like the Me Too movement or COVID, right?  We're in the same position at the EEOC.  So, I really wanted to be proactive, and I said, what is the next biggest issue that is going to impact HR leaders, and how can we get ahead of it now?  How do we prevent the next big disaster in HR?  And that's the only way to say it, right, because a lot of these things have led to significant discrimination over the years, and how do we prevent that now?  So, when I first started looking at this, I started talking to Chief Human Resource Officers and General Counsels through trade associations, saying, "What is the biggest issue coming down in HR?"  And a lot of them said, "Artificial intelligence in the workplace".  And like many, I had no idea what that meant.  I just immediately went to this vision of robot armies displacing human workers.  Because if you remember, a lot of the initial buzz around technology in HR was, "Okay, how do we replace some workers with actual robots?"  And that was limited to certain industries, such as manufacturing or logistics.  And at the EEOC, we regulate every industry, so I needed something a little more impactful than that. 

That's when I started digging into what it actually meant.  And that's when I found that there's AI being used to actually make employment decisions, to actually do the work that HR leaders have been doing their entire career, and it was vast.  There's literally AI software out there, and this is years ago, for A to Z of the employment relationship, from drafting a job description, to advertising the job description, to seeking candidates, to reviewing the resumés, to completely conducting the interview, to determining who will get a job offer, and where, and at what compensation.  Then, when you actually got into the workforce, there's AI that would tell you what your job is and how much you had to produce that day, there's AI that would do your performance review.  There's even AI out there that, if you don't meet your standards, will tell you you're fired, all without potential human interaction.  And what I found was, a lot of the basis for using and developing this software was obviously being more efficient, more economical, but also removing the biggest issue that has plagued HR since HR has been around, and that's the human.   

A lot of this software originally came on the market saying, "Well, obviously humans are the problem within your HR departments because they're the ones with bias, they're the ones who aren't as efficient.  And if we can have software make those decisions for you, not only will we be faster, but you won't have to worry about discrimination, because AI can't discriminate".  So, that's the sort of market I walked into.  And as I started diving into this, realising the vast impact that the software was having, I found there is a lot of truth to a lot of these statements and these products out there: if the software is carefully designed and properly used, and those are two qualifiers that we'll get into, it can actually help employers reduce discrimination.  It can help employers reduce bias in those decision-making tools, because it's looking at neutral characteristics, and it can actually help employers take that skills-based approach and have no other factor come into play; because as we know, when those factors come into play, that's the reason my agency exists. 

But then I saw, at the same time, if you just flip what I said, if the programmes are not properly designed, or if they're not carefully used internally by the HR departments, this could potentially scale discrimination to the likes we've never seen before, far greater than any one human can do.  So, that's what I walked into, and there weren't many guidelines at the time.  Obviously, the state capitals and capitals across the globe were not into regulating AI just yet, so that's where I started.  And the whole experience has led me to believe that these tools can really help HR and can actually help HR get to where they want to go.  But that comes with a lot of responsibility, a lot of things that HR departments already know they have to implement when using technology. 

[0:19:32] David Green: Yeah, and we're going to dig into quite a few of the things that you covered there at a high level, Keith.  And one of the things that really impressed me watching you over the years is that you proactively engage, not just with CHROs, but with HR technology companies, and with academics and thinkers in this field as well.  I lost count of the number of conferences I saw you speaking at, which I think is really important, because you were listening as well as regulating, effectively.  I'd love to get your take on this from an HR technology vendor perspective: how do those vendors ensure proper design to create products that remove human bias, based on all the conversations that you've had over the last seven years? 

[0:20:22] Keith Sonderling: Yeah, and that was very important for me to do.  That's how you and I met.  To regulate properly in this area and understand these products, you really have to get into the mindset, not only of the entrepreneurs who are developing them and want to put them on the market, but also of the buyers, those people in talent acquisition and in these HR positions who are going to be using this, to see the challenges they face in developing the products and the challenges the buyers face as well.  So, that's why it was really important to do that.   

So throughout this experience, I've really become a believer in the technology, believe it or not.  I think the question is no longer, if you are in HR, "Are you going to use AI within HR?"  The question that I flipped, and really the most important one, is, how are you going to use it?  Which vendor are you going to use?  How and why are you going to select them?  And how are you going to make sure, when you implement it within your organisation, or if you build it yourself within your own organisation, which I know you've discussed, building things internally instead of buying them, it's going to be the same equation: how are you going to use it?  What purpose are you going to use it for?  How are you going to comply with your own long-standing business principles within your organisation?  And more importantly, how are you going to comply with long-standing civil rights laws?   

At the end of the day, I like to remind everyone, and I think this really has gotten lost with the amount of technology out there, AI cannot create a new employment decision you're not already making in your organisation, right?  There's only a finite number of employment decisions: hiring, firing, promotion, demotion, transfer, wages, training, benefits; you know all of them.  So, what this software is promising to do, obviously, is come in and do it better, without bias.  So, how do we get there?  I think it's important to understand the law here, and I don't want to sound like a boring lawyer, which of course I am at my core.  But it's really important for HR leaders to understand the frameworks of how things can go wrong.  And there are essentially two theories of discrimination.  One is intentional, where we're saying, "I'm not going to hire you because you're a woman".  And two is the unintentional, where we're saying, "Well, we have this neutral policy in place, and it has an impact on a certain group".  Both are unlawful.  What's unique about employment law is that it doesn't matter whether or not HR leaders intend to discriminate; you can have the best motivation, but if there's discrimination, you're going to be liable for it.   

So, that raises the two issues when it comes to using the AI software and how you can proactively work with your vendors on this.  One is what most people talk about and most people fear: unintentional discrimination when it comes to using AI.  And that's behind some of the horror stories and examples we've heard.  And a lot of that is based upon data-set discrimination.  And what does that mean in our space?  What is the data in our space?  Well, it's pretty simple.  It's your applicant flow, it's your current workforce, your potential workforce.  And if that is skewed towards one protected category over another, the AI may unintentionally and unlawfully treat that as the most important characteristic and may make a discriminatory decision based upon it.  And there are a lot of very classic examples out there.  My favourite stat in talent acquisition, and you can tell me if the number has changed, is that before technology, the TA leader would give you around six-and-a-half seconds to read a resumé.  That's sort of the legendary stat there.  So, think about that.  If a person wanted to discriminate, and let's say not hire an older worker for this position, or not hire somebody who's from whatever country, whatever race, whatever religion, it takes time to discriminate, because you've got to go through and see, well, is this a foreign-sounding name, do they have their religion on there?  If so, I'm going to put it in the trash.   

But now with these tools, in a millisecond you could potentially preclude millions of applicants from consideration on the basis of characteristics that the algorithms can find and, in some situations, unlawfully surface for you, quickly.  So, you can see how quickly this can scale if you don't have the proper guidelines in place within your organisation.  And that's the intentional discrimination side.  So many people want to talk about how algorithms can't discriminate because they're just looking at the characteristics you feed them.  But look, there are still going to be users in there.  And in the HR context, a few clicks can cause a lot of harm.  So, those are the two different ways the government is going to look at these kinds of cases: "Well, did you discriminate through what you asked the algorithm to look for?  Or was your data set potentially discriminatory?"  Versus, "Okay, how was it used within your organisation?", just like any other HR decision. 

[0:25:19] David Green: Yeah, very good, and thanks, that's a great explanation, Keith.  And to be clear, if the AI does make a discriminatory employment decision, who is responsible?  Who is liable?  Is it the vendor, or is it the organisation that's using the vendor's tool? 

[0:25:38] Keith Sonderling: And this is one of the hottest topics out there, and one of the number one questions everyone has across the board: who is liable if the AI tool makes a decision that discriminates, under either of those theories?  And from our side, when this law was passed in the 1960s, Congress told the EEOC, when they drafted this law, that essentially three types of entities can make an employment decision: (1) companies, (2) staffing agencies, and (3) unions.  Those are the only kinds of organisations that can make employment decisions in the United States.  So, we've been operating since then on the basis that only they can make an employment decision.  And if you think about it, when you go to work, only that company can pay you, fire you, hire you, demote you. 

So, from our perspective, from a law enforcement perspective, it's pretty simple.  We are going to look at the results of the decision, and whether that decision had bias, whether it was made by an algorithm or made by a human.  For us, we're in an easy position because we're going to say, "You as an organisation made that decision.  We don't care if it came from an algorithm or a human.  There's discrimination, you're liable for it".  However, there's a lot of clarity needed around that as state, local and foreign governments start to regulate in this space.  There are big pushes here in the United States at a state level, and in the EU and other proposals, to make vendors liable for those employment decisions, equally with the employer.  So it's saying, "If your AI tool is being used, then you as a vendor are also going to have liability for discrimination", which would be a very significant change from the state of the law here, which we've been operating under.  And knowing, though, that the employer is going to be liable for using their tools, that's where the vendors really have to make tools that are going to comply with the laws, or no one will buy them.   

That's where the tricky part is between the HR buyer and the vendor, saying, "What access to the algorithm, what access to the information are you going to allow us?"  Because as you know, a lot of these interviewing tools or assessment tools or resumé-reviewing tools get routed to the vendor and then come back to the employer after their algorithms have run.  So, that's why we're seeing a lot of discussions before purchasing these products, from the employer side saying, "Vendor, how are you going to ensure that these algorithms you're selling us are doing exactly what they say they're going to do?  Not in the aggregate, not in a test you've done to show that your product doesn't discriminate", and most vendors do that, "but how is it going to work on this job description, in this part of the country, with these skills, with this hiring pool; show us here how it's going to not discriminate, or make better decisions than we're making", number one.   

Number two, if you're challenged, how do you have that information to show what you asked the algorithm to do?  Because you want to be able to show that you only asked for these skills, that these were the skills necessary based upon your business judgment and the industry, and that the algorithm only looked at those, when that's proprietary to the vendor. 

[0:28:54] David Green: And on the regulation side, Keith, you highlighted, whether it's New York City or the EU, there's a lot of legislation that has been enacted or is going to be enacted in the coming months and years.  Without going into chapter and verse on each one, can you give us a broad overview of the global landscape and maybe some of the varying approaches to regulating AI in HR?  

[0:30:12] Keith Sonderling: And this is really important for HR leaders to pay attention to, especially your listeners who are operating across the world.  And what we're seeing is obviously, like I said, at the EEOC, we regulate all employment decisions, whether you're using AI or not.  All of this is regulated by the EEOC and those bias laws we discussed.  However, a lot of states really want to dive into this, and a lot of foreign governments as well, and they want to dive into the employment space because it really impacts a lot of their constituents.  Everyone at some point will be in, or will enter, the workforce, and it's a lot easier to understand AI's impact on the workforce than, say, its impact on making a new pharmaceutical product.  So, it's much more impactful in terms of the number of people it's going to affect.   

So, what you're seeing is a lot of state and local governments starting to pick and choose what is important to them when they look at it.  So, the first AI laws we started to see were in Illinois, where they said basically, "If you're going to use facial recognition during a video interview, AI software that scans faces or looks for emotional affect, it's basically banned".  There are just so many requirements for disclosure and use that it makes it almost impossible to use.  And then Maryland came along and said, "We're going to do the same thing", but that was just on the facial recognition side.  Then New York City Local Law 144 was the big one.  And it was the first really comprehensive one.  And I say comprehensive, though not as comprehensive as what the EEOC requires or what HR leaders are already doing; obviously, it only applies in New York City, which is one of the biggest business centres in the world.  But it said that an AI system is essentially only one that, and I'm simplifying this, "makes the employment decision", right?  "And if it's going to do that, and there's essentially almost no human intervention, then you're going to have to do pre-deployment audits, you're going to have to do yearly audits, you're going to have to get consent, and there's going to be a lot of other consumer protection".   

So, obviously that's fairly easy to get around, and there was a whole Wall Street Journal article about how you just have to say, "Well, it's one of many factors we're using to make the employment decision", and then you don't have to meet all those additional requirements.  But I want to highlight what those additional requirements are.  So, it's a pre-deployment audit and a yearly audit.  Well, that's great, I encouraged that at the EEOC, because if you're auditing these systems for any HR practice, you can see if they're actually doing what they're supposed to be doing or if they're discriminating against certain groups.  But then New York said, "Well, we're only going to require audits for hiring and promotion, and only for the categories of race, sex, and ethnicity".  So, that may lull employers in New York City into thinking, "Well, we're compliant because we did this audit for race, sex, and ethnicity in hiring and promotion".  Well, the EEOC would require you to do it on age, disability, and all these other areas.   

So, you kind of see the problem: absent a federal positive requirement for specific AI use in HR, national employers are going to have to be doing certain things that may not matter in other jurisdictions.  So, that's a tricky part too.  But I want to conclude this with, we're starting to see commonality, whether it's in the EU, in proposals in Colorado, in California, and what those common threads about using AI in HR are that these legislators want to see.  And a lot of that is around employee notification and employee rights.  Here's how I like to remind HR: more than other areas, you've already been dealing with some of the highest-risk decisions within your organisations for some time before AI.  You're talking about people's livelihoods, people's ability to stay in and enter the workforce and provide for their families.  So, when you're seeing AI in HR being designated as a higher-risk category than others, well, that's what you're already familiar with.  And don't let that scare you, because you already have policies, practices, and procedures generally for all your employment practices.  That should be familiar to you.  And I talk all the time about how CHROs can really be the leader in AI governance broadly, and in this area, because you already have policies.  And now, you can amend and add to those policies that you already have your humans following, adding that AI component, and then you can help other parts of your organisation, who are also struggling with AI governance, model what you've done in HR because you've been doing it since the beginning. 

[0:34:44] David Green: Lots of regulation, Keith.  Just thinking about our listeners, HR professionals, HR leaders, maybe some HR technology vendors listening as well, what can they do to keep up with the ever-changing regulatory AI landscape, what sort of tips or guidance would you give around that?   

[0:35:05] Keith Sonderling: Well, I think it's going to start to level out.  I think from a federal level, as you know, in Washington, DC, it's not very often new laws get passed.  And if you look at a lot of the interest in DC around generative AI, a lot of it was around copyrighted material or artistic work.  So, I do think a lot of this is going to be done at the state level.  And it's just really important to pay attention to where you operate.  But look, HR leaders just went through this.  And let me tell you exactly how they went through it and how they were prepared.  So, think about pay transparency laws, right?  I know there's so much focus around that, especially after the focus on pay equity.  Colorado in the United States was one of the first states to say that in your employment advertisements in this state, you have to disclose the salary range.  And then other big states like California and Washington came out later and said, "Yeah, you have to do that here too".  So, for HR departments operating at a multistate or even global level, it's not easy to make changes in very specific areas, saying, "Oh, we're only going to allow a job description to have pay in these couple of states", but then we have to design a whole other system.   

So, the reality of what's likely going to happen here is just like what you went through in pay transparency and saying, well we have to make a business decision.  Because these ten states, that happen to be some of the largest in the United States, require us to disclose the pay, we're going to just have to do it everywhere in the United States.  And if we have to do it everywhere in the United States, absent it being illegal in certain countries, we're just going to have to do it worldwide.  And that's the effect that it had in pay transparency.  And I guarantee you, most of the listeners of this podcast have had to deal with this pay transparency and just said, "It's just too hard to do it separately.  We're going to make this our global standard".  And if you see, most job descriptions now have them.   

So, I think that's what's going to happen with AI too.  As some of these larger states, mainly California, start pushing through not only the liability side with the vendor, but also some of these consent and disclosure requirements, I think it's going to get a lot easier.  And I think it's going to nationalise or even globalise some of these potential changes, and it won't be as complicated.  But that's what I keep trying to remind HR leaders.  It's different, it's technology.  We may not understand it as much because we're not technologists, but the core of it is still related to your HR practices, and you just went through it with pay transparency. 

[0:37:35] David Green: And I suppose that leads on to the next couple of questions around governance.  So, firstly, how do we reach a global consensus on standards, ethics, laws, or governance for using AI in the workplace? 

[0:37:48] Keith Sonderling: You know, it's tough, but I think if you look at what's happening in the workplace and the amount of interest in AI, not only from the job displacement side, you're seeing a lot of commonality.  You're seeing global organisations that impact workers in every single country in the world come together, whether it's the United Nations, the OECD, the World Economic Forum, a lot of these intergovernmental and governmental organisations coming out and saying, "Look, these AI products, it doesn't matter where they're being designed, it doesn't matter what country or jurisdiction they're being built in, they're going to be deployed in every country in the world, they're going to impact every worker on the planet", which is far different from any other kind of technology in the HR space we've ever seen.  Because think about a global organisation.  They're going to buy, say, a hiring assessment tool.  Likely, that's going to be deployed to their entire workforce, and they're not going to have different tools in different countries.   

So, I say that and I think we're getting close.  There are some potential proposed standards.  Again, governments, whether it's the United States, or the UK with the AI summit they had there, or the EU, all these task forces, I think we understand, because this technology in HR and AI is industry-agnostic, that there has to be some sort of acceptable global standard.  And let me tell you where that's really leading: back to where we started this podcast.  Okay, so if we're going to require these standards in HR and AI across the world, or say, "These are your best practices in doing an AI employment audit to make sure it's not discriminating", well, what is the standard for that?  How do we actually do the underlying testing?  And when you look at that, well, how do you test for employment discrimination to see if there's disparate impact in a neutral tool?  It's the same way you'd do it for any other kind of employment assessment.  Like, this gets into the industrial and organisational assessment field, and those standards for auditing employment tests come from the EEOC's 1978 guidelines, right?  So, even when you're seeing all these countries come together and say, "Yes, we need these audits".  Well, how do you do an audit?  We don't know.  Well, yeah, you do.  It's the same type of audit you'd be doing if you had an employment test or assessment or qualifications based upon a pen and a piece of paper.   

So, as it gets more and more complicated, again, I just like to make it boring.  Well, okay, it's going back to the same practices, and that is not changing.  So, that's where I think a lot of this is going to go. 
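[Editor's note: The disparate-impact testing Keith describes is often operationalised via the "four-fifths" (80%) rule from the EEOC's 1978 Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than four-fifths of the rate for the highest-scoring group is generally regarded as evidence of adverse impact.  The sketch below is a minimal, illustrative calculation only; the group names and counts are hypothetical, and a real audit would involve statistical significance testing and legal review.]

```python
# Minimal sketch of the EEOC "four-fifths" (80%) rule for adverse impact.
# Group labels and counts are hypothetical illustration data, not real figures.

def selection_rates(applicants: dict, hires: dict) -> dict:
    """Selection rate per group: hires divided by applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def four_fifths_check(applicants: dict, hires: dict) -> dict:
    """True if a group's selection rate is at least 80% of the highest rate."""
    rates = selection_rates(applicants, hires)
    top = max(rates.values())
    return {g: (rate / top) >= 0.8 for g, rate in rates.items()}

# Hypothetical audit data for one hiring tool
applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 60, "group_b": 27}

flags = four_fifths_check(applicants, hires)
# group_a rate = 0.30; group_b rate = 0.18; ratio 0.18/0.30 = 0.6, below 0.8,
# so group_b would be flagged for potential adverse impact.
```

This is the same arithmetic whether the selection procedure is an AI screening tool or a pen-and-paper test, which is Keith's point: the audit methodology predates the technology.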

[0:40:29] David Green: Generative AI seems to be taking over the AI discussion, with a large focus on its use in the workplace as well.  What are your thoughts for HR professionals who are looking to implement gen AI products throughout their workforces? 

[0:40:43] Keith Sonderling: Well, this is obviously the hottest category in AI; everyone wants to talk about this; everyone understands this.  Like I said, David, you and I were talking about these more traditional uses of AI in HR long before generative AI, but this is the hot thing that everyone wants to talk about.  And for HR leaders, it's very confusing, and I just have to sort of break it down into the real, right-now impact for all of you.  So, number one, let's talk about generative quickly, because the same issues apply that we were just discussing: using generative AI to make job descriptions, using generative AI to do performance reviews.  Look, you just don't know what the data set is there.  If you're asking generative AI to make you a job description, say, "ChatGPT, make me the best entry-level Python engineer job description for this tech coder", what information is it relying on, what biases is it relying on, what requirements might it be pulling from the internet that have no relevance to your organisation or business, and that may just wind up excluding certain candidates?  So, you risk your data set not being defensible, or not understanding what you're imposing on your own employees from outside your organisation.  And you wouldn't do that before; you wouldn't rely on a company in a different industry to tell you how to operate in your industry, and that's the issue with generative AI. 

But I think it's the same kind of understanding of what we've been talking about.  But the new issue with generative AI that's largely falling on HR leaders is that your boards and your C-suites are seeing all the news about generative AI, they're seeing all those studies about how it's going to displace 300 million jobs, it's going to make workers more efficient, and it can completely eliminate some of the big cost centres within your organisation.  And we're not terminating because of bias or any other issue, we're terminating because of efficiency.  But to break that down, let's look at historical reductions in workforces, because that's all it's going to be.  It's basically a technologically-induced reduction in workforce.  And what do we know as HR leaders happens in RIFs?  Who are the people most impacted?  Generally, it's the older workers.  Why?  Because they're the highest paid.  And that's just a metric that is often used in these big RIFs.  Now, your workforce is much more diverse than ever before. 

So, what's another metric you look at in doing reductions in workforces?  Well, let's just get rid of our newer workers.  Now, I'm not saying younger workers; newer workers, first in, first out.  They haven't been trained yet, or they haven't assimilated to the company, so it's not going to be as big a loss.  And if you look at some of the studies, some of these big McKinsey or Deloitte studies, generative AI is going to impact females more than males.  It's going to impact Hispanic and African-American workers more than white Americans, just by the job quadrants.  There are some interesting charts, which I think you've linked to, that show how this job displacement is going to occur.  So, just taking some of these metrics you used before is now going to lead to some potential issues for older workers and some of your more diverse workers.   

But let's go to the third category now, which I think is the most relevant for all of you as HR leaders.  So, everyone's saying, "We don't want to fire our workers, we just want to make them more efficient.  We want to use all this generative AI".  I think there's a study that came out today saying it'll give you 12 more hours in your workweek, or we're going to get down to a 3-day or 4-day workweek, or your workers are going to be 80% more effective.  That's really where the pressure is on HR leaders now: how do we do that, and how do we implement that within our organisations?  And what are the potential issues I see there, from my perspective?  They're significant, because workers do not trust it.  They read the same studies you're reading, and they do not trust that companies are actually implementing this for their benefit.  They don't trust that it's being implemented so you don't have to do these administrative tasks and can focus on the work you like, right?  What they believe is that this is going to be a robot replacement and that, "I now need to train the computer that is going to displace me".  So, before we even get into legal issues, just know that's going to fall largely on HR too: how you ensure that the human element of that is going to be preserved. 

[0:44:53] David Green: Your seven years as Commissioner at the EEOC is coming to a close and actually, by the time this podcast goes out, it will be over.  I'd be interested, maybe some brief thoughts on your reflections on that time and what you see as your legacy. 

[0:45:09] Keith Sonderling: So, I was a labour and employment lawyer in Florida, defending HR departments in cases brought by the EEOC and the Department of Labor, before I decided to go into government.  And like I say, the past seven years were literally like getting an advanced PhD in employment law.  And when I look back at my legacy and everything I've done, I've really just tried to focus on the most important thing for people in these roles.  And it's difficult because we are a law enforcement agency, and our first mission is to ensure that victims of discrimination get their remuneration, whatever that may be.  But also, I just tried to flip the script and say, "Well, how do we just prevent that?"  Instead of just focusing on law enforcement, and so much of how HR compliance was done after the fact, after a big litigation, after a big enforcement action, how can we give everyone the tools they need?  And that means employers who know what their obligations are to their employees, and who have the guidance and the tools to comply with the law; that will prevent discrimination.   

But I also thought too, how do we educate employees about what their rights are?  And that shouldn't be controversial to businesses and HR leaders either, because a well-educated employee who knows their rights is more likely to come forward and complain and give you the chance to fix it, than you hearing from the outside, from a lawyer or somebody else, that the company has wronged them.  So, I think it's a two-fold approach to compliance, and I say that to this audience too: as HR leaders, I think you're in a similar role.  A lot of the time on the HR side, it's like, "Well, maybe we don't want our employees to know what's really going on".  But the more transparency you have, the more you build that trust, and that's what I've tried to do here.  The more guidance we can give on both sides, the more we can focus on compliance, the less law enforcement will need to come for them, and discrimination won't happen.   

So, I look back at that as my legacy and something that I was able to do, coming in from a perspective of having defended HR departments, and knowing that 99.9% of HR leaders do not want to discriminate, they do not want to do bad things to their workforce.  So, changing that perception has been important to me. 

[0:47:36] David Green: So, this could be the last question that you answer as Commissioner in a podcast.  So, this is the question we're asking everyone on this series, and I'd be interested on this because I think this is probably something you've come across in all those conversations that you've been having with CHROs and vendors over the years.  How can organisations leverage skills intelligence to make more informed decisions? 

[0:47:57] Keith Sonderling: I'm going to give you a really simple response.  And I'm saying, this skills intelligence, this skills requirement, this really understanding what skills are in the workplace, that is how you've been required to make employment decisions since the civil rights laws came into effect in the 1960s.  I really think we've lost that, okay, because the law requires you to make not necessarily a good or bad decision, but a lawful decision.  And what is a lawful decision?  Does the person have the qualifications for that job, and are no protected characteristics, such as race, age, sex, etc, playing into that decision?  So, I just like to go back to the basics and say, that's how employment law was designed: to enforce a skills-based approach to hiring and nothing else, right?  So, I think it's really important that we keep that in mind too with the focus now on skills.   

I actually think, with your work and others', it's going to help us get to the missions of these laws.  Because we talk about, like in pay, why are there these huge discrepancies we see in all those pay equity charts, or why have certain individuals never been able to get these jobs, right, based upon their national origin or race?  Well, a lot of that is, okay, well, what is the actual skill?  And what is the deeper-rooted issue here of why these candidates or these employees don't have those skills?  And for large organisations, with the amount of skilling technology out there, with employee learning, not to get into a different world, it's no longer an excuse to say, "Well, we need to require this skill.  And if this group doesn't have it, then they're just never going to get into the workforce, and that's lawful".  Well, to an extent it is.   

But now that you have this information, and technology is driving not only to help you find it but then to upskill people, I think potentially lowering some of those skill requirements, and then teaching people faster, more efficiently, and in line with your organisation's culture yourself, is the way to actually diversify these talent pools, and the way to close the pay gap, rather than just saying, "Oh no, women are making less than men, we don't know what to do about it".  Okay, well, how do we then identify the skill that is going to make that equal?  And I think that's really where a lot of the focus needs to be: on how we think about skills, leading into diversity. 

[0:50:28] David Green: That's a great way to end our conversation.  Thank you so much for sharing your time and expertise with listeners of the Digital HR Leaders podcast.  How can people stay in touch with you? 

[0:50:34] Keith Sonderling: So, I'm always available on LinkedIn, feel free to reach out to me.  EEOC.gov/AI is where our AI resources are.  But I really just want to point HR leaders around the world to use the EEOC's website as a resource.  Whatever issues you're thinking of, whatever compliance challenges you have, we have guidelines out there.  So, we talked about some of these mental health accommodations.  We have lists and website resources where you can type in the name of a mental health diagnosis and get a list of approved accommodations you can offer.  When we talk about some of the issues that happened during COVID with religious exemptions to vaccinations, a lot of people in HR didn't know, what questions can we ask or not ask?  The EEOC, we have on our website the questions that you are allowed to ask, and the questions that we ask our own employees.  So, there's just a tremendous amount of resources out there.  So, look to our website first, and you'll find that we have answers to a lot of these complicated questions. 

[0:51:41] David Green: Keith, thank you very much.  I look forward to learning what you do next in due course and hope to see you again soon.  Thank you. 

[0:51:49] Keith Sonderling: Thank you for having me. 
