We all know the trend: AI is replacing human workers. We all know that trend is accelerating. One of the industries where AI is replacing humans is...ironically, human resources. And even more ironically, we humans will increasingly be hired, or not hired, based on decisions made by an AI agent. I think we deserve to know why. Privacy law requires it.
Let’s take the example of a company called Workday. It’s a human resources outsourcing company using AI to replace humans in human resources. https://www.workday.com/en-us/homepage.html
Lots of companies, including my former employer Google, have outsourced human resources functions to Workday. When I worked at Google, I was forced (like all employees) to provide very detailed, sensitive information to Workday. I had little idea what Workday did with that information, or where they sent it (e.g., from around the world back to the US, I’m guessing, since it’s a US-based company?). Workday has been sued in the US, in a lawsuit alleging that Workday’s technology discriminated against job applicants over 40. https://www.cnn.com/2025/05/22/tech/workday-ai-hiring-discrimination-lawsuit
Companies like Workday are now using AI to replace humans in human resources functions, like filtering CVs or conducting “interviews” between AI agents and human job applicants. You will be hired, or not, based on an automated decision made by an AI agent.
AI agents work by defining “success” based on the models on which they were trained. Then they assess job candidates or current employees based on how closely those people correlate to these models of “success”. They can collect and assess hundreds or thousands of characteristics, far more than any human could. I’m concerned that AI models will reinforce whatever forms of bias exist in the training data behind those models of “success”. Imagine an AI “interview”: what data is it collecting? It’s not collecting data like a human interviewer. An AI “interviewer” can measure pupil dilation, eye movements, head angles, speech rhythms, vocabulary spectrum, verbal biometrics…you get the idea: collecting vastly more sensitive data than a human could in order to build an assessment.
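To make that concrete, here is a minimal, hypothetical sketch of what such a correlation-based screener could look like. Everything in it is my own assumption for illustration (the feature values, the idea of averaging past hires into a “success profile”, the cosine-similarity scoring); it is not how Workday or any real vendor necessarily works:

```python
# Purely illustrative sketch of a correlation-based screener.
# All numbers and features are invented; this is not any vendor's actual system.
import numpy as np

# Each row is a past "successful" hire; each column is a measured feature
# (speech rhythm, vocabulary range, gaze metrics, and so on). A real system
# might have hundreds or thousands of such columns.
training_features = np.array([
    [0.82, 0.31, 0.65],
    [0.79, 0.28, 0.70],
    [0.85, 0.35, 0.60],
])

# The "model of success" here is just the average profile of past hires.
success_profile = training_features.mean(axis=0)

def score_candidate(candidate: np.ndarray) -> float:
    """Score a candidate by cosine similarity to the learned success profile."""
    return float(
        candidate @ success_profile
        / (np.linalg.norm(candidate) * np.linalg.norm(success_profile))
    )

# A candidate who resembles past hires scores higher; whatever biases shaped
# that historical pool are baked into the ranking.
print(score_candidate(np.array([0.80, 0.30, 0.66])))  # close to the profile
print(score_candidate(np.array([0.20, 0.90, 0.10])))  # far from the profile
```

The point of the sketch is simple: the system never asks whether the historical pool was fair; it only asks how closely you resemble it.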
Let’s take the example of an AI agent that is tasked with hiring a CEO for a tech start-up. The AI agent will search the world’s public data about successful CEOs of tech companies. Which characteristics will it find? How will it assign a weight to each of those characteristics? Let’s run a thought experiment, based on fairly obvious assumptions about what an AI agent would find from the current crop of successful tech CEOs. I’m not going to name names, but they’re all household names anyway.
Ethnicity: high correlation for being Jewish or Indian. Negative correlation for being Hispanic or Black.
Gender: high correlation for being male. Negative correlation for being female.
LGBT: positive correlation for being cis male gay. Negative for all other categories of LGBT.
Neurodivergence: high correlation for being on the “high functioning” autism spectrum. Negative correlation for all other categories of neurodivergence.
Personality: high correlation for narcissistic personality disorder. Negative correlation for introversion.
Age: high correlation for 25-40. Negative correlation for over 40, increasing by age.
You may want to dispute some of my assumptions above. I’ll accept that. The relevant characteristics, and the weights assigned to them, will also vary from function to function: what’s “good” in the list above for a tech CEO may be “bad” for a mid-level tech worker.
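For readers who want to see how such bias can emerge mechanically, here is a deliberately crude sketch of the thought experiment above. The data, the feature names and the choice of logistic regression are all invented for illustration; no real hiring system is claimed to work this way:

```python
# A toy model fitted to an invented, skewed sample of "successful CEOs".
# Everything here is hypothetical and exists only to show how sampling bias
# becomes model weights.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Columns: [age_over_40, demographic_group_a, years_experience]
# Label: 1 = counted as a "successful CEO" in the (skewed) historical record.
X = np.array([
    [0, 1, 10], [0, 1, 12], [0, 1, 8],   # the historical "successes"
    [1, 0, 20], [1, 0, 25], [0, 0, 15],  # everyone else
])
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# The fitted weights treat being over 40, or outside "group A", as predictive
# of failure, even though nothing causal supports that: the bias was in the
# sample, and the model faithfully encoded it.
print(dict(zip(["age_over_40", "demographic_group_a", "years_experience"],
               model.coef_[0].round(2))))
```

Swap in whatever characteristics you like; the mechanism is the same. If the historical record of “success” is skewed, the weights will be skewed, and so will the automated decisions.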
As companies turn over their human resources functions to outsourcing companies that have in turn outsourced these functions to machines…it’s essential that we ask these companies what criteria they’re using to make these automated decisions about our most basic human aspirations, like getting or keeping a job.
I know privacy regulators are under-resourced. So, to my friends in the privacy regulatory community, I’ve drafted the simple questions you should send to companies in your countries.
Do you engage third party companies to provide human resources functions? If so, please provide their names.
Do you provide these companies with criteria to apply when recruiting or evaluating your employees? If so, please list them. Can your employees or applicants object to sharing their data with these companies?
Do these companies transfer such data outside of your country, e.g., to the US? If so, under what legal basis? Can your employees object to this?
Do these companies use AI? If so, on what data has it been trained?
What steps do these companies take to eliminate bias (relating to race, age, or gender) in their automated decisions?
What transparency do these companies provide about how they make their automated decisions? List the criteria used to make these automated decisions. Can an employee or applicant object to the use of such automated decisions? What would be the consequences of such an objection? Is there an ability to appeal such automated decisions? Is there a human in the loop?
Look. I know we (humans) are on a path to automating more and more of our human jobs. Human resources is one of them. Technology is taking humans out of human resources. But it’s up to us to keep humanity in the machines. Privacy law is central to this. Let me conclude by citing Google’s own AI-generated overview of the topic:
European privacy laws, primarily the EU's General Data Protection Regulation (GDPR) and the upcoming AI Act, regulate automated decision-making by providing individuals with the right to an explanation, to human intervention, and a qualified right to opt-out of certain automated decisions. The GDPR's Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects, unless specific conditions are met, such as explicit consent or a necessary contract with safeguards. Key rights include transparency, the right to a human review, and the right to challenge such decisions.