You have surely read the dire predictions from the leaders of the AI field: everyone from the heads of OpenAI and Anthropic to Bill Gates has been speaking about the massive challenges AI will thrust onto our societies, in particular an imminent wave of job destruction. So it’s refreshing to hear from Google’s President that AI “will cure cancer”. https://fortune.com/2025/10/26/google-ruth-porat-cure-cancer-in-our-lifetime-with-ai/
Some of you will be cynical and suggest that she is just whitewashing AI in the PR interests of her employer, and her own. She’s repeated that same line about AI curing cancer more times than Britney Spears has gotten wasted.
I’d like to hear more leaders of companies building AI tools engage in a public discussion about the good and bad consequences of their inventions. The tech industry is famous for privatizing gains and socializing losses: building its businesses and share prices on the good use cases of its inventions, while letting other people, governments, and societies deal with the negative fallout. Heads I win, tails you lose. For example, a company could make a fortune “curing cancer”, but would it be held responsible if the same AI tool it built to cure cancer were re-purposed to build bio-weapons?
Let’s take the example of using AI to screen CVs. Many people looking for jobs today will be auto-rejected by an AI bot, with no transparency: they won’t be told why. One reason might be that they were screened out for being “old”, among many other possible, but equally opaque, reasons. https://www.wired.com/story/ageism-haunts-tech-workers-layoffs-race-to-get-hired/ And indeed, a lawsuit on precisely this topic has been filed in the US, accusing Workday of using AI tools to discriminate against older job applicants: https://www.hklaw.com/en/insights/publications/2025/05/federal-court-allows-collective-action-lawsuit-over-alleged
AI in recruitment is pretty simple: ask the AI to study the characteristics of “successful” employees, then go find more people like them. So if the data about “successful” employees skews heavily toward the 25-39 age range, the AI will look for more of the same and auto-reject the rest. That is how AI will reinforce discrimination in our societies. Ageism is not unique: sexism, homophobia, and racism (favoring some races over others) can all be reinforced the same way, simply in pursuit of the goal of advancing the types of people who match the model’s training picture of “success”.
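To make the mechanism concrete, here is a minimal sketch of how this happens. Everything in it is hypothetical: the data is synthetic and real screening products are far more elaborate, but the failure mode is the same. A model trained on historical “successful employee” labels that skew young will score an older applicant lower than an identically skilled younger one, without anyone ever writing an explicit rule about age.

```python
# Hypothetical sketch: biased training data produces biased screening.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic historical data: "success" labels skew heavily toward ages
# under 40, so age correlates with the label even though it says nothing
# about ability.
age = rng.integers(22, 65, size=n)
skill = rng.normal(0.0, 1.0, size=n)
successful = (rng.random(n) < np.where(age < 40, 0.8, 0.1)).astype(int)

# Train a screening model on that biased history.
X = np.column_stack([age, skill])
model = LogisticRegression().fit(X, successful)

# Two applicants with identical skill, different ages.
young_applicant = [[30, 1.0]]
older_applicant = [[55, 1.0]]
print(model.predict_proba(young_applicant)[0, 1])  # high "success" score
print(model.predict_proba(older_applicant)[0, 1])  # low score -> auto-reject
```

The point is that nobody told the model to reject older applicants; it inferred that rule from the labels it was given, which is exactly why the resulting decisions are so opaque to the people they affect.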
Europe’s privacy law (GDPR Article 22) has a very clear provision dealing with machines that make automated decisions:
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
I’m not aware of any serious regulatory attempt to enforce GDPR Article 22 against companies using AI to screen job applicants. But regulators should. If companies are using machines to decide whether to hire or auto-reject an applicant, then those decisions “significantly affect him or her” within the meaning of the law.
AI is already accelerating the replacement of human workers with machines. Companies are replacing entry-level functions with AI agents, making it hard for young workers to get onto the first rung of the employment ladder, while older workers have long been pushed out of Silicon Valley. The trend is just starting, but even today hardly a day goes by without some big company announcing plans to slash human jobs and replace them with machines. https://seekingalpha.com/news/4508911-amazon-plans-to-cut-up-to-30000-jobs---reuters
You may despair of getting or keeping a job in this new environment. But you don’t have to accept, in Europe at least, that a machine will make an automated decision not to hire you, in violation of existing law. Regulators often don’t act until complaints are filed. If your CV is being auto-rejected by a machine, you can file a complaint with your local data protection authority, which might prompt it to intervene.
AI is a tool for automated decision making: that’s the whole point of it. I cited one example from the world of job applications, but there are thousands of others, today and soon. It’s high time to start applying and enforcing the laws against automated decision making. The biggest disruption in human history is around the corner, and so far I can confidently state that privacy regulators have had very close to zero impact on how AI is being developed and used. The Italian Garante is one of the few to have tried, and I applaud them for their leadership. The others seem content with conducting blah-blah public consultations. If we want AI to respect human values and the law, we need to speed up, urgently, because the machines won’t slow down. Enforcing the laws on automated decision making would be a good place to start.