Wednesday, November 12, 2025

Data centers coming to a town near you

 




You may think of your personal data as a purely digital asset, but it has a physical home: it sits at rest in a data center (or several).  As you’ve read in the press, tech companies are building data centers at a frenzied pace.  The massive computing needs of AI are driving them to build as fast as they can, regardless of cost.  It’s now debatable whether we’re living through a boom or a bubble in data center construction:  https://www.theguardian.com/technology/2025/nov/02/global-datacentre-boom-investment-debt


Google plans to spend something like 90 billion dollars this year, mostly on data centers (not humans).  This week alone it announced something like 5 billion to be spent on data centers in Germany.  This made German politicians very happy; they crowed about Germany’s high-tech investment environment.  https://www.tagesschau.de/wirtschaft/unternehmen/google-investition-deutschland-klingbeil-100.html


But should local people really be happy to see a data center built in their backyard?  Should politicians really welcome them?  The benefits are few:  data centers create very few long-term jobs, mostly for junior technicians.  The servers, chips and high-value tech work come from elsewhere, often the other side of the planet.  The local harms can be more significant:  data centers consume vast amounts of power and water (to run and to cool them).  This can stress the electricity grid, the water supply, or both, and sometimes leads to rising electricity costs for everyone.  https://fortune.com/2025/11/08/voter-fury-ai-bubble-high-electricity-prices-offseason-elections-openai/


The environmental impact of data centers is now the subject of study.  https://www.theguardian.com/technology/2025/nov/10/data-centers-latin-america


Back in 2007, my then-employer Google asked me to publicly officiate at the opening of its big new data center in Belgium.  The investment was valued at around 250 million.  (Compare that to Facebook’s recent announcement of plans to spend 600 billion on data centers:  https://finance.yahoo.com/news/meta-plans-600-billion-us-175901659.html ).  But back then, 250 million was a big deal.  Even the Belgian Minister President (later Prime Minister) Elio Di Rupo came along to celebrate it, and he and I took a long, pleasant stroll through the town of Mons after the ceremony.  He welcomed Google’s data center then much as the German politicians welcomed this week’s announcement.  


But shouldn’t we all know better now?  Data centers are the concrete industrial infrastructure of the digital world.  The industrial revolution needed coal mines and steel mills, until it didn’t.  The Google data center I opened in Belgium stood in a depressed region full of abandoned coal mines and steel mills.  I fear history repeats itself. 


I understand that local politicians will welcome “investment” in their countries, even if 99% of the value of that investment comes from servers and chips and software created far away.  But we all need to look at the local environmental cost of these data centers.  If not, you’re just breathing the pollution of someone else’s beautiful shiny Ferrari.  


Sunday, November 9, 2025

Taking the Humans out of Human Resources

We all know the trend:  AI is replacing human workers.  We all know that trend is accelerating.  One of the industries where AI is replacing humans is...ironically, human resources.  More ironic still, we humans will increasingly be hired, or not hired, based on decisions made by an AI agent.  I think we deserve to know why.  Privacy law requires it.  

Let’s take the example of a company called Workday.  It’s a human resources outsourcing company using AI to replace humans in human resources.  https://www.workday.com/en-us/homepage.html


Lots of companies, including my former employer Google, have outsourced human resources functions to Workday.  When I worked at Google, I was forced (like all employees) to provide very detailed, sensitive information to Workday.  I had little idea what Workday did with that information, or where it was sent (back to the US, I’m guessing, since Workday is a US-based company).  Workday has been sued in the US in a case alleging that its technology discriminated against job applicants over 40.  https://www.cnn.com/2025/05/22/tech/workday-ai-hiring-discrimination-lawsuit


Companies like Workday are now using AI to replace humans in human resources functions, like filtering CVs or conducting “interviews” between AI agents and human job applicants.  You will be hired, or not, based on an automated decision made by an AI agent.  


AI agents work by defining “success” based on the models on which they were trained.  They then assess job candidates or current workers based on how closely those people correlate with these models of “success”.  They can collect and assess hundreds or thousands of characteristics, far more than any human could.  I’m concerned that AI models will reinforce whatever bias exists in the training data behind these models of “success”.  Imagine an AI “interview”:  what data is it collecting?  Not the data a human interviewer would.  An AI “interviewer” can measure pupil dilation, eye movements, head angles, speech rhythms, vocabulary range, voice biometrics…you get the idea:  it collects vastly more sensitive data than a human ever could in order to build an assessment.  
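To make that mechanism concrete, here is a deliberately simplified, purely hypothetical sketch (not any vendor’s actual system; all names and numbers are invented for illustration) of how scoring candidates by similarity to past “success” mechanically reproduces whatever bias sits in the training data:

```python
# Hypothetical toy model: rate candidates by how closely they resemble
# an average "success" profile learned from past hires. If the past
# hires were all young men, the model penalizes everyone else --
# bias in, bias out.

def success_profile(training_set):
    """Average each numeric feature across the 'successful' examples."""
    keys = training_set[0].keys()
    n = len(training_set)
    return {k: sum(ex[k] for ex in training_set) / n for k in keys}

def score(candidate, profile):
    """Closer to the learned profile = higher score (negative squared distance)."""
    return -sum((candidate[k] - profile[k]) ** 2 for k in profile)

# Invented, biased "training data": every past hire is young and male (male=1).
past_hires = [
    {"age": 32, "male": 1},
    {"age": 29, "male": 1},
    {"age": 38, "male": 1},
]
profile = success_profile(past_hires)

young_man = {"age": 33, "male": 1}
older_woman = {"age": 52, "male": 0}

# The model prefers the candidate who resembles past hires,
# regardless of actual ability.
assert score(young_man, profile) > score(older_woman, profile)
```

A real system would use thousands of features (the pupil dilation, speech rhythms and voice biometrics mentioned above) and a far more complex model, but the logic is the same: “success” is whatever the training data says it was.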


Let’s take the example of an AI agent tasked with hiring a CEO for a tech start-up.  The agent will search the world’s public data about successful tech CEOs.  Which characteristics will it find?  How will it weight each of them?  Let’s run a thought experiment, based on fairly obvious assumptions about what an AI agent would find in the current crop of successful tech CEOs.  I’m not going to name names, but they’re all household names anyway.


Ethnicity:  high correlation for being Jewish or Indian.  Negative correlation for being Hispanic or Black.

Gender:  high correlation for being male.  Negative correlation for being female.

LGBT:  positive correlation for being a cis gay male.  Negative correlation for all other categories of LGBT.

Neurodivergence:  high correlation for being on the “high functioning” autism spectrum.  Negative correlation for all other categories of neurodivergence.

Personality:  high correlation for narcissistic personality disorder.  Negative correlation for introversion.

Age:  high correlation for 25-40.  Negative correlation for over 40, increasing with age.


You may want to dispute some of my assumptions above.  I’ll accept that.  The list of characteristics above may vary from function to function.  What’s “good” in the list above for a tech CEO may be “bad” for a mid-level tech worker.  


As companies turn over their human resources functions to outsourcing companies that have in turn outsourced those functions to machines, it’s essential that we ask these companies what criteria they’re using to make automated decisions about our most basic human aspirations, like getting or keeping a job.


I know privacy regulators are under-resourced.  So, to my friends in the privacy regulatory community, I’ve drafted the simple questions you should send to companies in your countries.  


Do you engage third party companies to provide human resources functions?  If so, please provide their names.


Do you provide these companies with criteria to apply to recruiting or evaluating your employees?  If so, please list them.  Can your employees or applicants object to sharing their data with these companies?  


Do these companies transfer such data outside of your country, e.g., to the US?  If so, under what legal basis?  Can your employees object to this?


Do these companies use AI?  If so, on what data has it been trained?  


What steps do these companies take to eliminate bias (relating to race, age, gender) in their automated decisions?


What transparency do these companies provide into how they make their automated decisions?  List the criteria used to make these automated decisions.  Can an employee or applicant object to the use of such automated decisions?  What would be the consequences of such an objection?  Is there an ability to appeal such automated decisions?  Is there a human in the loop?  


Look.  I know we (humans) are on a path to automating more and more of our human jobs.  Human resources is one of them.  Technology is taking humans out of human resources.  But it’s up to us to keep humanity in the machines.  Privacy law is central to this.  Let me conclude by citing Google’s own AI-generated overview of the topic:


European privacy laws, primarily the EU's General Data Protection Regulation (GDPR) and the upcoming AI Act, regulate automated decision-making by providing individuals with the right to an explanation, to human intervention, and a qualified right to opt-out of certain automated decisions. The GDPR's Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects, unless specific conditions are met, such as explicit consent or a necessary contract with safeguards. Key rights include transparency, the right to a human review, and the right to challenge such decisions.