Thursday, October 30, 2025

Worried about AI? Don’t worry, it will cure cancer

You have surely read the dire predictions from the leaders of the field of AI: everyone from the heads of OpenAI and Anthropic to Bill Gates has been speaking about the massive challenges that AI will thrust onto our societies, including, in particular, an imminent wave of job destruction.  So it’s refreshing to hear from Google’s President that AI “will cure cancer”.  https://fortune.com/2025/10/26/google-ruth-porat-cure-cancer-in-our-lifetime-with-ai/

Some of you will be cynical and suggest that she is just whitewashing AI in the PR interests of her employer, and her own.  She’s repeated that same line about AI curing cancer more times than Britney Spears has gotten wasted.  


I’d like to hear more leaders of companies building AI tools engage in a public discussion about the good and bad consequences of their inventions.  The tech industry is famous for privatizing gains and socializing losses: building their businesses and share prices on the good use cases of their inventions, while letting other people, governments, and societies deal with the negative fallout.  Heads I win, tails you lose.  For example, a company could make a fortune “curing cancer”, but would it be held responsible if the same AI tool it built to cure cancer could be re-purposed to build bio-weapons?  


Let’s take the example of using AI to screen CVs.  Many people looking for jobs today will be auto-rejected by an AI bot.  There’s no transparency, and they won’t be told why.  One reason might be that they’re screened out for being “old”, amongst many other possible, but non-transparent, reasons.   https://www.wired.com/story/ageism-haunts-tech-workers-layoffs-race-to-get-hired/  And indeed, in the US, a lawsuit on precisely this topic has been launched, accusing Workday of using AI tools to discriminate against older job applicants:  https://www.hklaw.com/en/insights/publications/2025/05/federal-court-allows-collective-action-lawsuit-over-alleged  


AI in recruitment is pretty simple: ask the AI to study the characteristics of “successful” employees and find more candidates like them.  So if the data about “successful” employees skews heavily toward the 25-39 age range, the AI will look for more of the same and auto-reject the rest.  That’s how AI will reinforce discrimination in our societies.  Ageism is not unique: sexism, homophobia, and racism (against some races, but not others) are other axes along which AI will discriminate, simply in pursuit of the goal of advancing the types of people who match its training data’s picture of “success”. 
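The dynamic above can be sketched in a few lines.  This is a toy illustration under my own assumptions, not any vendor’s actual system: a screener “trained” only on a historical pool of hires aged 25-39 ends up treating age itself as a disqualifier.

```python
# Toy screener -- purely illustrative, not any real vendor's system.
from statistics import mean, stdev

# Historical "successful" employees skew heavily toward ages 25-39.
successful_hires = [26, 28, 29, 31, 33, 34, 36, 38, 27, 30]

mu = mean(successful_hires)      # ~31 years old
sigma = stdev(successful_hires)  # ~4 years

def screen(candidate_age, threshold=2.0):
    """Auto-reject anyone more than `threshold` standard deviations from
    the historical mean.  The model never saw a 55-year-old "success",
    so it treats age itself as a negative signal."""
    z = abs(candidate_age - mu) / sigma
    return "advance" if z <= threshold else "auto-reject"

print(screen(29))  # advance: looks like past hires
print(screen(55))  # auto-reject: nothing wrong with the CV, just the age
```

Nothing in the sketch mentions “discrimination”; the bias arrives entirely through the skewed training data, which is exactly why it is so opaque to the rejected applicant.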


Europe’s privacy law (GDPR Article 22) has a very clear provision dealing with machines that make automated decisions: 


  • The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

I’m not aware of any serious regulatory attempt to enforce GDPR Article 22 against companies using AI to screen job applicants.  But there should be one.  If companies are using machines to decide whether to hire or auto-reject an applicant, then those decisions “significantly affect” the applicant under the law.    


The replacement of human workers by machines is already accelerating.  Companies are replacing entry-level functions with AI agents, making it hard for young workers to reach the first rung of the employment ladder.  Older workers have long been pushed out of Silicon Valley.  The trend is just starting, but even today hardly a day goes by without some big company announcing plans to slash human jobs and hire machines.   https://seekingalpha.com/news/4508911-amazon-plans-to-cut-up-to-30000-jobs---reuters


You may despair of getting or keeping a job in this new environment.  But you don’t have to accept, in Europe at least, that a machine will make an automated decision not to hire you, in violation of existing law.  Regulators often don’t act until complaints are filed.  If your CV is being auto-rejected by a machine, you can file a complaint with your local data protection authority, which might prompt it to intervene.   


AI is a tool for automated decision-making: that’s the whole point of it.  I cited one example from the world of job applications, but there are thousands of others, today and coming soon.  It’s high time to start applying and enforcing the laws on automated decision-making.  The biggest disruption in human history is around the corner, and so far I can confidently state that privacy regulators have had very close to zero impact on how AI is being developed and used.  The Italian Garante is one of the few to have tried, and I applaud its leadership.  The others seem content with conducting blah-blah public consultations.   If we want AI to respect human values and the law, we need to speed up, urgently, because the machines won’t slow down.  Enforcing the laws on automated decision-making would be a good place to start. 


Thursday, October 23, 2025

The Age of Discovery, seen from Seville




I enjoyed a week in enchanting, intriguing Seville.  The photo is the tomb of Christopher Columbus in Seville Cathedral.  In the Age of Discovery, Seville held the Spanish monopoly on ships to and from the New World.  Historic Seville can teach us a lot about our own AI-driven age of discovery: both eras were driven by science, greed, and missionary zeal. 

Europe won that technological race:  compared to the indigenous populations of the “New World”, it had superior sailing/navigation/mapping tech, it had superior military tech, it had deep capital markets to fund the expeditions, and it had a belief in its cultural and religious superiority.   That’s a good description of the people leading the AI race today.  


Europeans in the Age of Discovery expanded human knowledge and science dramatically, and AI will do the same now.  But even though some actors were driven by science and a pure search for knowledge, most were driven by greed.  Leaders in the field of AI are now accumulating vast (perhaps bubble) riches, just as the riches of the New World once poured into Seville.  As a tourist in Seville, you can still visit the architectural gems financed by the plunder of the New World’s indigenous populations.  Then as now, some people got very rich, but most got much poorer: the Spanish royal house got rich, and the indigenous populations were plundered.  The tech bros of today have gotten obscenely rich; the legions of workers about to be replaced by AI agents will get poor.  


The biggest losers of the Age of Discovery were the indigenous populations: an estimated 90% were wiped out within a century of contact with European colonizers, largely by European-introduced diseases.  AI may well do the same to us: a growing number of experts warn that superintelligence (whenever that arrives) could similarly cull or eliminate homo sapiens.  Many leaders are calling for a (temporary) ban on developing superintelligence until our species can figure out how to build it safely.  My former colleague Geoffrey Hinton, the Nobel-prize-winning AI pioneer, is amongst them. 


History tends to repeat itself.  As we enter our own AI-driven age of discovery, ask yourself whether you and your society will be winners or losers.  A lot of people today think they’ll be winners: tech bros (obviously), governments and businesses looking for tech-driven growth and profits, scientists, and entire countries, like China or the US, currently leading the race.  But lots of people will be losers, hit by looming job destruction and unemployment, which leads to social disruption, which in turn historically tends to lead to revolution.   Which do you think you’ll be, winner or loser?  Even if AI doesn’t destroy humanity (yet), it may well destroy democracy.  It will destroy privacy too (I’ll blog about that separately).   


Privacy is anchored in the idea of the dignity of the individual human being.  There wasn’t much dignity in being an indigenous person dying of smallpox during the Age of Discovery, or an African victim of the trade routes that evolved into the slave trade.  Can we do better today?  Machines don’t believe in privacy: they consume data to output data to accomplish a task.   The rise of AI is the challenge of our age.  You might ask where to start.  How about stopping private companies from plundering other people’s intellectual property and personal data to train their AI models, just as the Spanish conquistadors plundered the wealth of the indigenous populations?  


Lots of us need to step up to confront this challenge.  Or we can leave it in the hands of the tech bros and gullible politicians and impotent regulators, who are welcoming AI like Montezuma welcoming the Spanish.  


Wednesday, October 1, 2025

The world’s largest surveillance system…hiding in plain sight

 

The world’s largest surveillance system is watching you. 


It’s capturing (almost) everything you do on the web on (almost) every website.  And it’s hiding in plain sight. 


And it’s “legal” because it’s claiming that you know about it, and that you consented to it. 

But do you know what it is?  

Do you know what “analytics” is?  Websites use analytics services to give them insights into how their users interact with their sites.  Every website wants to know that.  And analytics providers can give them that information.  For example, an analytics provider can give detailed statistical reports to a website about their users and how they interact with its site:  how many people visited the site, where did they come from, what did they view or click on, how did they navigate the site, when did they leave/return, and many, many other characteristics.  This data can be collected and collated over years, over thousands or millions of users.  

There are many providers of analytics services, but according to analysts, there is only one 800-pound gorilla: Google Analytics.  

“Google Analytics has market share of 89.31% in analytics market. Google Analytics competes with 315 competitor tools in analytics category.

The top alternatives for Google Analytics analytics tool are Tableau Software with 1.17%, Vidyard with 0.78%, Mixpanel with 0.59% market share.”

And according to other third-party analysts: “As of 2025, Google Analytics is used by 55.49% of all websites globally. This translates to approximately 37.9 million websites using Google Analytics.”

You get the point:  one company, one service is capturing the bulk of the web traffic on the planet.  Websites get statistical reports on the user interactions on their sites.  Google gets individual-level information on the actions of most everyone on the web, on most websites, click-by-click, globally.  Wow. 

Legally, a website that uses Google Analytics is contractually obligated to obtain “consent” from its visitors before applying Google Analytics.  But often the disclosure on those websites is cursory, or even incomprehensible: “we use analytics”, or “we use analytics software for statistical purposes”...which sounds harmless, but hardly explains to the average user what is actually happening.  Technically, it’s simple, but invisible: a site using Google Analytics incorporates a small piece of code that auto-transfers to Google, in real time, information about every interaction its users have with the site: every visit, every click, and information about each of those visitors, on an identifiable basis.  
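To make that concrete, here is a sketch, in Python rather than the JavaScript a real tag uses, of the kind of per-interaction payload an analytics snippet assembles and sends back to its provider on every click or page view.  The field names are my own simplified stand-ins, not Google’s actual parameters.

```python
# Illustrative only: simplified stand-in fields, not Google's real protocol.
from urllib.parse import urlencode

def build_analytics_hit(client_id, page_path, referrer, event):
    """Assemble the kind of per-interaction payload an analytics tag
    transmits to its provider, invisibly, as the user browses."""
    return urlencode({
        "cid": client_id,  # pseudonymous per-browser identifier
        "dp": page_path,   # which page the user is viewing
        "dr": referrer,    # where the user came from
        "ea": event,       # what the user just did
    })

hit = build_analytics_hit("1234.5678", "/products/shoes",
                          "https://example-search.com", "click")
print(hit)
```

The point of the sketch is the persistent client identifier: every hit carries it, so the provider, unlike the website’s own statistical reports, can line up each individual user’s clicks over time.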

In fairness, Google Analytics has some privacy protections.  Its reports to its client websites are statistical, rather than reports on individual users.  But even if the websites don’t get information about users at an individually identifiable level, Google does.  And Google does not do cross-site correlation for Analytics purposes, i.e., it does not profile users across sites.  (Note that Google does exactly this kind of cross-site correlation in the context of its Ads businesses, but that’s a different topic from this blog.)  

All this is “legal” if it’s based on consent.  A phrase disclosed in a privacy policy, or a cookie notice you’ve no doubt seen, and maybe clicked on, is deemed to constitute “consent”.  But really, did you, or the average user, have a clue?  

I’m of the school that believes analytics tools represent a relatively low level of privacy risk to individual users.  But what do you think of one company getting real-time information about how most of humanity engages with websites, on a planetary level?  A user visits some random site; did they know their data was also being auto-transferred to Google?  Since the scale of this service vastly exceeds that of any other on the web, this is the largest data collection on the web.  Please respond with a comment if you can think of anything of similar surveillance scale.  I know you can’t, but let’s engage in the thought experiment.  I’m not picking on Google (I love my former employer), but in this field, which is essential to privacy, it’s the 800-pound gorilla, surrounded by a few mice.  

And the photo, if you’re interested, is Chartres Cathedral, built in the era when we believed only God was all-knowing.