Wednesday, March 5, 2025

The leaning tower of privacy

I’m now free to say what I think.  For many years, I was an official spokesperson for my prior employer on privacy issues, among other things.  Naturally, I was committed to advocating for its views, as any good lawyer does on behalf of a client.  I hope I lived up to my goal of only saying things I believed were true, and not just parroting talking points.


Now that I’m free to say what I think in my personal voice, where should I do it?  I’m on blogger.com, out of an old habit.  Nothing much has changed on blogger, and it feels oceans apart from cool social media hotspots, but at least it’s familiar.  What are my alternatives?  As a privacy professional, I couldn’t possibly join FB as a “dumbf*ck”.  I can’t join X, given how I feel about Elon.  I can’t join TikTok, given how I feel about China.  So, I guess I’m on blogger for now.  Unless you have a better idea. 


What I do care about is finding ways to share my experience, knowledge, and thoughts after 30 years of privacy practice with students, privacy professionals, advocates, and regulators.  For me, it’s now about sharing and helping a new generation in the field.  I’ll be writing, speaking, teaching, advising, and mentoring, as opportunities come around.


For example, why has the regulatory DPA (data protection authority) world had such a limited impact on the Big Tech companies it is asked to supervise and regulate?  The DPA world has many strong tools, in particular the tough law of the GDPR, but that toughness on paper hasn’t translated into the real-world impact people expected when it was adopted.  I could list some of the factors that have limited the DPAs’ ability to have a big impact:

1)  The GDPR put first-line responsibility onto the shoulders of a one-stop-shop regulator, which, for virtually all US and Chinese big tech companies, turned out to be Ireland’s.  That’s a huge lift for the DPA of a small country, even one with brilliant leadership and staff.

2)  The DPAs have modest budgets, small teams, and very few technical or legal experts, and they’re facing off against mega companies with vast technical and legal resources.

3)  The politics of being a DPA are complicated, since DPAs are often accused of being retrograde, or anti-innovation, when they try to enforce the laws.

4)  DPAs spend a lot of time on minor cases, often complaints by one individual, which might matter to that individual but have no broader impact.  The Right to be Forgotten is a perfect example of individual-level cases that absorb DPA resources with virtually no impact beyond the particular case.


So, what’s my recommendation to DPAs that want more impact?  Pick your cases wisely.  Pick cases that affect millions of people, and don’t waste your resources on petty ones.  Think about the tech and how it’s evolving, so that you don’t bring cases about 10-year-old tech that is already obsolete before any conclusion is reached.  And spend time developing policy, at an international level, so that it’s clear what policy goals you’re pursuing.  In particular, push for international conversations and consensus on what good policy looks like in the world of AI, by engaging with stakeholders, and once that consensus is achieved (but not before), use your regulatory enforcement toolkit.  I’ve become friends with many people in the DPA community.  I trust them to want to do the right thing.


I could make the same recommendation to privacy activists: pick your cases wisely.  The best, in my experience, at picking the right cases and pursuing them tenaciously is Max Schrems’ NOYB.  He wouldn’t know it, but when I was on the opposite side, Max got me scared and sweating.  I admire him for it.  If you care about privacy, consider donating to NOYB.

Monday, March 3, 2025

Thou shalt not kill, unless thou art an autonomous AI killing machine

I’m mostly bored with Google, but occasionally it publishes a blog post that gets me riled up.  When I still worked there, I was proud of my employer and how it adopted a set of AI Principles to govern its work in this exciting, promising, dangerous new space.  One of those principles rejected work on AI applications that “cause or are likely to cause overall harm.”


But guess what, it turns out that the militaries of the world want to harness AI for their purposes, which include “causing harm”, to put it mildly.  So, Google, presumably smelling a big business opportunity in developing AI for the world’s “democratic” militaries, rewrote its AI principles.  Now the company can develop AI tools for the world’s democratic militaries (I’d love to read the list of countries that Google considers “democracies”).


I re-watched RoboCop recently:  an AI-prototype robot cop is demoed in a boardroom, and a hapless person is asked to point a gun at it.  Chaos ensues, as the prototype disregards human voice commands to stand down and shoots the poor man even as he tries to throw the gun away.  Oops.  Indeed, the company whose AI gave us a recipe for glue on pizza will now work on AI to kill people.


The collaboration between the militaries of the world and tech companies is wide and deep.  Think space exploration, or even the origins of the Internet.  But here we’re talking about bringing very specific tech competencies to bear to build autonomous AI killing machines, building on these companies’ decades of leadership in surveillance, monitoring, profiling and targeting.  It’s one thing, I think, to engage in surveillance, monitoring, profiling and targeting…to show ads, and quite another to select individuals or groups for elimination.  The tech is not so different.


Any company can of course change its principles, or its ethics.  Users can decide if they trust a company that changes its principles and ethics.  Its workforce can decide if they want to work on these projects, even if they’re threatened with termination for refusing. 


Privacy law in Europe has for decades included the principle that machines shouldn’t be allowed to make important decisions through “automated decision making” without human involvement.  Needless to say, an automated AI decision to kill someone seems to meet that test.  The militaries of the world are largely exempt from privacy laws, but the private-sector companies working for them are not.  I would love to see people explore exactly what a company plans to do to build AI tools for the military, and ask the hard questions:  how did you train it, what data did you use to train it, how reliable is it, and what is your accountability for its accuracy or failure?  You and I might think these are intellectually interesting questions, but we’re not teenagers on the front lines of Gaza or Ukraine.