Friday, March 28, 2025

23andMe, Privacy zombie


Will it finally go away?  23andMe is filing for bankruptcy. https://www.npr.org/2025/03/24/nx-s1-5338622/23andme-bankruptcy-genetic-data-privacy  

23andMe’s entire business model rested on peddling pseudoscience for years.  https://www.theguardian.com/commentisfree/2025/mar/27/geneticist-mourn-23andme-useless-health-information

However slickly the test results were presented to 23andMe customers, and however absurd its “insights” into genetic health risks or ancestry, the service did work wonders at identifying one genetic trait:  stupidity.  If you spat into a test tube and sent your saliva to this shadowy company with a long history of privacy breaches, then I can conclusively determine that you are genetically…stupid.  Your genome is your most personal, sensitive, unchanging identifier.  You handed over this data, for cheap fun, to a company built on data mining, and even paid them for the privilege.  They will retain this data forever unless you take steps to delete it, and whether they actually honor a deletion request is a fair question, given their history of dubious privacy practices.  At least go try to delete it and hope it really is deleted.  Either way, you’ll never know.  

Now, your genetic data is considered this bankrupt company’s “asset” that they plan to sell to the highest bidder.  Your genome has become their bankruptcy asset.  You might have trusted 23andMe with your genetic data, but will you trust whoever buys it in a bankruptcy sale?  

The problem with genetic data isn’t just what people can do with it today, or what they can deduce about you today; it’s what they will be able to do with it in the years ahead, as science evolves.  France, to take one example, has sensibly outlawed such home DNA testing kits.  

The company, 23andMe, may finally die now.  Its CEO, the former wife of a Google founder, wants to buy it out of bankruptcy.  But your most sensitive personal data, if you were stupid enough to spit into a test tube for them, will live on, no matter what happens to the company. 

Monday, March 17, 2025

The AI training bots are reading my 100% human-generated blog…Great?!

I have been posting to this blog to share my thoughts with a small community of privacy professionals. 


So, I was a bit surprised to see Blogger’s statistics:  my posts get around 10,000 views.  Surprised, because the privacy expert community is smaller than that.  

But how many of those views were bots, in particular AI training bots?  Blogger doesn’t give me those statistics.  

We all know that AI models are trained on data.  Big models, like large language models, are trained on vast amounts of data.  In fact, they’re being trained on essentially all available data in the world.  So, given their hunger for data, in particular for human-generated content, I’m not surprised they’ll visit my little blog too.  

There’s a raging debate about whether AI bots should be allowed to train on other people’s data.  Many voices claim they shouldn’t, at least where that data is either “personal data” or under copyright.  I think they’re wrong.  

I think the key distinction is public v private data.  If I make my data public, as I do with this blog, then I should expect (and probably want) it to be read by anyone who wants to:  humans or bots.  After all, search engine crawlers have been crawling public data for decades, and almost no one seems to object.  If AI training bots are reading my blog, say, to learn about human language, or about privacy, I’m delighted.  

On the other hand, private data is private.  If I use an email service, I expect that data to stay private, as it’s filled with my highly personal and sensitive information.  If I use a social networking service, and I set the content I upload to “private”, I expect the platform to respect that choice, including from its own or third-party bots.  Failure to respect these privacy choices is a serious privacy breach, maybe even a crime, unless the owner of the data has consented to their data being used for AI training.  (Whether “consent” can be deduced from some updated clause in some terms of use is a different discussion.)

Thousands of training bots are looking for more data, especially human-generated data.  If you make your data public, then realize the bots will come read it.  You can’t really stop it.  And I think that’s fine.  
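To make the “you can’t really stop it” point concrete, here’s a minimal sketch, my own illustration rather than anything Blogger actually offers, of a server that turns away AI-training crawlers that announce themselves by user agent (GPTBot and CCBot are real examples of self-declared crawlers; the rest of the setup is hypothetical).  The catch is in the last branch:  a bot that doesn’t announce itself looks exactly like a human reader.

```typescript
// Minimal sketch (my own illustration, assuming a Node.js runtime) of a blog
// server that tries to turn away AI-training crawlers by user agent.
// The tokens below are examples of crawlers that declare themselves
// (e.g. OpenAI's GPTBot, Common Crawl's CCBot); the list is illustrative.
import { createServer } from "node:http";

const DECLARED_AI_CRAWLERS = ["GPTBot", "CCBot", "Bytespider"];

const server = createServer((req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";
  const isDeclaredCrawler = DECLARED_AI_CRAWLERS.some((token) =>
    userAgent.includes(token)
  );

  if (isDeclaredCrawler) {
    // Refuse the crawlers polite enough to identify themselves...
    res.writeHead(403, { "Content-Type": "text/plain" });
    res.end("No AI training here, please.\n");
    return;
  }

  // ...but a crawler that does not identify itself is indistinguishable
  // from a human reader, which is why blocking is best-effort at most.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<p>My 100% human-generated blog post.</p>\n");
});

server.listen(8080);
```

A robots.txt opt-out works the same way:  it relies entirely on the crawler choosing to comply.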

The real issue is what the AI models intend to do after training on your data.  If a model is just learning human language, as large language models do, that’s not going to have any impact on your real-world privacy.  But if bots are reading your data to impersonate you, to copy your voice or image or your copyrighted content, then you have every reason to object and to use the legal remedies available.  I think it’s fine when bots read public data for training.  The real question, and one vastly harder to evaluate, is what the trained models should be allowed to do with it afterwards.  

Monday, March 10, 2025

Big and Rich v Small and Smart, who’ll win the AI race?


Everyone is in the race for AI, in particular the race for AGI, artificial general intelligence.  AGI was widely viewed as science fiction only a few years ago.  Now some experts think someone will build AGI in the next year or two.  Even if they’re wrong about the timing, AGI is coming soon.  The consequences for homo sapiens are mind-boggling.  https://www.livescience.com/technology/artificial-intelligence/agi-could-now-arrive-as-early-as-2026-but-not-all-scientists-agree

Who will win the race to build AGI?  There are two camps.  The first camp consists of the big, rich legacy tech leaders, often running large, super-profitable, monopolistic services that they are leveraging into the new world of AI.  They have lots of advantages.  Their businesses, or monopolies, generate vast amounts of money, user data, and user engagement, and huge installed bases.  Legacy monopolies always leverage their core original monopoly into new businesses.  For example, Google built a monopoly in Search, which was legal, but then used that monopoly to leverage its way into many other businesses, according to the US Department of Justice.  The Department of Justice is pursuing an antitrust case against Google over the abuse of its Search monopoly, but it has already decided to let Google continue to leverage that monopoly into the world of AI.  https://nypost.com/2025/03/07/business/feds-drop-bid-to-make-google-sell-ai-investments-in-antitrust-case/  If that’s the best the Antitrust Division of the Department of Justice can do, well, I’d support the DOGE effort to save taxpayer money and just eliminate it.  

I assume they’re cracking open the Dom Pérignon in Mountain View.  The antitrust regulators in DC will let Google proceed with its old playbook.  Maybe they’ll force Google to divest one piece of its portfolio of monopolies, like Chrome.  Big deal; that’s like pruning a branch off a tree.  I imagine the company will howl with indignation, even at that low-impact antitrust remedy, but that’s like a dramatic husky howling at the vet.  

On the other hand, maybe the old legacy tech world won’t win the AGI race.  Maybe smarts, innovation, and agility will win.  Maybe small companies and research labs will win.  Maybe the future isn’t in the mega-model of AI, based on vast amounts of data, vast numbers of GPUs, and vast amounts of money, electricity, and users.  Maybe smaller models will win, figuring out how to do things on the cheap.  The low-cost Chinese DeepSeek success, if it’s real, might be a window into that future.  

In privacy terms, it’s not clear which model is better.  The mega-model is based on vast amounts of data processing, limited to a few mega-companies.  The size of the processing may be troubling, but it’s somewhat easier to hold big companies to account for responsible data processing.  If the smaller models prevail, there will be a proliferation of AI processing across thousands, or maybe millions of actors.  Good luck trying to ensure privacy rights in that scenario.  

What do I think?  I think the smaller models will proliferate, eventually, after an initial lead by the big models.  Fasten your seatbelt; we’re entering a zone of turbulence.  

Wednesday, March 5, 2025

The leaning tower of privacy

I’m now free to say what I think.  For many years, I was an official spokesperson on privacy issues, amongst other things, for my prior employer.  Naturally, I was committed to advocating for its views, as any good lawyer does on behalf of their client.  I hope I lived up to my goal of only saying things that I believed were true, and not just parroting talking points. 


Now that I’m free to say what I think in my personal voice, where should I do it?  I’m on blogger.com, out of old habit.  Nothing much has changed on Blogger, and it feels oceans apart from the cool social media hotspots, but at least it’s familiar.  What are my alternatives?  As a privacy professional, I couldn’t possibly join FB as a “dumbf*ck”.  I can’t join X, given how I feel about Elon.  I can’t join TikTok, given how I feel about China.  So, I guess I’m on Blogger for now.  Unless you have a better idea. 


What I do care about is finding ways to share my experience, knowledge, and thoughts after 30 years of privacy practice with students, privacy professionals, advocates, and regulators. For me now, it’s about sharing and helping a new generation in the field.  I’ll be writing, speaking, teaching, advising, and mentoring, as opportunities come around.  


For example, why has the regulatory DPA world had such a limited impact on the Big Tech it is asked to supervise and regulate?  The DPA world has many strong tools, in particular the tough law of the GDPR, but that toughness on paper hasn’t translated into the real-world impact people expected when it was adopted.  I could list some of the factors that have limited the DPAs’ ability to have a big impact.  1)  The GDPR put first-line responsibility onto the shoulders of a one-stop-shop regulator, which turned out to be Ireland’s, for virtually all US and Chinese big tech companies.  That’s a huge lift for the DPA of a small country, even one with brilliant leadership and staff.  2)  The DPAs have modest budgets, small teams, and very few technical or legal experts, and they’re facing off against mega-companies with vast technical and legal resources.  3)  The politics of being a DPA are complicated, since DPAs are often accused of being retrograde, or anti-innovation, when they try to enforce the law.  4)  DPAs spend a lot of time on minor cases, often complaints by one individual, which might matter to that individual but have no broader impact.  The Right to be Forgotten is a perfect example of individual-level cases that absorb DPA resources with virtually no impact beyond the particular case.  


So, what’s my recommendation to DPAs that want more impact?  Pick your cases wisely.  Pick cases that affect millions of people, and don’t waste your resources on petty ones.  Think about the tech and how it’s evolving, so that you don’t bring cases about 10-year-old tech that is already obsolete before any conclusion is reached.  And spend time developing policy, at an international level, so that it’s clear what policy goals you’re pursuing.  In particular, push for international conversations and consensus on what good policy looks like in the world of AI, by engaging with stakeholders, and once that consensus is achieved (but not before), use your regulatory enforcement toolkit.  I’ve become friends with many people in the DPA community.  I trust them to want to do the right thing.


I could make the same recommendation to privacy activists: pick your cases wisely.  The best, in my experience, at picking the right cases and pursuing them tenaciously is NOYB, Max Schrems’ organization.  He wouldn’t know it, but when I was on the opposite side, Max got me scared and sweating.  I admire him for it.  If you care about privacy, consider donating to NOYB. 

Monday, March 3, 2025

Thou shalt not kill, unless thou art an autonomous AI killing machine

I’m mostly bored with Google, but occasionally it publishes a blog post that gets me riled up.  When I still worked there, I was proud of my employer and of how it adopted a set of AI Principles to govern its work in this exciting, promising, dangerous new space.  One of those principles rejected work on AI applications “that cause or are likely to cause overall harm.”  


But guess what: it turns out that the militaries of the world want to harness AI for their purposes, which include “causing harm”, to put it mildly.  So Google, presumably smelling a big business opportunity in developing AI for the world’s “democratic” militaries, re-wrote its AI principles.  Now the company can develop AI tools for the world’s democratic militaries (I’d love to read the list of countries that Google considers “democracies”).


I re-watched RoboCop recently:  an AI-prototype robot cop is demoed in a boardroom, and a hapless person is asked to point a gun at it.  Chaos ensues, as the prototype disregards human voice commands to stand down and shoots the poor man even as he tries to throw the gun away.  Oops.  Indeed, the company whose AI gave us a recipe for glue on pizza will now work on AI to kill people.  


The collaboration between the militaries of the world and tech companies is wide and deep.  Think space exploration, or even the origins of the Internet.  But here we’re talking about bringing very specific tech competencies to the building of autonomous AI killing machines, building on these companies’ decades of leadership in surveillance, monitoring, profiling and targeting.  It’s one thing, I think, to engage in surveillance, monitoring, profiling and targeting…to show ads, and quite another to select individuals or groups for elimination.  The tech is not so different.  


Any company can of course change its principles, or its ethics.  Users can decide if they trust a company that changes its principles and ethics.  Its workforce can decide if they want to work on these projects, even if they’re threatened with termination for refusing. 


Privacy law in Europe has for decades included a principle that machines shouldn’t be allowed to make automated decisions on important questions without human involvement.  Needless to say, an AI’s automated decision to kill someone seems to meet that test. The militaries of the world are largely exempt from privacy laws, but the private sector companies working for them are not.  I would love to see people explore exactly what a company plans to do when it builds AI tools for the military, and ask the hard questions:  how did you train it, what data did you use to train it, how reliable is it, and what is your accountability for its accuracy or failure?  You and I might think these are intellectually interesting questions, but we’re not teenagers on the frontlines of Gaza or Ukraine.  


Monday, February 24, 2025

Surveillance just got a lot creepier

I read a recent Google blog post, “Updating our platform policies to reflect innovations in the ads ecosystem.” I have absolutely no idea what it was saying, which says something, considering I spent decades writing blog posts like these. If obfuscation were a literary genre, this was Shakespearean. 

But other people figured out what it meant: “Critics say new Google rules put profits over privacy.” Basically, Google was reversing its long-held pledge to fight the privacy-evil practice of fingerprinting. Most people are aware, or at least dimly aware, that they’re being watched, followed, profiled and targeted as they surf the web, especially by the ads ecosystem. After all, the more a company or an industry can monitor, profile and target individual users, the more money it can make from individually-tailored ads. Virtually no one understands the scope and scale of these practices, including me. 

As a privacy professional, I always defended a pragmatic approach. An ads-based ecosystem could co-exist with privacy laws in a cookie-based world, provided there were certain protections, transparency and user controls for cookies. And the regulators of the world largely approached online ads surveillance problems from a cookie-based perspective. But while they were barking up the cookie tree, the industry was rolling out a far more invasive and invisible individual-level tracking and profiling technique called fingerprinting. If you don’t know it, fingerprinting is a technique to collect lots of individual little settings about a user’s device that together identify it uniquely, much as the fingerprint of a person’s thumb is composed of lots of little lines that individually mean nothing but together identify a person. Fingerprinting is an evil privacy practice, with almost zero transparency or user controls. In my long privacy career, I always held the line against it. Increasingly, small players in the ads ecosystem resorted to fingerprinting as an alternative to cookies. Google, after a decades-long principled objection to it, has now raced to the bottom to join its competitors. Of course, it’s entirely different when the super-dominant player in the ads ecosystem joins its tiny competitors in bad privacy practices. 
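For readers who want to see the mechanics, here is a minimal sketch of the technique, my own illustration assuming a standard browser environment; real ad-tech fingerprinting scripts combine far more signals (canvas and audio rendering quirks, installed fonts, GPU details), but even a handful of innocuous-looking settings, hashed together, will often single out one device among millions.

```typescript
// A minimal, illustrative browser fingerprint: each signal alone says little,
// but combined they tend to identify one device. Real fingerprinters use far
// more signals than this sketch.

function collectSignals(): string[] {
  return [
    navigator.userAgent,                                      // browser + OS build
    navigator.language,                                       // preferred language
    String(navigator.hardwareConcurrency),                    // CPU core count
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display geometry
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // time zone
    String(new Date().getTimezoneOffset()),                   // UTC offset
  ];
}

// FNV-1a: a tiny non-cryptographic hash, enough to turn the signal list
// into a compact identifier for illustration.
function fnv1a(input: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16).padStart(8, "0");
}

// No cookie is set and nothing is stored on the device, yet the same browser
// tends to produce the same identifier on every site that runs this code.
const fingerprint = fnv1a(collectSignals().join("||"));
console.log("device fingerprint:", fingerprint);
```

Note what is missing: no cookie, no stored identifier, so clearing cookies changes nothing, and there is no obvious control for a user to toggle.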

Perhaps privacy regulators will learn from their misguided focus on cookies and turn their attention to fingerprinting. But the USA is deregulating fast, and Europe is outgunned and outmanoeuvred by some players in the tech industry. Think about the resource imbalance, to take one public fact: Google’s CEO is paid, individually, about the same as the entire operating budgets of all 27 EU DPAs, combined. Now picture a small, valiant, under-resourced DPA trying to take that on. 

I think of privacy as a series of ethical choices to respect the individual, and laws that try to back that up, however imperfectly. But in the war of principles v profits, it’s hard for principles to win. Maybe shareholders will get richer, but humanity will be poorer. For me personally, it’s sad to see my life’s work deracinated.

Tuesday, January 21, 2025

I've left Google

My nearly two decades at Google as its Global Privacy Counsel have ended.  I’ve left Google as one of the last few remaining members of the original early Google team.  Google asked me to update my social media profiles accordingly, hence my coming back to this dormant blog to say so.  Other senior members of the Google privacy team have also left in recent months.

I joined as Google’s first full-time privacy professional, building a function, and later a team, that didn’t exist before.  My job was to try to make Google respect the privacy of its billions of users.  You can judge the results, but I am proud of the mission.  Being a privacy leader is a tough job at a company like that.  


The early years at Cool Google were fun, creative, innovative, comradely, and I loved them.  But Google has changed and evolved into Corporate Google, and large committees can now carry forward the work I did, or reverse it; in either case, it’s no longer my business.  


I left on good terms.  No one is in jail now for privacy, including me, and I’m hardly being flippant, speaking as one of those rare privacy professionals who was arrested and sentenced to jail for their employer’s privacy practices.  And I helped build the small company I joined into the largest private processor and monetizer of personal data on the planet. I can’t think of another privacy professional who helped build their data-processing company from the early days to a $2 trillion+ market cap.  What a ride.  


I will remain active in the field of privacy in many ways.  AI will present existential challenges to privacy, as to so many other domains, and I’m eager to find ways to help organizations develop AI responsibly.  And there are innovators out there who remind me of the fun, creative, responsible environment of my early years at Google, as we wrestled with privacy issues in the then-new online world.  Unless compelled by law to testify, I won’t reveal any non-public information about Google:  I’ll respect my confidentiality constraints as a lawyer to my former client/employer. 


I relish my newfound freedom to share my insights and experience with others in the field and in new ways.  More on that soon.  In the meantime, I wish luck to my former colleagues with MAGA:  Make Alphabet Great Again.