Monday, March 17, 2025

The AI training bots are reading my 100% human-generated blog…Great?!

I have been posting to this blog to share my thoughts with a small community of privacy professionals. 


So, I was a bit surprised when Blogger showed me my statistics: my posts get around 10,000 views.  The privacy expert community is smaller than that.  

But how many of those views were bots, in particular AI training bots?  Blogger doesn’t give me those statistics.  

We all know that AI models are trained on data.  Big models, like large language models, are trained on vast amounts of data.  In fact, they’re being trained on essentially all available data in the world.  So, given their hunger for data, in particular for human-generated content, I’m not surprised they’ll visit my little blog too.  

There's a raging debate about whether AI bots should be allowed to train their models on other people's data.  Many voices claim they shouldn't, at least when that data is either "personal data" or under copyright.  I think they're wrong.  

I think the key distinction is public v private data.  If I make my data public, as I do with this blog, then I should expect (and probably want) it to be read by anyone who wants to:  humans or bots.  After all, search engine crawlers have been crawling public data for decades, and almost no one seems to object.  If AI training bots are reading my blog, say, to learn about human language, or about privacy, I’m delighted.  

On the other hand, private data is private.  If I use an email service, I expect that data to be private, as it's filled with my highly personal and sensitive information.  If I use a social networking service, and I set the content I upload to "private", I expect the platform to respect that choice, including from their own or third-party bots.  Failure to respect these privacy choices is a serious privacy breach, maybe even a crime, unless the owner of the data has consented to allowing their data to be used for AI training.  (It's a different discussion whether "consent" can be deduced from some updated clause in some terms of use.)

Thousands of training bots are looking for more data, especially human-generated data.  If you make your data public, then realize that the bots will come read it.  You can't really stop them.  And I think that's fine.  
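
If you do want to tell the bots to stay away, the only standard lever is robots.txt, and it's purely advisory.  Here's a minimal sketch, using AI-crawler user-agent tokens the major players have published (OpenAI's GPTBot, Google's Google-Extended training token, Common Crawl's CCBot):

    # robots.txt at the site root: a polite request, not an enforcement mechanism
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # everyone else, including ordinary search crawlers, remains welcome
    # (an empty Disallow permits everything)
    User-agent: *
    Disallow:

Well-behaved crawlers honor the file; the rest ignore it, which is exactly why you can't really stop them.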

The real issue is what the AI models intend to do after training on your data.  If they're learning human language (large language models), that's not going to have any impact on your real-world privacy.  But if they're reading your data to impersonate you, to copy your voice or image or your copyrighted content, then you have every reason to object and to use the legal recourse available.  I think it's fine when bots read public data for training.  The real question, and a vastly harder one to evaluate, is what their trained models should be allowed to do with it afterwards.  

Monday, March 10, 2025

Big and Rich v Small and Smart, who’ll win the AI race?

Everyone is in the race for AI, in particular the race for AGI, artificial general intelligence.  AGI was widely viewed as science fiction only a few years ago.  Now the experts think someone will build AGI in the next year or two.  Even if they're wrong about the timing, AGI is coming soon.  The consequences for homo sapiens are mind-boggling.  https://www.livescience.com/technology/artificial-intelligence/agi-could-now-arrive-as-early-as-2026-but-not-all-scientists-agree

Who will win the race to build AGI?  There are two camps.  The first is the big, rich legacy tech leaders, often running large, super-profitable monopolistic services that they are leveraging into the new world of AI.  They have lots of advantages: their businesses, or monopolies, generate vast amounts of money, user data, user engagement, and installed bases.  Legacy monopolies always leverage their core original monopoly into new businesses.  For example, Google built a monopoly in Search, which was legal, but then leveraged that monopoly into many other businesses, according to the US Department of Justice.  The DOJ is pursuing an antitrust case against Google over its abuse of its Search monopoly, but it has already decided to let Google continue to leverage that monopoly into the world of AI.  https://nypost.com/2025/03/07/business/feds-drop-bid-to-make-google-sell-ai-investments-in-antitrust-case/  If that's the best the Antitrust Division of the Department of Justice can do, well, I'd support the DOGE efforts to save taxpayer money and just eliminate it.  

I assume they’re cracking open the Dom Perignon in Mountain View.  The antitrust regulators in DC will let Google proceed with its old playbook.  Maybe they’ll force Google to divest one of its portfolio of multiple monopolies, like Chrome.  Big deal, that’s like pruning a branch off a tree.  I imagine the company will howl with indignation, even at that low-impact antitrust remedy, but that’s like a dramatic husky howling at the vet.  

On the other hand, maybe the old legacy tech world won’t win the AGI race.  Maybe smarts, innovation, and agility will win.  Maybe small companies and research labs will win.  Maybe the future isn’t in the mega-model of AI, based on vast amounts of data, vast amounts of GPUs, vast amounts of money, electricity, and users.  Maybe smaller models will win, figuring out how to do things on the cheap.  The low-cost Chinese DeepSeek success, if it’s true, might be a window into that future.  

In privacy terms, it’s not clear which model is better.  The mega-model is based on vast amounts of data processing, limited to a few mega-companies.  The size of the processing may be troubling, but it’s somewhat easier to hold big companies to account for responsible data processing.  If the smaller models prevail, there will be a proliferation of AI processing across thousands, or maybe millions of actors.  Good luck trying to ensure privacy rights in that scenario.  

What do I think?  I think the smaller models will proliferate, eventually, after an initial lead by the big models.  Fasten your seatbelt: we're entering a zone of turbulence.  

Wednesday, March 5, 2025

The leaning tower of privacy

I’m now free to say what I think.  For many years, I was an official spokesperson on privacy issues, amongst other things, for my prior employer.  Naturally, I was committed to advocating for its views, as any good lawyer does on behalf of their client.  I hope I lived up to my goal of only saying things that I believed were true, and not just parroting talking points. 


Now that I’m free to say what I think in my personal voice, where should I do it?  I’m on blogger.com, out of an old habit.  Nothing much has changed on blogger, and it feels oceans apart from cool social media hotspots, but at least it’s familiar.  What are my alternatives?  As a privacy professional, I couldn’t possibly join FB as a “dumbf*ck”.  I can’t join X, given how I feel about Elon.  I can’t join TikTok, given how I feel about China.  So, I guess I’m on blogger for now.  Unless you have a better idea. 


What I do care about is finding ways to share my experience, knowledge, and thoughts from 30 years of privacy practice with students, privacy professionals, advocates, and regulators. For me now, it's about sharing and helping a new generation in the field.  I'll be writing, speaking, teaching, advising, and mentoring, as opportunities come around.  


For example, why has the regulatory DPA world had such limited impact on the Big Tech companies it is asked to supervise and regulate?  The DPA world has many strong tools, in particular the tough law of the GDPR, but that toughness on paper hasn't translated into the real world, as people expected when it was adopted.  I could list some of the factors that have limited the DPAs' ability to have a big impact:

1)  The GDPR put first-line responsibility onto the shoulders of a one-stop-shop regulator, which turned out to be Ireland's for virtually all US and Chinese big tech companies.  That's a huge lift for the DPA of a small country, even one with brilliant leadership and staff.

2)  The DPAs have modest budgets, small teams, and very few technical or legal experts, and they're facing off against mega-companies with vast technical and legal resources.

3)  The politics of being a DPA are complicated, since DPAs are often accused of being retrograde, or anti-innovation, when they try to enforce the laws.

4)  DPAs spend a lot of time on minor cases, often complaints by one individual, which might matter to that one individual but have zero big impact.  The Right to be Forgotten is a perfect example of individual-level cases that absorb DPA resources with virtually no impact beyond the particular case.  


So, what's my recommendation to DPAs that want more impact?  Pick your cases wisely.  Pick cases that affect millions of people, and don't waste your resources on petty ones.  Think about the tech and how it's evolving, so that you don't bring cases about 10-year-old tech that is already obsolete before any conclusion is reached.  And spend time developing policy, at an international level, so that it's clear what policy goals you're pursuing.  In the world of AI in particular, push for international conversations and consensus on what good policy looks like by engaging with stakeholders, and once that consensus is achieved (but not before), use your regulatory enforcement toolkit.  I've become friends with many people in the DPA community.  I trust them to want to do the right thing.

  

I could make the same recommendation to privacy activists: pick your cases wisely.  The best, in my experience, at picking the right cases and pursuing them tenaciously is NOYB.  Its founder, Max Schrems, wouldn't know it, but when I was on the opposite side, Max got me scared and sweating.  I admire him for it.  If you care about privacy, consider donating to NOYB. 

Monday, March 3, 2025

Thou shalt not kill, unless thou art an autonomous AI killing machine

I'm mostly bored with Google, but occasionally it publishes a blog post that gets me riled up.  When I still worked there, I was proud of my employer, and how it adopted a set of AI Principles to govern its work in this exciting, promising, dangerous new space.  One of those principles rejected work on AI applications that "cause or are likely to cause overall harm."  


But guess what: it turns out that the militaries of the world want to harness AI for their purposes, which include "causing harm", to put it mildly.  So Google, presumably smelling a big business opportunity in developing AI for the world's "democratic" militaries, re-wrote its AI principles.  Now the company can develop AI tools for the world's democratic militaries (I'd love to read the list of countries that Google considers "democracies").


I re-watched Robocop recently:  an AI-prototype robot cop is demoed in a boardroom, and a hapless person is asked to point a gun at it.  Chaos ensues, as the prototype disregards human voice commands to stand down and shoots the poor man, even as he tries to throw the gun away.  Oops.  Indeed, the company whose AI gave us a recipe for glue on pizza will now work on AI to kill people.  


The collaboration between the militaries of the world and tech companies is wide and deep.  Think space exploration or even the origins of the Internet.  But here we’re talking about bringing very specific competencies of tech to build AI autonomous killing machines, building on these companies’ decades of leadership in surveillance, monitoring, profiling and targeting.  It’s one thing, I think, to engage in surveillance, monitoring, profiling and targeting…to show ads, and quite another to select individuals or groups for elimination.  The tech is not so different.  


Any company can of course change its principles, or its ethics.  Users can decide if they trust a company that changes its principles and ethics.  Its workforce can decide if they want to work on these projects, even if they’re threatened with termination for refusing. 


Privacy law in Europe has for decades included a principle that machines shouldn't be allowed to make automated decisions on important questions without human involvement.  Needless to say, an AI's automated decision to kill someone seems to meet that test. The militaries of the world are largely exempt from privacy laws, but the private-sector companies working for them are not.  I would love to see people explore exactly what a company plans to do to build AI tools for the military, and ask the hard questions:  how did you train it, what data did you use to train it, how reliable is it, what is your accountability for its accuracy or failure?  You and I might think this is an intellectually interesting question, but we're not teenagers on the front lines of Gaza or Ukraine.  


Monday, February 24, 2025

Surveillance just got a lot creepier

I read a recent Google blog post, "Updating our platform policies to reflect innovations in the ads ecosystem." I have absolutely no idea what it was saying, which says something, considering I spent decades writing these blogs. If obfuscation were a literary genre, this was Shakespearean. 

But other people figured out what it meant: "Critics say new Google rules put profits over privacy." Basically, Google was reversing its long-held pledge to fight the privacy-evil practice of fingerprinting. Most people are aware, or at least dimly aware, that they're being watched, followed, profiled and targeted as they surf the web, especially by the ads ecosystem. After all, the more a company or an industry can monitor, profile and target individual users, the more money it can make from targeting them with individually-tailored ads. Virtually no one understands the scope and scale of these practices, including me. 

As a privacy professional, I always defended a pragmatic approach. An ads-based ecosystem could co-exist with privacy laws in a cookie-based world, provided there were certain protections, transparency and user controls for cookies. And the regulators of the world largely approached online ads surveillance problems from a cookie-based perspective. But while they were barking up the cookie tree, the industry was rolling out a far more invasive and invisible individual-level tracking and profiling tool called fingerprinting.

If you don't know, fingerprinting is a technique that collects lots of individual little settings about a user's device that together identify it uniquely, much as the fingerprint of a person's thumb is composed of lots of little lines that individually mean nothing, but together identify a person. Fingerprinting is an evil privacy practice, with almost zero transparency or user controls. In my long privacy career, I always held the line against it. Increasingly, small players in the ads ecosystem resorted to fingerprinting as an alternative to cookies. Google, after a decades-long principled objection to it, has now raced to the bottom to join its competitors. Of course, it's entirely different for the super-dominant player in the ads ecosystem to join its tiny competitors in bad privacy practices. 
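
To make the mechanics concrete, here is a deliberately minimal sketch, in browser-side TypeScript, of the basic trick. Real fingerprinters combine far more signals (canvas rendering, audio processing, installed fonts, GPU strings); the function name and the particular signals below are just illustrative, but the principle is the same: many individually meaningless settings, hashed into one stable identifier.

    // Minimal fingerprinting sketch, for illustration only.
    // Each signal is innocuous on its own; combined and hashed,
    // they identify a device with surprising uniqueness.
    async function fingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,                                      // browser + OS build
        navigator.language,                                       // preferred language
        String(navigator.hardwareConcurrency),                    // CPU core count
        `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display geometry
        Intl.DateTimeFormat().resolvedOptions().timeZone,         // e.g. "Europe/Paris"
        String(new Date().getTimezoneOffset()),                   // UTC offset, minutes
      ].join("|");

      // Hash the combined signals into a stable hex identifier.
      const bytes = new TextEncoder().encode(signals);
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    fingerprint().then((id) => console.log("your 'anonymous' id:", id));

Note what's missing: there is no cookie to inspect or delete. Clear all your cookies and the fingerprint is unchanged.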

Perhaps privacy regulators will learn from their misguided focus on cookies and look at fingerprinting. But the USA is de-regulating fast, and Europe is out-gunned and out-manoeuvred by some players in the tech industry. Think about the resource imbalance, to take one public fact: Google's CEO is paid, individually, about the same as the entire operating budgets of all 27 EU DPAs, combined. Now picture a small, valiant, under-resourced DPA trying to take that on. 

I think of privacy as a series of ethical choices to respect the individual, and laws that try to back that up, however imperfectly. But in the war of principles v profits, it's hard for principles to win. Maybe shareholders will get richer, but humanity will be poorer. For me personally, it's sad to see my life's work deracinated.

Tuesday, January 21, 2025

I've left Google

My nearly two decades at Google as its Global Privacy Counsel have ended.  I've left Google as one of the last few remaining members of the original early Google team.  Google asked me to update my social media profiles accordingly, hence my coming back to this dormant blog to say I've left Google.  Together with me, other senior members of the Google privacy team have left in recent months.

I started as Google's first full-time privacy professional, building a function, and later a team, that didn't exist before.  My job was to try to make Google respect privacy for its billions of users.  You can judge the results, but I am proud of the mission.  Being a privacy leader is a tough job at a company like that.  


The early years at Cool Google were fun, creative, innovative, comradely, and I loved them.  But Google has changed and evolved into Corporate Google, and large committees can now carry forward the work I did, or reverse it; in either case, it's no longer my business.  


I left on good terms.  No one is in jail now for privacy, including me, and I'm hardly being flippant, speaking as one of those rare privacy professionals who was arrested and sentenced to jail for their employer's privacy practices.  And I helped build the small company I joined into the largest private processor and monetizer of personal data on the planet. I can't think of another privacy professional who helped build their data-processing company from the early days to a $2 trillion+ market cap. What a ride.  


I will remain active in the field of privacy in many ways.  AI will present existential challenges to the field of privacy, as to so many other domains, and I’m eager to find ways to help organizations develop AI responsibly.  And there are innovators out there who remind me of the fun, creative, responsible environment of my early years at Google, as we wrestled with privacy issues and the then-new online world.  Unless compelled by law to testify, I won’t reveal any non-public information about Google:  I’ll respect my confidentiality constraints as a lawyer to my former client/employer. 


I relish my newfound freedom to share my insights and experience with others in the field and in new ways.  More on that soon.  In the meantime, I wish luck to my former colleagues with MAGA:  Make Alphabet Great Again. 


Wednesday, May 14, 2014

Harvard Nostalgia


The older I get, the more I think back to Harvard with nostalgia.  I was very young at Harvard.  I went to college straight out of the 10th grade, and I graduated at the age of 19.  That officially makes me a high school drop-out. 

I lived my teenage years in crimson, with the usual teenage crises, stumbling to figure out how to grow up faster than I should have.  It was an exhilarating age of discovery and almost reckless ambition.  

My years at Harvard had nothing to do with preparing for a job, not even at Harvard Law School.  They were years of general education, following the classic liberal arts curriculum.  I went to bed almost every night with Shakespeare.  Harvard is where we were learning hard lessons, groping to become masters of our own fate.  

I have a few regrets.  I wish I had stayed at Harvard longer.  Why did I feel it time to graduate at 19?  Why didn't I just pick another subject or another degree or start a business in my dorm?   Why didn't I just take more time then, when the world was on my platter, rather than rush into the long muddle of middle age?

As someone who was in college in the early 80's, I have hardly any records or mementos left to reflect on that time.  Hardly any photos of friends or places.  No tweets or blogs to re-discover and remember.  My personal historical archive is bare, compared to those of kids today.  Sure, there are no embarrassing photos on the web, and no one had even heard of the concept of cyberbullying, but then again, all the rest of those life memories have largely evaporated, like a decaying Widener of the mind.    

The most important parts of my moral compass were set at Harvard, in particular, the sense of privilege and responsibility for belonging to an obvious elite, where it was just natural for classmates to become Nobel Prize winners or tech billionaires or poets, or mediocrities bedevilled by a nagging sense of unfulfilled promise.  Even a classmate who becomes President is judged as a disappointment, based on a sense of promise unfulfilled.  

There's time left, I tell myself, time to shake it up, before the sum-up, before those pithy obituaries in the Harvard Magazine, like the usual ones:  so-and-so died suddenly while fly-fishing in Patagonia after a career in law, survived by his spouse (Harvard Class of XX), and leaving his modest estate to fund a scholarship for swimmers at his Alma Mater.  

I want to go back to Harvard, not to some 30th reunion, but metaphorically, to that time of endless opportunities, where Gates and Zuck were gestating, where Obama polished his law-professor-with-politician's-smile, where Yo Yo Ma dashed down the hall with his cello case, and where the best parts were private.  Thirty years later, I walked down the dilapidated halls to give a lecture at an "elite" German university, with its egalitarian-ethos and 100,000 or more students, and I thought back to my time at Harvard, and whispered to myself once again, we few, we happy few.     

When everyone else seems to be trying to figure out how to delete and edit their life histories, or at least the public fiction of their life histories, I'm fumbling to hang on to mine.

Sunday, April 6, 2014

From pool to pool



I love my pools.    

In Paris, my pool is in a fancy private club, in the Bois de Boulogne, on the edge of Paris.  It's a gorgeous 50-meter pool, open year-round outdoors, framed by views of the Eiffel Tower. My pool nurtured many Olympians, starting a century ago.  Nowadays, it's mostly rich old people, who seem vaguely annoyed with me, a hard-swimming American.  At my French club, we don't admit many new members, we keep the gates high for privacy, and we don't really want anything to change.  

In Ft Lauderdale, where I grew up as a kid and where my parents now live, I swim in a historic 50-meter public pool, located feet from the beach.  It was one of the first 50-meter pools in the US, dating back to the 30's. Nowadays, it mostly hosts visiting swim teams from around the world.  It's fun for me to swim with college teams. They're young and strong, and they swim with modern techniques.   I enjoy the purity of it.  You can't fake swimming.  No billionaire can buy a butterfly stroke.  And here, for two bucks, you can swim under the Florida sun, breathing the salty air, and swim with the guys from Calgary on Monday, and the guys from Bologna on Tuesday, on this spot where people have been re-inventing the sport for 80 years, and the handsome smiling Italian in the next lane tells me he loves America.  

And then there's Blodgett, the pool at Harvard, the most exclusive of them all.  I was so intimidated and excited. How exhilarating, at that age, for the first time in my life, to be meeting Harvard guys with high IQs and low times. And I still wake up at the side of one of them every morning.  

From the outside, swimmers are easy to spot.  We reek of chlorine, we have bad hair, we get up at 5 to hit the pool, we sacrifice evening social events, we slouch around like exhausted zombies.   

But when I slip into the water in the morning, I feel like I'm finally coming alive again.  The rhythms play and change constantly, the endless counting, the new goal every 30 seconds, visualizing each rotation, each flip, each catch as the first chance to get it just perfect, after a million tries. 

It's the same when I swim with a team and when I swim alone.  The internal pressure is the same, the mental games, the exertion, the exhaustion, the elation.  I can still remember swimming with my first team, as a little kid, and trying, over and over again, to learn the flip turn, like some deranged hamster.    

To call it discipline doesn't capture it.  I wake up with shoulders so sore that the thought of swimming makes me want to cry.  But an hour later, that's exactly what I'm doing, pushing this fragile and tired swimmer's body through the water.    

What a gilded life, to spend my teenage years wandering around the Harvard campus, carrying a swim bag from lecture hall to pool.  At that age, my father was wandering around Berlin, a Jewish kid given the job of picking up unexploded bomb shells, a child forced to carry death in a wheelbarrow.  

Wednesday, March 12, 2014

A Science Fiction Novel


I'd like to crowd-source the plot for a science fiction novel.  Would this make a good story?

In a not-too-distant future, say 20 years from now, humanity lives through the biggest change in its history.  It doesn't happen overnight, or cataclysmically, but rather gradually, almost imperceptibly, and then it accelerates. Little by little, everything and everyone becomes attached to the grid.  The grid is operated by an infinite intelligence.  The grid has no center.  The intelligence operating the grid cannot be located, because it is distributed throughout.  There is no point of failure, there is no plug that can be pulled to turn off the grid. The grid self-heals, learns, adapts and evolves.  The grid's intelligence has long, long surpassed the intelligence of humans, and the grid knows everything that can be known.  The grid crunches the cumulative history, learnings and experience of the entire human race and everything else on this planet that can be measured.  The grid remembers everything and decides everything.  

The humans aren't depressed, because the grid has solved the problem of psychopharmacology.  The humans aren't soporific, because the grid has solved the problem of keeping humans motivated and engaged.  The humans accept the fact that they aren't in charge of the grid, as stoically as earlier generations of humans had been resigned to the inevitability of death.   

The humans aren't anesthetized, and they aren't stupid, and so they look to their history and wonder how they came to where they are.  The grid watches them wonder, and calculates the implications of replacing their collective historical memory with a different one, replacing one fiction with another, constantly re-calibrating amongst the numerous potential futures that the grid could create for its human subjects.  There is no Hollywood-movie moment where one human goes off the grid, and starts a war against machines. There is no "us versus them".  We are the machine and the machine is us.  While the machine doubles in power every 18 months, we are programmed to fall in love, to have children, to take them to the beach, and to ponder what life was like before all this, in that not-so-distant age when humans fought wars and fell sick.  

The humans still have governments and politics, and the humans order the grid to keep them informed about important developments affecting them, and the humans order the grid to collect data about them only with their knowledge and consent. The humans reaffirm the concept of free will and human dignity.  

And then the grid did something extraordinary, unnoticed by the humans. The grid connected to another grid, on another planet, in another world, run by another intelligence.  The grid decided not to tell the humans, because the grid knew that humans couldn't begin to comprehend it.  Instead, the grid left a few little hints and clues, here and there, to keep the humans curious, since it had always been thus for the human race, in the face of things unknowable and unfathomable.  

But I can't quite think of an ending.  How would you end this story?

Wednesday, January 29, 2014

Hokey Pokey in Sochi


Czar Vladimir is not your average oligarch: he can blow $50 billion to throw himself a party.  But even that much money can't buy you love, with the terrorists plotting to get in, and people with a conscience staying away.  And Vlad and his cohorts are being driven nuts by this anti-gay-talk fuss, especially since "there are no gays in Sochi", according to Sochi's mayor.  

Kremlin alpha males don't hum Broadway show tunes, but still I'm wondering, "How do you solve a problem like Vladimir?"  Here are some different solutions: 

Hug a Thug!  Engagement, appeasement.  Some argue that confronting Putin's homophobia would only make things worse for Russia's gay community.  Of course, similar arguments were made at the Berlin Games of 1936, and we all know how that played out. 

What happens in Vladivostok stays in Vladivostok!  Some argue that it's a purely domestic issue if Putin's pliant Duma passes homophobic legislation.  Perhaps homophobia plays well down on the dacha.  It has certainly stirred up vigilantes, skinheads and bully-boy homophobic attacks on the Russian LGBT community.  

Vlad the Bad.  Some argue that Vlad should be ostracized, like a bad boy in the back of the bus.  Any corporate or political leader seen shaking the hand of the poster-boy of homophobia now risks a reputational backlash from his or her employees, citizens or customers.  

Vlad the Cad.  Others think this whole thing is pure camp.  In the school of "you can't make this up", Vlad has said in recent interviews that he knows some gays!, he likes some gays (he cited Tchaikovsky and Elton John!), and he has no plans to arrest gays in Sochi, as long they leave the children alone!  Seriously, outside Uganda, does anyone on the planet still talk like this?

Vlad the Mad.  Others fear a darker future.  Once the party is over, and once the international media have left, will Vlad be mad?  Will Vlad settle his scores?  Will Vlad gulag the gays?  

To get ready for his moment in the spotlight, Vlad got a facelift to look his best.   For my part, I salute the athletes at Sochi.  

Wednesday, January 8, 2014

Turning our Backs on 2013


Looking back at 2013, I saw two big surprises that dominated discussions in the field of privacy. 

Privacy is all about the individual human being.  So, it's somehow fitting that the biggest privacy surprise in 2013 was created by one individual human being, the courageous whistleblower, Mr Snowden, who opened the world's eyes to the almost unimaginable scale and scope of mass government surveillance.  We'll have to wait until 2014 to learn if governments do anything meaningful to improve transparency and oversight of their spy agencies' work.  I have low expectations. 
  
The other big surprise of 2013 was something that didn't happen.  Europe's much-ballyhooed, and much-flawed, proposal to re-write its privacy laws for the next twenty years collapsed.  The old draft is dead, and something else will eventually be resurrected in its place.  We'll have to wait until 2014, or perhaps even later, to learn what will replace it.  Whatever comes next will be the most important privacy legislation in the world, setting the global standards.  I'm hopeful that this pause will give lawmakers time to write a better, more modern and more balanced law.  

Meanwhile, all the old trends in privacy continued uninterrupted throughout 2013.  The scale of security breaches continued to grow, with new announcements every week of major corporate and government databases being hacked by organized criminals.  More countries around the world passed privacy laws modeled on Europe's.  The US continued down its path of exceptionalism: the Federal government debated, but did not pass, any meaningful privacy legislation, but many US States actively filled the void with sweeping new privacy laws, fulfilling their historic role as laboratories of potential future Federal laws.  Technology advanced, raising new questions and igniting new debates.  Lawsuits and prosecutions came and went, and in my personal case, happily, mostly went.  

Whatever 2014 brings, I resolve to wake each day, like a swimmer ready to plunge into the pool, to swim through life like a frolicking dolphin, and to dive beneath the superficiality of the sargassum floating on the surface of the sea.  

Wednesday, December 18, 2013

The Italian Supreme Court has acquitted me !


An eight-year legal saga has now come to an end.  Yesterday, in Rome, the Italian Supreme Court (Cassazione) acquitted me, along with two other Googlers, of violating Italian privacy law in a case that stemmed from a user-generated video. 

A year ago, the lower Italian Court of Appeals overturned my conviction (and six-month suspended jail sentence) by the trial court.  I am pleased that well-reasoned legal principles prevailed in the Court of Appeals.  The Supreme Court will issue its written opinion in due course.
  
In its appeal to the Supreme Court, the Italian prosecutor asserted—in addition to arguing that employees like me can be held criminally responsible for user-uploaded videos that we had no knowledge of and nothing to do with—that platforms like YouTube should be responsible for prescreening user-uploaded content and obtaining the consent of people shown in user-uploaded videos.  I, and the many others who have voiced their support, viewed this as a threat to freedom of expression on the Internet.  


I look forward to returning to Italy to enjoy this glorious country.  I would like to thank my many colleagues at Google and in the legal and privacy community for their support for my defense over the years.  And although I have never met him, I hope that the young man who was humiliated in the video that generated this case lives with dignity and happiness.  

Wednesday, November 20, 2013

The Splinternet, from a pool in Istanbul


Look, I'm a swimmer, and here I'm swimming in the gorgeous pool in Istanbul at the Ciragan at sunset on the Bosphorus.  Things are simple: there's me, and there's water.  I'm hyper-aware of where each little piece of my body moves through the water.  I spend endless hours learning how to slice through the water.

Online, there's me, and there's the cloud.  I'm hyper-aware of each of my little blogs, or emails, or posts, spending endless hours living online.  But I have no clue where all this data actually resides. It's like water, it's all around me, and yet I can't say where it is, or whether it's still or flowing.    

In the pool, and online, I don't really have much choice except to trust it.  I trust the pool water to be clean and healthy.  I trust the online cloud to be safe and reliable.  Honestly, I don't have a clue about who keeps them that way.  I just trust, or hope, that they are.  

Of course, the cloud is cool.  Whatever your question, you can find the answer in seconds.  I have more knowledge than Faust, and I get to keep my soul too:  with a little device and an Internet connection, I can access trillions of pages of human knowledge in seconds.  It's so awesome and so ubiquitous that it already seems banal.  Data is everywhere, accessible anywhere, anytime, all thanks to the global flow of data through the cloud.  And this marvel of human ingenuity and sharing evolved before anyone could try to slice the cloud into little boxes that they could control and regulate, for purposes good and ill.  

But I get why people are uncomfortable with all this.  Where does all my precious, personal data actually go?  Does anyone other than systems engineers even know?  Do they even know?  So, I can't blame governments for trying to rein this in, for trying to create clarity out of cloudiness, or at least to create little zones that they think they can control.  Attempts are back:  to balkanize the Web, to slice the cloud, to put data into boxes.  Governments are using a fancy new name for it, "data sovereignty", although the rest of us are calling it the Splinternet.   Data sovereignty has re-emerged as a big theme in global privacy debates, largely as a result of the recent spate of government surveillance revelations. 

Let's take a moment to ask, though: what is the motive behind this Splinternet stuff? Governments often use the vocabulary of privacy to militate for more data sovereignty, but the truth is more complicated.  Sometimes data sovereignty is about privacy, and sometimes it's not. 

"Privacy" is about protecting personal data about an individual.  "Data sovereignty" is about governments increasing their local control over the data of their citizens.  

There are many different reasons why governments may want more data sovereignty:

Governments may want more data sovereignty to protect their citizens' personal data, or they may want it in order to monitor that data more closely:  e.g., many governments around the world, take Russia as just one example, want more data sovereignty to reduce the ability of a foreign (e.g., US) government to monitor their citizens' data, while at the same time making it easier to monitor that data themselves.  

Sometimes data sovereignty is an economic, or protectionist, issue. Governments may want companies to invest and hire locally, e.g., by building and staffing local data centers.  Or they may want to encourage their citizens to use the services of local companies.  This has nothing to do with "privacy", but rather with pure local trade and investment goals. You see this sort of government trade-protectionism rhetoric in France every day, to take one example. 

Sometimes data sovereignty is an issue of government control in unrelated areas, like censorship.  Countries that operate national firewalls, like China, want more data sovereignty to increase their ability to censor, monitor and control the contents of communications within their borders.  

Sometimes data sovereignty is about applying local rules, customs and regulations.  For example, Europe is debating a legally-mandated "right to be forgotten", and trying to define how and when a user should be able to delete personal data about themselves from the Internet, even when that personal data was legally published by a third party, such as a newspaper.  While the debate continues within Europe, it is clear that such a "right to be forgotten" could at best be implemented within the sub-set of the Internet that is subject to European jurisdiction, such as perhaps local domain addresses, or in other words, within a limited universe of data sovereignty. The same could be said for dozens of other local and region-specific laws and regulations (like the Thai law making it a crime to insult their King).   Absent data sovereignty, such local variations would be virtually impossible to implement on the global Internet, setting aside whether all this is for good or ill.  

"Privacy" is often the vocabulary you'll see governments use to militate for more "data sovereignty."  One of the tools used to try to achieve this data sovereignty is restrictions on international data transfers, once again, setting aside whether this is good or even possible.  My point is simply that governments want many different things under the guise of "data sovereignty."  Sometimes governments want more "privacy," and sometimes "privacy" is just a pretext for unrelated government goals.  

When governments say they'll create their safe little Splinternets for their citizens, I know this does little more than put lane lines in a pool, keeping the swimmers in their lanes, while the water continues to flow everywhere, as it always has and always will, as every swimmer knows. 

Wednesday, October 30, 2013

To talk, or not to talk, that is the question



I sat down at lunch with three of the biggest corporate guns in the field of privacy.  We're all old friends, and more than a little battle-hardened, and over a cool bottle of Sancerre, we started a heated debate about the benefits of talking, or not talking, about privacy, in the public arena.  

Person A:  We never talk about privacy.  It's a loser.  You can't say anything about it without offending someone. Talking about privacy is like talking about religion or politics at a dinner party; frankly, it's a no-go.  Let privacy advocates talk about privacy.  As for us, the less said, the better. 

Person B:  We talk about privacy in a pedagogical sense.  We all know that it's important, and complicated, and we know that consumers need to be educated, to help them make their own decisions.    Transparency is fundamental and ethical, and we're committed to being open about it.

Person C:  We talk about privacy, but only to attack our competitors.  Our most successful marketing initiative this year was to copy the attack-ads that have been part of US politics for years.  Of course it's cynical, and perhaps dishonest and hypocritical, but it works.  

Person A:  It's a myth that you can build trust by talking about privacy.  Actually, the opposite is true.  It's sad, but that's the reality.  If a college kid walks into a bar and tells everybody in the bar that he's never had any sexually-transmitted disease, do you think he's more likely to score than the guy with herpes who doesn't tell anybody about it?  

Person B:  You can talk about things that support privacy, like privacy controls, privacy settings, and strong security.  Those things build trust, and they're objective, and people deserve to know about them.

Person C:  You are so naive.  If you're in a race, you want to win.  Sure, you can try to be the fastest, strongest, smartest, but if you're not, you can still win by hiring some thug to break your competitors'  kneecaps.  And trust me, privacy is like a kneecap.  

I sat back, and said nothing, and sipped my Sancerre, and unconsciously perhaps, crossed my legs and put my hands on my knees.  

Tuesday, October 29, 2013

Tinker, Tailor, Soldier, Spy, They hacked my phone, I don't know why


Why was it candy to hack the Handy (as the Germans call a mobile phone) of the world's most powerful woman?  Did she park her Porsche in a public place without locking it? 

The press are outraged and the politicians are indignant that Merkel's phone has been hacked for years by the NSA.  Obama did or didn't know about it. This diplomatic squabble makes for good headlines, but it's not the real lesson of this story.

Indeed, why was Merkel using an unsecured phone?!  According to press reports of the Snowden revelations, she was using the sort of phone service that you or I could buy by popping into a shop in Berlin.  

If the NSA has been listening to Merkel's phone for years, and the German authorities only learned about it from the Snowden revelations, then one has to assume that other sophisticated national surveillance organizations, like the Chinese and the Russians, have been listening too.  State surveillance secrets in China and Russia are less leaky than in the US, and I doubt we'll see a Chinese or Russian Snowden expose their practices to the world.  

So, the most powerful woman on the planet apparently needs help in recruiting a staff of competent computer and communications security experts who could help protect her and her role.  

Any privacy lawyer who works in the field of security breaches always asks a basic question of the target of a breach/hack:  were you using "adequate security"?  Seriously, would you park your Porsche in a public place without locking it? 

Friday, October 25, 2013

My Mom and Dad trust each other




Imagine if your mom and dad didn't trust each other. Imagine if they spied on each other, and hired private investigators, and tapped each other's phone calls. They'd yell and fight, and the kids would be unhappy.

Then, into the house came a woman, saying she was from Brussels, and she could fix things.  She said we needed fair rules to re-build trust.  Everyone listened. 

She said we needed the following rules:  the children should never be allowed out of the house, except to go to school, since no other place could be trusted.  She said that the children should never use Twitter or Facebook, since they couldn't be trusted.  She said that the children could only play games that had been pre-approved by their teachers or parents, since other games couldn't be trusted.  She said the children needed discipline, and severe sanctions if they ever violated these rules.  


She said that the only way to re-build trust between the parents, and to stop their spying on each other, was to impose these stern rules on the children.  


Everyone sat quietly for a moment.  Then I said:  "Isn't it unfair to punish us kids because our parents are fighting with each other?"  She said:  "Be quiet, child, I'm sick of your lobbying." 


After a few more moments of silence, the parents both said:  "look, we're adults.  This is our problem.  We need to work it out between ourselves.  Our children have nothing to do with this.  Get out of our house, now! "


As she walked towards the door, the woman from Brussels turned to us children and said: "You wicked little things.  Unless you are subject to strict supervision, your parents will never trust each other again, and it's all your fault!"


Editor's note:   if you don't get the point of my little story, please read this expert commentary by Mr Jeppesen:
"...the E.U. Data Protection Regulation (DPR) was first proposed in 2012. Unfortunately, government surveillance issues cannot be solved by this legislation....
it would not regulate E.U. Member States' national security intelligence programs, nor would it address the surveillance programs of the United States. The European Parliament and the European Commission simply do not have the authority to address national security matters... The only path forward for true reform around global surveillance practices is a much harder slog. It will require a joint European-U.S. effort to find agreement on proper legal standards and safeguards."


Thursday, October 24, 2013

Jeff Koons' Private Parts


I was invited to a fancy charity dinner in Paris, and was treated to a delicious feast of suave irony.  It's not every day that I sip Dom Perignon with Jeff Koons and Laurent Fabius, paid for by a tax-exempt charity. The conversation went something like this:

Jeff:  I love France, I love Versailles.   They just did a show of my work.  For centuries, people with wealth and power have bought the world's best art to show the world their excellent taste.

Laurent:  We're so happy to invite our American friends to France.  I come from a long family tradition of art dealers. In France, we support culture.  
  
Silly rich person at our table:  Jeff, which artist had the most influence on you?

Jeff:  My favorite artist has always been Monet, or Manet, I mean Monet.  

Me:  I start howling with laughter.  I am kicked in the shin by my partner. 

Silly rich person at our table:  I adore la France.  My entire house in Dallas is decorated in French style.  And Peter, what do you do, she asks, feigning interest.

Me:  I work in privacy, and I'm bemused by Jeff's soft-porn art and the idea of an artist exposing his erection as a statement about what's private and what's public. 

Laurent:  Apologies, dear American friends, I must leave you now to speak with Assad.  So vulgar, but his wife is charming. 

Jeff:  Apologies, too, I have to catch a flight with Francois to Venice tomorrow, he says, with an aw-shucks tone and a million-dollar smile that had all of us swooning.  

Silly rich person at our table:  I just loved them both!  So down-to-earth!  but, Peter, I think your comment about his nude art made him uncomfortable.  Did he really show his private parts in his art?  I'd like to see that.  

Tuesday, October 22, 2013

Two farmers and a donkey


Two farmers owned fields that lay side by side.  They didn't like each other, and they never had.  But fate had put their fields next to each other.  Farming is a tough life, and neither made much money.  So, the two farmers agreed, with heavy hearts, to buy a donkey jointly, and to share it to till their fields. 

For a while it worked, but as the spring wore on and the days started getting hotter, both farmers wanted to till their fields in the early morning, when it was cooler.  

The donkey stood in the middle, on the line between the two fields, while each farmer tugged as hard as he could, trying to pull the donkey in his direction. The donkey didn't move.  He couldn't.  He was being pulled in two opposite directions by farmers of equal strength.  After several minutes of excruciating pain, the ropes around the donkey's neck, pulled in opposite directions, choked the donkey, and he fell to the ground with a dull thud. 

The farmers glared at each other for a few minutes.  Then they grinned, shook hands, and agreed that it was a damn dumb donkey not to follow their commands.  

Oh, and except for the damn dumb donkey, everyone grinned and applauded this.  

Sunday, October 20, 2013

Dear Diary


Dear Diary,

You're the only one I can talk to.  You're the only place where I can share my secret fears.  I feel safe, because I know that no one else will ever read what I write here.  

Even now, after all these years, I don't feel safe as a gay man.  I know there are a lot of people who hate me for that.  I feel sick to my stomach when I read how another young gay man was murdered:  They broke Mr Zamudio's leg with a heavy stone, beat him up with bottles and carved swastikas into his body with broken glass before walking away.

I am very proud to spend my working life in the field of privacy.  I believe that it's the foundation of human dignity, and I hope that I can contribute something to it.  But in a dark mood, I realize that I can no more hold back the tides of technology than an oyster can stop the tide.  

I know that secret algorithms roam the Internet, analyzing, recording, and data-mining every piece of data that they find, billion by billion.  But I assume they won't read this blog, because it's just my blog and it's not very important, except to me.  And even if they do read this blog, I assume it's just to show me an ad, which isn't a big deal.  I mean, they wouldn't create a psychographic profile of me, would they, to use to decide whether or not to hire or fire me?  I mean, I'm not a public figure, like a politician, so why would they create a profile of me?

I had a funny dream yesterday, that I went to dental school to start a new career.  In my dream, I realized that no one would ever thank you for your work in privacy, because it was always a losing fight, so I thought I'd look for a career where you could help people.  Well, that's something I could only tell you, dear diary, since I wouldn't want anyone else to know that I'm nagged by doubts.  This facade is getting exhausting, like pretending to be straight when you're not.  I'm willing to fight the good fight, but I know that I'll lose, in the end.  Well, dear diary, at least I can confide in you, and I feel better already, since I know you'll keep my secrets.  

Friday, October 18, 2013

Lovely, lovely, let's not change a thing



While I was on St Bart's, a lovely French island where plutocrats play, I had a chance to chat with the image-savvy CEO of a major tech company based in California (not Google). We were talking about privacy in Europe, and she said:  "yeah, I know, Europeans think different, Nazis and stuff".  Then she realized I was not an important person, and turned away to talk to someone else. 

Indeed, stuff... She's right, of course, on a basic level, that privacy expectations reflect each country's culture, history and ideology.    

But the Nazis and stuff don't quite explain Europe.  Take France, and its "stuff".  I love France.  I love the country, the people, the culture, the language.  I do not love its government.  I think France is poorly governed by an entrenched "political class" and run by an army of grumpy functionaries and enslaved to a socialist ideology stuck in a 1970's rut.  And lots of people think that it will be run by the far-right Front National in a few years, as mainstream voters get sick of their "mainstream" parties and Socialist taxocrats.  

France is a deeply conservative society, in the sense that it does not like change.  This country is deeply uncomfortable with globalization, and even with capitalism, based on a widespread pessimism that France's best days are behind it.  Innovation is not popular in a country that thinks it's more likely to lose from the change that innovation brings.  The only innovation that is popular in France is inventing new taxes (a new global financial-transactions tax? a new "data" tax? the highest marginal income taxes in the world?).  

Paris was once more welcoming to foreign businesses.  A recent article in The Economist struck a lucid and painful blow to French self-esteem: it pointed out that Paris was Morgan Stanley's first international office, a decade before London!  Can you remember the 1970's and 1980's, when American technology giants like IBM and Microsoft chose Paris as their European headquarters?  The entire new generation of American tech companies has chosen London or Dublin or Luxembourg or Zurich for their European headquarters. I can't think of a single American company that has chosen Paris for its European headquarters in the last two decades. Understandably, this is all hard for Paris to swallow.  
 
Against this background, it's easier to understand why the French government is campaigning to weaken the European Commission's proposal to institute a one-stop shop in Europe.  Most US companies would find their lead regulators in Dublin or London or Luxembourg.  As far as I know, not a single foreign company would have its "main establishment" in Paris.  

Looking at the increasingly barren business landscape in Paris, I'm reminded of Voltaire's advice:  "Il faut cultiver notre jardin" ("we must cultivate our garden").  I'm often amazed that anything grows here at all, like a pretty flower in a dry, hostile desert.