Thursday, December 6, 2012

My Italian Appeal

My Google colleagues David Drummond, George De Los Reyes (now retired) and I were convicted in Milan, Italy in 2010 for violating Italian privacy law.  We have appealed these convictions.  The first appellate hearing took place in Milan on December 4.  I attended the hearing in person.  The next hearing will take place on December 11.  I want to describe this appeal, and the broader issues at stake in it, from my personal perspective.

First, a review of the facts:  in 2006, students at a school in Turin, Italy filmed and then later uploaded a video to Google Video that showed them bullying an autistic schoolmate.  Google Video was a predecessor to YouTube.  The video was totally reprehensible and violated Google Video’s terms and conditions of service.  Google took it down within hours of being notified by the Italian police of the presence of the offensive video, consistent with its policy to remove any content that violates the terms and conditions of service. Indeed, Google had clear policies and processes in place to help ensure that objectionable content was dealt with swiftly and effectively. Google also worked with the local police to help identify the person responsible for uploading it and she was subsequently sentenced to 10 months' community service by a court in Turin. Several other classmates who were involved, as well as the teacher who failed to stop the offensive conduct, were also disciplined.

In these rare but unpleasant cases, that's where Google’s involvement would normally end.  Under European law, hosting platforms that do not create content, such as Google Video, YouTube, Bebo, Facebook, and even university bulletin boards, are not legally responsible for the content that others upload onto these sites. But in this instance, a public prosecutor in Milan decided to charge us with criminal defamation and a failure to comply with the Italian privacy code.  None of us, however, had anything to do with this video. We did not appear in it, film it, upload it or review it. None of us knew the people involved or were even aware of the video's existence until after it was removed.  

Nevertheless, in 2010 a judge in Milan convicted the three of us of failure to comply with the Italian privacy code and sentenced us to six-month suspended jail sentences.  We were all found not guilty of criminal defamation. This ruling means that employees of hosting platforms like us can be held criminally responsible for content that users upload, even if we’re completely unaware of the content. We are now appealing this extraordinary decision, both to clear our names and because it represents a serious misunderstanding of privacy law online and a threat to freedom on the web.  European Union law shields hosting providers from liability, so long as they remove illegal content once notified of its existence, precisely to protect hosting providers and their employees in circumstances like this.  Sweeping aside this important principle and attacking the very freedoms on which the internet is built threatens the continued availability of sites that accept user-generated content.

Although we were convicted of violating the student’s privacy, it was the bullies who filmed the video and put it up on the site, in violation of the representations they made to Google regarding its content.  It is those bullies who should be, and have already been, held legally responsible for failing to comply with their obligations under the privacy law.

The European Union's Electronic Commerce Directive, enacted in 2000, sets a clear legal framework for establishing liability for unlawful content on the Internet. It prevents liability for those who merely provide the forum for sharing user generated videos, drawing a clear line between those who develop and control content for the Internet, and those who, in their capacity as technological intermediaries, provide the means and the tools to make this content publicly available.

By establishing legal certainty and creating a single EU-wide standard, the E-Commerce directive allows the development of open platforms that promote free expression on an unprecedented scale and has played a crucial role in speeding the rapid growth of the Internet and the development of the new economy in Europe.

How does the E-Commerce Directive’s framework work in real life? Say an Internet user uploads a video filled with illegal hate speech or violence. When notified of this illegal content, the hosting platform is obliged to take it down. The hosting platform, however, is not obliged to monitor uploads or prevent them in advance. The guilty party is the Internet user who posts the content. In this case, Google did exactly what the E-Commerce Directive requires: it removed the content upon notification, and took the further step of cooperating with law enforcement requests, helping to bring the wrongdoers to justice.

If Google and companies like it were responsible for every piece of content on the web, the Internet as we know it today – and all of the economic and social benefits it provides – could not continue.  Without appropriate protections, no company or its employees would be immune: any potentially defamatory text, inappropriate image, bullying message or video in which third parties appear could shut down the platform that had unknowingly hosted it.

Google and other Internet hosting platforms require legal certainty with respect to their liability. By retroactively creating new obligations for hosting platforms – and attaching criminal penalties for employees like us – this conviction destabilizes the certainty of law.

The judgment also criticizes Google’s terms and conditions of service included in its agreements with users of its video sharing service, suggesting that Google buried them in difficult-to-understand privacy clauses characterized as a “prefabricated alibi.” Yet all types of businesses, from financial and retail to Internet companies, operate with consumers on the basis of similar contractual terms of service.

The judgment’s reasoning subjects hosting providers and their employees to uncertain and progressively higher standards as technology advances. What new legal obligations might be imposed in the next case before a criminal court? It is this uncertainty which menaces Internet freedom. In his closing lines, the judge himself raises this dangerous possibility: “There is no doubt that the amazing speed with which technology is advancing will allow the managers of web sites to control the uploading of content,” he writes. “The existence of increasingly sophisticated pre-screening filters will imply great responsibility for operators. Criminal liability (negligent or willful as the case may be) for omitting to carry out checks will be a lot easier to find.” While this may have been the view of the trial judge, it was not the view of the Italian Parliament when it implemented the EU directives providing protection for hosting intermediaries like Google. We do not share the judge’s view of a future internet where hosting companies monitor and prescreen all of the content uploaded by their users and unilaterally determine what will be available for sharing with others.

By criminally prosecuting individuals like us who were not connected to the video at issue, this case represents a dangerous precedent. To seek criminal penalties against employees just because they work for a company that provided a hosting platform is a chilling prospect, and threatens to have a substantial impact on the future development of the Internet.

The real culprits, the teenagers who bullied their classmate and uploaded the video of it, and the teacher who permitted it to occur have already been identified and punished. 
The entire matter should end there.  

Monday, November 26, 2012

Should you cover your tracks from government snooping?

Most of us store a lot of stuff in the cloud.  For example, most of us keep lots of old emails in the cloud, since storage is free, they're easily searchable, and it's always possible that those old emails could come in handy some day.  In fact, there are a lot of practical reasons to keep stuff like old emails forever.  Yet it's worth taking a moment to consider the risk that governments can access data that you choose to keep. 

Governments are in a unique category, since they can simply pass laws to give themselves the rights to access data.  Some of these laws are wildly out of date, and simply no longer fit for purpose, in particular the US law from 1986, called the Electronic Communications Privacy Act.  For some years now, there have been many calls to Congress to update these laws.  Perhaps the Petraeus scandal will give this movement new impetus, since the privacy debate usually advances only when abstract privacy concepts are given a human face and a story that people can empathize with.  

As a normal email user, I think it's fair to ask whether there's any reasonable risk that a government would be interested in accessing my emails.  After all, most of us are not the Director of the CIA or cybercriminals.  As a matter of civil liberties, it's important for everyone to have some sense of the balance between privacy and surveillance that the government has chosen.  As a user, I want to know which governments are accessing data, and how often.  I know that published metrics will be imperfect, but I want more transparency, so that I can make my own decisions, as a user and as a citizen.  

Seen from a global perspective, it's important to realize that most governments around the world are accessing user data.  It's not just one or two governments.  I can't count the number of times privacy advocates in Europe have warned users that the US government could potentially access their data in the cloud, without mentioning the risk that their own governments could do the same thing.  In fact, to take the French example, the French government is trying to launch a "French cloud", explicitly to try to evade US government surveillance, even though this taxpayer-funded initiative is based on "bad assumptions about cloud computing and the Patriot Act", and even though France's own anti-terrorism law has been said to make the Patriot Act look "namby-pamby" by comparison, as reported on ZDNet.  I think it's fair to assume that most people would be far more uncomfortable with foreign governments, rather than their own, accessing their data.  That points to one of the hardest issues in the cloud: multiple governments can (and do) have the power to demand access to user data, if they follow appropriate legal procedures. 

In light of all this, I believe that it's an ethical imperative for companies that are entrusted with user data to publish statistics on governments' requests for access to user data.  A number of web companies are now doing so, following Google, which started this trend of reporting on governments' requests for user data.  I strongly encourage you to take a look at those statistics, which may challenge some of your long-held intuitions about which governments are most active in trying to access user data.  Other companies, such as Twitter, have also started publishing statistics.  But most companies are still not publishing any such statistics.  

A lot of companies are failing their users now.  The Electronic Frontier Foundation ranked companies on the question "When the government comes knocking, who has your back?"  There are a lot of big names on that list doing very little to give their users transparency.  

In the meantime, as users, we all have to decide if we want to keep thousands of old emails in our inboxes in the cloud.  It's free and convenient to keep them.  Statistics published by some companies seem to confirm that the risks of governments seeking access to our data are extremely remote for "normal people".  But the laws, like ECPA, that are meant to protect the privacy of our old emails are obsolete and full of holes.  The choice is yours:  keep or delete.  I'm a pragmatist, and I'm not paranoid, but personally, I've gotten in the habit of deleting almost all my daily emails, except for those that I'd want to keep for the future.  Like the rule at my tennis club:  sweep the clay after you play. 

Wednesday, November 14, 2012

Book Burning, updated for the Digital Age

We're so much more enlightened than prior Book Burning Generations, aren't we?  Book burning has a long and inglorious history.  History also teaches us that the book burners usually end up getting burned themselves.  

Think of Savonarola in 1497, in the famous Bonfire of the Vanities, burning books and objects that were deemed temptations to sin.  Two years later, Savonarola was himself burned at the stake.

Think of the Nazis in 1933, burning "un-German" books.  Twelve years later, they left Germany burning, along with much of Europe.  

Book burning has been with us in every age.  Books were burned to protect the faith, or to protect the nation, or to protect the regime.  Now, in order to protect "privacy", Europe is creating a poorly-defined, poorly-conceived "Right to be Forgotten", on which I've blogged before.  Are we re-igniting the long tradition of book burning?   

In the digital age, we don't burn physical books.  Instead, we delete data.  

The Right to be Forgotten is more pernicious than book burning.  It attempts to give individuals the legal right to obliterate unpalatable elements of their personal data published in third-party sources, whether social networking sites, newspapers, books, or online archives.  In the real world, these can be things like a report on a politician taking a bribe.  Or a doctor put on trial for medical malpractice.  Or a person filing for bankruptcy.  You can easily see how the person concerned could have an interest in obliterating any reference to these embarrassing facts, while other people might have a very legitimate interest in knowing about them. 

Historically, book burning was usually a symbolic, political protest act.  No one burning books was under the illusion of destroying the text of a book being burned.  Only the physical copy of the text was being burned.  The text would survive elsewhere.  But the Right to be Forgotten is attempting to obliterate the text, the source, the facts themselves, and not merely some copy of those facts circulating in a physical book or newspaper or online site.  

Deleting data in the name of the "right to be forgotten" is only the tip of the privacy-ideology iceberg.  One of the core tenets of this ideology is that all personal data should be deleted as soon as it is "no longer necessary".  This ideology is based on the fear that any personal data could be mis-used to invade someone's privacy, and that the risk of an invasion of privacy should automatically outweigh any potential future benefits of retaining the data.  It is a deeply pessimistic ideology: retaining data can give rise to future risks and to future benefits, but since we don't yet know what they are, it concludes we should default to deleting the data to prevent the risks, rather than retaining it to enable the benefits.  

As Savonarola might say, in an outburst of data deletion demagoguery, let's burn all those "vanities", those databases of personal data, which are nothing but temptations to sin against someone's privacy.  But the opposite may prove true: these vanities may be databases of great value and beauty, and we may someday learn it would be a sin to obliterate them.  Botticelli is believed to have burned some of his paintings when he was caught up in Savonarola fever.  A few years later, Botticelli renounced Savonarola's worldview.  

I can understand that databases should be protected, secured, and analyzed responsibly. But obliterated, just because something could go wrong?  If we took that approach in the rest of our lives, what would be left?  How bizarre that this destructive, pessimistic philosophy of data deletion has become conventional wisdom, at least in Europe.  Well, for now.  In the long run, book burning has never been a winning strategy.  If you think our age is more enlightened than prior ages of book burners, why would burning books in the name of privacy be any more legitimate than burning books in the name of race, religion, or regime?

Monday, November 5, 2012

The Marketplace of Privacy Compliance Programs

The data protection establishment, worldwide, has been inventing a lot of new privacy compliance programs.  All these different, well-intentioned initiatives are meant to serve the same purpose:  improve privacy protections.  All of them are, or likely will soon be, mandatory for most big companies.  I can hardly keep track of all the different initiatives, but here are the ones I have struggled to understand:

  • Accountability
  • Privacy by Design
  • Privacy Impact Assessments
  • Consent Decrees
  • Audits (internal and external)
  • Regulatory reviews
  • Data Processing Documentation
  • Database notifications/registrations
  • Binding Corporate Rules
  • Safe Harbor Compliance programs
Lots of my acquaintances in the privacy field have asked me what I think about all this:   Are these programs meant to run independently, even if they overlap and cover the same ground?  Does anyone have a clue how much all this will cost?   Where do you turn for help to implement these programs?  Can one solid privacy compliance program be implemented to meet all of these goals?  Clearly, all of us privacy professionals are struggling to understand this. 

I'm sure we all believe that privacy programs need a solid compliance-program foundation to be effective.  Most of us also probably believe that different actors should have the freedom to develop programs that fit their cultures.  Nimble Internet companies have very different cultures than government bureaucracies, so naturally, these different cultural worlds must have the freedom to design programs that work in their respective contexts.  Clearly, one size does not fit all.  Programs have to be customized for the size and sensitivity of the processing.  A government database of child-abuse records is more sensitive than a database of some web site's analytics logs, so it's wrong to try to run the same compliance programs for both. 

On cost:  despite all the good intentions motivating these compliance initiatives, no one has even begun to figure out what all these compliance programs are going to cost.  Take Europe as an example:  I've read statements from politicians that future EU privacy laws will reduce businesses' compliance costs.  That is simply not credible.  On the one hand, under the new rules, businesses in Europe will save a little money, once they no longer have to fill out national database notification forms across Europe.  In the scheme of things, that is peanuts.  On the other hand, imposing new compliance obligations (mandatory privacy impact assessments, mandatory data protection officers, mandatory security breach notifications, mandatory data processing documentation) will cost a lot.  The problem is that nobody knows how much.  I'm working on the educated guess that the current EU privacy compliance proposals will increase privacy compliance costs on businesses in Europe ten-fold, starting around 2015.  Yes, ten-fold.  That excludes the costs of fines and sanctions for non-compliance, now proposed to run up to some percentage of a company's worldwide turnover.  This massive increase in compliance costs is largely the result of the proposed EU sanctions for failing to adequately document compliance programs.  I'm still hopeful that more realistic compliance obligations will be created for Small and Medium-sized Enterprises, but the big trend is clearly towards costly new compliance obligations in Europe.  

I get the feeling that the many people debating privacy laws have no idea (and perhaps don't care) how much all this ends up costing.  I also haven't read any classic regulatory cost/benefit analysis on these new obligations.  As a lawyer trained at Harvard in the cost/benefit analysis of government regulations, I am surprised to see that there's been essentially zero academic or economic analysis to decide which privacy compliance rules are effective and which are pointless red tape.    

As of this writing, I really don't know how all the compliance initiatives above are supposed to fit together.  I don't know which are superfluous.  All this has yet to be worked out.  While each of the programs above overlaps with the others in some ways, each is also slightly different.  We've got to figure out how to minimize duplication among these programs, or we're all going to waste our time and money re-inventing the wheel.  

Privacy compliance initiatives today remind me of the early days of the railroad, when each railroad line had its own track width, meaning trains could only run on their own line's track.  Eventually, all this will get sorted out, just as track widths were eventually standardized, but in the meantime, I fear we're all going to be running around in circles.  Like the early days of the railroad, we're still in the early, experimental, inefficient, non-standardized, frontier age of duplicative privacy compliance programs.   

Friday, November 2, 2012

Greece: protecting freedom of expression

You may have read about the widely-reported case of the Greek journalist who published a list of 2000 Greeks with Swiss bank accounts.  The journalist was put on trial for criminal breach of data protection rules.  Thankfully, the courts recognized that he published the names in the public interest.  Indeed, the case confirmed the world's strong suspicions that the Greek political and financial elites were protecting themselves from investigations into tax evasion.  Rather than examining why the Greek tax authorities had failed to investigate this list of 2000 names, given to them two years earlier by the IMF, the authorities put the journalist on trial.  This was a transparent attempt to use the criminal justice system, and "data protection", to chill this (and other) journalists' attempts to expose tax evasion and political connivance. 

Thankfully, the Greek court dismissed the charges of data protection crimes against the journalist.  

As a privacy lawyer, I note a few things.  Data protection laws in Europe explicitly foresee exemptions from normal privacy rules for journalistic purposes, where "necessary to reconcile the right to privacy with the rules governing freedom of expression", and for processing in the "substantial public interest" (Articles 9 and 8 of the Directive).  Surely, this Greek example meets both tests, and the court was quick to reach that result.

Nonetheless, I'm very worried about the increasing criminalization of privacy laws, especially across Southern Europe.  Once privacy laws are inscribed into penal codes, they open the door to prosecutors and criminal judges pursuing such cases with the blunt machinery of criminal justice, backed up with threats of jail.  Many such cases, like this Greek example, are nuanced cases balancing fundamental human rights, like privacy and freedom of expression.  Nothing is more dangerous to freedom of expression than using vague notions of "privacy" to threaten journalists, or newspapers, or Internet platforms, or employees of Internet platforms, with jail time, when they are exercising their rights to freedom of expression or operating a platform for others to do so.  There are now hundreds of such cases around the world.  

Luckily, the Greek justice system was quick, and resolved this case in days.  But many criminal justice systems are notoriously slow.  As reported in The Economist, to take the example of Italy:  "Italian justice has a reputation for moving very slowly."  My own Italian privacy criminal trial has been dragging on for years, and is expected to begin the appeals phase soon, on December 4, almost five years after I was first "detained" by Italian police in Milan.  Five years is a long time to put someone through criminal justice hell, in a landmark case trying to make me vicariously liable for user-generated content uploaded to an Internet video platform.  

Congratulations, Costas Vaxevanis, I respect your courage. Powerful forces try to use criminal privacy statutes to restrict freedom of expression.  Thank you for standing up to them. 

Monday, October 29, 2012

Singapore passes a modern privacy law. Cheers!

Singapore is the latest in the long list of countries that have recently passed privacy laws.  It's joining other Asian countries, like Malaysia and The Philippines, in this year's crop of countries with new privacy laws.

This is in part a tribute to Europe, where modern privacy laws were invented in the 1960's.  Well, they were modern then.  Now, they're pretty much out of date.  There was a school of thought in Europe that data protection laws are a perfect expression of fundamental human rights, a beacon to all mankind, like the Venus de Milo, to be admired and copied by all humanity. 

Singapore has passed a modern privacy law.  Europe, by contrast, is trying to modernize its old privacy law.  Europe should try to learn a lesson from Singapore's new law. 

To me, there is a simple test of whether a privacy law is modern:  how it handles the issue of "international transfers of data".   Indeed, if you asked me to pick the one notion of existing European privacy laws that is most in need of modernization, I'd pick this one:  Europe's restrictions on international transfers.  Bizarrely, in the long list of things that Europe is now proposing to "modernize", the need to create a more rational framework for international transfers is not on the list.  Singapore got this right.  Singapore's new law simply says that a company that transfers data outside Singapore is responsible for ensuring that it continues to respect the provisions of the privacy law.  Simple.  Effective.  Obvious.  

Or to quote Singapore's government minister's speech, when the law was passed:  "We are not adopting a prescriptive approach of restricting transfers of personal data to countries that have an adequate level of data protection. Instead, the Bill adopts a “principle-based” approach, where the onus will be on the organisation in Singapore to put in place measures, such as contractual arrangements, to ensure a comparable standard of protection is accorded to personal data transferred overseas. Therefore, there is no need to further burden organisations with disclosing to consumers where copies of their personal data will be transferred to."

Singapore's approach is a direct repudiation of the European approach, which makes it very hard to transfer data internationally.  The European framework has frankly become bizarre, with an entire legal industry contorting itself in gymnastics for international transfers:  
  • First, European laws declare transfers to be ok to other European countries and to countries deemed to have "adequate" privacy laws, but the list of "adequate" countries is a list of countries which make strange bedfellows, including mostly tax havens (yes, Monaco and Guernsey) and a few (I mean, literally, a few) others, ranging from Argentina and Uruguay to Israel and Canada.  
  • Second, there are a few hypothetical legal mechanisms to enable data to be transferred from Europe to other countries around the world.  Data can flow to the US if the company transferring it signs up to the US-EU Safe Harbor Framework.  Data can also flow around the world if the company transferring it signs up to so-called Binding Corporate Rules, and if these Binding Corporate Rules are approved by Data Protection Authorities.  The plain fact is that only a tiny handful of Binding Corporate Rules have ever actually made it through the bureaucratic approvals process; to my knowledge, not a single Internet company has ever had Binding Corporate Rules approved.  So, practically, the option of obtaining Binding Corporate Rules is theoretical. 
  • Third, people can consent to having their data transferred internationally, although there's no consensus on what "consent" means or requires in practice, and no one even knows what a "transfer" is. 
Since we all know that data is being transferred internationally, every day, billions of times per day, by everyone on the Internet, does that mean that every company, every government, every individual is "illegally" transferring data in Europe today?  Does that also mean that EU privacy laws are hopelessly out of date on this issue?  Well, yes.  And hardly a day goes by without yet another taxpayer-funded study by government authorities on the Cloud, sternly admonishing customers to comply with EU privacy laws on international transfers and to list the locations where data is processed, while at the same time acknowledging that there is no pragmatic, real-world solution to the archaic, stuck-in-the-muck rules from the 80's on international transfers.  

Europe is attempting to modernize its privacy laws now.  It's proposing a number of sensible ways to modernize the laws.  But, missing an important opportunity, it is doing essentially nothing to try to modernize the single most important piece, namely, simplifying the rules around international data transfers.  Why not just get rid entirely of the reality-divorced restrictions on international data transfers, as most countries around the world have already done?  Whoever collects and transfers data should remain responsible for it, regardless of where the data is processed.  Period.  It's so simple.  

Singapore just passed a modern law, with a sensible provision on international transfers.  A modern law will help build a modern industry and create jobs.  If Singapore can do it, Europe can too.  Or if not, the rest of the world will just move on and build the future without us.  At least, the world will retain a deep affection for the historical treasures of Old Europe, like affluent Singaporean tourists snapping photos of the Venus de Milo in the Louvre, while our diminished-generation of children wait outside and hope to sell them a sandwich.  

Friday, October 26, 2012

Privacy-litigation: get ready for an avalanche in Europe

The US has long been a litigious country.  What's true in general in the US is also true for privacy.  The US has a vibrant privacy litigation industry, led by privacy class actions.  Within hours of any newspaper headline (accurate or not) alleging any sort of privacy mistake, a race begins among privacy class action lawyers to find a plaintiff and file a class action.  Most of these class actions are soon dismissed, or settled as nuisance suits, because most of them fail to demonstrate any "harm" from the alleged privacy breach.  But a small percentage of privacy class actions do result in large transfers of money, first and foremost to the class action lawyers themselves, which is enough to keep the wheels of the litigation machine turning.  

Europe, by comparison, is not nearly as litigious as the US.  What's true in general in Europe, is also true for privacy.  In Europe, privacy is mostly handled as a regulatory matter, by Data Protection Authorities, who have the power to investigate complaints, launch enforcement actions and impose sanctions for breaches.  

In theory, any DPA enforcement action or sanction can be appealed to national courts.  In practice, this is rarely done.  Why?  Because European DPA sanctions tend to be very small.  Rationally, would you hire an expensive law firm to appeal a DPA enforcement action resulting in a 100,000 euro fine, if you knew that your outside counsel costs for the appeal alone would exceed that amount?  Even if you knew you'd win, you probably wouldn't appeal, as a purely rational matter. 

One of the unfortunate consequences of the current European DPA enforcement/sanctions model is that very few of its decisions are tested or validated by the courts.  If more of these cases were appealed to the courts, I am absolutely certain that many of them would be overturned as a matter of law.  So, Europe is building up a body of regulatory "case law" that has never really had the discipline of judicial review, as we'd understand that concept in the US.  

Starting around 2015, when the new EU Privacy Regulation comes into effect, all this will change.  The new laws are almost certain to introduce vast new sanctions and fining levels for privacy breaches, expressed as a percentage (say 2%) of a company's global turnover.  Yes, you read that correctly.  Compare today, when the largest fine the CNIL has ever imposed was 100,000 euros, to this near future, when fines could in theory run to many millions.  You can do the math.  
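To make the stakes concrete, here is a back-of-the-envelope sketch of that math.  The 2% rate is the figure mentioned above from the draft Regulation; the turnover figures below are hypothetical, chosen only for illustration.

```python
# Illustrative only: a fine ceiling expressed as a share of global turnover.
# The 2% rate follows the draft Regulation discussed above; the example
# turnover figures are invented.
def max_fine(global_turnover_eur: float, rate: float = 0.02) -> float:
    """Return the maximum fine for a given annual global turnover."""
    return global_turnover_eur * rate

# A mid-size firm, a large firm, and a tech giant (hypothetical numbers):
for turnover in (50e6, 1e9, 40e9):
    print(f"turnover {turnover:>16,.0f} EUR -> max fine {max_fine(turnover):>14,.0f} EUR")
```

Even the smallest of these hypothetical firms faces a potential fine an order of magnitude larger than the biggest sanction the CNIL has ever imposed.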

Once there is real money at stake, everything changes.  Companies that today shrug their shoulders and pay small fines, rather than be bothered to hire lawyers and launch long legal processes, in the future will be confronted with the risk of massive fines.  Facing massive fines, companies will be required to hire expensive lawyers, launch intense legal battles, and generally handle privacy breach litigation with the full battery of legal process and tools.  Companies already do this in many other areas of law, so extending such practices to privacy law will not be hard.  

DPAs, on the other hand, are completely unprepared for this near-term future.  Many DPAs today practice "prosecution by press release", which is not meant to withstand legal process, but rather to generate some press and reputational impact.  They are not staffed to launch serious legal actions, with a solid basis in law and a solid respect for legal process, in a way that would withstand tough legal scrutiny and the judicial appeals process.  It's one thing to launch an enforcement action where the money at stake is 100,000 euros.  It's entirely different when the money at stake is 100,000,000 euros.  

In this post, I'm not commenting on whether creating large sanctions for privacy breaches in Europe makes sense or not.  I'm just saying that the entire legal/procedural game changes when there's lots of money at stake.  Privacy litigation will become an outside counsel growth area in Europe.  Companies will handle privacy in Europe increasingly as a litigation matter, rather than a regulatory matter.  And DPAs are going to have to figure out how to stand up to defendants' legal heavy artillery, something few of them have ever faced.  

Privacy litigation is already a big business in the US.  In a couple years, privacy litigation will go big time in Europe too, once big money is at stake.  Finally, we've found a growth industry in slow-growth-Europe.  

Thursday, October 25, 2012

Microsoft's brilliant master class on how to change a privacy policy

Privacy professionals are often asked how to change or update a Privacy Policy.  There are really just two basic choices:  openly or quietly.  

Naturally, I was professionally curious to see how Microsoft went about changing its privacy policy recently.  It was particularly interesting, because Microsoft made changes that were very similar to those Google made to its own privacy policy in March.  It's interesting when two large companies make very similar changes to their privacy policies at the same time, but announce them in very different ways.  

Microsoft made its changes in legalistic language in something called the Microsoft Services Agreement.  

When Google announced its changes, Microsoft launched a worldwide PR campaign to discredit Google.  So, it is striking that Microsoft quietly made similar changes to its privacy policies that it so loudly criticized Google for making.  After Microsoft took out full-page newspaper ads to criticize Google for its changes, did Microsoft take out similar full-page ads to inform its users of the changes Microsoft was making?  Nope.  And "almost no one noticed" Microsoft's changes, as The New York Times reported.  

If the goal was to make changes in their privacy rules that "almost no one noticed", Microsoft was brilliant.  

I can guess what lessons most privacy professionals will draw from this master class.  When the time comes for privacy professionals to update their own privacy policies, they now have two models to compare.  The open and transparent path led to worldwide advocacy tirades and intense regulatory scrutiny.  The other path, well, Microsoft brilliantly blazed a trail so that "almost no one noticed".  Which path do you think privacy professionals will pick in the future?  Which path do you think is good for privacy?

Sadly, we all know the answer.   

Tuesday, October 23, 2012

Privacy Professionals peregrinate to Punta

Today in Punta del Este, Uruguay, is the annual conference of the world's data protection commissioners.  It also brings together a large number of people in their orbit, like privacy advocates, practitioners and lobbyists.  These are annual conferences, usually held in Europe, but occasionally in other countries around the world, as a "reward" for adopting European-style privacy laws.  Uruguay has just adopted euro-style-privacy laws, so it's the host this year.  In previous years, other countries that had recently adopted euro-style privacy laws, namely Mexico and Israel, were hosts.  Countries that have not adopted euro-style privacy laws, like the USA or Japan, are not deemed eligible to be hosts.  In fact, until recently, the US Federal Trade Commission wasn't even allowed to vote in the commissioners' meetings, but was only allowed to attend in a sort of second-class "observer" status.  Finally, two years ago, the FTC was admitted as a member of the commissioners' club.  

I have nothing against privacy confabulations.  There are always a lot of interesting things to talk about in the world of privacy.  Of course, all this talk could easily be conducted virtually, using simple Internet technologies, essentially for free.  I won't be going to Punta, but I wonder if the Microsoft speaker, who will keynote there, will explain why they changed their privacy rules, as The New York Times reported, in a way that "almost no one noticed".  Or if he'll talk about how they use an army of privacy lobbying proxies, including former privacy regulators, as The Economist reported.  

I'm sure the conference will provide taxpayer-value-for-money, going by the pictures of the beach and the 5-star hotel in Punta on the Conference website. Flying half-way around the world to hear a Microsoft lecturer on privacy...priceless!  

Monday, October 8, 2012

Why are privacy conferences so boring?

There's an entire, vibrant privacy conference business.  There are privacy conferences somewhere in the world every week of the year.  Some are commercial, some are taxpayer-funded.  Why are they so boring?

Because they take one of the most interesting topics in the world, privacy, and discuss and debate it from an insular perspective, namely, from the perspective of people who are in the privacy "industry."  I'm very clearly part of this "industry" too.

The privacy "industry", or "privacy industrial complex", as some wags have dubbed it, consists of privacy professionals at companies, privacy advocates, privacy regulators, privacy consultants, etc.  So, conference sessions tend to consist of banal statements about who's more committed to privacy, opening with stentorian declarations like "privacy is a fundamental human right, therefore...".  Or they consist of a "debate" between two privacy advocates, which is like listening to two members of the National Rifle Association debate the social benefits of gun control.  Or they consist of paid corporate advocates trashing their competitors' privacy records, often without disclosing who is paying them to do so.

The interesting privacy debates, in my opinion, are the debates where privacy is balanced against other fundamental human rights, like freedom of speech, or balanced against other social goals, like encouraging innovation, or tested against other yardsticks, like regulatory cost-benefit analysis.  But very little of that occurs at privacy conferences, because virtually no one from outside the privacy "industry" speaks at such events.  For example, rather than hearing privacy-people talk endlessly about the need for more privacy regulation, I'd like to hear from an economist evaluating whether such regulations are effective, or whether their costs exceed their benefits.  Rather than hearing privacy-people talk about the need to create a "right to be forgotten", I'd rather hear from a free speech advocate on how such a right would undermine freedom of expression.  Rather than hear privacy-people talk about how technology needs to be reined in, and subjected to bureaucratic prior approval (in other words, slowed down), I'd rather hear from people who are committed to building modern and dynamic economies about how (archaic) privacy laws are hampering the creation of innovation-based economies.

But privacy conferences have largely become like any other conclaves of groupthink.  At a Vatican conclave, you don't get a serious discussion about the health benefits of promoting the use of condoms.  At a Tea Party rally, you don't get a serious discussion about whether government welfare benefits are a guarantor of minimal human decency.

I have pretty much stopped going to most privacy conferences, at least for now.  When I go, it's mostly to have one-on-one chats with people I'd like to meet or catch up with.  I think privacy is the most interesting topic in the world.  But groupthink gatherings don't move the debate forward.  If I were at an NRA meeting, I'd advocate for gun control to help reduce the shockingly high murder rates in the US, and I'd probably be run out of the room.  There are so many smart people in the privacy profession; why aren't we challenging each other more, to take a small, wild step outside the privacy-industrial-complex, and actually engage more with the real world?

Thursday, September 20, 2012

The algorithm decided not to hire you: is that legal?

I spend a lot of time thinking about privacy and algorithms.  

The Wall Street Journal carried an interesting story, "Meet the New Boss:  Big Data", about how algorithms are now being widely used to make human resources decisions, like hiring and promotion.  The article pointed out that such algorithms could run into legal problems under US anti-discrimination laws if they intentionally or unintentionally filter out protected categories of employees, like older employees.  But the article didn't discuss a more fundamental legal issue, at least in Europe.

In Europe, "automated individual decisions" are a violation of EU privacy laws.  Article 15 of the EU Privacy Directive guarantees:  "...the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc".  

Well, that's about as clear as a law can get.  In our age of Big Data, we all know algorithms are being refined and used more and more widely to make decisions about hiring, promotion, and many other topics.  But when these decisions are made solely by algorithms, they violate EU privacy laws.  Period.  The only way such algorithms can be used legally is to supplement them with other measures to safeguard the legitimate interests of the person being evaluated, e.g., by allowing him to put his point of view.  

I'm a great believer that algorithms can help all of us (governments, businesses, individuals) make better decisions.  But when a computer program is making key decisions by itself about whom to hire or fire, or whether or not to extend credit to someone, it's fair to ask for additional safeguards.  The privacy laws in Europe require it.  I'm agnostic about whether algorithms are more or less fair than humans at making a lot of such decisions.  In any case, companies using such algorithms need to consider how to make them comply with European privacy laws.  When algorithms are used to supplement other evaluation tools, they should be legal.  When algorithms are used to make these decisions by themselves, there's a serious risk they would be considered illegal in Europe.  Use with care.  
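As a rough illustration of what such a safeguard might look like in practice, here is a minimal sketch of a human-in-the-loop evaluation record.  The class, field names, and scoring setup are my own assumptions, not anything prescribed by the Directive; the only point it illustrates is that the final decision is never based solely on the automated score, and that the person evaluated can put his point of view.

```python
# A minimal, hypothetical sketch of a human-in-the-loop safeguard for an
# algorithmic hiring score. No decision here is "based solely on automated
# processing": a human reviewer must sign off, and the candidate's own
# point of view is recorded alongside the score.
from dataclasses import dataclass

@dataclass
class Evaluation:
    candidate_id: str
    algorithm_score: float            # output of some (hypothetical) model
    candidate_statement: str = ""     # the candidate's own point of view
    reviewer: str = ""                # the human who made the final call
    decision: str = "pending"

    def finalize(self, reviewer: str, decision: str) -> None:
        # Refuse to record a decision with no human reviewer attached.
        if not reviewer:
            raise ValueError("a human reviewer is required; the score alone cannot decide")
        self.reviewer = reviewer
        self.decision = decision

e = Evaluation("c-001", algorithm_score=0.42,
               candidate_statement="My most recent role is not reflected in the data.")
e.finalize(reviewer="hr-lead", decision="advance to interview")
print(e.decision)  # the score informed, but did not make, the decision
```

A design like this keeps the algorithm as one input among several, which is the distinction the law appears to turn on.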

Wednesday, August 22, 2012

August in Paris: has everyone left?

A VIP friend of mine, Monsieur Banal, rang me up

Mr Banal:  Bonjour, Pierre.  Sorry to interrupt you with a phone call in August, but I can't reach any other foreign executives in France.  Where have all the others gone?

Me:  Monsieur Banal, they have moved to Switzerland, or Belgium, or the UK, to escape your plans to tax them at 91%.  

Mr Banal:  No, it's 75% tax, plus social charges.  Together taxes are over 90%.  That's true.  But only for the rich. 

Me:  But, Monsieur Banal, your tax rates are double those of London or Switzerland.  Mitt Romney pays 13% tax!  Even for people who love France, like me, how can we ever save for retirement if there's nothing left after taxes?

Mr Banal:  You don't need to save for retirement, since we have a generous French pension system, and you can now retire with a full pension at 60, thanks to me, the lowest retirement age in Europe.   

Me:  Monsieur Banal, do you remember when George W Bush said: "French doesn't even have a word for 'entrepreneur'"?  OK, it was a very funny line.  Entrepreneurs building new businesses around the world use stock options as a way to incentivize their workforce to create new companies.  So, why would you pursue a policy to make stock options illegal?

Mr Banal:  In the public sector, we don't get stock options, so we see no reason why you should either.  We believe in fairness. 

Me:  Entrepreneurs often complain about suffocating regulation and bureaucracy.  Will things get better here?

Mr Banal:  I have never worked one day in my life in the private sector, but I learned how to regulate the excesses of capitalism at the Ecole Nationale d'Administration. 

Me:  France has well-educated, productive workers, doesn't it?

Mr Banal:  Indeed.  In France, we have a happy workforce.  Our employees get more vacation than almost anywhere in the world (by law, a minimum of 5 weeks per year), and they work fewer hours than almost anywhere in the world (by law, 35 hours per week).  This makes them very happy.  It is true that they sometimes strike, but only when they are not happy.    

Me:  What if my business fails?  

Mr Banal:  My Ministre du Redressement Productif (I cannot translate this into the English) will castigate you in the media, but it's only populist politics.  Don't pay any attention to him.  I don't really hate the rich, I just say that to set the tone.  

Me:  Why would so many French entrepreneurs expatriate to London or Silicon Valley to build their businesses?

Mr Banal:  Indeed, this is completely unacceptable.  We have a tradition of engineering excellence, and my government will help select those French technologies and businesses that deserve to succeed in the future.  

Me:  Let's have lunch in September.  

Mr Banal:  Sorry, I've been invited to lunch in Berlin.  I don't like the food there, but at least they pick up the check.  Will you still be in France when I get back?  

Thursday, August 16, 2012

It's time for a "lead regulator" in Europe

Who's in charge in Europe?  That's a common conundrum for those of us who work in the privacy field in Europe.  When I was at a Berlin privacy conference, dopey picture attached, everyone was talking about it.

Privacy regulators play the key role in enforcing privacy laws.  Most companies (certainly all Internet companies) operate globally.  So, it's natural to ask which regulator(s) will or should have jurisdiction to enforce privacy laws.  For many years, I have advocated for the concept of a "lead regulator" in Europe.  It makes a lot of sense for one country's regulator to take the lead on behalf of all of Europe.  It encourages consistency across Europe, it allows a deeper regulatory relationship, and it saves taxpayer money, since numerous regulators are not all re-inventing the regulatory wheel.  This is exactly what the European Commission is proposing in its re-write of privacy laws for Europe.  

Take the example of Facebook, whose European operations are headquartered (in legal terms, "established") in Ireland.  Normally, the Irish data protection authority would therefore be the lead regulator of Facebook, on behalf of Europe.  And indeed, it has been acting accordingly, conducting a company-wide audit of Facebook's privacy practices.  

The key to making all this work is clear:  the concept of "lead regulator" simply cannot work unless other regulators defer to their sister regulator.  That's why this story caught my eye:  German privacy regulators have re-opened their investigation into Facebook's face recognition software, notwithstanding the fact that the Irish are currently investigating the same thing, and notwithstanding having previously said that they would defer to the Irish audit before proceeding.  

The German regulatory world is a microcosm of the European regulatory world.  Each "Land" in Germany has its own independent data protection authority.  In theory, each is entirely independent, and is free to investigate or regulate separately, or in addition to, or even differently than one of its sister-German-DPAs.  But in practice, the German DPAs have developed a custom (not based in law, but based in deference and mutual respect) that they would defer to the "lead German DPA".  In the example of Facebook, the DPA of Hamburg is leading on behalf of its sister-German DPAs, because Facebook's German headquarters are based in Hamburg.  That's why Hamburg, rather than, say, Munich, is investigating Facebook.  

So, the question is simple:  German DPAs have developed the concept of "lead regulator" amongst themselves.  But are they willing to respect the same concept, and show the same necessary regulatory deference, at a European level, e.g., vis-a-vis the Irish DPA? 

If the European Commission proposal becomes law, then the concept of "lead regulator" will be cemented into law.  I often critique other aspects of the Commission's proposal, but on "lead regulator", I applaud their efforts. The issue is contentious, and the French authority, the CNIL, to take one example, is very publicly attacking the concept of a "lead regulator", precisely because they don't want to defer to a non-French lead regulator.  

In the meantime, it's hard to know who's in charge.  I'm someone who believes that regulatory enforcement is more effective when it's absolutely clear who's in charge.  

Wednesday, August 15, 2012

Rainbows in Ravello: Technocracy or Democracy?

As the European elite has for centuries, I love summertime in Ravello.  Civilization has flourished on these ravishing hills for millennia.  Democracy has ruled here for only very brief interludes.  Indeed, modern Italy has given up on having an elected Prime Minister, and has instead appointed a (well-respected) technocrat as its leader.  The "democracy deficit" in Europe is well-documented.  When things get tough in Europe, do we turn our backs on democracy?  Virtually all European-level legislation is drafted by unelected, Brussels-based European Commission technocrats.  (I have the greatest respect for the intelligence and professionalism of the Commission staff, so my comments are institutional, rather than individual.)  What's true for virtually all EU legislation is also true for data protection.  The current EU proposal for revising EU Data Protection is a technocratic tour-de-force. 

The Commission has chosen the approach of a Regulation (directly applicable law), rather than the approach of a Directive (prior law was a Directive, which included scope for national parliaments to make adjustments).  There are pros and cons to the Regulation approach.  The biggest advantage is that it would result in fully harmonized, consistent privacy laws across Europe.  That's why businesses love it: it's easier to comply with one set of rules, rather than with dozens of (slightly) different rules.  The biggest disadvantage is that a Regulation leaves no scope for national parliaments to bring their own democratic choices and legitimacy to privacy laws in Europe.

Privacy is the product of culture and history, and naturally, attitudes to privacy vary widely across Europe, given the wildly different cultural and historical experiences.  Even neighboring countries, like Germany and Denmark, have very different views on privacy, given their different histories and cultures.  Given Germany's history, we expect Germans to be particularly sensitive to privacy issues.  But should German views on privacy, based on Germany's traumatic history, or French views on State-dirigisme, based on centuries of an all-powerful centralized State, dictate privacy laws in a country like Britain that has been a stable parliamentary democracy for centuries?  Half of European Member States are first-generation democracies.  Does one size fit all?

The toughest choices in privacy laws are deeply political.  For example, how much cost are we willing to impose on businesses to improve privacy compliance?  This is a clear political trade-off:  how much bureaucracy, like privacy impact assessments, mandatory appointments of Data Protection Officers, etc., is enough, before the costs become too burdensome for European businesses, in particular SMEs?  Where do you draw the line between freedom of expression and the "right to be forgotten"?  Where do you draw the line between citizens' privacy and government surveillance?  How much flexibility should the laws include to reflect the cultural and regulatory differences amongst countries in Europe?  Is a Regulation the right instrument in the interest of harmonization, or is the flexibility of a Directive more democratic?  How high should fines be set for data handling compliance mistakes (high enough to punish and deter, but not so high as to freeze European innovation and risk-taking)?  All these are deeply political issues.  I have my views, and the unelected Commission has its views, and unelected data protection authorities have their views, but what do European elected officials think? 

There has been very little political debate in Europe about how privacy laws should be updated for the modern world.  The European Commission technocrats have had their say, and they are naturally wary of seeing their careful package of privacy compromises re-opened in a messy democratic debate in the European Parliament, and elsewhere.  Democracy is indeed messy, but, as the saying goes...the alternative is worse.  

"Privacy" is a deeply political and democratic issue.  It is too precious to leave all difficult privacy law decisions to technocrats.  Privacy needs and deserves a political and democratic debate.  Perhaps this is all part of a much bigger democracy deficit in Europe.  We're on a path to "solve" the Euro crisis by transferring even more power from elected national leaders to unelected Brussels technocrats.  Nonetheless, I hope we see a vibrant debate in the European Parliament on data protection.  Privacy laws need democratic legitimacy.  Anyway, that's what we, the European elite, are debating, sipping Campari over the Amalfi coast.  

Wednesday, August 8, 2012

A travel blog post, about data centers

Sometimes I think I should write a travel blog instead of a privacy blog.  I'm the kind of guy who likes to be outdoors and physically active, and I'm just back from hiking in Spain.  Galicia has a pristine coast like Brittany, but with fewer tourists.  And it's relaxing to have a few days to enjoy privacy, instead of worrying about it.  If I don't feel safe hiking in a place, I sure wouldn't recommend putting a data center there. 

Data centers are now big business.  They're part of the fundamental infrastructure of the Web.  And people naturally want to know that the data that they choose to store in the cloud will be safe.  The location of data centers is one factor in ensuring that data will be safe.  

Some countries have proven successful at fostering a data center industry.  A few come to mind immediately: the US, the UK, Ireland, Belgium, the Netherlands, Norway, Finland, Hong Kong, Singapore, Taiwan and Japan (of course there are others, but these were top of mind for me).  All these countries strike me as welcoming jurisdictions, and they are succeeding in convincing international investors to put their money and host data there.  Nowadays, data centers can be large investments, involving hundreds of millions of euros, creating highly skilled jobs, and spurring a virtuous cycle of high-tech clustering.  It's no surprise that many countries are competing to attract them.  

I think there are two big factors in picking locations for data centers, namely, physical-infrastructure stuff and law.  

Physical infrastructure includes:  1) cheap, reliable and renewable energy sources,  2) a cool climate to reduce electricity running costs,  3)  lots of bandwidth.  

But law is just as important.  What's the legal/regulatory environment in each country, with regards to:  
  • the rule of law?  
  • censorship?  
  • fair legal process to validate/challenge government and law enforcement requests for user data?  
  • holding intermediaries liable for third-party content in the cloud?  
Many countries around the world fail all of these tests.  Some fail only one or two.  There is no commonly accepted "black list" of countries where international companies should avoid placing a data center.  That's an interesting challenge, and perhaps deserves some public discussion.  Maybe someone should do a study to rank countries according to these criteria, just as countries are regularly ranked for competitiveness.  For example, companies also need to worry about opening a data center in a country where their employees could be held personally liable for third-party content hosted there.  (friends, how's that for understatement?) 
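Such a ranking study could start as simply as a weighted score over the four legal criteria listed above.  In the sketch below, the country names, scores, and weights are all invented placeholders, not real assessments of any jurisdiction.

```python
# A toy weighted ranking over the four legal criteria listed above.
# All scores (0-10) and weights are invented for illustration only.
criteria_weights = {
    "rule_of_law": 0.4,
    "no_censorship": 0.2,
    "fair_process_for_data_requests": 0.2,
    "no_intermediary_liability": 0.2,
}

countries = {  # hypothetical scores, not real assessments
    "Country A": {"rule_of_law": 9, "no_censorship": 8,
                  "fair_process_for_data_requests": 7, "no_intermediary_liability": 8},
    "Country B": {"rule_of_law": 5, "no_censorship": 3,
                  "fair_process_for_data_requests": 4, "no_intermediary_liability": 6},
}

def score(scores: dict) -> float:
    """Weighted sum of a country's criterion scores."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranking = sorted(countries, key=lambda c: score(countries[c]), reverse=True)
print(ranking)  # best-scoring jurisdiction first
```

A real study would of course need defensible scores and weights; the hard part is the assessment, not the arithmetic.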

Maybe the safest place to put data centers, in terms of protecting users' data from government surveillance, would be on boats floating in international waters, powered by waves, cooled by sea water, and safely beyond the jurisdictional reaches of most governments.  Ok, not really, but then again, try coming up with your own list of countries.   And if you're having trouble concentrating, would you run the risk of landing in jail for a risky bet?

Tuesday, August 7, 2012

Mud-slinging, Anonymously

As a privacy-sensitive guy, I have always had a soft spot for anonymity.  But I wonder if things have just gone too far.  Sometimes, I hold my nose and try to read the "comments" on un-moderated platforms that allow "anonymous" to post comments.  Frankly, these comments often sound like monkeys throwing their feces at each other.  And all of this happens, because, well, it's anonymous.  Anonymity has become the shield of the ignorant, the inhumane, and the uncivil.  

I'm all for freedom of speech.  And in some contexts, anonymity is an essential foundation for freedom of speech.  Without anonymity, freedom of speech would be far more impoverished for political dissidents, whistle-blowers, and other speakers whose speech is socially desirable but puts them at personal risk.  Nonetheless, the real question is whether the social benefits of certain categories of anonymous speech outweigh the tsunami of garbage that is being unleashed behind the veil of anonymity on Internet platforms today.  

It's a hard challenge: can we figure out how to enable the socially-desirable forms of anonymous speech, while filtering out the anonymous slime, without turning into censorship engines?  

On this blog, I do not allow unmoderated comments.  In other words, I welcome your comments, but I review all comments before they are posted here.  I am not censoring the critical comments posted anonymously (you need only take a look at them to verify this).  But I do delete the many comments that are spam, or blatantly ignorant or hate-speech.  Really, a picture of myself hiking without a shirt should hardly prompt an outpouring of homophobic rants, but well, sadly, it did.  

As I grow older, I think more and more sites should reconsider the idealism of the early web, when many of us believed the world would be a better place, and privacy would flourish, by enabling people to express themselves anonymously.  Forcing people to use their real names on many sites might stop much of the grotesque defamation, hate-speech, cyber-bullying, ignorance and incivility that we are all enduring today, under some out-dated (and algorithmically ordered) view that "anonymous" should be free to say anything.

It's not easy for an Internet platform to figure out how to balance the benefits of anonymity against the lack of accountability that goes with it.  By the way, I use my real name for this blog.  Here's a picture of myself, vulnerable and unclothed, covered in mud on the Dead Sea.  If you want to comment with a homophobic or anti-Semitic rant, would you dare to use your real name?  I'm not writing a blog to give "anonymous" a platform for bile.  

I predict the Web tide is going to start ebbing away from anonymity, with a sea-shift back to real-world identity.

Friday, May 25, 2012

A torrent of bureaucracy

As Europe slips into recession and economic decline, how is privacy law being changed in Europe?  Sadly, privacy debates here, like the other big political debates in Europe, are not about how to foster the digital economy, but rather about how to regulate it.  Tax and regulate:  is that Europe's plan to build its digital economy?

While policymakers around the world are frantically nurturing their digital economies, what's happening here in Europe?  Lots, lots more red tape is coming.  Politicians are furiously running around giving media interviews about how this will rein in Facebook or Google, as though all of Europe's privacy laws should be written for one or two companies.  Indeed, wags have started to call Europe's new proposed privacy laws "Lex Google" or "Lex Facebook".  But trying to write a privacy law to "rein in" Google or Facebook is a sure recipe for writing a bad privacy law that would apply to all companies in Europe.

Very few people have actually looked at how Europe is planning to change fundamental privacy laws.  While politicians are posturing that this is a reduction of red tape, the reality is that it is on track to become the biggest increase in paperwork and compliance process obligations in the history of privacy law anywhere on the planet.  Moreover, here's an assessment that would surprise some people:  I think Facebook and similar big companies could cope just fine with the new proposals, one way or another.  But there is absolutely no way Small and Medium-size Enterprises in Europe could cope.  SMEs are already an embattled group in Europe, facing the highest regulatory and employment tax burdens in the world.  Data protection officers at large corporations generally have lots of resources, and they can manage bureaucracy and paperwork, even if it costs a few more million euros.  For big companies, it's not a big deal if the data protection "compliance tax" increases by a few million "new pesetas" or "new lira".  Frankly, I wonder how an SME could possibly deal with this paperwork and process torrent, and how they're supposed to pay for it.

Consider the details of this regulatory torrent, and ask yourself how new legal obligations like those below would impact an SME:
  • 1)  Breathtaking fines for routine data protection paperwork lapses.  Large fines are proposed for data protection violations, some of which are really nothing more than paperwork lapses or documentation foot-faults.  Does anyone really think European SMEs are set up to report a data breach in less than 24 hours?  It baffles me how policymakers can propose fines of 1 or 2% of a company's global turnover for not "adequately" filling out paperwork, such as "privacy impact assessments" or "documentation of data processing", especially since there is not even any agreement on what such paperwork is supposed to look like.  
  • 2)  Mandatory Data Protection Officers.  What happens if we obligate all enterprises with over 250 employees to appoint a Data Protection Officer?  Practically, where are all these people going to come from, since only a handful exist today?  Can SMEs afford the cost of these new employees, or of outsourcing this function to expensive law firms?  Or will they over-burden others on their staff, e.g., a Human Resources person, by asking them to play this role too?  And needless to say, some companies with 250 employees (like Internet or health companies) have vastly different privacy impacts than others (like construction companies), so laws with arbitrary fixed thresholds are rarely well-adapted to the different realities of the real world. 
  • 3)  Mandatory privacy impact assessments.  What will SMEs have to do if they are obligated to carry out privacy impact assessments on all new projects?  While I think such assessments can be a useful privacy compliance tool for some projects, I also know that they are burdensome and time-consuming.  Can SMEs handle this additional burden?  While "privacy impact assessments" are still undefined, I estimate doing one would cost roughly 10,000 to 100,000 euros.  I imagine most SMEs would have several projects requiring such assessments, and larger companies would have many.  
  • 4)  Mandatory data processing documentation.  Documenting data handling processes is time-consuming and difficult.  How much will it cost SMEs to document their data processing practices?  I would roughly assume that the burden of complying with this requirement would be comparable to the time and money spent complying with tax laws.  No one knows what it means to "adequately" document data processing, but nonetheless, these confused proposed privacy laws would threaten massive fines for failing to meet an undefined standard.  
I hope SMEs will have their voices heard in the upcoming political process.  As long as laws are passed to "rein in" Google and Facebook, you can be sure that SMEs will be ensnared in rules that make no sense for them.  But I wonder if politicians can limit this SME-killing regulatory overload.  I am worried about the impact of excessive regulation on Europe's digital economy, which is surely among the world's most promising sources of the jobs of the future.  All successful technology companies start as SMEs.  Europe is committing a crime against its youth, when 50% of young people in many countries here are out of work; SMEs create jobs, especially for young people.  Although politicians can run around and get media headlines about how these new proposed fines would rein in Facebook and similar companies, the reality is that a law applies to all companies, including SMEs.  Surely we can figure out how to apply data protection paperwork obligations in a more sensible fashion, better adapted to the sensitivity and scale of data processing, than what is contained in the current proposed law.  Let's not suffocate European SMEs as the unfortunate collateral damage of trying to "get" the big American Internet companies.

Europe is about to threaten companies with fines so large that they could be driven into bankruptcy over bureaucratic paperwork foot-faults?   As countries around the world begin the competitive race to build their digital economies, we in Europe are starting the race by shooting ourselves in the foot?   It is possible to be deeply committed to privacy without drowning in a torrent of privacy bureaucracy.

Monday, March 19, 2012

The Safe Harbor

Periodically, and again today, there’s a conference to discuss trans-Atlantic privacy issues, and take stock of the Safe Harbor framework. As an American who works in this field in Paris, I have long cared more than most people about trans-Atlantic privacy issues.

Why is the Safe Harbor framework still relevant? Here’s a reminder: the Safe Harbor framework was created because of a quirk in European law dating from 1995 that divided the countries of the world into so-called "adequate" and not-"adequate", in terms of having European-style data protection. Countries like the US and Japan are not currently deemed to have "adequate" protections under EU law, but other countries like Argentina, Mexico and Israel are. It's a fair question whether the criteria used to assess "adequacy" are themselves realistic or outdated. Essentially, the criteria are formalistic: e.g., does a country have a European-style “independent data protection authority” and European-style “comprehensive” privacy legislation? Countries that do not, like Japan and the US, fail the test. The Safe Harbor framework constitutes an “adequacy” regime for the US-based companies that comply with it; it is therefore a partial solution to a bigger “adequacy” problem.

Rather than debating the Safe Harbor framework, we should be debating the “adequacy” regime itself. In the real world, no one would believe for a minute that data is less protected in Japan or the US than in Mexico, Argentina or Israel. But this bureaucratic fiction has very real-world consequences if it makes “illegal” the transfer of personal data from Europe to these non-”adequate” countries. Surely, such routine global data transfers from Europe to Japan, to take just one example amongst many in the cloud, can’t all be “illegal”?

Why does Europe fight so hard to maintain these rather reality-divorced rules, and why is Europe choosing not to modernize them as part of its comprehensive data protection law review? There is a simple reason, and it has very little to do with the reality of privacy protections. The so-called “adequacy” test is a powerful tool used by European policymakers to cajole other countries into adopting European-style data protection laws and regulations. In 2011 alone, six countries in Latin America adopted European-style data protection laws. The motivation for these countries is often unabashedly trade-based, namely, the unhindered transfer of personal data from Europe to countries that hope to build information-based outsourcing industries. Europe holds out a significant carrot, saying essentially, “if you copy my privacy legal structure, we’ll reward you with information-based trade.” This, in a nutshell, is why Europe is winning the global competition to influence privacy laws in countries around the world.

I have long been an advocate of the vision of global privacy standards. Instead, what the world is getting is the globalization of European privacy standards.