Tuesday, February 27, 2007

The Slippery Slope of Data Retention


The Article 29 Working Party issued a blunt Opinion in March 2006 about data retention: “The decision to retain communication data for the purpose of combating serious crime is an unprecedented one with a historical dimension. It encroaches into the daily life of every citizen and may endanger the fundamental values and freedoms all European citizens enjoy and cherish.”
http://ec.europa.eu/justice_home/fsj/privacy/docs/wpdocs/2006/wp119_en.pdf

The Working Party went on to make some concrete, practical recommendations for Member States to address when they implement the Directive. As someone who will likely be on the receiving end of law enforcement requests, and will likely struggle with the ambiguities of the law, I’d like to highlight four of their recommendations, all of which present slippery slopes indeed.

1) Since the Directive mandates retaining data for the purposes of investigating “serious crime”, that term should be defined. What is a “serious crime”? And which crimes are not “serious”? I’m sure terrorism and child pornography are “serious”. But is defamation “serious”? And if the law doesn’t define them, who is going to decide: law enforcement, or the companies receiving these orders, or independent arbiters?

2) The data should only be available to specifically designated law enforcement authorities. The Working Party opined that a list of such designated law enforcement authorities should be made public. In the absence of such a public list, I’m sure that lots of officials will make requests for data. To take just one European country, France: are we talking about the gendarmerie, the police, the CRS, investigative magistrates, military personnel, diplomatic officials, or any of many other officials? And for companies dealing with cross-border issues, how could they know which officials are “designated” in 27 different countries, each with different languages and legal systems?

3) Investigations should not entail large-scale data-mining. But in practice, who is going to enforce limitations on data mining: the companies that refuse to provide large amounts of data? Google famously went to court to challenge a DOJ subpoena in the US for large amounts of data, but 34 other companies receiving requests from the DOJ around the same time did not.


4) Access should be authorized on a case by case basis by judicial authorities or other independent scrutiny. If this Working Party recommendation were implemented, it would indeed insert a level of independent review. In the absence of such a process, who ensures that the requests are indeed valid under the laws? It’s optimistic to assume that all the recipient companies in Europe will exercise independent scrutiny, and only answer the types of requests that a judge or independent authority would have authorized.


We’re on a slippery slope, and we need much clearer rules. Or, as W. Somerset Maugham put it: “There are three rules for writing a novel. Unfortunately, no one knows what they are.”

Saturday, February 24, 2007

Raise your hands if you’re worried about Data Retention!

As Dan Quayle put it: “I believe we are on an irreversible trend toward more freedom and democracy – but that could change.”

It’s a flattering self-image in Europe to play Greece against the American Rome: confronting the clumsy boot of American government power with humanistic values like privacy. Witness the outrage in Europe over the transfers from Europe to the US of airline passenger name records or financial wire transfer data. And it has become common knowledge in Europe that the US Patriot Act sacrificed privacy and other civil rights in favor of the “war on terror.” It’s time for us to take a look in the mirror: at Europe’s own reaction to the terrorism in Madrid and London, the Data Retention Directive.

The goals of privacy and the goals of law enforcement are often in conflict in the best of times. In the worst of times, like the aftermath of terrorist strikes, politicians have taken a new look at the balance, and chosen to shift it away from privacy and towards the goals of law enforcement. The shock of a terrorist act is asymmetrical, moving the balance in one direction. The slow erosion of civil liberties hardly generates the shocks to move the balance back.

The Patriot Act is a grab bag of disparate measures, mostly meant to make it easier for law enforcement to access data to help them investigate terrorism. It’s a clumsy law, at best, and it overrides many longstanding procedural safeguards to protect people’s privacy from the State. But it’s not a data retention law. It makes it easier for American law enforcement to get their hands on data, but it doesn’t impose an obligation for companies to retain data, in case law enforcement should someday want access to it. The EU Data Retention Directive takes the opposite approach: it imposes massive data retention obligations on companies in Europe to keep mountains of data in case law enforcement should someday decide to ask for it. You may disagree, but in terms of privacy, I think the Data Retention Directive is far worse than the Patriot Act: a law that mandates that you collect and maintain mountains of data for law enforcement is worse than a law that makes it easier for law enforcement to access pre-existing databases.

I doubt most Europeans realize that the Data Retention Directive will require that telcos and Internet “electronic communications service providers” (e.g., email providers) store all their traffic data for between 6 and 24 months. And do Europeans realize that some governments are trying to push the balance even further away from privacy towards the goals of law enforcement than the Directive requires? The German Ministry of Justice has drafted a law to mandate that email providers in Germany must verify the identity of their email customers, to stop the use of anonymous email accounts. The Netherlands Ministry of Justice has proposed a requirement to retain location data for 18 months, going far beyond the requirements of the Directive.

This massive invasion of privacy would be easier to swallow if the “bad guys” couldn’t easily evade being tracked anyway. Very simple technical measures allow anyone to use the Internet without leaving the tracks that the Directive would try to retain. In fact, it might be as easy as using non-European-based service providers. Today, Google does not verify the identity of its email users, and I can’t imagine it would start to do so, whatever the German law might say. I’m hardly alone in believing that users should be entitled to anonymous email accounts, for lots of reasons, ranging from a philosophic belief in the right to be anonymous online, to practical reasons, like trying to protect one’s account from spam.

Reading the privacy news over the last few months, you would get the impression that the biggest threat to the privacy of EU citizens resulted from the transfer of pieces of their personal data to the US government, either when they fly to the US (those passenger name records) or when they do a financial wire transfer (using the “SWIFT” network of banks). If there is so much distrust in Europe about the US government getting its hands on such relatively minor pieces of data, why aren’t more people in Europe worried about their own governments getting access to vastly more data about them? Really, what’s more troubling: allowing the US government to see passenger information about the people on a flight from Amsterdam to New York, or allowing the government of The Netherlands to mandate that the location of every person in the country be tracked and stored for 18 months every time they use the Internet or the phone?

EU governments are required to implement the provisions of the Data Retention Directive into their national laws by 2009. They’re just getting started now, and the early indications are not good if you care about privacy.

Monday, February 19, 2007

Your Data is in the “Cloud”


Henry David Thoreau was prescient again, when he wrote: “You must not blame me if I do talk to the clouds.” We’re all doing that now, even if Thoreau had more to say than most of us.

So, if your data is in the cloud, where exactly is that? The cloud is the data that exists within the physical infrastructure of the Internet. Web 2.0 services are built on the concept that data held in the cloud enables users to access and share data from anywhere, anytime and from any Internet-enabled device. The cloud exists on the servers of the companies offering these services, as well as on the browsers of users’ own devices. To know the “location” of your data, you’d need to understand the architecture of data centers.

Some companies like Google have very large data centers in multiple locations. A data center is simply a warehouse building with stacks of server computers. Companies try to pick places that are near cheap, reliable sources of electricity. They tend to prefer not to specify publicly the exact locations of these data centers, for a couple of reasons. First, competitors are watching each other’s choice of data center locations. Second, strong security practices dictate that they be kept as low-profile as possible. Nonetheless, newspapers have written extensively about Google data center construction projects in Oregon and North Carolina, to name just two.

As a user of a Web 2.0 service, you expect your service provider not to lose your data and to respond to your queries quickly. Data centers therefore usually replicate users’ data in more than one place. Google users would not be happy if they lost all their data just because the power goes out in Oregon. And the geographical location of data centers can be optimized to enhance the speed of a service, e.g., serving European users from a European data center can be faster than having the data cross the Atlantic. Finally, having data centers in different locations allows companies to optimize computing power, automatically shifting work from one location to another, depending on how busy the machines are.

For all those reasons, it’s actually very hard to answer the apparently simple question: “where’s my data?” Yes, data protection law was largely written in an era when data did indeed have an easily-identifiable location. But, now, if you want to know how your data is being protected, the important question is not “where is my data?”, but rather “who holds my data?” and “what is the privacy policy being applied to my data?”

You can’t pin-point the location of the clouds, but you can still talk to them.

Tuesday, February 13, 2007

The Tangle of Cross-Border Law Enforcement Requests for Information

If you think the international mechanisms for cross-border law enforcement requests for information are clear, you’d be wrong.

The Internet is a global creature. A user in Country A can transmit, say, child pornography to an individual in Country B from a server in Country C. So, how does non-US law enforcement bearing non-US court orders for information get what they need from a US company to investigate that?

In the US, there is a cumbersome process that requires a non-US law enforcement entity first to contact the US Department of Justice’s Office of International Affairs (“OIA”). OIA then passes the non-US law enforcement official’s request to a US Attorney’s Office. The US Attorney’s Office can then apply to a US District Court to be designated as a Special Commissioner who can then act on behalf of the non-US law enforcement entity. In practice, this process can take many weeks or months.

Surely, this is a process that needs to be streamlined. Of course, international negotiations are tedious and slow, but the needs of cross-border law enforcement collaboration are going to increase, so continuing to live with an antiquated mechanism will only become more painful over time.

The other relevant US law, the Electronic Communications Privacy Act, is silent on the ability of US companies to disclose such information directly to non-US law enforcement bearing non-US court orders. Because the law is silent, some companies no doubt have decided to respond directly to non-US law enforcement requests based on non-US court orders. And other companies have no doubt concluded the opposite.

The Internet’s global dimension has vastly out-paced the provincial processes of cross-border law enforcement requests for information. And I assume that means that some of the bad guys aren’t getting caught.

Monday, February 12, 2007

Terrorists are using Google Earth?

The news has reported cases recently in which Western militaries have raided terrorist lairs and found satellite images of sensitive sites from Google Earth. Governments are tasked with the awesome responsibility of protecting us from terrorist attacks. Sometimes, they turn to Google and ask for sensitive images to be removed or degraded. Is that the right approach?

First, some background. Google Earth is a digital globe on your personal computer. It combines satellite imagery, maps and Google search to bring the world's geographic information to your fingertips. I can still remember the first time I typed my home address into Google Earth and watched my computer screen zoom from space directly onto my home – even for someone used to technology, I just gasped. And I’m not alone. Google Earth is used by more than 100 million people.

Every user of Google Earth has his or her favorite examples, often including non-Google content called a “mash-up”. Here are mine. I watched the progress of the Tour de France across the lovely countryside of France. I saw the heart-breaking images of Banda Aceh before and after the tsunami, and learned that relief agencies used these images to plan their efforts. I studied the distribution patterns of avian flu and migration patterns of birds across the globe. And I look at my house and my neighborhood from the sky.

We all know that Google is working to help more people in more countries get access to more information. While we think about the security issues from giving people greater access to geographic data, we need to keep certain facts in mind: the imagery on Google Earth is not unique to Google. Google buys or acquires it from other companies. The imagery is not real-time, since the photographs were taken by satellites and aircraft over the last three years and are updated on a rolling basis. Commercial high-resolution satellite and aerial imagery of almost every country in the world is widely available from numerous different sources, and there are dozens of commercial satellite image providers in the world. Anyone who flies above or drives past a particular site can often get the same information. And several other sites, like Geoportail or MSN Virtual Earth, make similar satellite imagery available to their users.

The companies and governments that gather and distribute these images are primarily responsible for addressing the security issues they raise. And they sometimes address this problem by altering sensitive images before distributing the data. Look at the center of The Hague, and you’ll see a building which has been erased from the image: Google posted the image the way it was received.

At the same time, it’s all too easy to imagine a slippery slope, where governments go too far in requesting that certain images be removed: should images be removed of disputed territories in Kashmir? Of Israeli settlements on the West Bank? Of every British embassy around the world? Of entire regions of Russia? Of a politician’s holiday home? And which government and which department would decide which site is “sensitive”?

Governments control their airspace, and they can control which companies have the right to take aerial images, and to exclude certain zones. But satellite imagery from space is a different category. To take just one example, I think it’s a good thing that there are very detailed satellite images of North Korea on Google Earth, which the North Korean government would no doubt want to obscure.

Google has said publicly that it is always prepared to discuss security concerns directly with government officials. My personal view is that the right approach is generally not to change images, because I believe that more information gives people more choice, more freedom, and ultimately more power. And removing images from just one source is not a reliable basis for guaranteeing security.

Wednesday, February 7, 2007

Search Data: another conflict between Data Protection and Data Retention

Since the AOL incident, there has been a lot of discussion in privacy circles about the storage of search string data. The discussions generally focus on the time period during which such data is retained by the service provider, and whether or not data protection concepts should limit that time period. I have seen almost no discussion about whether or not the Data Retention Directive will require search string data to be retained. So, again, we are seeing a conflict between data protection and data retention requirements. Here are a few thoughts.

What does a search engine like Google collect when a user conducts a search? Google explains this on its site:

http://www.google.com/privacy_faq.html
“4. What are server logs?
Like most Web sites, our servers automatically record the page requests made when users visit our sites. These "server logs" typically include your web request, Internet Protocol address, browser type, browser language, the date and time of your request and one or more cookies that may uniquely identify your browser.
Here is an example of a typical log entry where the search is for "cars", followed by a breakdown of its parts:
123.45.67.89 - 25/Mar/2003 10:15:32 - http://www.google.com/search?q=cars - Firefox 1.0.7; Windows NT 5.1 - 740674ce2123e969
• 123.45.67.89 is the Internet Protocol address assigned to the user by the user's ISP; depending on the user's service, a different address may be assigned to the user by their service provider each time they connect to the Internet;
• 25/Mar/2003 10:15:32 is the date and time of the query;
• http://www.google.com/search?q=cars is the requested URL, including the search query;
• Firefox 1.0.7; Windows NT 5.1 is the browser and operating system being used; and
• 740674ce2123e969 is the unique cookie ID assigned to this particular computer the first time it visited Google. (Cookies can be deleted by users. If the user has deleted the cookie from the computer since the last time s/he visited Google, then it will be the unique cookie ID assigned to the user the next time s/he visits Google from that particular computer). “
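To make the log format above concrete, here is a small sketch of how such an entry breaks down into its fields. This is my own toy parser for the example line quoted from Google’s FAQ; the field names are my own labels, not anything Google publishes:

```python
# Illustrative sketch: split a server-log line in the format quoted above
# into its component fields. The field names are my own labels.
def parse_log_line(line):
    ip, timestamp, url, agent, cookie_id = [p.strip() for p in line.split(" - ")]
    # The search terms ride along in the requested URL as the q= parameter.
    query = url.split("?q=", 1)[1] if "?q=" in url else None
    return {
        "ip": ip,                # address assigned by the user's ISP
        "timestamp": timestamp,  # date and time of the query
        "url": url,              # requested URL, including the search query
        "query": query,          # the search terms themselves
        "user_agent": agent,     # browser and operating system
        "cookie_id": cookie_id,  # unique ID assigned to the browser
    }

entry = ("123.45.67.89 - 25/Mar/2003 10:15:32 - "
         "http://www.google.com/search?q=cars - "
         "Firefox 1.0.7; Windows NT 5.1 - 740674ce2123e969")
record = parse_log_line(entry)
print(record["ip"], record["query"])  # → 123.45.67.89 cars
```

The point of the breakdown is that the search terms, the IP address, and the cookie ID all travel together in a single record, which is exactly why retention of “traffic data” is hard to separate from retention of search strings.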

So, every time a user conducts a search, a so-called “server log” is collected by the search engine. How does the new Data Retention Directive apply to this?

In 2006, the EU passed the Data Retention Directive, which obligates certain types of network operators to retain certain types of data for mandatory periods, in order to make them available on request to law enforcement agencies. The Directive applies to “providers of publicly available electronic communications services” and “public communications networks”, but these terms are being interpreted differently across the Member States that have to implement the Directive. For example, in France and Italy, it is expected that the implementation of the Directive will apply to Internet cafes, bars, restaurants, hotels, and airports, to the extent that they provide services such as public Internet terminals. On the other hand, preliminary discussions in other Member States, such as Germany and Spain, indicate that they are likely to adopt a narrower interpretation which will include only entities that directly provide telecommunications and Internet access services.

So, it is possible that data retention requirements could also apply to a search engine operator such as Google in certain Member States. Given the ubiquity of Internet search engines, it is hard to believe that law enforcement authorities may not at some point turn to a search engine operator to request personal data in order to fulfill some law enforcement interests. While the Data Retention Directive does not specifically mention search string data, it does require the retention of certain types of data about the user’s Internet connection (sometimes called “traffic data”) that can be so closely intertwined with search string data that it may be nearly impossible to separate them.

The Directive gives the EU Member States the option of requiring retention of the data for between six and twenty-four months, and in exceptional cases even longer. Not all Member States have so far implemented the Directive, but the implementations that have so far been enacted, and the legislative proposals for implementation, indicate that many Member States are likely to select a mandatory retention period of at least one year, or even longer. For instance, in The Netherlands, a retention period of 18 months has been proposed, while legislation and proposals in the Czech Republic, France, Spain and the UK set it at one year. The length of these periods indicates that personal data may need to be kept for a substantially longer period than data protection rules may imply. In addition, the US Department of Justice has called for mandatory two-year data retention.

The differing approaches to the retention of search engine data under data protection law and data retention law demonstrate the tension between these two areas, and also show that the retention of search engine data must be judged under both of them. This is hardly the first example of a conflict between data retention and data protection, but it deserves more discussion in the context of search.

Tuesday, February 6, 2007

Gmail and Targeted Ads: is that the right issue?

When Gmail was launched in April 2004, there was an outcry among privacy advocates that its model of email scanning for advertisement purposes was a troubling new privacy invasion. So, with the hindsight of nearly three years, where do I think these privacy advocates were right, and where were they wrong? I’ll quote some of Google’s public statements on Gmail here.

Everyone agrees that email communications should be confidential. So, the question is whether a particular model of ad targeting violates that principle. All major free webmail services carry advertising, and most of it is irrelevant to the people who see it. Some services which compete with Gmail attempt to target their ads to users based on their demographic profile (e.g., gender, income level or family status). Google believes that showing relevant advertising offers more value to users than displaying random pop-ups or untargeted banner ads. In Gmail, users see text ads and links to related pages that are relevant to the content of their messages. The links to related pages are similar to Google search results, and are culled from Google's index of web pages.

Ads and links to related pages only appear alongside the message that they are targeted to, and are only shown when the Gmail user, whether sender or recipient, is viewing that particular message. No email content or other personally identifiable information is ever shared with advertisers. In fact, advertisers do not even know how often their ads are shown in Gmail, as this data is aggregated across thousands of sites in the Google Network.

All email services scan your email. They do this routinely to provide such popular features as spam filtering, virus detection, search, spellchecking, forwarding, auto-responding, flagging urgent messages, converting incoming email into cell phone text messages, automatic saving and sorting into folders, converting text URLs to clickable links, and reading messages to the blind. These features are widely accepted, trusted, and used by hundreds of millions of people every day.

Google scans the text of Gmail messages in order to filter spam and detect viruses, just as all major webmail services do. Google also uses this scanning technology to deliver targeted text ads and other related information. This is completely automated and involves no humans.
When a user opens an email message, computers scan the text and then instantaneously display relevant information that is matched to the text of the message. Once the message is closed, ads are no longer displayed. It is important to note that the ads generated by this matching process are dynamically generated each time a message is opened by the user; in other words, Google does not attach particular ads to individual messages or to users' accounts.
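The dynamic matching described above can be sketched, in grossly simplified form, as keyword matching performed fresh each time a message is viewed. This is my own toy illustration, not Google’s actual system; the ads and keywords are invented:

```python
# Toy illustration of dynamic, per-view ad matching -- NOT Google's actual
# algorithm. Ads are chosen fresh each time a message is opened; nothing
# is attached to the message or the account.
ADS = {
    "flight": "Cheap flights to Europe",
    "camera": "Digital cameras on sale",
    "hotel": "Book hotels online",
}

def ads_for_message(text):
    """Return ads whose keyword appears in the message text."""
    words = text.lower().split()
    return [ad for keyword, ad in ADS.items() if keyword in words]

# Each call re-runs the match: nothing is stored with the message or account.
print(ads_for_message("Looking for a flight and a hotel in Rome"))
```

The design point the sketch captures: because the match is recomputed on every view and only its output is shown, no profile of the user and no annotation of the message needs to persist anywhere.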

Some advocates expressed the concern that Gmail may compromise the privacy of those who send email messages to Gmail accounts, since the senders have not necessarily agreed to Gmail's privacy policies or Terms of Use. But using Gmail does not violate the privacy of senders since no one other than the recipient is allowed to read their email messages, and no one but the recipient sees targeted ads and related information.

In an email exchange, both senders and recipients should have certain rights. Senders should have the right to decide whom to send messages to, and to choose an email provider that they trust to deliver those messages. Recipients should also have certain rights, including the right to choose the method by which to view their messages. Recipients should have the right to read their email any way they choose, whether through a web interface (like Gmail, Yahoo! Mail, or Hotmail), a handheld device (like a BlackBerry or cellphone), a software program (such as Outlook), or even via a personal secretary.

On the Internet, senders are not required to consent to routine automatic processing of email content, such as for spam filtering or virus detection, or the automatic flagging or filing of messages into folders based on content. Email providers essentially act as personal assistants for subscribers, holding and delivering their email messages and carrying out various tasks (such as deleting spam, removing viruses, enabling search, or displaying related information). And of course, recipients have the right to forward, delete, print or distribute any message they receive.

So, is there a privacy issue with Gmail?

There are issues with email privacy, and most of these issues are common to all email providers. The main issue is that the contents of your messages are stored on mailservers for some period of time; there is always a danger that these messages can be obtained and used for purposes that may harm you, such as possible misuse of your information by governments, as well as by your email provider. Careful consideration of the relevant issues, close scrutiny of email providers' practices and policies, and suitable vigilance and enforcement of appropriate legislation are the best defenses against misuse of your information. I’ll come back to these issues later, since they’re the new set of privacy challenges in Web 2.0 services.

Monday, February 5, 2007

Are IP addresses "Personal Data"?

I worked with other privacy professionals in the European Privacy Officers Forum to answer the question: “Are IP addresses Personal Data?” A simple question doesn’t always have a simple answer. We concluded that the answer depends on the context. We concentrated specifically on the issue of ‘identifiability’ and where the dividing line is drawn between “personal data” and “anonymous data”.

Personal data is defined very broadly in Article 2 of the Directive as “any information relating to an identified or identifiable natural person…”. Where this definition is applied unqualified, it may be interpreted in such a way that data remain ‘personal’ and subject to the full remit of the law if individuals remain in any way identifiable. We believe that the concept of personal data should instead be defined pragmatically, based upon the likelihood of identification. In our view, an organisation should not have to be sure that there is no conceivable method, however unlikely in reality, by which the identity of individuals can be established. That is a highly impractical approach, usually requiring considerable resources to be expended on disproportionate statistical analysis. The responsibility of organisations is to ensure that effective safeguards are put in place to prevent the data from being processed in such a way that it leads to identification. The rights, freedoms, and legitimate interests of individuals can be more than adequately protected if data is processed in such a way that all means likely reasonably to be used to identify the person will fail.

In making judgements about whether information is personal data, an organisation should consider the following factors:

1. How that data could be matched with publicly available information, analysing the statistical chances of identification in doing so;
2. The chances of the information being disclosed and being matched with other data likely held by a third party;
3. The likelihood that ‘identifying’ information may come into their hands in future, perhaps through the launch of a new service that seeks to collect additional data on individuals;
4. The likelihood that data matching leading to identification may be made through the intervention of a law enforcement agency, and
5. Whether the organization has made legally binding commitments (either through contract or through their privacy notice) to not make the data identifiable.

Considerations on all these issues are of course contextual, based upon an assessment on a case-by-case basis of the likely chances that identification may occur in any reasonably foreseen set of circumstances. In terms of ‘reasonableness’ or ‘fairness’, an additional aspect of this assessment may involve consideration as to the sensitivity of the information and any potential harm that could arise for individuals if data is later made identifiable.

However, some Member States, such as Belgium, Sweden and France, have interpreted data protection law to mean that if someone can be identified from certain data, no matter how technically or legally difficult it is to ascertain the identity of the physical person from such data, then the data is deemed to be ‘personal data’.

We suggest that a significant step can be taken in solving this issue by providing qualifying guidance on the limits of ‘personal data’. This should be pragmatic and emphasise that identification must be subject to the reasonableness standard. For example, a definition such as that given in §3(6) of the German Federal Data Protection Act could be used as a basis for this interpretation:
“Depersonalisation means the modification of personal data so that the information concerning personal or material circumstances can no longer or only with a disproportionate amount of time, expense and labour be attributed to an identified or identifiable individual.”
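To make the German definition concrete, here is my own sketch of one common depersonalisation technique applied to IP addresses: truncating the address so that it no longer points at a single machine. This is an illustration of the concept only, not a statement of what any law requires or of any company’s practice:

```python
# Sketch of one depersonalisation technique for IPv4 addresses: zero the
# final octet so the address identifies a block of machines rather than
# one user. An illustration of the concept, not a legal standard.
def depersonalise_ip(ip):
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    octets[-1] = "0"  # drop the host-specific part of the address
    return ".".join(octets)

print(depersonalise_ip("123.45.67.89"))  # → 123.45.67.0
```

After such a transformation, re-attributing the record to one individual would take the kind of “disproportionate amount of time, expense and labour” that the German definition treats as the threshold for anonymity.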

The UK has adopted a pragmatic position: data are deemed personal if the individual to whom they relate is identifiable “from those data and other information in the possession or likely to come into the possession of the data controller” (UK Data Protection Act 1998, section 1(1)). As long as there is little or no chance of disclosure by the controller to a third party of information that could lead, in combination with data held by that person, to re-identification of individuals, then this approach seems more than reasonable.

The regulatory approach to IP addresses also illustrates the dilemma that the Directive’s sweeping definition of ‘personal data’ can cause. According to the stated position of the Working Party, “IP addresses attributed to Internet users are personal data and are protected” by the Directive. Article 29 Working Party, The Use of Unique Identifiers in Telecommunications Terminal Equipments: the Example of IPv6, Opinion 2/2002, WP 58, 10750/02/EN/Final, at 3. The Working Party reasoned that:

“data are qualified as personal data as soon as a link can be established with the identity of the data subject (in this case, the user of the IP address) by the controller or any person using reasonable means. In the case of IP addresses the ISP is always able to make a link between the user identity and the IP addresses and so may be other parties, for instance by making use of available registers of allocated IP addresses or by using other existing technical means”.
The Working Party have assumed that if an IP address is identifiable by one company (e.g., an ISP), it is personal data as far as all other companies are concerned, even if they have no access to the information that permits an association with the individual. But this assumption is very questionable. ISPs typically do not divulge the account names behind IP addresses. Indeed, many Member States have interpreted Article 6 of the 2002 Electronic Communications Data Protection Directive as prohibiting ISPs from divulging user information connected to IP addresses. If a third party cannot obtain the ISP’s assistance in associating an IP address with a particular user, the IP address is not personal data as far as that third party is concerned. From the third party’s perspective, the IP address is anonymous.
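The asymmetry described here can be pictured as a lookup that only the ISP is in a position to complete. The following toy sketch (all names, addresses, and data structures are invented purely for illustration) shows how the same IP address is identifiable to the party holding the subscriber mapping and anonymous to everyone else:

```python
# Toy model of the identifiability asymmetry: the ISP holds the mapping
# from IP address to subscriber; a third party observing the same IP
# address does not, and cannot complete the link on its own.
# All data below are invented for illustration.

isp_records = {"198.51.100.23": "subscriber #48211 (J. Dupont)"}

def identify(ip, records):
    """Return the subscriber behind an IP, if the caller holds the mapping."""
    if records is None:  # a third party without access to the ISP's table
        return None
    return records.get(ip)

print(identify("198.51.100.23", isp_records))  # the ISP can make the link
print(identify("198.51.100.23", None))         # a third party cannot: None
```

From the ISP’s side the lookup succeeds; from the third party’s side the same address resolves to nothing, which is the sense in which the address is “anonymous” to that party.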

It is notable that this more pragmatic position is supported by jurisdictions with data protection legislation outside Europe, for example, Hong Kong. In May 2006, in a written reply to a member of the Legislative Council, the Secretary for Home Affairs (Dr Patrick Ho) outlined a policy position on IP addresses similar to the one advocated above:

"An Internet Protocol (IP) address is a specific machine address assigned by the web surfer's Internet Service Provider (ISP) to a user's computer and is therefore unique to a specific computer. An IP address alone can neither reveal the exact location of the computer concerned nor the identity of the computer user. As such, the Privacy Commissioner for Personal Data (PC) considers that an IP address does not appear to be caught within the definition of "personal data" under the PDPO…” http://www.info.gov.hk/gia/general/200605/03/P200605030211.htm

While exact location and/or the identity of the particular user may not be required to qualify an IP address as personal data, Mr. Ho’s point that an IP address only identifies a machine is important. In fact, this raises a slightly different, but related, aspect of the concept of identifiability. In determining whether an IP address can be considered an item of personal data in itself, consideration should be given to the fact that the number is allocated not to a natural person but to an item of networked equipment. Data generated through the use of such equipment may be the result of intervention by a number of individuals: perhaps the members of an extended family each making use of a home PC, a whole student body using a library computer terminal, or potentially thousands of people purchasing from a networked vending machine. We should note that the number of internet-connected devices is set to explode in the coming years. To illustrate the point, it is envisaged that in the future every light bulb will have an IP address, to turn it on and off and to signal when it needs to be replaced. The logic of this argument could be applied to a variety of unique identifiers that are not necessarily associated with a particular natural person, for example, RFID numbers. Clearly, the more divorced the use of such a number is from the identity of a single natural person, the weaker the argument for considering such ‘identifiers’ an aspect of personal data.

Whether or not these identifiers are personal data will turn on the context in which they are collected and how they are stored and processed.

Monday, January 15, 2007

Three Ideas to Update Data Protection

It’s time to update European data protection, and I’ve got three concrete suggestions.

The problem is not with the basic principles of data protection law, but with the way they have failed to evolve to adapt to the information age. Since the adoption of the first EU data protection directive in 1995, companies have come to accept that effective privacy protection is a necessity for a flourishing economy, particularly in the online sector. And individuals in Europe demand effective privacy and data protection. European data protection law has also become influential outside the EU. In recent years, countries such as Argentina, Japan, and New Zealand have adopted privacy laws that show the influence of the European model. Even the US, which was formerly highly critical of EU privacy law, has become more open to seeing the good in the European system.

However, several principles of EU privacy law are out of date and need to be adapted to the global information economy. Foremost among these are the restrictions on the transfer of personal data outside the EU. In past years, such a transfer meant packing a computer tape or paper files into a box and shipping them to a faraway location. Nowadays, however, almost any activity on the internet involves a transfer of data outside the EU, so that strict application of these laws would cause the Internet to shut down. Moreover, studies have shown that privacy protection inside the EU leaves a lot to be desired, so it is not clear why a transfer of personal data outside the EU necessarily poses greater risks to privacy than processing the same data within the EU would. Recent scandals in the press about outsourcing to India show only the bad actors; in fact, many such outsourcing centers have higher privacy and security standards than equivalent installations in Europe do.

Moreover, the way that the principles of EU privacy law are implemented is mired in red tape. While some progress has been made in recent years, far too many bureaucratic hurdles are placed on the processing of personal data. For example, most Member States require that individual databases be registered with the national data protection authority, but there is no single, EU-wide procedure for such registration, so a company running a database accessible in all Member States will have to register the same database in different ways across the entire EU. Member States also do not recognize each other’s authorizations, even though such mutual recognition has become routine in other sectors (such as the licensing of pharmaceutical products). To give an example: in one case a company had obtained permission from twenty-two Member States to transfer personal data, but the data protection authority of a single remaining Member State required an additional year of deliberation before giving its permission, thus holding up the entire data transfer. One of the main purposes of EU data protection law is to provide a minimum floor of data protection throughout the entire Community, so it is strange that national data protection authorities are not willing to grant each other’s authorizations some form of mutual recognition.

The most glaring gap in EU data protection law is that it does not apply to activities by law enforcement, military, and national security authorities. Thus, EU citizens have data protection rights that they can assert against an online shop that sells their e-mail addresses without permission, but have no such rights when the police surreptitiously listen in on their telephone conversations, despite the far more serious breach of privacy that the latter action entails. It is thus not surprising that European governments are now seeking to collect personal data and build large databases in ways that would be illegal for companies, exploiting this loophole in the law. While the EU is currently discussing the passage of an instrument that would close this gap, I don’t expect it to be adopted soon.

Privacy Laws & Business Q&A on Search Queries

I'm a fan of Privacy Laws & Business. They publish a terrific International Newsletter.
http://www.privacylaws.com/newsletters.international.html
They gave me permission to re-print here their Q&A on the privacy aspects of search queries, from Issue 84, October 2006.

Google’s privacy policy relates to many of its services. Peter Fleischer, the company’s Privacy Counsel Europe, gives PL&B an insight into its approach. By Asher Dresner.
Question: Do you interpret any country’s data protection laws as meaning that search terms constitute personal data?
Answer: This is not a simple yes/no question. To answer it, you need to analyse both the source of a query and its content. Regarding the source, a query can be made either by a human being, or by a machine. The latter are sometimes called “bot” queries. Regarding the content, a query can be made on almost any topic that can be entered into a computer, such as words, numbers, even code strings. Some search queries may relate to an identifiable human being, eg, a query for “Bill Clinton”, and in that sense may constitute “personal data” about the data subject, which may or may not be subject to data protection laws. Most queries do not relate to an identifiable human being, such as a query for “weather in London”. In short, this is context-specific.
Question: Do you consider search terms to be personal data a) internally and b) externally? If so, do you have a policy for what you can and can’t do with search terms?
Answer: Our privacy policy governs how we handle “server logs”, which include the query text. Our FAQ explains the contents of those server logs: www.google.com/privacy_faq.html#serverlogs.
Moreover, we have a policy never to share search queries with anyone outside of Google if they contain personally-identifiable information. For example, we post anonymous and statistical information about our searches on our site Zeitgeist: www.google.com/press/zeitgeist.html.
Question: Under what circumstances would you authorise the release of search terms?
Answer: This would be governed by our Privacy Policy on “information sharing”:
www.google.com/privacypolicy.html
Information sharing
Google only shares personal information with other companies or individuals outside of Google in the following limited circumstances:
- We have your consent. We require opt-in consent for the sharing of any sensitive personal information.
- We provide such information to our subsidiaries, affiliated companies or other trusted businesses or persons for the purpose of processing personal information on our behalf. We require that these parties agree to process such information based on our instructions and in compliance with this Policy and any other appropriate confidentiality and security measures.
- We have a good faith belief that access, use, preservation or disclosure of such information is reasonably necessary to (a) satisfy any applicable law, regulation, legal process or enforceable governmental request, (b) enforce applicable Terms of Service, including investigation of potential violations thereof, (c) detect, prevent, or otherwise address fraud, security or technical issues, or (d) protect against imminent harm to the rights, property or safety of Google, its users or the public as required or permitted by law.
- If Google becomes involved in a merger, acquisition, or any form of sale of some or all of its assets, we will provide notice before personal information is transferred and becomes subject to a different privacy policy.
- We may share with third parties certain pieces of aggregated, non-personal information, such as the number of users who searched for a particular term, or how many users clicked on a particular advertisement. Such information does not identify you individually.
Question: Do these circumstances differ in different countries or areas with different privacy laws?
Answer: Yes, because, pursuant to the clause above, countries differ with regard to “any applicable law, regulation, legal process or enforceable governmental request”.
Question: I understand that the information Google collects on users differs according to which Google product they are using (eg Google account, toolbar, Gmail, accelerator, etc). Could Google cross-reference this information with searches made from these products to find out who searched for what? For example, if a searcher has a Google account, can you identify which account a search term comes from (quite apart from the IP address)? If so, is this done, and under what circumstances?
Answer: From our Privacy Policy: Information you provide - When you sign up for a Google Account or other Google service or promotion that requires registration, we ask you for personal information (such as your name, e-mail address and an account password). For certain services, such as our advertising programs, we also request credit card or other payment account information which we maintain in encrypted form on secure servers. We may combine the information you submit under your account with information from other Google services or third parties in order to provide you with a better experience and to improve the quality of our services. For certain services, we may give you the opportunity to opt out of combining such information.
Question: If this information can be cross-referenced, under what circumstances would you authorise the release of search terms cross-referenced with the personal data users provided when they signed up to these services? For example if you had a US Justice Department request to release the search terms of all Google account holders whose sign-in name matched that of a terrorist suspect, would you release the terms?
Answer: As explained in our privacy policy, we will respect a valid legal order. The legal system has mechanisms to address/resolve questions relating to the specificity of the information being demanded.
Question: Does this situation differ in areas or countries with different privacy laws?
Answer: See answer to fourth question.
Question: If a resident of country A searches for something using a computer in country B, and their search term is stored in country C, which area’s privacy laws apply?
Answer: Resolving questions of jurisdiction in an international context is a complicated process, which takes into account numerous factors, such as the location of the person using the service, the location of the company providing the service, the location of the data, and other factors. Google’s Terms of Service are subject to the laws of the State of California, where Google is headquartered (see www.google.com/terms_of_service.html).
Nonetheless, we are committed to being respectful of the laws of the various countries in which we do business.
Question: Do you have an internal policy governing what Google employees can do with search terms, and which employees have access to them? If so, would you please provide me with a copy of that policy?
Answer: Yes, we have a policy and a written confidentiality agreement, which we require employees with access to search terms (i.e., to server logs data) to sign. We do not share that policy externally.
Question: When a user of one of your services cancels the service (eg deletes their gmail account or uninstalls toolbar), for how long do you keep their personal data? Does this period differ according to the jurisdiction in which they are resident?
Answer: When a user terminates a Google service, the length of time for which their personal data is retained will vary from one service to another, and depending on the type of information. For example, some types of personal data are retained for legal/tax/accounting reasons, such as purchase records from our Checkout service, and those retention periods are often dictated by applicable laws or regulatory practices. Other types of personal data, such as content that a user uploads to our services (such as video), may remain on the service notwithstanding the cancellation of the user’s account. Other types of user personal data, such as the e-mails in a person’s Gmail account, should be deleted within a short period of time after the user closes his/her account. The retention periods do not currently differ according to the jurisdiction in which the user is resident, but it is possible that such changes will be made in the future.

Government Access to "Private" Data

Governments are becoming increasingly data-hungry. Largely because of concerns about terrorism, government agencies are seeking to collect and process more and more types of personal data. While many of these types of data collection may be necessary, the burdens of collection unfortunately often fall on companies that are simultaneously under conflicting data protection obligations.

Following 9/11, governments in both the EU and the US sought to greatly expand their access to different types of data, in order to investigate terrorist incidents and prevent other ones from happening. This type of collection includes areas such as money laundering, antiterrorist financing, airline passenger data, customs information, telecommunications records, logs of web pages, and many other types of data. In today’s globalized economy, much of this data is not itself collected or retained by governments, but is held by private sector entities and companies.

But companies also have obligations under data protection and privacy law. In Europe, data protection law places heavy burdens on companies to only collect and process personal data for specific purposes and not to process them in other ways; to delete data once the purposes of processing have been ended; and not to pass on personal data to third parties (including governmental entities) without notice being given to individuals and, in some cases, only with consent.

Companies are certainly willing to do their part in the fight against terrorism, but they are often placed in a position of having to comply with conflicting data protection and law enforcement rules, so that it is almost inevitable that they will have to violate one of the two. For example, before the US and the EU finally reached an agreement recently on the transfer of airline passenger data to US law enforcement authorities, airlines flying from Europe to the US were in effect breaching EU data protection rules by transferring such data to the US Department of Homeland Security. In another case, the French data protection authority found that whistleblower complaint hotlines run in France by companies, many of which were obligated by US law to maintain such hotlines in their operations overseas, violated French data protection law. The number of these conflicts is only increasing.

Data protection and law enforcement regulators often seem to be operating in different worlds, and do not speak to each other sufficiently. What is needed is more communication between privacy regulators and those in other areas, and an overarching framework for privacy protection in the context of transferring personal data to law enforcement authorities. Moreover, this framework needs to be coordinated not only within Europe, but also between Europe and other countries like the US.

Friday, January 12, 2007

Asia: the new Thought Leader in Privacy?

Privacy and data protection have always been a big thing in Europe and in the US. In Europe, the experience of Nazism and communist dictatorship have given rise to comprehensive legal instruments giving people extensive control over how data relating to them are processed (referred to as “informational self-determination”) by both the government and private companies. In the US, there is a long-standing tradition of privacy rights, but traditionally, most concern has been voiced about data processing by the government.

Europe and the US have become embroiled in a number of privacy-related squabbles recently. For example, European legal restrictions on transferring personal data outside the EU have led to objections in the US, where such restrictions are sometimes felt to be unfair and protectionist. On the other hand, Europeans have become increasingly upset by US processing of European data for law enforcement purposes. Thus, for example, requirements that airline passengers pass on their data to US law enforcement before boarding a plane bound for the US caused controversy recently. These disagreements are to some extent inevitable, given the differing cultures and legal traditions in Europe and the US, and to some extent are caused by simple misunderstanding and wounded pride. However, both the US and Europe are ignoring an important development, namely the growth of interest in privacy in Asia.

With over three billion people, Asia represents over half of humanity, and dwarfs both the US and Europe. However, privacy has traditionally not had a high value in many Asian cultures, which have been built more around communitarian ideas. Nevertheless, privacy regulation is exploding in Asia. While a few Asian jurisdictions have long had some sort of privacy regulation (such as Hong Kong and Korea), others have recently passed laws, including Australia, Japan, New Zealand, and Taiwan. In addition, other countries (such as Singapore and Vietnam) are presently either drafting privacy legislation or seriously considering it.

There are several reasons for this surge in interest in privacy among Asian governments. One factor is the work done by the Asia Pacific Economic Cooperation group (APEC), which is a group of Asia-Pacific countries (including the US) that get together to increase cooperation among their economies. In 2004, APEC approved a set of non-binding privacy principles for governments to follow in passing privacy legislation, and is currently working on other privacy-related issues such as international data transfers.

In addition, Asian countries see an economic benefit in passing privacy legislation. Experience in Europe and the US has shown that such legislation is a pre-condition to increasing consumer trust, particularly trust in online commerce. Many large multinational companies are also hesitant to locate large-scale outsourcing operations in countries without any legal framework for data privacy or for disclosing breaches of data security. Finally, as their economies have developed, more and more Asian citizens are demanding some sort of privacy protection.

In developing their privacy framework, the Asia-Pacific countries have a unique opportunity to draw from the good of the European and US frameworks, and reject the bad. Of course, the sheer size of Asian countries, and the diversity of their cultural and legal systems, makes implementing the APEC framework a significant challenge.

The wild card in all of this is China. With over one billion people, any privacy law enacted by the Chinese government will have immense impact, both inside Asia and beyond. In fact, the government has begun drafting a privacy law, the outlines of which will likely become clear in the coming months.

Despite these developments, privacy regulators in Europe and the US remain focused largely on themselves, each other, and their transatlantic differences. In our globalized world, it is crucial that regulators work together more. As the Asian economy picks up speed and China overtakes the US as the world’s largest economy, it is dangerous to focus only on transatlantic privacy issues and ignore what is happening in the Asia-Pacific region. Europe and the US should realize that they need to make their privacy regimes interoperable with those emerging in Asia. Any bilateral, or even regional, approach to regulating the flow of information is doomed to eventual failure. Both Europe and the US are already dwarfed in population by the Asia-Pacific region, and will eventually be dwarfed economically as well. In this situation, it is in everyone’s interest to regard privacy not only as a transatlantic issue, but as a global one.

Thursday, January 11, 2007

What can Europe learn from the experience with US Security Breach Notification Laws?

At my last count, 34 US states have laws that require companies to notify individuals when certain kinds of security breaches compromise their personal information. Europe is now considering following suit. But turning any such proposal into effective practice will require a lot of work.
I think it’s easy for all of us to agree on one thing – every individual is entitled to be told promptly when a company learns of a security breach that has resulted in the loss of personal data that may expose the individual to identity theft or other serious harm. It doesn’t really matter whether the breach was the result of mistake or malice – our first instinct should be to protect the user against harm while at the same time finding and fixing the problem that led to the breach in the first place. Prompt notice is simply the first step in protecting the customer.
The first practical question, however, is whether every loss of personal data should require notice. When customers give their personal information to companies, who in turn store it for future uses, we all like to think that the information is held in vault-like security. But the truth is that there is no perfect security – some people will abuse their position or power to steal information, others will hack into systems for the challenge of it, and we are all human in the end and make mistakes. So it is appropriate to ask what types of personal information, if lost, should trigger notice.
While reasonable people may differ, I think the trigger should be a material risk of harm to the individual such as identity theft or financial loss. It is fair to say that, objectively, the loss of a laptop with customer list information such as name, physical address, email address and phone number likely presents little risk of financial harm or identity theft to any individual. Such information generally is available from directories or other public sources and most of us don’t take steps to protect that information from the public domain. But if you couple that information with an account access code or other key data elements that would permit a person to apply for credit such as date of birth and government identity number, then the risk of harm certainly increases.
In many of the 34 US state security breach notice laws, this in fact is the legal standard: notice is required when a person’s name is disclosed in combination with financial account information and access codes, or with a personal identifier like a social security number or driver’s license number, so that there is a real risk of harm. This makes more sense than giving notice routinely – individuals should not be moved to anxiety over disclosures that present no real risk of harm; notice should mean something and be viewed as an important alert to pay attention to one’s credit card and bank statements. The diversity of security breach laws in the US has meant, however, that in many states notice must be given whether or not any harm has been caused by a breach, which has led to notifications being given frequently. In fact, there is evidence that individuals in the US are becoming jaded by frequent security breach notices: sending a notice for every breach, even those that have no real effect on security, can produce a numbness that is itself a security risk.
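Viewed mechanically, the trigger described above is a simple decision rule: a name plus at least one sensitive data element. The sketch below is a hypothetical illustration only; the field names and the particular set of sensitive elements are my own assumptions, not the text of any statute:

```python
# Hedged sketch of a US-style breach-notification trigger: notice is
# required when a person's name is exposed together with a data element
# that creates a material risk of identity theft or financial loss.
# Field names below are illustrative assumptions, not statutory text.

SENSITIVE_ELEMENTS = {"ssn", "drivers_license", "account_number_with_access_code"}

def notice_required(exposed_fields):
    """True if the breach exposes a name plus at least one sensitive element."""
    return "name" in exposed_fields and bool(exposed_fields & SENSITIVE_ELEMENTS)

# A lost customer list (names, addresses, e-mails) would not trigger notice:
print(notice_required({"name", "address", "email"}))  # False
# Names coupled with social security numbers would:
print(notice_required({"name", "ssn"}))               # True
```

The point of encoding the rule this way is that it makes the policy argument visible: the customer-list case falls outside the trigger precisely because none of the exposed elements enables identity theft on its own.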
Another practical issue to consider is the timing of any notice: it must be sufficiently prompt to afford individuals a meaningful opportunity to take steps to avoid harm. Yet companies likewise need time to investigate the cause of the breach, take remedial action, and prepare the notification. Sometimes companies quickly complete their investigation and determine that the compromise is the result of the illegal acts of a third party. In those cases, a referral is often made to law enforcement agencies, and in some instances law enforcement asks the service provider to refrain from giving notice or publicly disclosing the incident so that it can further investigate the crime and find the perpetrator. So it may be appropriate to delay notice at the request of law enforcement if doing so is in the public interest.
Giving notice is the easy part; responding to subsequent inquiries, however, takes planning. For example, companies that have been through the breach notification process have commented on the need to establish call center support to respond to customer inquiries that arise after receipt of a notification. Many companies also arrange for fraud protection insurance coverage and take steps to notify credit reporting agencies, banks and card issuers. These customer-friendly steps are important to ensuring a complete and accurate notification and to protecting the customer. Such procedures would obviously need to be adapted to European conditions since, unlike in the US, there is no EU-wide credit reporting capability.
A sensible security breach law would help consumers in Europe know when their personal data is safe, and when it might not be.

The EU Data Retention Directive

Many people wonder what will happen to their data when European governments begin to pass laws implementing the Data Retention Directive. Google's recent victory in a US court opposing a request to obtain data by the US Department of Justice (DOJ) may shed some light on the challenges raised by the new Directive.

Under the new Directive, every EU country must pass laws requiring phone and Internet companies to retain vast amounts of their users' data (both subscriber data, and so-called traffic and location data), for periods ranging from 6 months to two years. The goal of data retention is to make the data available to law enforcement for the investigation and prosecution of serious crime. At the same time, every European has a human right to have his/her personal data protected from unauthorized disclosure, and companies have both ethical and commercial reasons to protect the privacy of their users' data. It’s noteworthy, however, that government agencies in the EU (such as law enforcement bodies) are not legally required to comply with the same data protection laws that companies like Google must follow.

The experience in the US, and Google’s case, demonstrate that government and other law enforcement agencies are seeking access to stored data on a massive scale. The US DOJ originally sent a subpoena to Google for billions of URLs and two months' worth of users' search queries. The DOJ wanted this data not to further any particular pending investigation, but to test its theories about the effectiveness of software filters to protect children from harmful content, as part of a lawsuit to defend the constitutionality of the 1998 US Children's Online Protection Act. Google chose to go to court to resist this massive government demand for data, which we felt was disproportionate to the uses to which the data were to be put. Thankfully, the Judge agreed and drastically reduced the scope to 50,000 URLs and zero search queries. The Judge agreed with Google that the government's request for this data had to be weighed against our users' legitimate expectations of privacy.
http://www.google.com/press/images/ruling_20060317.pdf

Data retention laws will be passed in the months ahead across Europe to implement the new EU Directive, and telecom and Internet companies will comply with them by retaining vast amounts of data. Law enforcement will then start demanding this information, but they are not currently bound by data protection laws (the EU is considering the extension of data protection laws to law enforcement, but the outcome of this effort is uncertain). Thus, while the US court upheld Google’s objections, law enforcement might have prevailed in a European court, despite the existence of data protection laws.

In Europe we need a much broader and more open public discussion about how to make sure that our laws incorporate safeguards ensuring that law enforcement is provided with data that is relevant and proportionate, but not given unlimited access to data that most Europeans expect to be kept private. Not every company will go to court to ask a judge to get the balance right the way Google did. Google is certainly willing to fulfill its legal obligations both to respond to legitimate law enforcement requests for data and to protect the data protection rights of its users. But the combination of the new Data Retention Directive, which will mandate the creation of massive databases, and the EU's failure, so far, to extend data protection law to law enforcement entities creates a situation in which personal data may lack appropriate legal protection. I hope that national legislators pass data retention laws that are narrow in scope, and that the EU extends data protection law to law enforcement activities.

A German threat to Anonymous Email Accounts

There is a lot of confusion about anonymity on the internet, and what it means. In Europe, data protection law governs the processing of personal data on the internet. However, such law applies only if the data is "personal", that is, if it can be linked to a particular person. If data is totally anonymous, then by definition it is not possible to link it to a person, and thus it is not covered by the law.

Even though anonymity sounds like a black-and-white concept, it is actually much more flexible than we usually think. We all depend on a certain amount of anonymity in our daily lives; for example, if you buy a book in a bookstore with cash, you are assuming that your transaction will remain anonymous. In order to protect our private sphere, it is important that we have the choice to carry out certain transactions anonymously.

At the same time, even this example shows how anonymity is not absolute. You might be friends with the salesperson who sold you the book, so that he would know that you bought a particular book. The bookstore may also have surveillance cameras located near the cash registers to record transactions in case of theft. Most people don’t worry much about these types of small incursions into anonymity, since they are minor and nearly unavoidable.

However, recently governments have been trying to make broad inroads on anonymity for law enforcement purposes, particularly with regard to the internet. Governments are particularly nervous about anonymous e-mail accounts that are offered by many online services, since they believe that such accounts are used by terrorists and other criminals. In fact, recently the German justice minister even proposed eliminating or sharply restricting such anonymous e-mail, by requiring that individuals present a passport before they are able to open a webmail account. Here’s the proposal (in German only):

http://www.humanistische-union.de/fileadmin/hu_upload/doku/vorratsdaten/de-recht/bmj_2006.11.pdf

That’s a bad idea. It would not even be technically possible to eliminate or restrict such accounts, since such e-mail services are freely available on the web from providers in other countries. Moreover, just as we all want a certain degree of anonymity when we buy a book, there are occasions when people want and need to have an anonymous e-mail account. There are many such scenarios: the dissident writing an account of political persecution to be sent to a newspaper abroad; the individual who wants to order something over the internet without using his office e-mail; or simply the ordinary internet user who is concerned about his privacy. All of them will want to consider using an e-mail account that isn’t tied to their name or identity. There is nothing wrong with that, and it is no different than sending a letter to someone without putting your return name and address on the envelope.

Attempting to restrict or regulate anonymity on the internet, or even to ban anonymous e-mail accounts, will not only be ineffective; it will also severely damage the trust that individuals and consumers place in the net. Politicians should be working to increase privacy protection and user trust on the net, rather than undermining them in this way.

Some thoughts on international conflicts of law in recent privacy controversies

The free flow of data around the globe is the lifeblood of the Information Age. And any company doing business on the Internet is participating in this global flow. So, what happens when a company finds itself challenged by different regulatory authorities with completely contradictory requirements?

There have been numerous recent cases of companies facing such conflicting legal requirements. Airlines flying from Europe to the US were confronted with US anti-terrorism laws that required them to provide so-called Passenger Name Records to the US authorities, at the same time that privacy regulators in Europe held such transfers to be illegal under European data protection laws. SWIFT, the financial services company, was forced by the US anti-terrorism authorities to provide them with large amounts of individual wire transfer records, a transfer that was then condemned by the Belgian data protection authorities.

Similarly, Google was recently involved in a court action in Brazil with regard to its US-operated social networking site, Orkut. The Brazilian authorities were trying to investigate Internet crimes, like child pornography and hate speech directed against blacks and gays, and demanded that Google provide them with the personal information of some Orkut users. But Google is also a US-based company, and the applicable US privacy law, the Electronic Communications Privacy Act, prohibits Google from answering any demand for its users’ personal data, except pursuant to a valid legal order. Google said publicly that it had cooperated with the Brazilian authorities in numerous cases, and was prepared to cooperate again, as long as the Brazilian authorities directed their valid legal order to the operator of the service, namely Google Inc in the US, and not to its sales office in Brazil (which neither operates the service, nor has access to the data).

These are just a few examples of the many cases where companies are being torn by the conflicting legal requirements of different governments. Such cases are becoming much more frequent in the era of the Internet. So, perhaps it’s time to ask ourselves whether governments have an obligation to talk more to each other. Every child knows what it’s like to have mom tell you one thing, and dad tell you just the opposite. In a functional family, mom and dad will talk and work it out. In a dysfunctional family, the kids will just have to choose between listening to mom or to dad.

The legal arena of resolving disputes of international jurisdiction has always been complicated, turning on numerous factors: the location of the company providing a service, the location of the users of the service, the location of the equipment providing the service, and so on. These factors are sometimes inconclusive and contradictory themselves. But such jurisdictional conflicts arise constantly on the Internet, a medium which naturally crosses borders. Ironically, many of the legal mechanisms in the world to resolve such jurisdictional disputes are ancient, from an era when cross-border disputes were rare.

The time has come for governments and multinational companies to assume their respective responsibilities and work with each other to develop a process to discuss and resolve cross-border legal conflicts. The fight against child pornography and terrorism, to take these examples from recent cases, is simply too important to be hampered by legal conflicts. The answer is not to expect companies to work out a solution to contradictory laws on their own. It’s time for mom and dad to talk.

We Need to Update European Privacy Laws for the Information Age

The global flow of information is one of the most powerful trends of our age. Every time we use a mobile phone, a credit card, or the Internet, our information races across computer networks, and often around the globe. But many European privacy laws were designed for another era, before the Internet. These laws restrict the flow of personal data outside of Europe. It’s time to take a fresh look at technology trends and to update privacy laws. We should be able to protect privacy without sacrificing the amazing benefits of global information flows.

Under the 1995 EU Data Protection Directive, it’s illegal to transfer personal data to a country that does not have “adequate” data protection, as defined by the European Commission. The Commission has taken the approach that only countries with a clone of European privacy laws (e.g., Hong Kong, Argentina, Guernsey) have “adequate” laws, while countries like the US, Brazil and Japan do not. Moreover, many privacy laws require a company to obtain prior approval from the local data protection authority to transfer data to a third country, even to its own subsidiary, despite the fact that the authorities are often over-worked and under-staffed, and sometimes need months to review a request for transfer.

Efforts to fix this conundrum have been well-intentioned, but remain unsatisfactory. Transfers of data from Europe to the US can be legalized under the so-called Safe Harbor Agreement, but that arrangement does nothing for data flows to all the other countries of the world. Another regulatory initiative, to impose “binding corporate rules” on companies so that they can transfer data within their corporate group, has become bogged down in a bureaucratic maze that requires companies to obtain separate approval from every data protection authority in Europe.

But the whole idea of regulating privacy based on restrictions on transfers of data across borders is obsolete. Increasingly, data lives in the Internet “cloud”. In other words, information and applications are migrating from the architecture of PCs and their mainframe servers to “cloud” computing, with information and applications hosted in cyberspace. And the total amount of information is exploding, as more people come online, as more information is digitized, and as more devices become Internet-enabled.

The key to protecting privacy for the Information Age is to make sure that people can control their data, wherever it is located: 1) they need to get clear notice about the privacy practices of companies that collect their data, 2) they need to be given meaningful choices about how their data will be used, and 3) they need to trust systems to provide a higher level of protection for sensitive data like credit card numbers and personal health information.

Today, a huge amount of effort is spent on regulating the transfer of data outside of Europe. But we should confront the bigger challenge of making sure that privacy is respected, regardless of where the data is located. Yes, this will require better international collaboration amongst governments and companies across borders, and indeed, a flexibility to respect privacy regimes which may be different from our own. If companies and privacy regulators work together, we will certainly be able to develop a simple and unbureaucratic system to ensure that privacy is respected while data flows in the “cloud”. We’ll have to focus more on the key principles of privacy protection and international collaboration. Europe led the way in inventing privacy laws, and now we have a chance to lead the way in updating them for the Information Age.