Monday, January 15, 2007

Three Ideas to Update Data Protection

It’s time to update European data protection, and I’ve got three concrete suggestions.

The problem is not with the basic principles of data protection law, but with the way they have failed to evolve to adapt to the information age. Since the adoption of the first EU data protection directive in 1995, companies have come to accept that effective privacy protection is a necessity for a flourishing economy, particularly in the online sector. And individuals in Europe demand effective privacy and data protection. European data protection law has also become influential outside the EU. In recent years, countries such as Argentina, Japan, and New Zealand have adopted privacy laws that show the influence of the European model. Even the US, which was formerly highly critical of EU privacy law, has become more open to seeing the good in the European system.

However, several principles of EU privacy law are out of date and need to be adapted to the global information economy. Foremost among these are the restrictions on the transfer of personal data outside the EU. In the past, such a transfer meant packing a computer tape or paper files into a box and shipping them to a faraway location. Nowadays, however, almost any activity on the internet involves a transfer of data outside the EU, so that strict application of these laws would shut the Internet down. Moreover, studies have shown that privacy protection inside the EU leaves a lot to be desired, so it is not clear why a transfer of personal data outside the EU necessarily poses greater risks to privacy than the processing of the same data within the EU would. Recent press scandals about outsourcing to India show only the bad actors; in fact, many such outsourcing centers have higher privacy and security standards than equivalent installations in Europe do.

Moreover, the way the principles of EU privacy law are implemented is mired in red tape. While some progress has been made in recent years, far too many bureaucratic hurdles are placed on the processing of personal data. For example, most Member States require that individual databases be registered with the national data protection authority, but there is no single, EU-wide procedure for such registration, so a company running a database that is accessible in all Member States will have to register the same database in different ways across the entire EU. Member States also do not recognize each other's authorizations, even though such mutual recognition has become routine in other sectors (such as the licensing of pharmaceutical products). To give an example: in one case a company had obtained permission to transfer personal data from the authorities of twenty-two Member States, but the data protection authority of one remaining Member State required an additional year of deliberation before giving its permission, thus holding up the entire data transfer. One of the main purposes of EU data protection law is to provide a minimum floor of data protection throughout the Community, so it is strange that national data protection authorities are not willing to grant each other's authorizations some form of mutual recognition.

The most glaring gap in EU data protection law is that it does not apply to activities by law enforcement, military, and national security authorities. Thus, EU citizens have data protection rights that they can assert against an online shop that sells their e-mail addresses without permission, but have no such rights when the police surreptitiously listen in on their telephone conversations, despite the far more serious breach of privacy that the latter action entails. It is thus not surprising that European governments are now seeking to collect personal data and build large databases in ways that would be illegal for companies, exploiting this loophole in the law. While the EU is currently discussing the passage of an instrument that would close this gap, I don't expect it soon.

Privacy Laws & Business Q&A on Search Queries

I'm a fan of Privacy Laws & Business. They publish a terrific International Newsletter.
They gave me permission to re-print here their Q&A on the privacy aspects of search queries, from Issue 84, October 2006.

Google’s privacy policy relates to many of its services. Peter Fleischer, the company’s
Privacy Counsel Europe, gives PL&B an insight into its approach. By Asher Dresner.
Question: Do you interpret any country’s data protection laws as meaning that search terms constitute personal data?
Answer: This is not a simple yes/no question. To answer it, you need to analyse both the source of a query and its content. Regarding the source, a query can be made either by a human being, or by a machine. The latter are sometimes called “bot” queries. Regarding the content, a query can be made on almost any topic that can be entered into a computer, such as words, numbers, even code strings. Some search queries may relate to an identifiable human being, eg, a query for “Bill Clinton”, and in that sense may constitute “personal data” about the data subject, which may or may not be subject to data protection laws. Most queries do not relate to an identifiable human being, such as a query for “weather in London”. In short, this is context-specific.
Question: Do you consider search terms to be personal data a) internally and b) externally? If so, do you have a policy for what you can and can’t do with search terms?
Answer: Our privacy policy governs how we handle "server logs", which include the query text. Our FAQ explains the contents of those server logs:
Moreover, we have a policy of never sharing search queries with anyone outside of Google if they contain personally identifiable information. We do, for example, post anonymous, statistical information about our searches on our Zeitgeist site:
Question: Under what circumstances would you authorise the release of search terms?
Answer: This would be governed by our Privacy Policy on “information sharing”:
Information sharing
Google only shares personal information with other companies or individuals outside of Google in the following limited circumstances:
- We have your consent. We require opt-in consent for the sharing of any sensitive personal information.
- We provide such information to our subsidiaries, affiliated companies or other trusted businesses or persons for the purpose of processing personal information on our behalf. We require that these parties agree to process such information based on our instructions and in compliance with this Policy and any other appropriate confidentiality and security measures.
- We have a good faith belief that access, use, preservation or disclosure of such information is reasonably necessary to (a) satisfy any applicable law, regulation, legal process or enforceable governmental request, (b) enforce applicable Terms of Service, including investigation of potential violations thereof, (c) detect, prevent, or otherwise address fraud, security or technical issues, or (d) protect against imminent harm to the rights, property or safety of Google, its users or the public as required or permitted by law.
- If Google becomes involved in a merger, acquisition, or any form of sale of some or all of its assets, we will provide notice before personal information is transferred and becomes subject to a different privacy policy.
- We may share with third parties certain pieces of aggregated, non-personal information, such as the number of users who searched for a particular term, or how many users clicked on a particular advertisement. Such information does not identify you individually.
Question: Do these circumstances differ in different countries or areas with different privacy laws?
Answer: Yes, because, pursuant to the clause above, there are differences among countries with regard to "any applicable law, regulation, legal process or enforceable governmental request".
Question: I understand that the information Google collects on users differs according to which Google product they are using (eg Google account, toolbar, Gmail, accelerator, etc). Could Google cross-reference this information with searches made from these products to find out who searched for what? For example, if a searcher has a Google account, can you identify which account a search term comes from (quite apart from the IP address)? If so, is this done, and under what circumstances?
Answer: From our Privacy Policy: Information you provide - When you sign up for a Google Account or other Google service or promotion that requires registration, we ask you for personal information (such as your name, e-mail address and an account password). For certain services, such as our advertising programs, we also request credit card or other payment account information which we maintain in encrypted form on secure servers. We may combine the information you submit under your account with information from other Google services or third parties in order to provide you with a better experience and to improve the quality of our services. For certain services, we may give you the opportunity to opt out of combining such information.
Question: If this information can be cross-referenced, under what circumstances would you authorise the release of search terms cross-referenced with the personal data users provided when they signed up to these services? For example if you had a US Justice Department request to release the search terms of all Google account holders whose sign-in name matched that of a terrorist suspect, would you release the terms?
Answer: As explained in our privacy policy, we will respect a valid legal order. The legal system has mechanisms to resolve questions relating to the specificity of the information being demanded.
Question: Does this situation differ in areas or countries with different privacy laws?
Answer: See answer to fourth question.
Question: If a resident of country A searches for something using a computer in country B, and their search term is stored in country C, which area’s privacy laws apply?
Answer: Resolving questions of jurisdiction in an international context is a complicated process, which takes into account numerous factors, such as the location of the person using the service, the location of the company providing the service, the location of the data, and other factors. Google’s Terms of Service are subject to the laws of the State of California, where Google is headquartered (see
Nonetheless, we are committed to being respectful of the laws of the various countries in which we do business.
Question: Do you have an internal policy governing what Google employees can do with search terms, and which employees have access to them? If so, would you please provide me with a copy of that policy?
Answer: Yes, we have a policy, and a written confidentiality agreement that we require employees with access to search terms (i.e., to server logs data) to sign. We do not share that policy externally.
Question: When a user of one of your services cancels the service (eg deletes their gmail account or uninstalls toolbar), for how long do you keep their personal data? Does this period differ according to the jurisdiction in which they are resident?
Answer: When a user terminates a Google service, the length of time that their personal data is retained varies from one service to another, and depends on the type of information. For example, some types of personal data are retained for legal, tax, or accounting reasons, such as purchase records from our Checkout service, and those retention periods are often dictated by applicable laws or regulatory practices. Other types of personal data, such as content that a user uploads to a service (such as Video), may remain on the service notwithstanding the cancellation of the user's account. Still other types, such as the e-mails in a person's Gmail account, should be deleted within a short period of time after the user closes his/her account. The retention periods do not currently differ according to the jurisdiction in which the user is resident, but it is possible that such changes will be made in the future.

Government Access to "Private" Data

Governments are becoming increasingly data-hungry. Largely because of concerns about terrorism, government agencies are seeking to collect and process more and more types of personal data. While much of this data collection may be necessary, the burden of collecting it unfortunately falls on companies that are also subject to conflicting data protection obligations.

Following 9/11, governments in both the EU and the US sought to greatly expand their access to different types of data, in order to investigate terrorist incidents and prevent others from happening. This collection covers areas such as money laundering, anti-terrorist financing, airline passenger data, customs information, telecommunications records, logs of web pages, and many other types of data. In today's globalized economy, much of this data is not collected or retained by governments themselves, but is held by private-sector companies.

But companies also have obligations under data protection and privacy law. In Europe, data protection law places heavy burdens on companies to only collect and process personal data for specific purposes and not to process them in other ways; to delete data once the purposes of processing have been ended; and not to pass on personal data to third parties (including governmental entities) without notice being given to individuals and, in some cases, only with consent.

Companies are certainly willing to do their part in the fight against terrorism, but they are often placed in the position of having to comply with conflicting data protection and law enforcement rules, so that it is almost inevitable that they will violate one of the two. For example, before the US and the EU recently reached an agreement on the transfer of airline passenger data to US law enforcement authorities, airlines flying from Europe to the US were in effect breaching EU data protection rules by transferring such data to the US Department of Homeland Security. In another case, the French data protection authority found that whistleblower complaint hotlines run in France by companies, many of which were obligated by US law to maintain such hotlines in their overseas operations, violated French data protection law. The number of these conflicts is only increasing.

Data protection and law enforcement regulators often seem to be operating in different worlds, and do not speak to each other sufficiently. What is needed is more communication between privacy regulators and those in other areas, and an overarching framework for privacy protection in the context of transferring personal data to law enforcement authorities. Moreover, this framework needs to be coordinated not only within Europe, but also between Europe and other countries like the US.

Friday, January 12, 2007

Asia: the new Thought Leader in Privacy?

Privacy and data protection have always been a big thing in Europe and in the US. In Europe, the experience of Nazism and communist dictatorship has given rise to comprehensive legal instruments giving people extensive control over how data relating to them are processed (referred to as "informational self-determination"), by both the government and private companies. In the US, there is a long-standing tradition of privacy rights, but traditionally most concern has been voiced about data processing by the government.

Europe and the US have become embroiled in a number of privacy-related squabbles recently. For example, European legal restrictions on transferring personal data outside the EU have led to objections in the US, where such restrictions are sometimes felt to be unfair and protectionist. On the other hand, Europeans have become increasingly upset by US processing of European data for law enforcement purposes. Thus, for example, requirements that airline passengers pass on their data to US law enforcement before boarding a plane bound for the US caused controversy recently. These disagreements are to some extent inevitable, given the differing cultures and legal traditions in Europe and the US, and to some extent are caused by simple misunderstanding and wounded pride. However, both the US and Europe are ignoring an important development, namely the growth of interest in privacy in Asia.

With over three billion people, Asia represents over half of humanity, and dwarfs both the US and Europe. Traditionally, however, privacy has not had a high value in many Asian cultures, which have been built more around communitarian ideas. Nevertheless, privacy regulation is exploding in the region. While a few Asian jurisdictions have long had some form of privacy regulation (such as Hong Kong and Korea), others have more recently passed laws, including Australia, Japan, New Zealand, and Taiwan. In addition, other countries (such as Singapore and Vietnam) are presently either drafting privacy legislation or seriously considering it.

There are several reasons for this surge in interest in privacy among Asian governments. One factor is the work done by the Asia Pacific Economic Cooperation group (APEC), which is a group of Asia-Pacific countries (including the US) that get together to increase cooperation among their economies. In 2004, APEC approved a set of non-binding privacy principles for governments to follow in passing privacy legislation, and is currently working on other privacy-related issues such as international data transfers.

In addition, Asian countries see an economic benefit in passing privacy legislation. Experience in Europe and the US has shown that such legislation is a pre-condition to increasing consumer trust, particularly trust in online commerce. Many large multinational companies are also hesitant to locate large-scale outsourcing operations in countries without any legal framework for addressing breaches of data security and data privacy. Finally, as their economies have developed, more and more Asian citizens are demanding some sort of privacy protection.

In developing their privacy framework, the Asia-Pacific countries have a unique opportunity to draw from the good of the European and US frameworks, and reject the bad. Of course, the sheer size of Asian countries, and the diversity of their cultural and legal systems, makes implementing the APEC framework a significant challenge.

The wild card in all of this is China. With over one billion people, any privacy law enacted by the Chinese government will have immense impact, both inside Asia and beyond. In fact, the government has begun drafting a privacy law, the outlines of which will likely become clear in the coming months.

Despite these developments, privacy regulators in Europe and the US remain focused on themselves, on each other, and on their differences. In our globalized world, it is crucial that regulators work together more; as the Asian economy picks up speed and China overtakes the US as the world's largest economy, it is dangerous to focus only on transatlantic privacy issues and ignore what is happening in the Asia-Pacific region. Europe and the US should realize that they need to make their privacy regimes interoperable with those in Asia. Any bilateral, or even regional, approach to regulating the flow of information is doomed to eventual failure. Both Europe and the US are already dwarfed in population by the Asia-Pacific region, and will eventually be dwarfed economically as well. It is in everyone's interest to regard privacy not only as a transatlantic issue, but as a global one.

Thursday, January 11, 2007

What can Europe learn from the experience with US Security Breach Notification Laws?

At my last count, 34 US States have laws that require companies to notify the public of certain security breaches in which personal information is compromised. Europe is now considering following suit. But turning any such proposal into effective practice will require a lot of work.
I think it's easy for all of us to agree on one thing: every individual is entitled to be told promptly when a company learns of a security breach that has resulted in the loss of personal data that may subject the individual to identity theft or other serious harm. It doesn't really matter whether the breach was the result of mistake or malice; our first instinct should be to protect the user against harm while at the same time figuring out and fixing the problem that led to the breach in the first place. Prompt notice is simply the first step to protecting the customer.
The first practical question, however, is whether every loss of personal data should require notice. Yes, when customers give their personal information to companies, which in turn store it for future use, we all like to think that the information about us is held in vault-like security. But the truth is that there is no perfect security: some people will abuse their position or power to steal information, others will hack into systems for the challenge of it, and we are all human in the end and make mistakes. So if personal information is lost, it is appropriate to ask which types of personal information should, if lost, trigger notice.
While reasonable people may differ, I think the trigger should be a material risk of harm to the individual, such as identity theft or financial loss. It is fair to say that, objectively, the loss of a laptop with customer-list information such as name, physical address, email address and phone number likely presents little risk of financial harm or identity theft to any individual. Such information is generally available from directories or other public sources, and most of us don't take steps to keep it out of the public domain. But if you couple that information with an account access code, or with other key data elements that would permit a person to apply for credit, such as a date of birth and government identity number, then the risk of harm certainly increases.
In the United States, with its 34 State security breach notice laws, this in fact is the prevailing legal standard: notice is required when a person's name is disclosed in combination with financial account information and access codes, or with a personal identifier like a social security number or driver's license number, so that there is a risk of harm occurring. This makes more sense than giving notice routinely. Individuals should not be moved to anxiety over disclosures that present no real risk of harm; notice should mean something and be viewed as an important alert to pay attention to one's credit card and bank statements. The diversity of security breach laws in the US has meant, however, that in many states notice must be given whether or not any harm has been caused by a breach, which has led to notification being given frequently. In fact, there is evidence that individuals in the US are becoming jaded by receiving frequent security breach notices. Sending a notice for every breach, even those that have no real effect on security, can produce a numbness that can itself represent a security risk.
Another practical issue to consider is the timing of any notice: it must be sufficiently prompt to afford individuals a meaningful opportunity to take steps to avoid harm. Yet companies likewise need time to investigate the cause of the breach, take remedial action, and prepare for notification. In some cases, companies quickly complete their investigation and determine that the compromise is the result of the illegal acts of a third party; a referral is then often made to law enforcement agencies. In some instances, law enforcement asks the service provider to refrain from giving notice or publicly disclosing the incident so that it can further investigate the crime and find the perpetrator. So it may be appropriate to delay notice at the request of law enforcement if doing so is in the public interest.
Giving notice is the easy part; responding to subsequent inquiries, however, takes planning. For example, companies that have been through the breach notification process have commented on the need to establish call center support to respond to customer inquiries that arise after receipt of a notification. Many companies also arrange for fraud protection insurance coverage and take steps to notify credit reporting agencies, banks and card issuers. These customer-friendly steps are important to ensuring a complete and accurate notification and for the protection of the customer. Such procedures would obviously need to be adapted to European conditions since, unlike the US, there is no EU-wide credit reporting capability.
A sensible security breach law would help consumers in Europe know when their personal data is safe, and when it might not be.

The EU Data Retention Directive

Many people wonder what will happen to their data when European governments begin to pass laws implementing the Data Retention Directive. Google's recent victory in a US court opposing a request to obtain data by the US Department of Justice (DOJ) may shed some light on the challenges raised by the new Directive.

Under the new Directive, every EU country must pass laws requiring phone and Internet companies to retain vast amounts of their users' data (both subscriber data, and so-called traffic and location data) for periods ranging from six months to two years. The goal of data retention is to make the data available to law enforcement for the investigation and prosecution of serious crime. At the same time, every European has a human right to have his or her personal data protected from unauthorized disclosure, and companies have both ethical and commercial reasons to protect the privacy of their users' data. It's noteworthy, however, that government agencies in the EU (such as law enforcement bodies) are not legally required to comply with the same data protection laws that companies like Google must follow.

The experience in the US, and Google's case, demonstrate that government and other law enforcement agencies are seeking access to stored data on a massive scale. The US DOJ originally sent a subpoena to Google for billions of URLs and two months' worth of users' search queries. The DOJ wanted this data not to further any particular pending investigation, but to test its theories about the effectiveness of software filters to protect children from harmful content, as part of a lawsuit to defend the constitutionality of the 1998 US Child Online Protection Act. Google chose to go to court to resist this massive government demand for data, which we felt was disproportionate to the uses to which the data were to be put. Thankfully, the Judge agreed and drastically reduced the scope to 50,000 URLs and zero search queries. The Judge agreed with Google that the government's request for this data had to be weighed against our users' legitimate expectations of privacy.

Data retention laws will be passed in the months ahead across Europe to implement the new EU Directive, and telecom and Internet companies will comply with them by retaining vast amounts of data. Law enforcement will then start demanding this information, but they are not currently bound by data protection laws (the EU is considering the extension of data protection laws to law enforcement, but the outcome of this effort is uncertain). Thus, while the US court upheld Google’s objections, law enforcement might have prevailed in a European court, despite the existence of data protection laws.

In Europe we need to have a much broader and open public discussion about how to make sure that our laws incorporate safeguards to ensure that law enforcement is provided with data that is relevant and proportionate, but not provided with unlimited access to data that most Europeans expect to be kept private. Not every company will go to court to ask a judge to get the balance right the way Google did. Google is certainly willing to fulfill its legal obligations both to respond to legitimate law enforcement requests for data, and to protect the data protection rights of its users. But the combination of the new Data Retention Directive that will mandate the creation of massive databases, and the failure of the EU to so far extend data protection law to law enforcement entities, creates a situation in which personal data may lack appropriate legal protection. I hope that national legislators pass data retention laws that are narrow in scope, and that the EU extends data protection law to law enforcement activities.

A German threat to Anonymous Email Accounts

There is a lot of confusion about anonymity on the internet, and what it means. In Europe, data protection law governs the processing of personal data on the internet. However, such law applies only if the data is "personal", that is, if it can be linked to a particular person. If data is totally anonymous, then by definition it cannot be linked to a person, and so it is not covered by the law.

Even though anonymity sounds like a black and white concept, it is actually much more flexible than we usually think. We all depend on a certain amount of anonymity in our daily lives; for example, if you buy a book in a bookstore with cash, you are assuming that your transaction will remain anonymous. In order to protect our private sphere, it is important that we have the choice to carry out certain transactions anonymously.

At the same time, even this example shows how anonymity is not absolute. You might be friends with the salesperson who sold you the book, so that he would know that you bought a particular book. The bookstore may also have surveillance cameras located near the cash registers to record transactions in case of theft. Most people don’t worry much about these types of small incursions into anonymity, since they are minor and nearly unavoidable.

However, recently governments have been trying to make broad inroads on anonymity for law enforcement purposes, particularly with regard to the internet. Governments are particularly nervous about anonymous e-mail accounts that are offered by many online services, since they believe that such accounts are used by terrorists and other criminals. In fact, recently the German justice minister even proposed eliminating or sharply restricting such anonymous e-mail, by requiring that individuals present a passport before they are able to open a webmail account. Here’s the proposal (in German only):

That's a bad idea. It would not even be technically possible to eliminate or restrict such accounts, since such e-mail services are freely available on the web from providers in other countries. Moreover, just as we all want a certain degree of anonymity when we buy a book, there are occasions when people want and need an anonymous e-mail account. There are many such scenarios: the dissident writing an account of political persecution to be sent to a newspaper abroad; the individual who wants to order something over the internet but doesn't want to use his office e-mail; or simply the ordinary internet user who is concerned about his privacy. All of them will want to consider using an e-mail account that isn't tied to their name or identity. There is nothing wrong with that, and it is no different from sending a letter to someone without putting your return name and address on the envelope.

Attempting to restrict or regulate anonymity on the internet, or even to ban anonymous e-mail accounts, will not only be ineffective; it will severely damage the trust that individuals and consumers place in the net. Politicians should be working to increase privacy protection and user trust on the net, rather than restricting them in this way.

Some thoughts on international conflicts of law in recent privacy controversies

The free flow of data around the globe is the lifeblood of the Information Age. And any company doing business on the Internet is participating in this global flow. So, what happens when a company finds itself challenged by different regulatory authorities with completely contradictory requirements?

There have been numerous recent cases of companies facing such conflicting legal requirements. Airlines flying from Europe to the US were confronted with US anti-terrorism laws requiring them to provide so-called Passenger Name Records to the US authorities, at the same time that privacy regulators in Europe held such transfers to be illegal under European data protection laws. SWIFT, the financial services company, was compelled by US anti-terrorism authorities to hand over large amounts of individual wire-transfer records, a disclosure that was then condemned by the Belgian data protection authority.

Similarly, Google was recently involved in a court action in Brazil with regard to its US-operated social networking site, Orkut. The Brazilian authorities were investigating Internet crimes, such as child pornography and hate speech directed against blacks and gays, and demanded that Google provide them with the personal information of certain Orkut users. But Google is a US-based company, and the applicable US privacy law, the Electronic Communications Privacy Act, prohibits it from answering any demand for its users’ personal data except pursuant to a valid legal order. Google said publicly that it had cooperated with the Brazilian authorities in numerous cases, and was prepared to cooperate again, as long as the Brazilian authorities directed a valid legal order to the operator of the service, namely Google Inc. in the US, and not to its sales office in Brazil (which neither operates the service nor has access to the data).

These are just a few examples of the many cases in which companies are torn between the conflicting legal requirements of different governments. Such cases are becoming much more frequent in the era of the Internet. So perhaps it’s time to ask whether governments have an obligation to talk more to each other. Every child knows what it’s like to have mom tell you one thing and dad tell you just the opposite. In a functional family, mom and dad will talk and work it out. In a dysfunctional family, the kids will just have to choose between listening to mom or to dad.

The legal arena of resolving disputes of international jurisdiction has always been complicated, turning on numerous factors: the location of the company providing a service, the location of the users of the service, the location of the equipment providing the service, and so on; and those factors are sometimes inconclusive or even contradictory among themselves. But such jurisdictional conflicts arise constantly on the Internet, a medium which naturally crosses borders. Ironically, many of the world’s legal mechanisms for resolving such jurisdictional disputes are ancient, dating from an era when cross-border disputes were rare.

The time has come for governments and multinational companies to assume their respective responsibilities and work with each other to develop a process for discussing and resolving cross-border legal conflicts. The fight against child pornography and terrorism, to take these examples from recent cases, is simply too important to be hampered by legal conflicts. The answer is not to expect companies to work out a solution to contradictory laws on their own. It’s time for mom and dad to talk.

We Need to Update European Privacy Laws for the Information Age

The global flow of information is one of the most powerful trends of our age. Every time we use a mobile phone, a credit card, or the Internet, our information races across computer networks, and often around the globe. But many European privacy laws were designed for another era, before the Internet. These laws restrict the flow of personal data outside of Europe. It’s time to take a fresh look at technology trends and update privacy laws. We should be able to protect privacy without sacrificing the amazing benefits of global information flows.

Under the 1995 EU Data Protection Directive, it’s illegal to transfer personal data to a country that does not have “adequate” data protection, as defined by the European Commission. The Commission has taken the approach that only countries with a near-clone of European privacy law (e.g., Argentina, Canada, Guernsey) offer “adequate” protection, while countries like the US, Brazil, and Japan do not. Moreover, many privacy laws require a company to obtain prior approval from the local data protection authority before transferring data to a third country, even to its own subsidiary, despite the fact that the authorities are often over-worked and under-staffed, and sometimes need months to review a single transfer request.

Efforts to fix this conundrum have been well-intentioned but remain unsatisfactory. Transfers of data from Europe to the US can be legalized under the so-called Safe Harbor Agreement, but that arrangement does nothing for data flows to the rest of the world. Another regulatory initiative, allowing companies to adopt “binding corporate rules” so that they can transfer data within their corporate group, has become bogged down in a bureaucratic maze that requires companies to obtain separate approval from every data protection authority in Europe.

But the whole idea of regulating privacy through restrictions on cross-border data transfers is obsolete. Increasingly, data lives in the Internet “cloud”: information and applications are migrating from the architecture of PCs and local servers to “cloud” computing, with information and applications hosted in cyberspace. And the total amount of information is exploding, as more people come online, more information is digitized, and more devices become Internet-enabled.

The key to protecting privacy in the Information Age is to make sure that people can control their data, wherever it is located: 1) they need clear notice about the privacy practices of the companies that collect their data, 2) they need meaningful choices about how their data will be used, and 3) they need to be able to trust that systems provide a higher level of protection for sensitive data like credit card numbers and personal health information.

Today, a huge amount of effort goes into regulating transfers of data outside of Europe. But we should confront the bigger challenge: making sure that privacy is respected regardless of where the data is located. Yes, this will require better international collaboration among governments and companies, and indeed the flexibility to respect privacy regimes that differ from our own. If companies and privacy regulators work together, we can develop a simple, unbureaucratic system to ensure that privacy is respected while data flows through the “cloud”. We’ll have to focus more on the key principles of privacy protection and on international collaboration. Europe led the way in inventing privacy laws; now we have the chance to lead the way in updating them for the Information Age.