Friday, March 11, 2011

France re-writes the rules of data retention

When Europe introduced a Data Retention Directive in 2006, it struck a very careful political and legal balance between the interests of privacy and the interests of law enforcement and government access to data. The core distinction of the Directive was to impose an obligation on service providers to retain and produce traffic data relating to communications, but to exclude the contents of communications. Notwithstanding this careful balance, the Directive has always been highly controversial. There has been a long debate about whether this Directive, and the balance it struck, is constitutional under national privacy laws, and indeed, last year its German implementation was held unconstitutional by the German Constitutional Court.

Surprisingly, very few people have noticed what just happened in France. The law (technically, a decree) adopted a few days ago in France up-ended the careful political and legal balance of the Directive by inserting one little word: "passwords". In other words, passwords are added to the list of "traffic data" that ISPs have to retain and produce to the French police on demand. Interestingly, the version of the law that had been circulating for discussion in France for the last two years, and which was reviewed by the French privacy authority, the CNIL, and by industry associations, did not contain that little word "password". The word was inserted at the last minute, with no public or privacy review, as far as I can tell.

Stop to reflect for just a minute. Why would police want a password, and what would they do with it? Well, obviously, they would use it to look at the "content" of communications. In other words, a password would grant them access to all the things that the Directive explicitly chose not to subject to data retention, in the interests of privacy.

All the years of work by privacy advocates have been chucked aside, in one little word. Well, three in French: "mot de passe".

I'm sure legal challenges to this French law will not be far behind. Curiously, only a few lone voices in the press or advocacy community seem to have noticed all this.

Wednesday, March 9, 2011

Foggy thinking about the Right to Oblivion

I was lucky enough to spend a few days in Switzerland working on Street View, and I treated myself to a weekend of skiing too. The weather wasn't great; we had a lot of mountain fog. But then, the entire privacy world seems to be sort of foggy these days.

In privacy circles, everybody's talking about the Right to be Forgotten. The European Commission has even proposed that the "right to be forgotten" should be written into the upcoming revision of the Privacy Directive. Originally a rather curious French "universal right" that doesn't even have a proper English translation (right to be forgotten? right to oblivion? right to delete?), le Droit a l'Oubli is going mainstream. But what on earth is it? For most people, I think, it's an attempt to give people the right to wash away digital muck, or delete the embarrassing stuff, or just start fresh. But unfortunately, it's more complicated than that.

More and more, privacy is being used to justify censorship. In a sense, privacy depends on keeping some things private, in other words, hidden, restricted, or deleted. And in a world where ever more content is coming online, and where ever more content is findable and shareable, it's natural that the privacy counter-movement is gathering strength. Privacy is the new black in censorship fashions. It used to be that people would invoke libel or defamation to justify censoring things that hurt their reputations. But invoking libel or defamation requires that the speech not be true. Privacy is far more elastic, because privacy claims can be made about speech that is true.

Privacy as a justification for censorship now crops up in several different, but related, debates: le droit a l'oubli, the idea that content (especially user-generated content on social networking services) should auto-expire, the idea that data collected by companies should not be retained for longer than necessary, the idea that computers should be programmed to "forget" just like the human brain. All of these are movements to censor content in the name of privacy. If there weren't serious issues on both sides of the debate, we wouldn't even be talking about this.

Most conversations about the right to oblivion mix all this stuff up. I can't imagine how to have a meaningful conversation (much less write a law) about the Right to Oblivion without some framework to disentangle completely unrelated concepts, with completely unrelated implications. Here's my simple attempt to remember the different concepts some people want to forget.

1) If I post something online, should I have the right to delete it again? I think most of us agree with this, as the simplest, least controversial case. If I post a photo to my album, I should later be able to delete it if I have second thoughts about it. Virtually all online services already offer this, so it's unproblematic, and this is the crux of what the French government sponsored in its recent Charter on the Droit a l'Oubli. But there's a big disconnect between a user deleting content from his/her own site and whether the user can in fact delete it from the Internet (which is what users usually want to do); more on that below.

2) If I post something, and someone else copies it and re-posts it on their own site, do I have the right to delete it? This is the classic real-world case. For example, let's say I regret having posted that picture of myself covered in mud, and after posting it on my own site, and later deleting it, I discover that someone else has copied it and re-posted it on their own site. Clearly, I should be able to ask the person who re-posted my picture to take it down. But if they refuse, or just don't respond, or can't be found, what can I do? I can pursue judicial procedures, but those are expensive and time-consuming. I can go directly to the platform hosting the content, and if the content violates its terms of service or obviously violates the law, I can ask the platform to take it down. But practically, if I ask a platform to delete a picture of me from someone else's album, without the album owner's consent, and based only on my request, it puts the platform in the very difficult or impossible position of arbitrating between my privacy claim and the album owner's freedom of expression. It's also debatable whether, as a public policy matter, we want platforms to arbitrate such dilemmas. Perhaps this is best resolved by allowing each platform to define its own policies, since those policies could legitimately go either way.

3) If someone else posts something about me, should I have a right to delete it? Virtually all of us would agree that this raises difficult issues of conflict between freedom of expression and privacy. Traditional law has mechanisms, like defamation and libel law, to allow a person to seek redress against someone who publishes untrue information about him. Granted, the mechanisms are time-consuming and expensive, but the legal standards are long-standing and fairly clear. But a privacy claim is not based on untruth. I cannot see how such a right could be introduced without severely infringing on freedom of speech. This is why I think privacy is the new black in censorship fashion.

4) The Internet platforms that are used to host and transmit information all collect traces, some of which are PII, or partially PII. Should such platforms be under an obligation to delete or anonymize those traces after a certain period of time? If so, after how long? And for what reasons can such traces be retained and processed? This is a much-debated topic, e.g., the cookies debate, the logs debate, the data retention debate, all of which are also part of the Droit a l'Oubli debate, but they are completely different from the categories above, since they focus on the platform's traffic data rather than the user's content. I think existing law deals with this well, if ambiguously, by permitting such retention "as long as necessary" for "legitimate purposes". Hyper-specific regulation just doesn't work, since the cases are simply too varied.
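To make that concrete, here is a minimal sketch in Python of what a "retain, then anonymize" policy for a platform's access logs might lookks like. Everything in it is my own illustration: the 180-day window, the record fields, and the function names are assumptions for the example, not anything prescribed by the Directive or used by any particular platform.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Assumed policy window; "as long as necessary" is not a fixed number in law.
RETENTION_WINDOW = timedelta(days=180)

@dataclass
class LogRecord:
    timestamp: datetime
    ip_address: str    # PII: ties the record to a person's connection
    request_path: str  # traffic data, not communications content

def anonymize_expired(records: list[LogRecord], now: datetime) -> None:
    """Blank out the PII field on records older than the retention window."""
    for record in records:
        if now - record.timestamp > RETENTION_WINDOW:
            # Keep the non-identifying traffic data for aggregate statistics,
            # but remove the field that identifies a person.
            record.ip_address = "0.0.0.0"

logs = [
    LogRecord(datetime(2010, 1, 5, tzinfo=timezone.utc), "203.0.113.7", "/photos/42"),
    LogRecord(datetime(2011, 3, 1, tzinfo=timezone.utc), "198.51.100.9", "/albums/mud"),
]
anonymize_expired(logs, now=datetime(2011, 3, 9, tzinfo=timezone.utc))
print([r.ip_address for r in logs])  # ['0.0.0.0', '198.51.100.9']
```

Even in this toy version, the hard questions are the ones the law leaves open: how long the window should be, and which fields count as identifying.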

5) Should the Internet just learn to "forget"? Quite apart from the topics above, should content on the Internet just auto-expire? For example, should all user posts to social networking services be programmed to auto-expire? Or alternatively, should users have the right to use auto-expire settings? Philosophically, I'm in favor of giving users power over their own data, but not over someone else's data. I'd love to see a credible technical framework for auto-delete tools, but I've heard about a lot of technical problems with realizing them. Engineers describe most auto-delete functionalities as 80% solutions, meaning that they never work completely. Just for the sake of debate: at one extreme, government-mandated auto-expire laws would be as sensible as burning down a library every 5 years. Even if auto-expire tools existed, they would do nothing to prevent the usual privacy problems when someone copies content from one site (with the auto-expire tool) and moves it to another (without it), as the sketch below illustrates. So, in the real world, I suspect that an auto-expire functionality (regardless of whether it was optional or mandatory) would provide little practical privacy protection for users, but it would result in the loss of vast amounts of data and all the benefits that data can hold.
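Here is a minimal sketch of what a per-post auto-expire setting could look like, and of why it stops at the first copy. The Post class and its fields are hypothetical names I've made up for the example, not any real service's API; the point is only that the expiry travels with the original record, not with the content.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

class Post:
    def __init__(self, content: str, expires_after: Optional[timedelta] = None):
        self.content = content
        self.created = datetime.now(timezone.utc)
        self.expires_after = expires_after  # None means the post never auto-expires

    def is_visible(self, now: datetime) -> bool:
        """A post is visible until its (optional) expiry window has elapsed."""
        if self.expires_after is None:
            return True
        return now - self.created < self.expires_after

original = Post("photo of me covered in mud", expires_after=timedelta(days=30))

# The copy is a brand-new record: the auto-expire setting does not follow
# the content to the copier's site.
repost = Post(original.content)

later = original.created + timedelta(days=31)
print(original.is_visible(later))  # False: expired on the original site
print(repost.is_visible(later))    # True: the copy lives on
```

This is the 80% problem in miniature: the deletion logic governs one database, while the privacy harm follows the content wherever it has been copied.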

6) Should the Internet be re-wired to be more like the human brain? This seems to be a popular theme on the privacy talk circuit. I guess this means the Internet should have gradations between memory, sort of hazy memories, and forgetting. Well, computers don't work that way. This part of the debate is sociological and psychological, but I don't see a place for it in the world of computers. Human brains also adapt to new realities, rather well, in fact, and human brains can forget or ignore content even if the content itself continues to exist in cyberspace.

7) Who should decide what should be remembered or forgotten? For example, if German courts decide that German murderers should be able to delete all references to their convictions after a certain period of time, would this German standard apply to the entire Web? Would it apply only to content that was new on the Web, or also to historical archives? And if it applied only to Germany, or say the .de domain, would it have any practical impact at all, since the same content would continue to exist and be findable by anyone from anywhere? Or to make it more personal: the web is littered with references to my criminal conviction in Italy, but I respect the right of journalists and others to write about it, with no illusion that I should have a "right" to delete all references to it at some point in the future. But all of my empathy for wanting to let people edit out some of the bad things of their past doesn't change my conviction that history should be remembered, not forgotten, even if it's painful. Culture is memory.

8) Sometimes people aren't trying to delete content; they're just trying to make it harder to find. This motivates various initiatives against search engines, for example, to delete links to legitimate web content, like newspaper articles. This isn't, strictly speaking, "droit a l'oubli", but it's a sort of end-run around it, trying to make some content un-findable rather than deleted. This will surely generate legal challenges and counter-challenges before the debate is resolved.

Next time you hear someone talk about the Right to Oblivion, ask them what exactly they mean. Foggy thinking won't get us anywhere.