It used to be said that “pour vivre heureux, vivons cachés” – to live happily, live hidden. If only life were still that simple. But today all of us regularly trust other people with our personal information. Mobile phones pinpoint where we are to within a few hundred meters. Credit cards record what we like to eat, where we shop and the hotels we stay in. Search engines log what we are looking for, and when.
This places a huge duty on business to act responsibly and treat personal data with the sensitivity it deserves – of which more later. But it also raises important questions for governments, which increasingly see the information companies hold on their customers as a valuable weapon in the fight against terrorism.
For decades politicians have had to strike a balance between personal privacy and the power of the police when drafting criminal justice legislation – and generally they have erred on the side of caution, aligning themselves with the rights of the individual. But in the aftermath of the atrocities on 9/11 and the horrendous bombings in Madrid and London, governments globally have sought to redress that balance – giving more power to the police and in the process starting a fierce debate about where the boundary between security and privacy lies.
The Patriot Act in the United States, for example, made it easier for law enforcement agencies to access people’s personal data so that they could more quickly investigate acts of terrorism. It has been widely criticized for overriding longstanding safeguards designed to protect individual liberty. In Europe politicians have taken a different approach – although the consequences look as if they will be the same: an erosion of personal privacy. The EU Data Retention Directive requires phone operators and Internet companies to store data on their users – such as the emails they send and receive – for between six and 24 months so that the police can use it to investigate serious crimes.
Many people will see nothing wrong with this approach, arguing that it will affect only terrorists and that the innocent have nothing to hide. However, as is so often the case, the problem lies in the detail, which will vary country by country as different governments implement the Directive in different ways. In Italy, for example, the 2005 Act on Urgent Measures to Fight International Terrorism – which effectively anticipated the Directive – led to the suspension of certain privacy provisions in the Italian Data Protection Code. The Act also requires companies to store Internet traffic data for twelve months to help investigate terrorism and serious crime. In Germany the Ministry of Justice has decided that anyone who provides an email service must verify the identity of their customers before giving them an account – effectively ending the use of anonymous email.
The Data Retention Directive is being challenged on many fronts. Some question whether it will actually help in the fight against terrorism, since tech-savvy people will be able to use the Internet in ways that leave no traceable tracks. Nor is it at all clear that the benefits outweigh the additional security risks posed by the creation of such massive databases. And then there is the whole question of whether this Directive actually applies to companies based outside Europe.
Take Google, for example. We do not ask our users to prove their identity before giving them an email address – and we think it would be wrong to do so, because we believe that people should have the right to use email anonymously. Just think about dissidents. We would therefore challenge any government attempt to make us do this. Of course we recognize our responsibility to help the police with their inquiries when they have been through the proper legal process. While most people use the Internet for the purposes for which it was intended – to help humankind communicate and find information – a tiny minority do not. And it is important that when criminals break the law they are caught.
But we think personal privacy matters too. From the start Google has tried to build privacy protections into our products at the design stage – for example, we have an “off the record” setting on our instant messaging service so that people cannot store each other’s messages without permission. And we allow people to use many of our services without registration. We want our privacy policies, like our search engine, to be simple and easy to understand – not the usual legal boilerplate.
Nor do we believe that there are always right and wrong answers to these complex issues. That’s why we keep our policies under constant review and discuss them regularly with data protection specialists. For example, we have recently decided to change our policy on retaining users’ old log data. We will make this data anonymous after 18 to 24 months – though if users want us to keep their logs for longer so that they can benefit from personalized services, we will. This change will provide additional safeguards for users’ privacy while enabling us to comply with future data retention requirements.
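To make the idea of log anonymization concrete, here is a minimal sketch of one common approach: truncate the last octet of the IP address and drop the unique cookie identifier once an entry passes a retention cutoff. The field names (`ip`, `cookie_id`, `timestamp`) and the 18-month cutoff are illustrative assumptions, not Google’s actual log schema or implementation.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=18 * 30)  # illustrative 18-month cutoff

def anonymize_entry(entry, now):
    """Strip identifying fields from a log entry once it is older than
    the retention cutoff. Field names here are hypothetical."""
    if now - entry["timestamp"] < RETENTION:
        return entry  # still within the retention window: keep as-is
    scrubbed = dict(entry)
    # Truncate the last octet of the IPv4 address: 203.0.113.42 -> 203.0.113.0
    octets = scrubbed["ip"].split(".")
    scrubbed["ip"] = ".".join(octets[:3] + ["0"])
    scrubbed["cookie_id"] = None  # drop the unique cookie identifier
    return scrubbed

entry = {"timestamp": datetime(2005, 1, 1), "ip": "203.0.113.42",
         "cookie_id": "abc123", "query": "hotel in Rio"}
print(anonymize_entry(entry, datetime(2007, 4, 1))["ip"])  # 203.0.113.0
```

Note that truncating an IP address reduces, but does not eliminate, identifiability – which is exactly the kind of question the anonymity debate discussed later in this post turns on.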
In the meantime we expect to see the debate on privacy intensify as the Data Retention Directive is passed into law across Europe. The European Union has written both privacy and security into its Charter of Fundamental Rights. Important principles are at stake here – and an open and honest discussion is important if we are to balance these two, often conflicting, principles.
Tuesday, April 24, 2007
Tuesday, April 17, 2007
Online Ad Targeting
Google’s plan to acquire DoubleClick has refocused attention on the privacy issues in online ad targeting. Let’s be frank: in privacy terms, some practices in the online ad targeting industry are good, others are bad, and some could be improved. I am convinced that this acquisition will start a process of improving privacy practices across the ad targeting industry. To improve them, we need to start by understanding them.
We live in an age when vast amounts of content and services are available to consumers for free. That has been made possible by the growth of online ad targeting, which provides the economic foundation for all of it. Given the enormous economic role that ad targeting now plays in sustaining the web, it is important to analyze its privacy implications very carefully. Of course, advertising subsidized many services before the Internet, such as TV, radio and newspapers. And advertisements in those media have always been targeted at their audiences: a TV program on gardening carries different types of ads than a football match, because advertisers assume the two audiences fit different demographic profiles. Although those advertisements are “targeted” based on demographics, the viewers remain anonymous, and hence no real privacy issues arise.
Online, the issues of ad targeting are more complicated, and in terms of privacy practices, there is a wide spectrum. On the responsible end, ad targeting respects the core privacy principles: providing notice to end-users and respecting their privacy choices. On the bad end of the spectrum, “adware”, a type of spyware, is malicious software which engages in unfair and deceptive practices, such as hijacking and changing settings on a user’s machine, and making itself hard to uninstall. Below are thoughts about how to keep ad targeting on the responsible end of the spectrum.
Ad targeting is based on “signals”, and these signals can be either anonymous or “personally-identifiable information” (known as PII). To analyze privacy implications, the first question to ask about ad targeting is whether it is based on anonymous signals or on PII. Moreover, there are roughly two categories of signals (demographic and behavioral), and each of them can be either anonymous or PII.
Anonymous ad targeting is the most common form of ad targeting on the Internet. There are many different types of demographic signals, such as location, language, or age. For example, ads are routinely targeted to people who live in a particular location: an advertiser may wish to target people who live in Paris, which can be done by geolocating end-users’ IP addresses. Or an advertiser may wish to target people who speak a particular language, such as French, which can be done based on the language settings in end-users’ browsers or based on language preferences in their cookies. Or an advertiser may wish to target a young demographic, which might be done by targeting ads to sites where young people congregate, such as social networking sites. Anonymous ad targeting can also be based on an end-user’s behavior, such as the keyword search term that someone types. If I type the search “hotel in Rio”, Google may show me an ad for a hotel in Rio. This is a contextual ad, related to the search term, and based on the “behavior” of the person who typed it. It can be done without knowing the identity of the person typing the search.
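As an illustration of the language signal mentioned above: browsers send an Accept-Language request header, and a server can pick the highest-weighted language tag from it. The sketch below is a deliberately simplified parser (it ignores wildcard tags and malformed quality values) and is only an assumption about how such a signal might be extracted, not any particular ad system’s code.

```python
def preferred_language(accept_language):
    """Pick the highest-weighted language tag from an HTTP
    Accept-Language header value (simplified parsing)."""
    choices = []
    for part in accept_language.split(","):
        fields = part.strip().split(";")
        tag = fields[0].strip()
        q = 1.0  # per HTTP, a tag with no q-value has quality 1.0
        for f in fields[1:]:
            if f.strip().startswith("q="):
                q = float(f.strip()[2:])
        choices.append((q, tag))
    return max(choices)[1]

print(preferred_language("fr-FR,fr;q=0.9,en;q=0.8"))  # fr-FR
```

A header like the one above would identify a French-speaking user without revealing who that user is, which is what makes this kind of signal anonymous.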
Ad targeting can also be based on PII. For example, a retailer may target ads to me, as an identifiable person, because I have bought particular books from them in the past, and they have developed a profile of my likely interests. The key privacy principles which govern the collection and use of PII are “notice” and “choice”. So, any ad targeting based on PII needs to be transparent to end-users and to respect their privacy preferences.
The use of third-party cookies for ad targeting requires special care. When end-users visit a site, say xyz.com, they may receive a cookie from that site; this is known as a first-party cookie, since it was set by the site the end-users were visiting. When a website uses an advertising network to serve ads on its site, the advertising network may set its own cookies on end-users’ machines to help target ads. Because end-users receive these cookies from the advertising network while they are on xyz.com, the advertising network’s cookies are known as third-party cookies.
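The first-party/third-party distinction comes down to comparing a cookie’s domain with the host of the page being visited. The helper below is a simplified sketch of that check; real browsers use a registrable-domain list (the Public Suffix List) rather than this naive suffix comparison, so treat it as illustrative only.

```python
def is_third_party(page_host, cookie_domain):
    """Return True if a cookie's domain does not match the host of the
    page being visited (simplified registrable-domain check)."""
    page = page_host.lower().lstrip(".")
    cookie = cookie_domain.lower().lstrip(".")
    # A cookie is first-party if its domain equals the page host or is a
    # parent domain of it (e.g. domain=xyz.com on www.xyz.com).
    return not (page == cookie or page.endswith("." + cookie))

print(is_third_party("www.xyz.com", ".xyz.com"))       # False: first-party
print(is_third_party("www.xyz.com", "ads.adnet.com"))  # True: third-party
```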
Third-party cookies present particular challenges in terms of transparency and choice to end-users. Some users may not be aware that they are receiving cookies from third parties at all. Others may be aware of receiving them, but may not know how to accept or reject them.
The Network Advertising Initiative (“NAI”) has published a set of privacy principles in conjunction with the Federal Trade Commission. http://www.networkadvertising.org/industry/principles.asp
Among other things, these principles set standards for notice and choice in ad targeting based on third-party cookies, and they have been adopted by many NAI member companies, including DoubleClick. They require that all websites served by these networks inform their end-users that, to quote:
1) “The advertising networks may place a 3rd party cookie on your computer;
2) Such a cookie may be used to tailor ad content both on the site you are visiting as well as other sites within that network that you may visit in the future.”
In addition to requiring notice to consumers about the use of third-party cookies, the NAI principles mandate that member advertising networks provide an opt-out mechanism for the targeted ad programs they provide.
It seems to me that these NAI principles are right to focus on notice and consent for end-users. As so often, there is room to scrutinize the individual implementations of these principles. Among privacy advocates, we will continue to debate the meaning of “anonymity”, and whether the types of unique identifying numbers used in advertising networks’ cookies can be linked to identifiable users under particular circumstances. There is a wide spectrum from “anonymity” to “identifiability”, so there is also a need for a constructive policy debate about the level of anonymity to be expected in online ad targeting. Similarly, there is room for debate about the way choices are presented to end-users: Are the notices clear? Does the end-user have meaningful choices? Are the end-user’s choices respected?
Most companies facilitating online ad targeting, like DoubleClick, have operated in the background. Because they have generally not been consumer-facing sites, many consumers do not understand how they work. Google only recently announced its plans to acquire DoubleClick, so it’s too early to list any specific privacy improvements that it might try to make, although it’s not too early to start thinking about them.
I think it’s a good thing for people to become more aware of online ad targeting. It’s an industry that has operated in the shadows for too long. The attention that this deal may generate can do a lot of good. In the weeks and months ahead, I’ll be speaking with lots of privacy stakeholders, to solicit their ideas about how privacy practices could be improved in this industry. I’m optimistic that the process to improve transparency and user choice in online ad targeting has gotten a fresh impetus.
Friday, April 6, 2007
Protecting Privacy on the Internet
LE MONDE 05.04.07
It used to be said that “pour vivre heureux, vivons cachés” – to live happily, live hidden. If only life were that simple. Today, we entrust our personal information to third parties. Mobile phones can locate us to within a few hundred meters; credit cards record our favorite dishes, our favorite shops and the hotels we visit. Search engines log the date and subject of our searches.
Businesses therefore bear the heavy responsibility of treating our personal data with the respect it deserves. But this also raises important questions for governments, which increasingly see the information companies hold about their customers as a valuable weapon in the fight against terrorism.
In the aftermath of 9/11 and the horrific bombings in Madrid and London, governments have broadly sought to redefine the balance between privacy protection and police powers by giving the police more power. This has sparked a fierce debate over the boundary between security and privacy. In the United States, for example, the Patriot Act made it easier for public authorities to access citizens’ personal data in order to speed up investigations into acts of terrorism. The law has been criticized for undermining longstanding safeguards designed to protect individual liberties.
In Europe, governments have taken a different approach, but one whose consequences risk being the same: an erosion of privacy protection. In France, the decree of March 24, 2006 sets the retention period for electronic communications data at one year to assist the police in their criminal investigations. More broadly, the EU Data Retention Directive requires telephone operators and Internet service providers to retain all of their subscribers’ connection data for between six and twenty-four months so that the police can use it in investigations of serious crimes.
Few people will object to this, considering that it will only ever affect terrorists, the innocent having nothing to hide. But, as so often, the problems will arise in the implementation details, which will vary from one country to another. In Germany, for example, the Ministry of Justice has decided that any email service provider must verify the identity of its customers before opening an account for them – in practice, ending all anonymous use of email.
The Data Retention Directive has drawn criticism. Some doubt that it can contribute to the fight against terrorism, since computer-savvy users will be able to use the Internet without leaving traces. Moreover, it is not certain that the benefits of this legislation outweigh the security risks created by building such vast databases of personal information. Finally, many questions arise about the international application of this Directive – in particular for companies established outside the EU.
Take Google as an example. We do not ask our users to show identification before giving them an email address – and we think that would be wrong, because we believe that citizens should retain the right to use email anonymously (just think of political dissidents). That is why we would oppose any government initiative in that direction. We are, however, fully aware of our obligation to assist the police in their investigations, provided the proper legal framework is respected. While the vast majority of Internet users use the Internet for the purpose for which it was designed – to communicate and find information – some do not, and it is important that criminals operating on the Net can be prosecuted.
Nevertheless, it seems just as important to us that privacy be protected. From the beginning, Google has sought to build privacy protection into its services from the design stage. For example, our instant messaging services have a private mode that makes it impossible to record conversations without permission. In addition, we let people use many of our services without having to register first. Just like our search engine, we want our privacy policy to be simple and clear, and for that reason we do not use the usual legal jargon.
Nor do we believe there are simply right or wrong ways to resolve these complex issues. Our policies are therefore regularly reviewed and submitted to data protection specialists. We have decided to change our policy on retaining users’ connection data (connection logs including the IP address, the date and time of connection, search terms, and cookies). These data will be anonymized after eighteen months, or twenty-four months at the latest, except where the law requires longer retention. Users who wish to benefit from personalized services will, however, be able to have their data retained longer if they so choose. This new policy will further strengthen the protection of users’ privacy, while allowing us to anticipate our data retention obligations.
In the meantime, the debate will no doubt intensify as the Data Retention Directive is implemented across the countries of Europe. The EU has written both privacy and security into its Charter of Fundamental Rights. Major principles are at stake here, and an open and honest discussion is essential if we are to strike the right balance between these two essential and often conflicting principles.
Peter Fleischer is head of personal data protection, Google Europe
Article published in the edition of April 6, 2007