Thursday, October 30, 2025

Worried about AI? Don’t worry, it will cure cancer

You have surely read the dire predictions from the leaders of the field of AI:  everyone from the heads of OpenAI and Anthropic to Bill Gates has been speaking about the massive challenges that AI will thrust onto our societies, including in particular an imminent wave of job destruction.  So, it’s refreshing to hear from Google’s President that AI “will cure cancer”.  https://fortune.com/2025/10/26/google-ruth-porat-cure-cancer-in-our-lifetime-with-ai/

Some of you will be cynical, and suggest that she is just whitewashing AI in the PR interests of her employer, and of herself.  She’s repeated that same line about AI curing cancer more times than Britney Spears has gotten wasted.  


I’d like to hear more leaders of companies building AI tools engage in a public discussion about the good and bad consequences of their inventions.  The tech industry is famous for privatizing gains and socializing losses.  In other words, building their businesses and their share prices on the good use cases of their inventions, but letting other people, governments or societies deal with the negative fall-out.  Heads I win, tails you lose.  For example, a company could make a fortune “curing cancer”, but would it be held responsible if the same AI tool it built to cure cancer were re-purposed to build bio-weapons?  


Let’s take the example of using AI to screen CVs.  Many people looking for jobs today will be auto-rejected by an AI bot.  There’s no transparency, and they won’t be told why.  One reason, among many possible but non-transparent ones, might be that they’re screened out for being “old”.   https://www.wired.com/story/ageism-haunts-tech-workers-layoffs-race-to-get-hired/  And indeed, in the US, a lawsuit on precisely this topic has been launched, accusing Workday of using AI tools to discriminate against older job applicants:  https://www.hklaw.com/en/insights/publications/2025/05/federal-court-allows-collective-action-lawsuit-over-alleged  


AI in recruitment is pretty simple.  Ask the AI to study the characteristics of “successful” employees, and go find more like them.  So, if the data about “successful” employees skews heavily to the age range of 25-39, the AI will look for more of the same and auto-reject the rest.  That’s how AI will reinforce discrimination in our societies.  Ageism is not unique:  sexism, homophobia and racism can be baked in the same way, as the AI simply pursues the goal of advancing the types of people who match its training data’s model of “success”. 
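To make that concrete, here is a minimal, deliberately toy sketch in Python, with entirely made-up data and a generic off-the-shelf classifier (not any vendor’s actual system), showing how a screening model trained on a young “successful” workforce learns to penalize age itself:

```python
# Toy sketch: a CV-screening model trained on a workforce where "success"
# was recorded mostly for the young. The data and model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical data: "success" correlates with age 25-39,
# even though age says nothing about actual skill.
n = 5_000
age = rng.integers(22, 65, size=n)
skill = rng.normal(size=n)                      # the thing we actually want
label = ((skill > 0) & (age < 40)).astype(int)  # "success" as recorded by HR

X = np.column_stack([age, skill])
model = LogisticRegression().fit(X, label)

# Two applicants with identical skill, different ages:
young, old = [[30, 1.0]], [[55, 1.0]]
print(model.predict_proba(young)[0, 1])  # high score -> advance
print(model.predict_proba(old)[0, 1])    # low score  -> auto-reject
```

The model never saw a rule saying “reject older applicants”; it inferred one from the skewed training data, which is exactly the mechanism at issue.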


Europe’s privacy law, the GDPR, has a very clear provision (Article 22) dealing with machines that make automated decisions: 


  • The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

I’m not aware of any serious regulatory attempt to enforce GDPR Article 22 against companies using AI to screen job applicants.  But they should.  If companies are using machines to decide whether to hire or auto-reject an applicant, then those decisions “significantly affect him or her” under the law.    


AI has already begun to accelerate the replacement of human workers with machines.  Companies are already replacing entry-level functions with AI agents, making it hard for young workers to get a foot on the first rung of the employment ladder.  Older workers have long been pushed out of Silicon Valley.  The trend is just starting, but even today, hardly a day goes by without some big company announcing plans to slash human jobs and hire machines.   https://seekingalpha.com/news/4508911-amazon-plans-to-cut-up-to-30000-jobs---reuters


You may despair of getting or keeping a job in this new environment.  But you don’t have to accept, in Europe at least, that a machine will make an automated decision not to hire you, in violation of existing law.  Regulators often don’t act until complaints are filed.  If your CV is being auto-rejected by a machine, you can file a complaint with your local data protection authority, which might prompt it to intervene.   


AI is a tool for automated decision making:  that’s the whole point of it.  I cited one example from the world of job applications, but there are thousands of other examples, today and soon.  It’s high time to start applying and enforcing the laws against automated decision making.  The biggest imminent disruption in human history is around the corner, and so far, I can confidently state that privacy regulators have had very close to zero impact on how AI is being developed and used.  The Italian Garante is one of the few to have tried, and I applaud them for their leadership.  The others seem content with conducting blah blah public consultations.   If we want AI to respect human values and the laws, we need to speed up, urgently, because the machines won’t slow down.  Enforcing the laws on automated decision making would be a good place to start. 


Thursday, October 23, 2025

The Age of Discovery, seen from Seville




I enjoyed a week in enchanting, intriguing Seville.  The photo is the tomb of Christopher Columbus in Seville Cathedral.  In the Age of Discovery, Seville held the Spanish monopoly on shipping to and from the New World.  Historic Seville can teach us a lot about our own AI-driven age of discovery.  The two eras have a lot in common, driven by science, greed and missionary zeal. 

Europe won that technological race:  compared to the indigenous populations of the “New World”, it had superior sailing, navigation and mapping tech, it had superior military tech, it had deep capital markets to fund the expeditions, and it had a belief in its cultural and religious superiority.   That’s also a fair description of the people leading the AI race today.  


Europeans in the Age of Discovery expanded human knowledge and science dramatically, and AI will do the same now.  But even though some actors were driven by science and a pure search for knowledge, most were driven by greed.  Leaders in the field of AI are now accumulating vast (perhaps bubble) riches, just as the riches of the New World poured into Seville in the Age of Discovery.  As a tourist in Seville, you can still visit the architectural gems financed by plundering the New World’s indigenous populations.  Then as now, a few people got very rich and most people got much poorer.  The Spanish royal house got rich; the indigenous populations were plundered.  The tech bros of today have gotten obscenely rich; the legions of workers about to be replaced by AI agents will get poorer.  


The biggest losers of the Age of Discovery were the indigenous populations, wiped out by European-introduced diseases:  90% of them died within a century of contact with European colonizers.  AI may well do the same to us:  a consensus is emerging that superintelligence (whenever that happens) could eventually cull or eliminate Homo sapiens in a similar way.  Many leaders are calling for a (temporary) ban on developing superintelligence until our species can figure out how to build it safely.  My former colleague Geoffrey Hinton, a Nobel Prize-winning AI genius, is amongst them. 


History tends to repeat itself.  As we enter our own new AI-driven age of discovery, ask yourself whether you and your society will be winners or losers.  A lot of people today think they’ll be winners:  tech bros (obviously), governments and businesses looking for new tech-driven growth and profits, scientists, entire countries like China or the US which are currently leading the race.  But lots of people will be losers:  above all, those hit by the looming job destruction and unemployment, which leads to social disruption, which in turn historically tends to lead to revolutions.   Which do you think you’ll be, winner or loser?  Even if AI doesn’t destroy humanity (yet), it may well destroy democracy.  It will destroy privacy too (I’ll blog about that separately).   


Privacy is anchored in the idea of the dignity of the individual human being.  There wasn’t much dignity in being an indigenous person dying of smallpox during the Age of Discovery, or an African victim of the trade routes that evolved into the slave trade.  Can we do better today?  Machines don’t believe in privacy:  they consume data to output data to accomplish a task.   The rise of AI is the challenge of our age.  You might ask where to start:  how about stopping private companies from plundering other people’s intellectual property and personal data to train their AI models, as the Spanish conquistadors plundered the wealth of the indigenous populations?  


Lots of us need to step up to confront this challenge.  Or we can leave it in the hands of tech bros, gullible politicians and impotent regulators, who are welcoming AI like Montezuma welcoming the Spanish.  


Wednesday, October 1, 2025

The world’s largest surveillance system…hiding in plain sight

 

The world’s largest surveillance system is watching you. 


It’s capturing (almost) everything you do on the web on (almost) every website.  And it’s hiding in plain sight. 


And it’s “legal” because it claims that you know about it,

and that you consented to it. 

But do you know what it is?  

Do you know what “analytics” is?  Websites use analytics services to give them insights into how their users interact with their sites.  Every website wants to know that.  And analytics providers can give them that information.  For example, an analytics provider can give detailed statistical reports to a website about their users and how they interact with its site:  how many people visited the site, where did they come from, what did they view or click on, how did they navigate the site, when did they leave/return, and many, many other characteristics.  This data can be collected and collated over years, over thousands or millions of users.  

There are many providers of analytics services, but according to analysts, there is only one 800-pound gorilla:  Google Analytics.  

“Google Analytics has market share of 89.31% in analytics market. Google Analytics competes with 315 competitor tools in analytics category.

The top alternatives for Google Analytics analytics tool are Tableau Software with 1.17%, Vidyard with 0.78%, Mixpanel with 0.59% market share.”

And according to other third-party analysts:  “As of 2025, Google Analytics is used by 55.49% of all websites globally. This translates to approximately 37.9 million websites using Google Analytics.”

You get the point:  one company, one service is capturing the bulk of the web traffic on the planet.  Websites get statistical reports on the user interactions on their sites.  Google gets individual-level information on the actions of most everyone on the web, on most websites, click-by-click, globally.  Wow. 

Legally, a website that uses Google Analytics is contractually obligated to obtain "consent" from its visitors before applying Google Analytics.  But often the disclosure on those websites is cursory, or even incomprehensible:  “we use analytics”, or “we use analytics software for statistical purposes”...which sounds harmless, but hardly explains to the average user what’s happening.  Technically, what happens is simple, but invisible:  a site using Google Analytics incorporates a small piece of code which auto-transfers to Google, in real time, information about every interaction its users have with the site:  every visit, every click, and information about each of those visitors, on an identifiable basis.  
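To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of “hit” the embedded tag effectively sends on every page view.  The endpoint and parameter names follow Google’s published GA4 Measurement Protocol, but treat the details as illustrative rather than a faithful reproduction of what the gtag.js snippet sends:

```python
# Illustrative sketch of an analytics "hit". Endpoint and parameter names
# follow Google's GA4 Measurement Protocol; the values are placeholders.
import json
import urllib.request

MEASUREMENT_ID = "G-XXXXXXX"   # the site's Analytics property (placeholder)
API_SECRET = "..."             # placeholder credential

payload = {
    "client_id": "1234567890.0987654321",  # pseudonymous ID stored in a cookie
    "events": [{
        "name": "page_view",
        "params": {"page_location": "https://example.com/some/article"},
    }],
}

req = urllib.request.Request(
    "https://www.google-analytics.com/mp/collect"
    f"?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # one such request fires for every interaction
```

Multiply one such request by every click, on tens of millions of sites, and you have the scale described above.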

In fairness, Google Analytics has some privacy protections.  Its reports to its client websites are statistical, rather than reports on individual users.  But even if the websites don’t get information about users at an individually-identifiable level, Google does…. And Google does not do cross-site correlation for Analytics purposes, i.e., it does not profile users across sites.  (Note that Google does exactly this cross-site correlation in the context of its Ads businesses, but that’s a different topic from this blog.)  

All this is “legal” if it’s based on consent.  A phrase in a privacy policy, or a cookie notice you’ve no doubt seen, and maybe clicked on, is deemed to constitute “consent”.  But really, did you or the average user have a clue?  

I’m of the school that believes analytics tools represent a relatively low level of privacy risk to individual users.  But what do you think of one company getting real-time information about how most of humanity is engaging with websites, on a planetary level?  A user visits some random site, and their data auto-transfers to Google:  did they know?  Since this service vastly exceeds any other on the web in scale, this is the largest data collection on the web.  Please respond with a comment if you can think of anything of similar surveillance scale.  I know you can’t, but let’s engage in the thought experiment.  I’m not picking on Google (I love my former employer), but in this field, which is essential to privacy, it’s the 800-pound gorilla, surrounded by a few mice.  

And the photo, if you’re interested, is Chartres Cathedral, built in the era when we believed only God was all-knowing.  

Wednesday, September 24, 2025

The Irish Backdoor

No, the Irish Backdoor is not a gay bar in Dublin, sorry if that’s why you clicked on this blog.  It’s how non-EU companies, like tech companies from the US and China, use the “one stop shop” mechanism to evade the privacy regulators of 26 countries and be regulated instead by the Irish regulator, the gentle golden retriever of privacy enforcement.  

I am expanding on my blogpost below.  But now I’m revealing something new:  most of the non-EU companies, like tech companies from the US and China, have no legal right to claim to be regulated under the one stop shop.  Fiction or fraud?  Let me explain.


Legally, a non-EU company can only claim the benefits of the one stop shop if the decisions regarding its data processing in Europe are made in the country where it claims to be established.  


Let me suggest a reality test.  Most companies from outside the EU claim the benefit of the one stop shop in Ireland by doing the following:  1) creating a corporate entity in Ireland, 2) writing a privacy policy (or asking ChatGPT to write one) that tells users that the Irish corporate entity is the “controller” of their data in Europe, and 3) maintaining some minimal presence in Ireland, like appointing an employee as the entity’s “data protection officer”.  All this can be done in a day, with a tiny local Irish staff.  But does this meet the legal test, namely that the data processing operations in Europe are actually decided by this Irish entity?


Most tech companies build products at home, in Silicon Valley, China, etc.  They then roll out these products globally.  Usually these products are identical worldwide, except for language interface translations.  In those cases, does anyone really believe that their Irish subsidiaries are the decision-makers for how the data of their millions of European users will be processed?  Perhaps that is the case for a few large non-EU companies with large operations in Ireland.  For all the others, it’s hard to believe.  


Maybe it’s an innocent fiction for a company from China or the US to claim it is “established” in Ireland to evade the privacy laws of 26 EU countries with millions of users.  Or maybe it’s a fraud…?


(Final note:  as a former employee of Google, I must point out that nothing in this blogpost is meant to suggest anything regarding that particular company.  Google has a huge workforce in Ireland.)


Meanwhile, non-EU companies are getting an easy ride in Europe, while their EU competitors aren’t.  I just don’t think that’s fair to EU companies or to EU users. 


Monday, September 22, 2025

Why does every US and Chinese company want to go to Ireland?

Ireland is one of the biggest winners of the EU-27 construct.  It has established itself as a tax and regulatory haven for foreign (non-EU) companies.  Virtually all Chinese and American companies, in particular in tech, rush to “establish” themselves in Ireland.  In exchange, they get to pay a low corporate tax rate (even if their users are, and their money is made, in the other 26 EU countries), and they get to benefit from the light-touch privacy regulation of Ireland.  

You’ll recall that Europe’s tough (on paper) General Data Protection Regulation of 2018 created the concept of a one-stop shop for foreign companies.  So, any Chinese or American company could pick one of the EU countries as its “establishment”.  Of course, they all picked Ireland, given its universal reputation for light-touch tax and regulation.  Why and how Europe made this blunder is a different debate:  in effect, it gave a massive advantage to foreign companies over domestic European ones.  A French, Italian or Spanish company is regulated by its domestic regulator, who takes privacy seriously and will sanction non-compliance.  But a Chinese or American tech company can do business in all those countries while benefiting from the Irish regulatory culture, as gentle as an Irish mist.  


Occasionally, a European regulator would try to take on an American or Chinese company in the field of privacy.  https://www.cnil.fr/en/cookies-placed-without-consent-shein-fined-150-million-euros-cnil

But this action wasn’t based on the core European privacy law, the GDPR; it was based on the older ePrivacy rules on cookies, which fall outside the GDPR’s one-stop shop.  


The Trump administration has defended American companies in Europe against what it claims are discriminatory regulatory actions.  https://www.lemonde.fr/en/international/article/2025/09/06/eu-commission-reluctantly-fines-google-nearly-3-billion-despite-trump-threat_6745092_4.html#  It was therefore not a surprise to see the French regulator announce fines at the same time against one American and one Chinese company.  But it is surprising to see the Trump administration rushing to defend one of the most Democratic-leaning companies in the US.


Indeed, Europe does discriminate, in the field of privacy, in favor of non-EU Chinese and American companies, due to the one-stop-shop Irish backdoor.  One can only assume that dysfunctional European politics led to this result, absurd from a European perspective.  Hundreds of millions of Europeans depend on a small Irish privacy regulator to ensure that the gigantic American and Chinese tech companies respect European privacy laws.  Hilarious.  


All of this might seem like trivial corporate politics, but the consensus is growing that humanity is allowing the tech industry to put us (I mean, our entire Homo sapiens species) on a path to doom.  https://www.theguardian.com/books/2025/sep/22/if-anyone-builds-it-everyone-dies-review-how-ai-could-kill-us-all  Even if we’re doomed, can we at least put up a fight?  


Thursday, August 28, 2025

Hi Privacy Pros: where's your mojo?


I’ve been committed to the field of privacy for three decades, and I’ve had the pleasure of mentoring multiple generations of smart and dedicated people into it.  But I can’t remember a time when the profession felt more disempowered and disrespected than now.  

Where have all the senior privacy leaders at Big Tech gone?  The most senior privacy leaders at Microsoft, Google, Facebook and Apple have all exited this year, or recently.  These are the companies that process vast amounts of personal data, so it’s no minor question why they’ve lost their most senior privacy leaders.  Undoubtedly, each person who exited will have their own story, and I won’t tell it, even if I know it.  But if these companies have lost their most senior privacy leaders, who is left to ensure that they respect their users’ privacy?  


The privacy leaders of my generation (and I knew them all) shared one characteristic:  they advocated internally for good privacy in their organizations, and they worked collaboratively with regulators to find solutions when required.  But perhaps the collaborative model is no longer the fashion in Silicon Valley:  perhaps the truculent, cage-fighting ethos is in the ascendancy, reflecting the personalities of some of its leaders:  media-hungry, kick-boxing, “I am Caesar”, and anyway, I have a survivalist bunker in case it doesn’t work out.  In that world, you don’t want privacy leaders, you want privacy litigators.  Privacy litigators can make an easy meal of the average privacy regulator, who has tiny technical and litigation resources.  


Privacy only makes sense as a human value, since its only purpose is to protect the autonomy and dignity of an individual human being.  In an age when Big Tech fires many thousands of workers (in the name of “efficiency”), often without warning, by email at 2 am, with immediate effect (I don’t need to name names, do I?), it’s fair to ask what respect these companies have for individual human beings.  If you don’t respect your own employees as human beings, why would you respect your users, or anyone’s privacy?  


Try to read a privacy policy when you click on some random website.  It will inevitably begin with the phrase:  “We care about your privacy”.  Then it will go on to list the innumerable ways that the company plans to violate your privacy, to track and profile your data, and to share it with hundreds of its “partners”.  You cannot possibly understand these privacy statements, and neither can I.  They’re not designed to explain privacy practices:  they’re designed to create a veneer (or fiction) that the companies’ data collection practices have been disclosed, and that users have somehow “consented” to them.  Of course, you can’t consent to something you can’t understand, but a click looks like consent, so that’s all these companies are seeking.  The latest atrocity is the attempt by sites to ask you to consent to tracking of your “precise location”.  Usually this phrase is buried innocuously deep inside the privacy statement.  If you are dumb enough, or bored enough, to click “I accept”, these companies will track your precise location (within meters) every time they encounter you on the web, share it with their hundreds of partners, and store your precise locations forever, and heaven knows what they’ll do with that.  Nothing creepy there? 


Thursday, June 12, 2025

It’s all about (sharing) the data, stupid: privacy meets antitrust

I spent time with a group of privacy experts recently.  We were discussing the intersection of privacy and antitrust law.  Traditionally, these two fields were very separate, with separate laws, separate regulators, and separate practitioners.  But the rise of the data-processing monopolies like Google and Facebook is forcing these two fields to converge.  When a monopoly like Google Search or Facebook is based on processing vast amounts of personal data, and when no competitor could possibly compete with these data-gorged monopolies, well, it’s obvious that antitrust law should consider forcing these monopolies to share data with potential competitors.  Otherwise, these monopolies will carry on with their “data barrier to entry”.  Data is an essential input into any of these existing or future services.  


Existing monopolies, like Google Search, do not want to share their data with potential competitors.  Duh.  So, they are making public arguments that such sharing would create a serious risk of violating the privacy of their users.  But is that true?  

Google has resorted to public blogging to warn its (3 billion) users of the risks of court-mandated data sharing.  “DOJ’s proposal would force Google to share your most sensitive and private search queries with companies you may never have heard of, jeopardizing your privacy and security. Your private information would be exposed, without your permission, to companies that lack Google’s world-class security protections, where it could be exploited by bad actors.” https://blog.google/outreach-initiatives/public-policy/doj-search-remedies-apr-2025/


Now, let’s unpack that statement.  Google is clearly stating that it collects “your most sensitive and private search queries”.  Its privacy policy makes it clear that it collects, retains and analyzes that data to run and improve its own services (not just Google Search).  So, Google itself analyzes your “most sensitive and private” data:  according to Google, the privacy issues only arise if that data is shared with other parties.  


Now think about Google’s money machine, its ads network.  Doesn’t that network do exactly what Google is here claiming is a terrible thing for users’ privacy?  Google’s ads network collects vast amounts of its users’ “sensitive and private” surfing history and shares it with “companies you may never have heard of”.  Indeed, that’s exactly what the ads network does today.  Not coincidentally, a separate antitrust case is underway regarding the Google ads monopoly.  So, let’s be clear:  in the context of Google Search, Google claims sharing data with third parties would be terrible for users’ privacy, but in the context of the Google ads network, all that sharing is just fine…


Privacy professionals should take a closer look at the privacy implications of a court ordering Google to share Search data with competitors.  Would that really raise any privacy issues?  Some experts in the field are starting to discuss the issue:  https://www.hklaw.com/en/insights/publications/2025/04/google-search-data-sharing-as-a-risk-or-remedy


Search is built on mountains of data.  But they are different mountains, and each has different privacy implications.  We need to unpack data-sharing into its different categories to assess whether each has any impact on privacy.


The Index:  the biggest data mountain is the Search index.  That’s the index that Google Search creates by crawling the entire public web.  It’s one of the largest, if not the largest, databases on the planet.  But it’s not a privacy issue:  it’s just a crawl of the public web.  Of course, there is personal data on the public web, but it’s not a privacy issue to force Google to share such data with other parties, who could equally access it on the public web. 
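For the non-technical reader, here is a toy sketch in Python of what a search index fundamentally is (illustrative only; a real crawler fetches pages over HTTP, respects robots.txt, and scales to billions of documents):

```python
# Toy sketch of a search index: crawl public pages, build an inverted index
# from words to the URLs that contain them. All data here is made up.
from collections import defaultdict

pages = {  # stand-in for fetched public web pages
    "https://example.com/a": "seville cathedral columbus tomb",
    "https://example.com/b": "privacy law gdpr europe",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

print(sorted(index["privacy"]))  # -> ['https://example.com/b']
```

Nothing in that structure is about any individual user; it is a map of the public web.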


User interaction data:  with its 3 billion users and over 20 years of operation, Google Search has the largest database of user interaction data on the planet.  I’m guessing it’s a million times larger than that of its nearest competitor, Bing.  (Google can correct my guess if it wishes.)  This user interaction data is essential to teach a search engine’s algorithm to guess what someone intends to find when they type a query.  If you have billions of examples of what people are searching for, you can train your search algorithms accordingly.  If you don’t have that data, you don’t have a chance.  So, would it be a privacy issue, as Google menacingly suggests in its blog post, if it were forced to share such data?  It depends:  yes, if it were forced to share search histories (i.e., search logs) with all of the personally-identifiable data that Google collects.  No, if it were forced to share anonymized data sets, such as anonymized search logs.  


Fortunately, many years ago, Google introduced a policy of anonymizing search query logs after a number of months, in the interests of users’ privacy and in response to regulators’ pressure.  I know something about that, since I worked on that privacy initiative with my great former colleagues.

https://publicpolicy.googleblog.com/2008/09/another-step-to-protect-user-privacy.html

There is no privacy issue, none at all, with forcing a company to share anonymized user interaction data.  
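What does anonymizing a search log actually involve?  A minimal sketch in Python, with hypothetical field names and a made-up log format; real pipelines (including Google’s) are more involved, but the principle is the same:  keep what’s useful for training, and coarsen or drop what identifies a person.

```python
# Minimal sketch of log anonymization (hypothetical log format and fields).
# Real pipelines add aggregation thresholds, k-anonymity checks, etc.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=270)  # e.g., anonymize after ~9 months

def anonymize(record: dict, now: datetime) -> dict | None:
    if now - record["timestamp"] < RETENTION:
        return None  # too fresh: not yet due for anonymization
    return {
        # keep what's useful for training ranking algorithms...
        "query": record["query"],
        "clicked_result": record["clicked_result"],
        # ...coarsen or drop what identifies the user
        "ip": ".".join(record["ip"].split(".")[:3]) + ".0",  # truncate last octet
        "user_id": None,                                     # drop the identifier
        "day": record["timestamp"].date().isoformat(),       # coarsen timestamp
    }

now = datetime(2025, 6, 12, tzinfo=timezone.utc)
log = {"timestamp": datetime(2024, 1, 2, tzinfo=timezone.utc),
       "ip": "203.0.113.42", "user_id": "abc123",
       "query": "flu symptoms", "clicked_result": "example.org/flu"}
print(anonymize(log, now))
```

A data set that has been through this kind of scrubbing still teaches a ranking algorithm what people search for and click on; it just no longer says who they are.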


I get that Google is blogging as part of its anti-antitrust litigation strategy.  It really, really doesn’t want to share its data with potential competitors.  Litigators will advance their clients’ interests as best they can.  The rest of us, the 3 billion users of Google Search, can assess the intellectual honesty of their arguments.   As far as I am concerned, there are profound privacy issues on the web:  forcing the Google Search monopoly to share its non-personally-identifiable data with potential competitors is not one of them.  





Tuesday, April 29, 2025

Debating Privacy in Venice

I’m looking forward to seeing lots of old friends at the upcoming Privacy Symposium in Venice:  https://privacysymposium.org/  


For many years, I attended and spoke at privacy conferences around the world.

 

I believe in sustaining a dialogue amongst privacy professionals, regulators, academics and advocates. 


I always learned a lot from these events, and I did my best to contribute to the debates as well. 


I also believe in building human connections to the people in this field, and I’m happy to count many of them as personal friends.  


This year, I’ll join a distinguished group of regulators and practitioners on a panel entitled:  Privacy and Antitrust.


This should be interesting!  I have, after all, spent 30 years guiding Microsoft and Google… 


Like privacy itself, Venice is precious and fragile. 


We’re lucky to be there together in May, before another tech monopolist rents the city for himself in June.  

Wednesday, April 23, 2025

A Gaggle of Monopolies

One of the peculiarities of monopolies in the age of Big Tech is how quickly they tend to be leveraged into a group of monopolies. 


Historically, building a monopoly was a rare business event, and it usually just happened in a single industry, like oil or finance. 


But the tech industry is different:  businesses build one monopoly (legally, let’s assume).


Then they quickly manage to leverage it into multiple monopolies across an array of businesses. 


You can read recent press reports about antitrust enforcement actions against Google and Meta, to take those prominent examples.  

Traditional antitrust/competition law developed to restrain individual monopolists from leveraging their existing monopolies unfairly into new markets.  But historical law seems to struggle with how to address this new phenomenon of companies that develop a portfolio of monopolies.  Of course, these tech companies leverage their monopolies to develop and support one another, in particular by sharing user data, given that all these monopolies are based on processing vast amounts of user data.  The more data you have, the better you can leverage it into a new market.  That’s why this antitrust/competition conundrum is also a privacy challenge.  Monopolies that process personal data and share it across their portfolio of services are processing personal data at a scale unprecedented in human history.  Europe took a first step to try to address this problem with its Digital Markets Act.  

We don’t have a legal word for a portfolio of monopolies.  Calling a company a “monopolist” doesn’t capture the nature of a portfolio of interlocking monopolies.  So, for inspiration, I looked to the wildly colorful English collective nouns for groups of animals.  

A bloat of hippopotamuses

A parliament of owls

A gaggle of geese

A flamboyance of flamingos

A murder of crows

A company of parrots

A charm of finches

A shiver of sharks

An aggregation of snakes

A gamble of alligators

A skulk of foxes

Antitrust/competition law will have to come up with new tools to deal with this new phenomenon of portfolios of monopolies, as will the field of privacy.  Any remedies that the authorities impose will need to take into account the nature of these interlocking monopolies.  And yes, forcing a company with a portfolio of monopolies to divest one of them might be the right way forward, as might stopping it from acquiring new ones.  I doubt, though, that a “murder of crows” will suffer terribly if it loses one crow.  

But first, let’s find a name:  a “gaggle of monopolies”?  A “bloat of monopolies”?  Any of the above might do, with all due respect to the animals.  I’m happy to let Llama or Gemini choose.