Monday, February 2, 2026

The polar bears committing to protect baby seals

Everyone wants to protect kids online. Recently, Australia passed a ban on social media use by under-16s. France is in the process of doing the same for under-15s. And the UK is consulting on a similar law. We all know that online platforms have become (often knowingly, intentionally) a snakepit of dangers for kids, a “marketplace for predators,” and I fully support laws trying to protect children. But, unfortunately, the road to hell is often paved with good intentions, and these laws are a catastrophe for privacy. Why?

The problem for privacy is simple: how does a platform identify who is a kid and who is an adult? There are two potential methods for such age verification. The first is the real-world method: demanding that users provide documentation to prove they’re adults, such as credit card or driver’s license numbers. Platforms don’t want to use the real-world method, because they know that many of their users won’t provide such documents. These users simply don’t trust the platforms with that sensitive information. The platforms know that requiring real-world age verification will cost them users, and these platforms really, really do not want to see their user numbers go down. Their stock prices are all premised on rising user numbers.


So, these platforms are all turning to the second method of age verification: the inferential method, using tracking and surveillance to “guess” which users are kids. These platforms track all their users, monitoring everything they do, read, post, click on and share, with the unblinking, non-stop surveillance of machines, then use AI analysis to infer who’s a kid, based on patterns of usage that the algorithms decide are typical of kids. Ah, shucks, the platforms now have to monitor everyone, all the time, including all adults, in order to guess which users are kids. For platforms whose business models are based on monitoring, profiling and targeting users, these new legal obligations to “protect kids” are a blessing in disguise.
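To make the privacy cost concrete, here is a minimal toy sketch of how inferential age-guessing works in principle. Every signal name and threshold below is invented for illustration; none of it reflects any real platform’s system. The point it demonstrates is structural: before any account can be classified, these behavioral signals must be collected and scored for every user, adults included.

```python
# Purely hypothetical sketch of inferential age estimation.
# Signal names and thresholds are invented for illustration only.

def infer_likely_minor(activity):
    """Guess whether an account belongs to a minor from behavioral
    signals. Classifying anyone requires profiling everyone."""
    score = 0
    # Invented heuristic: posting concentrated in after-school hours
    if activity.get("after_school_posting_ratio", 0.0) > 0.8:
        score += 1
    # Invented heuristic: mostly follows youth-oriented accounts
    if activity.get("youth_content_follow_ratio", 0.0) > 0.5:
        score += 1
    # Invented heuristic: profile bio mentions a school grade
    if activity.get("bio_mentions_grade", False):
        score += 1
    return score >= 2

# Every user -- adult or child -- gets profiled before anyone is flagged:
users = {
    "user_a": {"after_school_posting_ratio": 0.9,
               "youth_content_follow_ratio": 0.7,
               "bio_mentions_grade": False},
    "user_b": {"after_school_posting_ratio": 0.2,
               "youth_content_follow_ratio": 0.1,
               "bio_mentions_grade": False},
}
flags = {name: infer_likely_minor(activity)
         for name, activity in users.items()}
```

Even in this cartoon version, the adult “user_b” is only cleared because the platform already tracked their posting hours, follows, and bio. That is the method’s unavoidable shape: surveillance of all, to classify some.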


TikTok explained this privacy-invasive approach.  The New York Post reported:  “Amid government-sponsored social media clampdowns, TikTok is rolling out a new AI-powered age-detection tech across Europe that allows it to identify and purge accounts belonging to kids under the age of 13.

The new AI age control system analyzes profile info, posted clips, and posting patterns to determine if an account belongs to someone underage, the tech firm declared in a press blog post.”  I would wager that TikTok is collecting and monitoring much more data than what they disclosed.

To understand all this, I traveled to Greenland, to meet a group of polar bears who formed an alliance to protect baby seals. The bears all had Greenlandic names, but showing their great sense of Arctic humor, they nicknamed themselves TikTok, Instagram and YouTube. When I first met the bears, I was terrified by their size. After all, they are the apex predators of their ecosystem. But they explained that they were committed to protecting baby seals. I asked them how they planned to do that. They explained that they would create a system to track, geolocate, monitor and profile every baby seal in the Arctic. All this data would be shared amongst all polar bears, who committed to use it only for the seals’ protection. But I asked: don’t you eat the baby seals? Isn’t that your business model? And wouldn’t a predator love to track and monitor its prey? The bears glared at me, and for a moment, I thought they might crush my skull in their jaws. But then they relaxed and said that, well, yes, some baby seals might not get protected, but some would, and I should just trust that they were responsible bears. After all, they guffawed, we’re tracking you too.