But guess what: it turns out that the militaries of the world want to harness AI for their purposes, which include “causing harm”, to put it mildly. So Google, presumably smelling a big business opportunity in developing AI for the world’s “democratic” militaries, re-wrote its AI principles. Now the company can develop AI tools for the world’s democratic militaries (I’d love to read the list of countries that Google considers “democracies”).
I re-watched Robocop recently: an AI-prototype robot cop is demoed in a boardroom, and a hapless person is asked to point a gun at it. Chaos ensues as the prototype disregards human voice commands to stand down and shoots the poor man, even as he tries to throw the gun away. Oops. Indeed, the company whose AI gave us a recipe for glue on pizza will now work on AI to kill people.
The collaboration between the militaries of the world and tech companies is wide and deep. Think space exploration, or even the origins of the Internet. But here we’re talking about bringing very specific tech competencies to bear on building autonomous AI killing machines, building on these companies’ decades of leadership in surveillance, monitoring, profiling and targeting. It’s one thing, I think, to engage in surveillance, monitoring, profiling and targeting…to show ads, and quite another to select individuals or groups for elimination. The tech is not so different.
Any company can, of course, change its principles, or its ethics. Users can decide whether they trust a company that does. Its workforce can decide whether they want to work on these projects, even if they’re threatened with termination for refusing.
European privacy law has for decades included the principle that machines shouldn’t be allowed to engage in “automated decision-making” on important questions without human involvement. Needless to say, an automated AI decision to kill someone seems to meet that test. The militaries of the world are largely exempt from privacy laws, but the private-sector companies working for them are not. I would love to see people dig into exactly what a company plans to build in AI tools for the military and ask the hard questions: how did you train it, what data did you train it on, how reliable is it, and what is your accountability for its accuracy or failure? You and I might find these questions intellectually interesting, but we’re not teenagers on the front lines of Gaza or Ukraine.