
Microsoft sounds the alarm on facial recognition


Sophisticated facial recognition technology is at the heart of many of China's more dystopian security initiatives. With 200 million surveillance cameras – more than four times as many as in the United States – China's facial recognition systems track members of the Uighur Muslim minority, block the entrances to residential areas, and shame debtors by displaying their faces on public billboards.

I often cover these stories here because it seems inevitable that they will arrive in the United States, at least in some form. But before they do, a coalition of public and private interests is trying to sound the alarm.

AI Now is a group affiliated with New York University that counts among its members engineers from companies including Google and Microsoft. In a new paper published Thursday, the group urges governments to regulate the use of artificial intelligence and facial recognition technology before it can undermine basic civil liberties. The authors write:

Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance.

AI Now's researchers are particularly concerned about what is called "affect recognition" – attempts to identify people's emotions, and possibly manipulate them, with the help of machine learning.

"It's no longer a question of whether there are issues of accountability," AI Now co-founder Meredith Whittaker, who also works at Google, told Bloomberg. "It's what we do about them."

Later in the day, Microsoft president Brad Smith echoed some of these concerns in a speech at the Brookings Institution:

We believe it's important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don't believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

The paper comes one day after news that the Secret Service plans to deploy facial recognition outside the White House. What the agency calls a "test" likely won't stop there:

The ACLU says that the current test appears relatively narrow, but that it "crosses an important line by opening the door to the mass, suspicionless scrutiny of Americans on public sidewalks" – like the street outside the White House. (The program's technology will analyze faces up to 20 meters from the camera.) "Face recognition is one of the most dangerous biometrics from a privacy standpoint because it can easily be expanded and misused – including by being scaled to mass surveillance without people's knowledge or permission."

Perhaps Americans' enduring paranoia about big government will prevent more Chinese-style initiatives from ever taking root here. But I can also imagine a scenario in which a populist, authoritarian leader, by constantly invoking the twin specters of terrorism and uncontrolled illegal immigration, builds popular support for surveillance technology.

It feels like a conversation worth having.
