Facebook to shut down facial recognition system


(NEW YORK) — Meta, the newly named parent company of Facebook, announced Tuesday that it was shutting down its use of a facial recognition system on its social media platform.

The announcement comes after mounting pressure from advocacy groups concerned about privacy issues, allegations of racial bias in algorithms and additional concerns related to how artificial intelligence technology identifies people’s faces in pictures. It also notably comes amid renewed scrutiny of the tech giant from lawmakers and beyond.

“We need to weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules,” Jerome Pesenti, the vice president of artificial intelligence at Meta, said in a company blog post Tuesday. “In the coming weeks, we will shut down the Face Recognition system on Facebook as part of a company-wide move to limit the use of facial recognition in our products.”

“As part of this change, people who have opted in to our Face Recognition setting will no longer be automatically recognized in photos and videos, and we will delete the facial recognition template used to identify them,” Pesenti added.

Pesenti said that more than a third of Facebook’s daily active users have opted in to use facial recognition, and its removal “will result in the deletion of more than a billion people’s individual facial recognition templates.”

Looking ahead, Pesenti said Meta still sees facial recognition technology as a tool that could be used for people needing to verify their identity or to prevent fraud or impersonation, and said the company will continue to work on these technologies while “engaging outside experts.”

“But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole,” Pesenti added. “There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

Removing Facebook’s facial recognition system will lead to a number of changes for users, Pesenti noted: the platform will no longer automatically recognize when people’s faces appear in photos or videos, and people will no longer be able to turn on the feature to get suggestions on whom to tag in photos. The company also intends to delete the template used to identify users who had enabled the setting.

The change will affect the automatic alt text feature, which creates image descriptions for blind and visually impaired people, Pesenti added, saying the descriptions will no longer include the names of people recognized in photos but will function normally otherwise.

The announcement comes amid mounting controversies for the tech giant. A company whistleblower, Frances Haugen, testified before lawmakers just weeks ago, alleging blatant disregard from Facebook executives when they learned their platform could have harmful effects on democracy and the mental health of young people.

Some digital rights advocacy groups welcomed Facebook’s recognition of the pitfalls of facial recognition technology, though they still urged an all-out ban.

“Facial recognition is one of the most dangerous and politically toxic technologies ever created. Even Facebook knows that,” Caitlin Seeley George, campaign director for the nonprofit advocacy group Fight for the Future, told ABC News in a statement shortly after the announcement was made.

“From misidentifying Black and Brown people (which has already led to wrongful arrests) to making it impossible to move through our lives without being constantly surveilled, we cannot trust governments, law enforcement, or private companies with this kind of invasive surveillance,” she added. “And even as algorithms improve, facial recognition will only be more dangerous.”

The tech could allow governments to target and crack down on religious minorities or political dissenters, create new tools for stalking or identity theft and more, Seeley George added, saying simply: “It should be banned.”

Copyright © 2021, ABC Audio. All rights reserved.