In the midst of a nationwide crisis in the U.S. over police mistreatment of Black Americans, two tech giants have said they will halt sales of controversial facial-recognition software, which privacy groups have called out for contributing to racial profiling and for being ineffective much of the time.
On Monday, IBM Corp.’s new Chief Executive, Arvind Krishna, said the company is exiting the facial-recognition business, writing in a letter to Congress: “IBM firmly opposes and will not condone uses of any technology, including facial-recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.”
Amazon.com Inc. took a slightly different tack, saying Wednesday that it is implementing a one-year moratorium on police use of its Rekognition technology, though it will still allow organizations focused on stopping human trafficking to continue using it.
Facial-recognition software can be used to help identify people in photos, in videos or in real time, and it has been increasingly adopted by law-enforcement agencies. But according to the Electronic Frontier Foundation, a nonprofit focused on digital privacy rights, “facial-recognition software is particularly bad at recognizing African Americans and other ethnic minorities, women, and young people, often misidentifying or failing to identify them, disparately impacting certain groups.”
According to ProPrivacy, a U.K.-based company that develops virtual private network tools, facial-recognition algorithms used by the police in various parts of the world have been shown to be inaccurate 81% of the time. “These percentages jump even higher when dealing with non-Caucasian faces,” said Ray Walsh, digital privacy expert at ProPrivacy, in a statement.
Facial recognition, which can enable racial profiling by the police, was seen as problematic even before the protests that have swept the nation following the death of George Floyd at the hands of Minneapolis police. A bill introduced in the Senate last year sought to establish guidelines and set boundaries for facial recognition, but it has languished in the Committee on Commerce, Science and Transportation.
As tech companies hear their employees demand change — such as the request this week by Microsoft Corp. employees that the company stop selling its products to the Seattle Police Department and other law-enforcement agencies — now is the time to examine how their products may be designed with bias, even if that bias is unintentional.
IBM’s CEO said he believes that now is the time to “begin a national dialogue on whether and how facial-recognition technology should be employed by domestic law-enforcement agencies.” It is also time to begin a national dialogue about bias in tech products in general — a vast topic, to be sure, but one that must be addressed before there can be any hope of change.