Amazon is still selling its flawed Rekognition technology despite warnings about biases in the facial-recognition software from Joy Buolamwini, a researcher and activist at the MIT Media Lab. Amazon has dismissed the findings as outdated and incorrect.
Buolamwini, also the founder of the Algorithmic Justice League – an organization established to combat bias in decision-making software – wrote an open letter to the tech giant revealing that the Rekognition tool underperformed in identifying darker-skinned individuals and women. “Rekognition’s facial analysis feature mistakenly identified pictures of women as men and darker-skinned women as men 19 percent and 31 percent of the time, respectively,” reported a news website aware of the findings.
Following a recent New York Times story about Rekognition, Amazon disputed the MIT findings, saying that the study did not use the latest version of the tool in question and was based on flawed methodology. Amazon also said that the research paper failed to mention the minimum precision Rekognition’s predictions must achieve in order to be considered “correct.”
Responding to Amazon’s dismissal, Buolamwini said in a press statement, “Amazon continues to push unregulated and unproven technology not only to law enforcement but increasingly to the military.” As a result – she notes – harms caused by algorithmic decision-making would not only lead to illegal discrimination but would also bolster unfair practices that limit opportunities, economic gains and freedom for the misidentified.
Dr. Matt Wood, General Manager of deep learning and AI at AWS, said that the MIT study drew conclusions about the accuracy of facial recognition from results obtained using facial analysis. “Facial analysis … is usually used to help search a catalog of photographs,” he said. “Facial recognition … is a distinct and different feature from facial analysis and attempts to match faces that appear similar… It focuses on ‘unique facial features’ to match faces.”