On Tuesday, the 11th of August, a UK court ruled that a police force's use of facial recognition technology violated human rights. The ruling is the latest development in a growing movement arguing that facial recognition technology violates personal freedoms, invades privacy, and is discriminatory.
The case came about when Ed Bridges, a resident of Cardiff, filed a suit against the South Wales Police. His facial image was recorded twice – once in 2017, when he was on his lunch break, and again in 2018, when he was participating in a peaceful protest.
During the protest, a facial recognition van parked across the street from the protesters.
Bridges, along with other protesters, took note. “We felt it was done to try and deter us from using our rights to peaceful protest,” said Bridges. “I take the view that in this country we have policing by consent and the police should be supporting our right to free protest, rather than trying to intimidate protesters.”
The High Court originally ruled against Bridges; the Court of Appeal overturned that decision on Tuesday.
The ruling found that this use of facial recognition technology violated human rights. It does not suspend all use of facial recognition; rather, it states that clearer parameters need to be put in place governing when the technology can be used.
The Discrimination Problem With Facial Recognition Technology
This isn’t the first time that facial recognition technology has come under fire.
Facial recognition technology is not perfect. False positives abound. And these false positives occur more regularly with people of color, women, children, and the elderly.
A good portion of this comes down to the data set used to train the AI for facial recognition. If it is trained on a data set composed mostly of white males, it will become very good at identifying white males – and worse at identifying everyone else.
This could be a problem in a host of ways. It could be as simple as the wrong person being able to unlock a computer or a phone that uses facial recognition. It could also lead to the wrong person being arrested.
In fact, the South Wales Police used facial recognition technology at the Champions League Final in Cardiff in 2017 to try to find known criminals. They later disclosed that 2,297 of 2,470 facial matches were false positives – a false-positive rate of roughly 93%.
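The arithmetic behind that figure is easy to check. A minimal sketch, using only the numbers reported above:

```python
# Figures disclosed by the South Wales Police for the Champions
# League Final deployment, as cited above.
total_matches = 2470
false_positives = 2297

rate = false_positives / total_matches
print(f"False-positive rate: {rate:.1%}")  # → 93.0%
```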
The police in Detroit reported similarly abysmal rates after the ACLU filed a formal complaint over the case of a man wrongly arrested on the strength of a facial recognition match. In that case, roughly 96% of identifications were false positives.
The ACLU also raised the point that facial recognition software could be biased in another way. People of color are more likely to be arrested for minor crimes, which adds their mugshots to the databases used for facial recognition. That, in turn, makes them more likely to be matched – and arrested – by facial recognition software in the future.
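The feedback loop the ACLU describes can be illustrated with a toy simulation. Everything here is assumed for illustration – the group sizes, match probabilities, and starting database fractions are invented, not drawn from any real data:

```python
import random

random.seed(42)

# Assumed, illustrative parameters - not real figures.
BASE_MATCH = 0.01   # chance per round of a match for someone NOT in the database
DB_MATCH = 0.05     # chance per round once a mugshot IS in the database
PEOPLE = 10_000
ROUNDS = 5

def simulate(initial_db_fraction):
    """Return total arrests for a group whose members start out in the
    mugshot database with the given probability."""
    in_db = [random.random() < initial_db_fraction for _ in range(PEOPLE)]
    arrests = 0
    for _ in range(ROUNDS):
        for i in range(PEOPLE):
            p = DB_MATCH if in_db[i] else BASE_MATCH
            if random.random() < p:  # matched -> arrested -> mugshot stored
                arrests += 1
                in_db[i] = True
    return arrests

# Two groups with identical behavior; group B starts over-represented
# in the database because of past arrests for minor offences.
group_a = simulate(0.05)
group_b = simulate(0.15)
print(f"Group A arrests: {group_a}, Group B arrests: {group_b}")
```

Even though both simulated groups behave identically, the group that starts with more mugshots on file racks up more matches, and each match adds another mugshot – the self-reinforcing bias the ACLU warns about.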
Facial recognition software may be used not only to identify individuals but also to classify people into groups – women, men, white, black, and so on. Uses like this raise further discrimination concerns.
Saying Goodbye to Privacy
It is concerning when the wrong person gets arrested because of a false positive face match.
But it’s also concerning when the algorithm is right.
“If you think that the police are watching you … when you go to the bank, or you go to the doctor’s office, or you go to the church or the synagogue or the mosque, you’ll be less likely to exercise those freedoms,” says Stanley Shikuma of Seattle’s Japanese American Citizens League.
In a world where facial recognition is widely used, whether by the police or by corporations, privacy would evaporate.
This showed up in a concrete way during the Black Lives Matter protests. People were encouraged not to post images of protest attendees on social media, since facial recognition could be run against those images to identify protesters.
Right now there is a lot of concern about facial recognition technology. In June of this year, IBM, Amazon, and Microsoft ended or paused sales of facial recognition software to police. With very high false-positive rates, training data sets weighted toward images of white men, and unresolved human rights and privacy issues, the world may not be ready for facial recognition software. It may never be.