Retrospective facial recognition surveillance hides human rights violations in plain sight

Following the burglary of a French logistics company in 2019, facial recognition technology (FRT) was used on security camera footage of the incident in an attempt to identify the perpetrators.

FRT works by attempting to match images from, for example, closed-circuit television (CCTV) cameras to databases of often millions of facial images, in many cases collected without people’s knowledge or consent.

In this case, the FRT system listed two hundred people as potential suspects.

From this list, the police singled out “Mr H” and charged him with the theft, despite a lack of physical evidence to link him to the crime.

At his trial, the court refused a request by Mr H’s attorney to share information about how the system compiled the list, which was at the heart of the decision to charge Mr H.

The judge decided to rely on this notoriously discriminatory technology, sentencing Mr H to 18 months in prison.

Indicted by facial recognition

“Live” FRT often gets some (deserved) criticism, as the technology is used to track and monitor people in real time.

However, the retrospective use of facial recognition technology, after an incident has occurred, is less scrutinized despite being used in cases such as Mr H’s.

Retrospective FRT is made simpler and more pervasive by the widespread availability of security camera footage and the infrastructure already in place for the technique.

A surveillance camera is seen as people descend into a Moscow Metro station, February 2020 – AP Photo/Alexander Zemlianichenko

Now, as part of negotiations for a new law to regulate artificial intelligence (AI), the AI Act, EU governments are proposing to allow the routine use of retrospective facial recognition against the general public by the police, local governments and even private companies.

The EU’s proposed AI law is based on the premise that retrospective FRT is less harmful than its “live” iteration.

The EU executive argued that the risks and harms can be mitigated with the extra time retrospective processing affords.

This argument is wrong. Not only does the extra time fail to address the key issues – the destruction of anonymity and the suppression of rights and freedoms – it also introduces further problems.

Post RBI: the most dangerous surveillance measure you’ve never heard of?

Remote Biometric Identification, or RBI, is an umbrella term for systems like FRT that scan and identify people using their faces — or other body parts — from a distance.

When these systems are used retrospectively, the EU’s proposed AI law refers to them as “Post RBI”. Post RBI means the software can be used to identify people in footage of public spaces hours, weeks or even months after it was captured.

A surveillance camera is seen at the Celtic-Roman Museum in Manching, Germany in November 2022 – Armin Weigel/dpa

For example, police could run FRT on footage of protesters captured by CCTV cameras already in place. Or, as in Mr H’s case, they could run CCTV footage against a government database of a staggering 8 million facial images.

The use of these systems has a chilling effect on society: on how comfortable we feel attending a protest, seeking health care – such as abortion in places where it is criminalized – or talking to a journalist.

Just knowing that retrospective FRT may be in use can make us fear how information about our personal lives could be used against us in the future.

FRT can also fuel racism

Research suggests that law enforcement’s use of FRT disproportionately affects racialized communities.

Amnesty International has shown that people living in areas at greatest risk from racist stop-and-search policing – which overwhelmingly affects people of color – are likely to be more exposed to increased data collection and invasive facial recognition technology.

For example, Dwreck Ingram, a Black Lives Matter protest organizer in New York, was harassed by police at his apartment for four hours without a warrant or a legitimate charge, simply because Post RBI had identified him after his participation in a protest.

NYPD personnel set up a surveillance camera outside the Manhattan Criminal Courthouse in New York City, March 31, 2023 – Yuki Iwamura/AP

Ingram ended up in a lengthy legal battle to have the false charges against him dropped after it became clear that police had used this experimental technology on him.

The list goes on. Detroit resident Robert Williams was wrongfully arrested for a theft committed by someone else.

Randall Reid was jailed in Louisiana, a state he had never visited, because police used FRT to misidentify him as a robbery suspect.

For racialized communities especially, the normalization of facial recognition is the normalization of their perpetual virtual line-up.

If you have an online presence, you are probably already in FRT databases

This dystopian technology has also been used by football clubs in the Netherlands to seek out banned fans – and to mistakenly fine a fan who did not even attend the match in question.

It has also reportedly been used by police in Austria against protesters, and in France under the guise of making cities “safer” and more efficient – while in reality increasing mass surveillance.

These technologies are often offered at little or no cost.

Hoan Ton-That, CEO of Clearview AI, demonstrates the company’s facial recognition software using a photo of himself in New York, February 2022 – AP Photo/Seth Wenig

One company that offers such services is Clearview AI. The company has delivered highly invasive facial recognition searches to thousands of law enforcement agencies and officers in Europe, the United States, and other regions.

In Europe, national data protection authorities have taken a strong stand against these practices, with Italian and Greek regulators fining Clearview AI millions of euros for scraping the faces of EU citizens without a legal basis.

Swedish regulators have fined the national police for illegally processing personal data while using Clearview AI to identify people.

The AI Act could be an opportunity to end the abuse of mass surveillance

Despite these promising moves by data protection authorities to protect our human rights from retrospective facial recognition, EU governments are now trying to legalize these dangerous practices regardless.

Biometric identification experiments in countries around the world have shown us time and time again that these technologies, and the mass data collection they entail, erode the rights of the most marginalized people, including racialized communities, refugees, migrants and asylum seekers.

European countries have begun legalizing a number of biometric mass surveillance practices, threatening to normalize the use of these intrusive systems across the EU.

The European flag flies outside Europe House in London, January 2021 – AP Photo/Alastair Grant

This is why, more than ever, we need strong EU regulation that covers all forms of real-time and retrospective biometric mass surveillance in our communities and at EU borders, including a ban on Post RBI.

With the AI Act, the EU has a unique opportunity to end the rampant abuses facilitated by mass surveillance technologies.

It must set a high standard for safeguarding human rights for the use of emerging technologies, especially when these technologies amplify existing inequalities in society.

Ella Jakubowska is Senior Policy Advisor at European Digital Rights (EDRi), a network collective of non-profit organisations, experts, advocates and academics working to defend and promote digital rights across the continent.

Hajira Maryam is a media manager and Matt Mahmoudi is an artificial intelligence and human rights researcher at Amnesty Tech, a global collective of advocates, activists, hackers, researchers and technologists advocating for human rights in the digital age.

At Euronews, we believe that all opinions matter. Contact us at [email protected] to send your suggestions or comments and join the conversation.
