
Facial Recognition in Brazil: a gender and race-based perspective

#algorithm #algorithmic racism #artificial intelligence #facial recognition

*Article by Bianca Kremer, Data Protection and Feminisms Fellow at Coding Rights in the ADAPT Project (Advocating for Data Accountability, Protection and Transparency), originally published in English on the ADAPT/Internews website.*

The year 2021 witnessed heated debates on facial recognition in Brazil, especially in the context of public security. The technology has been implemented by twenty states across the country's five regions, and has been the subject of many promises from the private and public sectors, ranging from crime prevention to the identification of missing children.

But what exactly is facial recognition and how does it work? Facial recognition software is a biometric identification technology that collects facial data from photographs or video segments. These automated systems extract mathematical representations of specific facial features, such as the distance between the eyes or the shape of the nose, producing what is called a facial pattern.

The process of comparing a facial pattern against the patterns already stored in the system's database is what makes it possible to carry out a range of tasks: identifying unknown individuals (as occurs with street surveillance cameras), authenticating known people, unlocking personal cell phones with Face ID, or even validating bank accounts on smartphones.
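To make these two steps concrete, here is a minimal sketch of the extract-and-compare pipeline using the open-source face_recognition Python library. The file names, the watchlist image, and the 0.6 match threshold are illustrative assumptions, not details from any deployed system.

```python
# Minimal sketch of the extract-and-compare pipeline described above,
# using the open-source face_recognition library (built on dlib).
# The file names, watchlist image, and 0.6 threshold are illustrative.
import face_recognition

# Step 1: extract a "facial pattern" -- a 128-dimensional vector that
# encodes features such as the distance between the eyes.
known_image = face_recognition.load_image_file("watchlist_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Step 2: compare each face found in a new image (e.g., a camera frame)
# against the stored pattern.
probe_image = face_recognition.load_image_file("camera_frame.jpg")
for probe_encoding in face_recognition.face_encodings(probe_image):
    distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]
    # A "match" is declared when the distance falls below a tuned
    # threshold; this single comparison drives every use listed above.
    if distance < 0.6:
        print(f"Possible match (distance = {distance:.2f})")
```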

Parliamentarians, governors, and police officers have treated facial recognition as a silver bullet for public security problems, with particular focus on longstanding police operations and arrest warrants that fail to achieve their purposes in the criminal justice system. But to what extent do the possibilities and promises of this technology measure up against the risks and controversies of its use?

Beyond the uses mentioned here, facial recognition is also a tool capable of reproducing and amplifying oppressions already present in society, especially in the criminal justice system. Algorithmic bias has deeply problematic implications: technical elements, in addition to economic, historical, and cultural factors, disproportionately affect racial minorities and transgender people. But why is that the case? Facial recognition identification/authentication accuracy rates are conditioned by different factors, including illumination, perspective, shadows, facial expressions, and even the resolution of images and videos. The results are also significantly influenced by: (i) how the system is trained; and (ii) the quality and size of the datasets used to match the facial patterns of passers-by.
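One way to see why training and dataset quality matter is to audit error rates separately for each demographic group, as studies such as Gender Shades have done. The sketch below is a hypothetical illustration: the group labels and comparison results are fabricated placeholders, and a real audit would use thousands of labeled pairs per group.

```python
# Hypothetical audit sketch: measuring whether a face matcher's error
# rate differs across demographic groups, in the spirit of studies such
# as Gender Shades. All records below are fabricated placeholders; a
# real audit would use thousands of labeled comparison pairs per group.
from collections import defaultdict

# Each record: (demographic group, ground truth "same person?",
#               verdict returned by the facial recognition system)
labeled_pairs = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", False, True),
    # ... many more labeled pairs ...
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total pairs]
for group, truth, verdict in labeled_pairs:
    tallies[group][0] += int(truth != verdict)
    tallies[group][1] += 1

# Large gaps between groups signal the training and dataset problems
# described above (items i and ii).
for group, (errors, total) in sorted(tallies.items()):
    print(f"{group}: error rate {errors / total:.0%} over {total} pairs")
```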

It is also important to consider the stark difference between the teams that develop these technologies and the people impacted by them. In general, there is a lack of statistics, studies, and public policies aimed at promoting women in technology in Brazil, especially black women. According to the USP Polytechnic School's Gender Group (Poligen), in the 120 years of the institution's existence, only 10 black women have obtained an engineering degree. Moreover, in the list of pioneering women in science in Brazil created by the National Council for Scientific and Technological Development (CNPq), only one black woman was mentioned: historian and activist Beatriz Nascimento, in the 7th edition (2021).

Technologies are loaded with the political, economic, and cultural views of those who create them, and in Brazil this power is mostly concentrated in the hands of white, heterosexual, middle-class or wealthy men. This amplifies inequality in an increasingly digital world, considering that a large Afro-Brazilian population consumes and/or is affected by technology.

The country's historical trajectory of racial selectivity, especially in the field of public security, results in an even more dramatic scenario. In a recent public hearing on facial recognition and algorithmic racism held at the Legislative Assembly of Bahia, experts and activists called for a ban on the use of this technology, demonstrating how racism is driven by the technological mechanisms of facial recognition in countries like Brazil, shaped by enslavement and police violence.

The year 2019 marked the kick-off of facial recognition adoption by public security authorities in Brazil. At the time, the cities of Rio de Janeiro and Salvador were testing security cameras installed on public roads to monitor the Carnival festivities remotely, identifying people with open arrest warrants or criminal records, as well as missing people. A total of 151 people were arrested during the festivities. Although Rio de Janeiro suspended its use of the technology, Salvador's continued deployment means that up to 221 people have been arrested through facial recognition so far.

Six months later, former Minister of Justice Sergio Moro (currently a candidate for president in 2022) proposed a pilot project of his own, called Em Frente Brasil ("Move Forward, Brazil"). The initiative encouraged municipalities to join voluntarily and counted on federal resources of R$ 19 million per participating municipality in 2019, plus R$ 25 million in 2020. The purpose was to carry out experiments capable of supporting a national program to investigate homicides and violent crimes. But a year and eight months later, the initiative ended in failure, with disappointing results: (i) a delay of more than a year; (ii) budgetary and structural challenges; and (iii) a complete absence of indicators supporting the government's assumption that facial recognition technology could substantially reduce homicides in the five tested cities.

Facial recognition has been the flagship of great promises in public security which, as we have seen, in no way match the reality on the ground. Socially vulnerable populations constantly experience the automation of constraints and violence, such as improper police approaches and the false attribution of criminal records. This was the case of data scientist Raoni Lázaro Barbosa, unjustly arrested at the gate of his home in September 2021 and held for 22 days, accused of being a militiaman according to a police database. Another striking case was that of José Domingos Leitão in Piauí in December 2021, awakened by police officers with screams and kicks at his front door after a facial recognition program mistook him for the perpetrator of a crime he did not commit. Notably, the crime happened in Brasília, approximately 1,200 kilometers away from where José lived.

Raoni and José had one thing in common: they were black men. The racial biases embedded in facial recognition algorithms take on new dimensions in public security. From production to storage to the updating of police databases, this technology is a veritable black box. Importantly, in the Brazilian regulatory context, there is still no legislation regulating the use of facial recognition and other artificial intelligence technologies.

Under the Brazilian General Data Protection Law (LGPD), the biometric data used to deploy facial recognition technologies is considered sensitive personal data: intimate information capable of promoting stigmatization once brought to light. The processing of sensitive data, such as biometrics, must be handled with greater caution, since it is extremely personal and unique to each individual.

The lack of transparency in how biometric data is stored by the public sector concerns digital rights activists, especially because Brazil's data protection legislation does not apply to the use of data for public security, state security, or criminal investigations. In other words, despite the binding force of the LGPD's general principles and best practices, there is still no specific regulation for the use of personal data and AI technologies in public security. There is also a lack of transparency and explainability toward Brazilian citizens regarding the use of this technology.

Additionally, very few people know for certain what criteria are used for facial pattern recognition, or how the inputs in the datasets truly work. To make matters worse, there is not even a specific regulation on how this technology may be implemented in a way that ensures fundamental rights and guarantees. This panorama becomes even more dramatic when we consider the colonial heritage of Brazilian racial inequality, and how police forces have been used throughout history to subjugate black bodies. For this reason, it is urgent to ban the use of this technology in public security and public spaces: old repressions are being reinforced today through new technological devices, such as facial recognition systems.

Coding Rights has actively positioned itself on several pro-ban agendas, recognizing the violent biases that the application of these technologies can inflict on the black population in an intersectional manner. As a member of the Network Rights Coalition (a group of 50 academic and civil society organizations defending digital rights in Brazil), we have been collectively building a national campaign to ban facial recognition.

In January 2021, we launched a report titled Threats in the use of Facial recognition technologies for authenticating transgender identities, in partnership with researcher and activist Mariah Rafaela. We also developed the webseries From Devices to Bodies, whose second episode focused on facial recognition and featured interviews with Joy Buolamwini, Nina da Hora, and Mariah Silva, demonstrating how technologies developed predominantly by white men have intensified the vulnerability of minority groups, mainly black women and trans people.

We also launched a bilingual podcast episode (English/Portuguese) in the Privacy is Global series, in partnership with Internews, called Facial Recognition: automating oppressions? In it, we present the risks of this technology in the Brazilian context, and also interview experts on the subject.

Our advocacy activities on facial recognition also gained more robust contours in 2021. We participated in drafting the first bills in the country to address a ban on facial recognition technology: the first, in the Municipality of Rio de Janeiro, prohibits the use of the technology by the municipal executive branch; the second, in the State of Rio de Janeiro, imposes a broader restriction on the use of these systems.

The historical context of inequality in Brazil inherently shapes the conditions under which facial recognition is produced and proliferates in public security. Technologies and public policies must take racial and social intersectionalities into consideration to ensure those voices are heard, and must account for asymmetries of power and historical practices of exclusion. This is why the ban on facial recognition is on the agenda, and why it not only deserves but needs to be considered in the construction of a truly plural and egalitarian society.

Bianca Kremer

Data Protection and Feminisms Fellow at Coding Rights in the ADAPT Project (Advocating for Data Accountability, Protection and Transparency). Professor and researcher in Law & Technology, with expertise in Private Law Theory, afrodiasporic thinking, and decoloniality. Doctor of Laws (Ph.D.). Former Research Fellow at Leiden University's Center for Law and Digital Technologies (eLaw – Coimbra Group Scholarship). Visiting Professor at the Federal University of Rio de Janeiro (2015-2016), Federal Fluminense University (2018-2020), the Pontifical Catholic University of Rio de Janeiro (2018-present), and the New Law Institute (2020-present). Senior Lecturer in Brazilian General Data Protection Law at Infnet Institute (since 2019). Associate researcher at DroIT – Legalité Research Center (PUC-Rio, Brazil) since 2018. Public policy expert in Digital Rights, Data Protection, Ethics & Artificial Intelligence, Private Law, and Intellectual Property.