
THREATS IN THE USAGE OF FACIAL RECOGNITION TECHNOLOGIES FOR AUTHENTICATING TRANSGENDER IDENTITIES

#artificial intelligence #facial recognition

Research by Coding Rights maps the threats posed to transgender and non-binary people by the deployment of facial recognition technologies as a tool to authenticate IDs for access to public services in Brazil*

*This article was originally published by Privacy International at https://tinyurl.com/privacyinternationalTransID as a translation/adaptation of Coding Rights' Medium post: https://tinyurl.com/IDtransFacialrecog

On the International Transgender Day of Visibility, Mariah Rafaela Silva and Joana Varon, authors of the report “Facial recognition in the public sector and trans identities: techno-politics of control, surveillance and threats to gender diversity in their intersectionality of race, class, and territory”, produced for Coding Rights with support from Privacy International, present their main findings. The research mapped the threats posed to transgender and non-binary people by the deployment of facial recognition technologies as a tool to authenticate IDs for access to public services in Brazil.

Increasingly, facial recognition technologies have been deployed on streets, in airports, in urban transportation, in malls, and in a wide variety of other public spaces in Brazil. More recently, federal government agencies have been piloting initiatives that use this technology as a tool for verifying identity to access public services. This is an emerging trend: facial recognition is being used to authenticate identity for the driver’s license, to access social security information, to manage the MEI (Individual Micro Entrepreneur Permit), and for many other services that are gradually being incorporated into the governmental app meugov.br.

This trend is likely to accelerate with the implementation of Decree 10,543/2020, which addresses electronic signatures in the federal public administration and sets a mid-2021 deadline for Brazilian federal agencies to choose what kind of digital signature they will accept for handling public service demands digitally. In this scenario, the Brazilian app meugov.br, which uses facial recognition to authenticate identity, has been promoted by the Digital Government Secretariat as the main tool for an “advanced signature”.

Methodology: intersectional and decolonial feminist perspective

Beyond desk research, our research methodology included: a) a series of FOIA requests submitted to the Federal Data Processing Service (SERPRO), the Ministry of Economy, the National Social Security Service (INSS), the Federal Revenue Service, Dataprev, Banco do Brasil, and the Digital Government Secretariat; b) a virtual questionnaire distributed among groups of trans and non-binary people to collect impressions about the uses of facial recognition technologies; and c) structured interviews with five transgender activists who work to promote the human rights of transgender, transsexual, cross-dresser, and non-binary people. The study analyzes the complexities of implementing facial recognition technologies for authentication purposes, focusing on the identification of trans people at the intersections of race, class, and territory.

We interviewed: Bruna Benevides, the first trans woman in the Brazilian Navy, who does fundamental work as secretary of political articulation at the National Association of Transvestites and Transsexuals (ANTRA); Viviane Vergueiro, activist and researcher in Interdisciplinary Studies on women, gender and feminisms at the Federal University of Bahia; Fernanda Monteiro, technologist and independent researcher in digital security; Bárbara Aires, activist, gender consultant, and Communication student; and Sasha Costanza-Chock, nonbinary trans* femme working to support community-led processes in the development and design of technologies. Among other things, Sasha is a research scientist at the Massachusetts Institute of Technology, a senior research fellow at the Algorithmic Justice League, and a member of the Steering Committee of the Design Justice Network.

From an intersectional and decolonial feminist perspective, we start with a historical review of the struggle of trans and non-binary people in Brazil to guarantee their civil rights. Then we dive into another historical recollection: the origins of facial recognition technologies, from the scientific racism that set the basis of forensic anthropometry in the criminal sciences to recent studies on algorithmic racism, segregation, exclusion, and surveillance in the use of these technologies. Our next step was to map examples of the deployment of these technologies by the Brazilian federal government as a means of verifying identity for access to public services, and to analyze these practices in light of both our data protection framework and issues of racism, transphobia, and algorithmic discrimination. The result: strong concerns about transparency, discrimination, and data protection (many of which easily extend to everyone, trans and cis people alike).

What did we find out?

SERPRO, a public company that develops BioValid and Datavalid, two software services that use facial recognition for ID authentication, is the main provider of these technologies to the Brazilian government. These services work by cross-referencing images against a National Traffic Department database, the National Driver’s License Register (RENACH), which is also managed by SERPRO. But, in response to our FOIA requests, the company did not provide information about the accuracy of these two services, meaning that we got no data about error rates or false positives. The rationale provided was that accuracy would depend on how the services were deployed. Nevertheless, if the company is selling these services, an average error rate must exist and should be disclosed.

When we asked the same questions about accuracy to the National Social Security Service (INSS), which is already using SERPRO technologies, they declared that during the proof of concept the system validated only 64.32% of the biometrics taken. When asked specifically about the margin of error of facial recognition used for issuing a driver’s license (CNH), SERPRO stated that “the technology has a very high probability of matching images that have over 93% similarity.” But what happens when we change a lot over time, for example, through transition? What happens when the similarity falls below 93%? Both cases can lead to repeated false negatives: situations in which the technology does not recognize that you are you and leaves you locked out of the system.
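To illustrate the mechanics behind that 93% figure: verification systems of this kind typically reduce each photo to a numerical embedding and accept a match only when the similarity between the live capture and the photo on file clears a fixed threshold. Below is a minimal sketch in Python; the use of cosine similarity over embeddings and all the names are our assumptions, since SERPRO has not disclosed how its matching actually works.

```python
import numpy as np

# Hypothetical decision rule built around the 93% similarity
# figure SERPRO cited; the real pipeline is undisclosed.
SIMILARITY_THRESHOLD = 0.93

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(live: np.ndarray, registered: np.ndarray) -> bool:
    """Accept only if the live capture is 'similar enough' to the
    photo on file (e.g. an old driver's license picture). A genuine
    user whose appearance has changed enough to fall below the
    threshold becomes a false negative: locked out of the system."""
    return cosine_similarity(live, registered) >= SIMILARITY_THRESHOLD
```

Under any rule of this shape, anything that systematically lowers similarity scores for genuine users, such as facial changes during transition, directly raises the false negative rate for that group.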

In cases that have gone beyond the pilot phase, such as facial recognition deployed in the turnstiles of public buses under the justification of verifying fare exemptions, there are records of a trans student whose student-discount travel card was blocked, leaving her without her fare exemption and without transportation to get to school. How many more cases like this have occurred? How many more are going to happen with this new trend of identity verification?

We need more transparency and systematic data to monitor error rates and the impact they can have on human rights. And transparency means more than publishing aggregate figures: another critical point is that, even when governmental institutions do disclose error rates, they do not disaggregate these percentages, despite a growing tendency to consider different demographic profiles when analyzing these technologies.
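Disaggregation is not technically difficult once attempt-level data exists; the barrier is disclosure. A minimal sketch of what such an analysis could look like, in Python, with entirely hypothetical data, column names, and groups (no such audit log has been published):

```python
import pandas as pd

# Hypothetical audit log: one row per authentication attempt.
# `genuine` is ground truth (the person really is who they claim
# to be); `accepted` is the system's decision.
attempts = pd.DataFrame({
    "group":    ["trans women", "trans women", "cis men",
                 "cis men", "trans men", "trans men"],
    "genuine":  [True, True, True, True, True, True],
    "accepted": [False, True, True, True, False, True],
})

# False negative rate per group: among genuine attempts, the share
# the system rejected. This is exactly the breakdown that aggregate
# figures like "64.32% validated" hide.
genuine_attempts = attempts[attempts["genuine"]]
fnr_by_group = 1 - genuine_attempts.groupby("group")["accepted"].mean()
print(fnr_by_group)
```

An aggregate error rate can look acceptable while one group bears most of the failures; only a per-group breakdown like this makes that visible.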

Researchers Joy Buolamwini, from MIT, and Timnit Gebru have already demonstrated that facial recognition technologies from large ICT companies, such as IBM, Microsoft, and Amazon, are more likely to fail to recognize the faces of black women than those of white cis men. Since then, even the National Institute of Standards and Technology (NIST), cited by SERPRO as a reference, has conducted studies analyzing margins of error across different demographic profiles. In Brazil, the lack of transparency prevails.

In the face of these challenges, Coding Rights’ research also offers a genealogy of the historic struggle of the trans population to ensure their right to self-determination in relation to their name, identity, and gender. It is no wonder that, given a history of exclusion and institutional violence, in a brief survey collecting the impressions of a few trans activists about the implementation of these technologies in ID systems, 90.5% responded that they believe this technology can operate from a transphobic perspective. Due to the goal and reach of the distribution of the questionnaire, these percentages are not at all statistically representative; they are only the impressions of trans activists engaged in the rights agenda. Nevertheless, they are strong indications of existing and emerging concerns about the deployment of such technologies in a socio-political context of violence and discrimination against transgender people.

Brazil sadly tops the ranking of countries with the most murders of trans people, and these numbers are rising. According to a report by ANTRA, transgender murders in Brazil increased 41% in 2020, and 68% of the victims were black. In this context of violence and negligence, it is no surprise that 95.2% of the people who answered the questionnaire had the impression that this technology can leave them vulnerable to situations of embarrassment and contribute to the stigmatization of trans people; 76.2% believe that it can threaten privacy rights; and, when asked if they agreed with the widespread use of this technology as a form of civil identification, 66.7% disagreed.

This whole situation of lack of transparency is aggravated by the fact that SERPRO, manager of the national driver’s license database used to verify identity in all the facial recognition services it provides, may be privatized. This database is so precious that, according to an exposé published by The Intercept in June 2020, the Brazilian Intelligence Agency (ABIN) even asked SERPRO to provide data and photos from the driver’s license database, which, as of November 2019, held 76 million driver’s licenses. With the implementation of more facial recognition systems to access public services, these databases are growing, and our data becomes more likely to be subjected to unlawful sharing and more vulnerable to breaches. All of this in a context where we do not yet have a National Data Protection Authority operating at full capacity, with sanctions and regulations in force.

Sounding the alarm

The research is, therefore, an alert: a call to expand the public debate and to demand more transparency about pilot initiatives deploying facial recognition technologies to authenticate citizens’ IDs for access to public services. While these initiatives are still in pilot mode, we have a window of opportunity to question them and/or draw concrete limits, so that tech does not become yet another tool of gender violence and exclusion.

The complete research (only in Portuguese) is available at: https://codingrights.org/docs/rec-facial-id-trans.pdf

If you want to know more about the topic, watch the second episode of Coding Rights’ web series “From Devices to Bodies”, featuring interviews with Joy Buolamwini, Mariah Rafaela Silva, and Nina da Hora: https://youtu.be/omP93gEuQfI

Authors of the original post: Mariah Rafaela Silva and Joana Varon / Translation: Erly Guedes / Illustration: Clarote.
