Artificial intelligence and consent: a feminist anti-colonial critique
This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.
“Alexa, do you do things without my consent?”, a friend asked her virtual assistant.
“Sorry, I’m not sure about that”, Alexa answered. We decided to question Alexa after our friend mentioned that it had automatically connected to a new smart lamp in the house without asking her to set it up. Who else knows about her new lamp? About her energy consumption? Or even about her weekend travel habits? A lamp turned on and off can say a lot about household habits. Alexa didn’t tell us much about consent—apparently, “smart” things don’t think about that—but according to the Amazon Privacy Notice1, referred to in the Alexa Terms of Use2, “information about our customers is an important part of our business”, and that information is transmitted to third parties on a series of occasions. The privacy notice also mentions that “by using Amazon Services, you are consenting to the practices described in this Privacy Notice”. The act of consent is thus reduced to unpacking and using a device, meaning that, without noticing, customers consent to the terms of a contract they have never read.
Slowly, under similarly weak notions of consent enforced in data protection legislation, artificial intelligence (AI) systems are making automated decisions not only in our homes, but also in governments, and the consequences can go way beyond privacy concerns. “Nations around the world are ‘stumbling zombie-like into a digital welfare dystopia’”3, said the former United Nations Rapporteur on Extreme Poverty and Human Rights, Philip Alston, during an interview in 2019. His report, presented to the 74th session of the General Assembly of the United Nations, coined the term “Digital Welfare States” in policy spaces, drawing critical attention to the phenomenon in which “systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish” (OHCHR, 2019). The rapporteur’s conclusions about the possible harms of artificial intelligence systems are in tune with the arguments presented by Virginia Eubanks in her book “Automating Inequality”, which shows how, ever more frequently, “poor and working-class people are targeted by new tools of digital poverty management” (Eubanks, 2018, p. 11). Focusing on the US and analysing several examples of automated decision-making systems deployed in finance, employment, health care, policing and other areas, she affirms that the “cheerleaders of the new data regime rarely acknowledge the impacts of digital decision-making on poor and working-class people” (Eubanks, 2018, p. 9).
In India, the Aadhaar System, which assigns a gradually mandatory Unique Identification Number to every citizen and composes the world’s largest biometric ID system, has been responsible for multiple forms of what Silvia Masiero and Soumyo Das have called “data injustice” resulting from the “datafication of anti-poverty programs” (Masiero & Das, 2019). According to them, as beneficiaries’ data are compulsorily included in the programme’s design, these data sets become directly relevant for the determination of rights. In other words, the conversion of “beneficiary populations into machine-readable data” enables identifying and profiling users in order to assign (or deny) entitlements. Not by chance, the most invasive and punitive systems are aimed at the poor (Eubanks, 2018). As always, power, in all its intersections of race, class, gender, territoriality, disability and so on, plays an important role in how a particular technology is deployed and who is targeted.
Once again, knowledge and technology are being used to exploit, commodify or objectify marginalised groups, a historically common process for maintaining oppression and subordination. The more oppressed you are by the “matrix of domination”4 (Collins, 2009), which operates to maintain the status quo of cisheteronormativity, capitalism, white supremacy, and settler colonialism, the less power you have, and the less meaningful your consent is likely to be. And, according to Collins, that wheel is likely to keep turning unless we nurture critical consciousness to unpack hegemonic practices and create new knowledge from the perspective of those who have been historically subordinated, so that we can empower individual and organised collective resistance. So, can we shift the individualistic and neoliberal meaning of consent that is being applied to technologies towards a feminist approach to consent that takes power relations into account and, as such, could work as one tool to help us challenge the notion of Digital Welfare States?
Though not the only legal basis for data processing,5 consent is a key concept in several data protection legislations. But in most poverty management systems gradually being deployed by the public sector worldwide, there is no room for manoeuvre to opt out of data collection and processing. Nevertheless, the profiling these systems perform is likely to have ethical, political and practical implications for how people will be treated or will access rights. According to Linnet Taylor, in her article “What is data justice?” (Taylor, 2017), low-income portions of the population are subjected to an even more challenging situation: the ability of authorities to collect accurate statistical data about them was previously limited, but now they are targeted by regressive classification systems that profile, judge, punish and surveil.
As the hype around setting up so-called Digital Welfare Systems spreads across governments around the world, what does it mean to consent when providing data is a mandatory requirement for accessing rights and when different data sets are combined to feed AI systems that make automated decisions about beneficiaries of social programmes? Furthermore, if public interest can also serve as an alternative legal basis for data processing, in whose interest does an AI system that automates historical inequalities operate?
Particularly focused on the extensive digitalisation of anti-poverty programmes, this article contributes to building a feminist and anti-colonial critique of how an individualistic notion of consent (or a universalistic view of public interest) is being used to legitimate practices of control and exclusion in the emerging Digital Welfare States.
It is already a common critique that the current forms of notification (privacy policies and terms of service followed by binary “agree” or “disagree” buttons) used to acquire our consent on digital platforms have turned it into a meaningless, non-granular way of accepting different data processing operations. But critiques by feminist scholars of the “notice and consent” model go even deeper and question the structural power asymmetries between data subjects and the controller, as well as the neoliberal, individualistic approach that data protection legislations have taken towards consent. For legal scholar Julie E. Cohen, understanding privacy simply as an individual right is a mistake; as she points out, “the ability to have, maintain and manage privacy depends heavily on the attributes of one’s social, material, and informational environment” (2012). In this way, privacy is not a thing or an abstract right, but an environmental condition that enables situated subjects to navigate within preexisting cultural and social matrices (Cohen, 2012, 2018).
In the context of the datafication of anti-poverty programmes, the specificity and granularity of consent become even less evident and, in most of these cases, data subjects have no free choice. They are unable to withdraw consent, as these contexts have turned extensive datafication into a requirement for access to social benefits. Therefore, even if there is some kind of consultation to seek consent, if access to a right or a social programme depends on giving consent, there is no possibility of saying “No” to the data being collected. As Sara Ahmed (2017, n.p.) says: “The experience of being subordinate — deemed lower or of a lower rank — could be understood as being deprived of no. To be deprived of no is to be determined by another’s will”. In other words, if there is no power to say “no”, there should be no valid consent.
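To make the argument concrete in the language of the systems it criticises, the following minimal sketch encodes it as a validity check. This is an illustrative fragment written for this discussion, not code drawn from any legislation or welfare system, and all names in it (ConsentContext, is_freely_given, the boolean fields) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConsentContext:
    """Hypothetical description of the situation in which consent is requested."""
    refusal_blocks_benefit: bool       # saying "no" means losing access to the social programme
    can_withdraw_later: bool           # consent can be revoked without losing the benefit
    alternative_provider_exists: bool  # the same service can be obtained elsewhere

def is_freely_given(ctx: ConsentContext) -> bool:
    # Following the article's argument: without the real power to say "no",
    # a recorded "yes" should not count as valid consent.
    if ctx.refusal_blocks_benefit and not ctx.alternative_provider_exists:
        return False
    if not ctx.can_withdraw_later:
        return False
    return True

# A beneficiary facing the sole state provider of an anti-poverty programme:
print(is_freely_given(ConsentContext(
    refusal_blocks_benefit=True,
    can_withdraw_later=False,
    alternative_provider_exists=False,
)))  # False: the "yes" is extracted by the conditions, not freely given
```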
Unlike most neoliberal data protection frameworks, feminist and anti-colonial theories around consent allow us to highlight and assess the power dynamics involved. From sexual consent to consent over our data bodies, power plays a role in shaping who has the ability to say “no”. Therefore, there are ethical, political, and practical implications in promoting an individualistic notion of consent such as the one envisioned in data protection frameworks, particularly when applied to anti-poverty AI programmes. Whether through an individualistic notion of consent or a universalist and non-participatory approach to public interest, these programmes tend to become a tool to increase surveillance and reinforce the matrix of domination (Collins, 1990).
Therefore, the goal of this article is to discuss how the functional role attributed to digital consent in automated decision-making systems has enabled a continuation of practices of (digital) colonialism, embedded in cutting-edge digital technology and technosolutionist narratives focused on maintaining the status quo. To do so, in the next section, we recall how feminist theories have invested heavily in the discussions around sexual and socio-political consent and try to transpose the density of those debates to the notion of consent over our data bodies. In section III, recalling the historical legacy of racism and poverty from the colonial modern state to data colonialism in Digital Welfare systems, we analyse some cases of the implementation of AI systems for anti-poverty programmes in Latin America, focusing in particular on how these systems are built upon binary and forced consent to data extractivist practices of control. The concluding remarks highlight the importance of repositioning consent in data protection debates in line with feminist and anti-colonial theories of consent, also in order to question and challenge the conception of certain AI systems. As Julia Powles and Helen Nissenbaum (2018) point out, sometimes merely trying to “fix” AI systems, to solve bias or to seek fairness, erases a more fundamental question: “Which systems really deserve to be built? Who decides?” In this article we bring insights from feminist theories to resist harmful trends in the contagious hype of artificial intelligence deployed in the public sector to address socio-economic challenges.
II. Feminist density for a critical approach to digital consent
As a moral concept, consent is meaningful because it plays a morally transformative role in interpersonal interactions. In other words, valid consent can make permissible an action that would otherwise be inadmissible, such as sexual relations, loans, and, in the particular case of digital technologies, the use of personal data (Jones et al., 2018; Kaufman, 2020).
While digital consent has only recently been problematised, mainly due to the emergence of digital technologies in all aspects of our social life, feminist theories have studied consent extensively, adding far more distinct layers to the analysis, including consideration of colonialist power dynamics and the situating of bodies in their historical and sociological dimensions.
From the writings of the Age of Enlightenment, when the idea of the social contract was consolidated and philosophers—among them Rousseau—described female consent as an exercise of will (something previously reserved exclusively to men), to the consolidation of divorce and the recognition of rape and sexual harassment as crimes, the idea of consent came to be seen as a core principle. For feminisms, the concept of consent has been key to women’s autonomy and freedoms in both socio-political and sexual matters (Fraisse, 2012; Pérez, 2016).
Nevertheless, the idea of a “capacity to consent” is a product of modernity, a period in which human beings are conceived as autonomous, free and rational individuals, conditions without which there is no possibility of acquiescence. These assumptions represent a problem for feminism in the context of colonialism, as this naturalised, liberal way of conceiving consent tends to be posed as some kind of almighty, universalising formula that can resolve everything. As Pérez (2016) asserts, this formula does not take into account the historical and sociological structures within which consent is exercised: at a symbolic, social and subjective level, consent is structured by a system of hierarchically organised opposition based on the sexual order and the logics of dominance. It becomes women’s responsibility to establish limits to male attempts to obtain “something” from them. In other words, for Pérez, consent has been treated as a feminine verb.
These dimensions of consent (as an exclusive part of individual freedoms and as a feminine verb) can be seen as naturalised, for example, in legal theories. According to Pérez (2016), the theory of consent in criminal matters treats consent as an individual act of free, autonomous, and rational human beings. But she sees this as problematic when we reflect upon, for example, sexual consent. For this author, the temporary or total exclusion of certain people from the ability to consent is an important clue that consent is not a capacity inherent to the human condition (for example, one only acquires this ability at legal age). Therefore, we could even question whether everyone who is legally capable of consent is actually equally free, autonomous and empowered to be able to say no.
Furthermore, another question remains: under this assumption of rational, free and individual consenting agents, why is the “no” spoken by women in situations of sexual harassment, according to Pérez (2016), so often ineffective? Such a revolting and sadly common situation is a clear example of how the individualistic liberal framework of consent isolates the act of consent from its symbolic and social dimension and, thus, sweeps away the power relationships among people. In this context, Pérez points to something fundamental: it is not just about whether one consents or not, but, fundamentally, about the possibility of doing so.
“Because consent is a function of power. You have to have a modicum of power to give it”, says Brit Marling in an essay in The Atlantic titled “Harvey Weinstein and the Economics of Consent” (2017), where she underlines how consent is linked to financial autonomy and economic parity. For her, in the context of Hollywood, which can largely be extended to other economic realities, saying “no” could imply for women not only artistic or emotional exile, but also an economic one. Again, here we see the fight against the idea of consent as a free, rational, and individual choice. Consent is a structural problem that is experienced at an individual level (Pérez, 2016).
Another important criticism of this traditional idea of consent in sexual relationships concerns the forced binarism of yes/no. According to Gira Grant (2016), consent is not only given but also built from multiple factors such as location, moment, emotional state, trust, and desire. In fact, for this author, the example of sex workers demonstrates how desire and consent are different things, although the two are sometimes conflated. For her, there are many things that sex workers do without necessarily wanting to; they nonetheless give consent, for legitimate reasons.
How we express consent also matters. For feminists such as Fraisse (2012), there is no consent without the body. In other words, consent has a relational and communicative (verbal and nonverbal) dimension in which power relationships matter (Tinat, 2012; Fraisse, 2012). This is very relevant when we discuss “tacit consent” in sexual relationships. In another dimension of how consent is expressed, Fraisse (2012) distinguishes between choice (the consent that is accepted and adhered to) and coercion (the “consent” that is allowed and endured).
According to Fraisse (2012), the critical view of consent currently claimed by feminist theories is not consent as a symptom of contemporary individualism; it takes a collective approach through the idea of an “ethics of consent”, which pays attention to the “conditions” of the practice, a practice adapted to a contextual situation, thereby rejecting universal norms that ignore the diversified conditions of domination.
In the same sense, Lucia Melgar (2012) asserts that sexual consent is not just an individual right, but a collective right of women to say “my body is mine” and, from there, to claim freedom for all bodies. As Sara Ahmed (2017, n.p.) states, “for feminism: no is a political labor”. In other words, “if your position is precarious you might not be able to afford no. […] This is why the less precarious might have a political obligation to say no on behalf of or alongside those who are more precarious”. Referring to Éric Fassin, Fraisse (2012) understands that in this feminist view consent is no longer “liberal” (a refrain of the free individual), but “radical” because, as Fassin would put it, seen as a collective act it could function as a sort of consensual exchange of power.
Consent to our data bodies
Traditionally, data protection regulations have considered that processing personal data without the data subject’s consent constitutes an invasion of privacy, unless it is based on legal obligations, vital interests, public interest, or legitimate interests. These are also some of the legal bases for processing personal data under several data protection laws compatible with the General Data Protection Regulation (GDPR). While consent is presented as the primary basis for data processing, meaningful consent in the use of personal data in digital services has been widely problematised as ineffective (Lee & Toliver, 2017). But the already known problems of notification, choice, and proper withdrawal of consent (Jones et al., 2018) can be exacerbated by artificial intelligence systems that collect huge amounts of data, process them and generate new data. In this context, even if AI system controllers really want to obtain transparent and meaningful consent, they simply cannot do it, because they do not know where the data is going and how it is going to be used (Nissenbaum, 2018). Furthermore, the controllers of these systems also say they are unable to inform us about the risks we are consenting to, not necessarily as a matter of bad faith, but because increasingly powerful computational methods such as machine learning work as a black box (Tufekci, 2018; Carmi, 2020). For other authors, the unpredictable and even unimaginable use of data by AI systems is considered a feature, not a bug: companies and parties collecting and processing data have an incentive to leave the range of potential future applications unspecified (Jones et al., 2018; Cohen, 2018). Such systemic opacity has been considered a major problem for meaningful consent, for example regarding the uses of AI in medical diagnosis consultations (Astromskė et al., 2020).
Even so, the criticism of consent in AI is still not very extensive, and it is largely influenced by the criticism of digital consent, focusing on the transparency and unpredictability of these systems. Many of the concerns around consent in data processing have been approached through self-regulation solutions, with the Federal Trade Commission in the United States as one of their main sponsors. For researcher Daniel Solove (2013), under the current approach to privacy regulation—which he calls “privacy self-management”, but which other scholars call “privacy as control” (Cohen, 2018)—policymakers try to provide people with a set of rights to enable them to make decisions about how to manage their data. This is an individual framing of consent, based on the assumption that we are all autonomous, free, and rational individuals with the capacity to consent, disregarding whether unequal power dynamics allow us the possibility of doing so. Two main mitigation measures have emerged in this framework of self-regulation: anonymisation, and transparency and choice (also called notice and consent) (Barocas & Nissenbaum, 2009; Nissenbaum, 2011). For Barocas and Nissenbaum (2009), this approach appeals to stakeholders and regulators basically because notice and consent—as a way to give users individual control—seems to fit neatly into the popular definition of privacy as the right to control information about oneself. In the same way, notice and consent seem consistent with the idea of a free market, “because personal information may be conceived as part of the price of online exchange, all is deemed well if buyers are informed of a seller’s practices collecting and using personal information and are allowed freely to decide if the price is right” (Nissenbaum, 2011, p. 34).
In general terms, the critical voices on the model of notice and consent can be divided into two groups. One we call—borrowing the term from Nissenbaum (2011)—“critical adherents”: they are moderate in their criticism and focus on improving the procedures of the consent model rather than criticising the liberal paradigm. The other group is much more radical, rejecting the model of notice and consent altogether, basically because they do not believe in the paradigm of privacy as individual control and autonomy.
The main criticism from critical adherents focuses on the way consent is offered to citizens. They are critical of the idea of consent as “take it or leave it” and argue for a more granular model of consent (Solove, 2013). They are also critical of the idea of choice as “opt-out” and push for a model of “opt-in” (Nissenbaum, 2011; Hotaling, 2008). Likewise, this group acknowledges that privacy policies are long, legalistic, and very hard for regular citizens to digest, and that it is an unrealistic burden for individuals to notice and review hundreds of online contracts from start to finish (Hotaling, 2008); in this context, they also advocate increasing transparency (Nissenbaum, 2011).
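To illustrate the procedural difference these critical adherents point to, the sketch below contrasts the “take it or leave it” record that notice-and-consent typically produces with a granular, opt-in and revocable consent record. It is a hypothetical illustration written for this article; the field and purpose names are ours and do not correspond to any real system or legal text.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Tuple

# "Take it or leave it": a single bit covers every purpose, opt-out by default.
binary_consent = {"accepted_terms": True, "timestamp": "2021-03-01T10:00:00Z"}

@dataclass
class GranularConsent:
    """What critical adherents argue for: purpose-specific, opt-in, revocable consent."""
    # Every purpose starts as False: opt-in rather than opt-out.
    purposes: Dict[str, bool] = field(default_factory=lambda: {
        "eligibility_assessment": False,
        "sharing_with_other_agencies": False,
        "profiling_and_risk_scoring": False,
    })
    history: List[Tuple[datetime, str, str]] = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True
        self.history.append((datetime.utcnow(), "grant", purpose))

    def withdraw(self, purpose: str) -> None:
        # Withdrawal must be as easy as granting, and leave an auditable trace.
        self.purposes[purpose] = False
        self.history.append((datetime.utcnow(), "withdraw", purpose))

consent = GranularConsent()
consent.grant("eligibility_assessment")     # say yes to one purpose only
consent.withdraw("eligibility_assessment")  # and be able to change your mind later
```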
Nevertheless, in addition to its unpredictability and opacity, artificial intelligence brings new challenges to the classic model of notice and consent. AI systems applied to social programmes can infer personal information about individuals in unexpected and even manipulative ways. Many of these applications also challenge the screen-based form of the notice and consent model since, most of the time, it is not a piece of software that interacts directly with the users who feed the system with their data, for instance when it relies on technologies such as facial recognition or the “Internet of Things” (Jones et al., 2018).
For more severe critics of liberal consent, meaningful consent requires meaningful notice. In reality, the information provided about data collection, its processing, and use tends to be vague and general, or too cryptic for non-lawyers. For Nissenbaum (2011), the traditional notion behind “online privacy” suggests that “online” is a distinctive sphere where protecting personal information is always framed in the context of commercial online transactions. As we have mentioned before, Julie E. Cohen goes further and considers privacy as an environmental condition (Cohen, 2012, 2018). Thus, protecting privacy effectively requires a willingness to depart more definitively from subject-centred frameworks in favour of condition-centred frameworks (Cohen, 2018). Therefore, only this form of criticism considers structural power relations when addressing consent and data processing.
Following Cohen, Carmi (2018) goes even further and stresses that, while legal and tech narratives frame online consent as if people—their data selves or data bodies—were a defined, static, and almost tangible piece of personal property, our everyday reality as subjects is far from that: we present ourselves in a fluid, never fixed, way depending on the context. Static categorisation, hierarchical evaluation according to the values of those in power, and the separation of different human beings to be targeted for surveillance and control were at the heart of colonisation practices. They are again at the heart of Digital Welfare States, as this is exactly what the predictive algorithms and risk modelling systems operated by these welfare programmes do in order to determine social services, affecting a wide variety of aspects of life: work conditions, pensions, education, health, support for people with disabilities, and many others.
III. Historical legacy of racism and poverty: from the colonial modern state to data colonialism in Digital Welfare systems
Racism was at the core of the colonial system and of the development of the modern state. It provided a violent excuse for colonial spoliation and the dispossession of people from their lands and territories, transferring wealth to the colonisers. It became an ideology to dehumanise the other, the non-European (Almeida, 2014), in order to open space to erase cultures and submit the bodies of both indigenous people and African descendants to death or slavery, composing the colonial workforce. For Mario Theodoro (2019, p. 6), “racism is an ideology that classifies, gives order and ranks individuals according to their phenotype on a scale of values that has the white European model as the upper positive pole and the black African model as the lower negative pole” (translation by the authors). As we will see in the next pages, the similarity between this conceptualisation of racism and what most poverty management algorithms do is no coincidence.
Poverty also has roots in colonisation. As a political system based on the exploitation and dispossession of the colony’s resources, colonialism generated historical socio-economic inequalities among countries, but also within the populations of colonised countries, in the form of racial and ethnic income inequality. It is not by chance that in several countries in Latin America and the Caribbean, when poverty is analysed according to ethnic-racial identity, African descendants account for higher percentages of poverty and extreme poverty7 (CEPAL, 2021). Illustrating this historical process, Beatriz Nascimento, an Afro-Brazilian scholar, historian, poet and activist, coined the concept of the “urban quilombo” to acknowledge that the favelas are “a space of continuity of a historical experience that overlays slavery with social marginalization, segregation and resistance of black population in Brazil” (Ratts, 2007, p. 11). This historical continuity needs to be recognised by any algorithm intended to address social inequalities, but that is not what we are observing.
From an economic point of view, Digital Welfare States are deeply intertwined with the capitalist market logic and, particularly, with neoliberal doctrines that seek deep reductions in the overall welfare budget, a reduction in the pool of beneficiaries, the elimination of some services, and the introduction of demanding and intrusive forms of conditionality, among other measures, to the point that individuals come to see themselves not as subjects of rights but as service applicants (UNGA, 2019; Masiero & Das, 2019). This is especially telling in Latin America, where Welfare States have never existed in most countries. Hence, a Digital Welfare State is better understood through its underlying neoliberal doctrine: a social risk approach, enforced by entities like the World Bank, that frames poverty as an individual problem rather than a systemic and historical one, and caseworkers as protectors of people “at-risk” (Muñoz, 2018). Blaming the poor for their poverty means failing to understand its historical origins, and trying to redress poverty through data, without compensating for historical oppression, means using data and technology as tools to maintain that oppression.
In recent years, frameworks that problematise hegemonic technology as an economic and epistemological extension of colonialism have abounded. This goes both for the power relations between countries and for the power dynamics between socio-economic elites and historically oppressed, marginalised communities within a single country.
Ecuadorian scholar Paola Ricaurte (2019) analyses the epistemology that accompanies the knowledge production regime that ‘Big Data’ technologies entail. In her words, this epistemology is based on three mistaken assumptions: (1) data reflects reality, (2) data analysis generates the most valuable and accurate knowledge, and (3) the results of data processing help make better decisions about the world. For her, though mistaken, this epistemology has become dominant even in non-Western states. As such, data colonialism is tainting not only our relations with commercial platforms, but also our relations with national and local government, affecting fundamental rights and access to public services.
Specifically focused on artificial intelligence, Mohamed et al. (2020) examine how coloniality presents itself in algorithmic systems by institutionalising algorithmic oppression (the unjust subordination of one social group at the expense of the privilege of another), algorithmic exploitation (ways in which institutional actors and corporations take advantage of often already marginalised people for the asymmetric benefit of these industries) and algorithmic dispossession (centralisation of power in the few and the dispossession of many).
The next pages dig deeper into how all these criticisms unfold in AI systems deployed for poverty management and how they represent a continuity of colonial projects: automating racism, ignoring the historical origins of poverty, and imposing an epistemology of data as problem solver, while oppression, exploitation and dispossession are masked as innovation. While violent domination and submission to the invader were the tactics of earlier colonisation, nowadays data colonialism uses neoliberal and individualistic approaches to consent as one of its subtle tools of domination. At the level of discourse, the adoption of such systems is presented as a “noble and altruistic enterprise designed to ensure that people benefit from new technologies”, experience more efficient government, and enjoy higher levels of well-being (UNGA, 2019, n.p.). This political rationality coincides with the narratives of colonisation, which presented a particular knowledge as the civilising path. Now, Digital Welfare States present automated decision-making technologies as the civilised future, and society as the natural beneficiary of the data extraction efforts of corporations and governments (Couldry & Mejias, 2018). This trend serves as an ideal excuse to automate neoliberal policies that allow for the containment and individualisation of social benefits (Peña & Varon, 2019; López, 2020; Venturini, 2019).
In these programmes, consent, or the lack of meaningful consent, plays a role. Couldry and Mejias (2018) claim that, in the era of data colonialism, companies use long and incomprehensible documents, such as Terms of Service, as a form of power (through the discursive act) to inescapably embed subjects in colonising relationships. In Cohen’s (2018) terms, this subject-centred framework gives more power to the data processor, whereas if we started from what she calls a condition-centred framework, we would consider the power relations behind who proposes and who agrees to Privacy Policies and Terms of Service, moving towards an anti-colonial approach to consent and to data processing. This deserves even more reflection when we think about consent given for data to be processed by artificial intelligence systems. Due to obligations set out in data protection legislation, governments deploying automated poverty management systems must normally seek consent for the use of poor people’s personal data, or show that processing rests on another legal basis, such as processing on behalf of the public interest. Whether processing data to feed the AI systems of a Digital Welfare State that automates inequalities really serves the interest of the general public is evidently questionable (Eubanks, 2018; O’Neil, 2016). But, as governments are in fact also seeking some form of consent, we will focus next on the critical ways in which Latin American governments are implementing consent in AI systems that are gradually being deployed to distribute social benefits (Peña & Varon, 2019; Canales, 2020).8
Binary and forced consent is not consent
SISBÉN, an individual qualification system that determines who may be eligible for social benefits in Colombia, is fed by data collected through surveys. Individuals are forced to give their consent to share their data with other databases, as they are otherwise threatened with losing their social benefits. According to López (2020), this exemplifies a policy that sows fear among individuals.
A similar situation can be seen in Chile, where the programme Alerta Niñez (SAN) uses data about children and adolescents to assess who is likely to be at risk of rights violations. The system is fed by other governmental databases in addition to surveys conducted by officials who are obliged to keep personal and sensitive data confidential, as established by current legislation. To have their data collected, low-income portions of the population must sign a letter of acceptance, which is also a requirement for receiving social benefits from the programme, and there is no clear information about the purpose or usage of their personal data (Subsecretaría de la Niñez, 2019). Moreover, similar to the Colombian example, there is an intentional discourse discouraging people from leaving the system. For example, the sample letter of rejection specifies: “We have made this decision as a family, in full knowledge of the potential benefits of this service” (Subsecretaría de la Niñez, 2019, p. 106).
What we see, therefore, are people faced with binary consent, able to choose only all or nothing. Indeed, this form of forcing consent from poor people seems common in Digital Welfare systems deployed elsewhere. Alston, in his 2019 report to the UN, indicates that there is a real risk that beneficiaries will be effectively forced to give up their right to privacy and data protection in order to exercise their right to social security and other social rights. Furthermore, as Arora (2016) states, there is a scarcity of studies on how marginalised populations in the Global South perceive privacy and are able to exercise their privacy rights. Meanwhile, government authorities move further into data collection with little or no substantial public participation.
From a feminist and anti-colonial perspective, the context of power imbalance in which consent is obtained becomes evident, recalling that it is impossible for certain subordinated subjects to say “no”. Furthermore, the individualistic focus of consent seems unreasonable when the data controller is a governmental agency with the power to impose the negotiation and is the only provider of a fundamental public service for which there is no replacement or alternative. Ultimately, states are using the power given to them to impose on individuals forced contracts permeated by the logic of data extractivism.
Consent for data extractivism of the poor
Indian digital anthropologist Payal Arora (2019) states that states tend to experiment on economically vulnerable people, as the damage that can be done to them is considered less important and it is harder for them to access justice and reparations. This extractivist logic focused on the most vulnerable prevails in AI systems developed for social welfare. They replicate what Couldry and Mejias (2018) describe as a new stage of capitalism, in which the production and extraction of personal data naturalise the colonial appropriation of life in general. To achieve this, the authors argue, a series of ideological processes operate in which, on the one hand, personal data is treated as a raw material naturally available for the expropriation of capital and, on the other, corporations are considered the only ones capable of processing the data and, therefore, of appropriating it. Renata Ávila (2020) goes even further and points out how the countries where most ‘big tech’ companies come from (the US and China in particular) tend to benefit, within a global system, from the digitisation of poor and middle-income countries, in what appears to be a new form of colonialism. Extractivism is therefore multilayered: at the level of individual countries, dominant elites process, classify and take decisions about the data of the poor; at the global level, rich countries present themselves, and their companies, as the providers of “solutions”, profiting from data colonialism.
For these extractivist and data colonialist practices to prevail, there is a chain of subject-focused consent, from citizens to governments and from local governments to ‘big tech’. Once again, consent is being instrumentalised to enable data processing beyond data subjects’ clear awareness of its future uses and consequences.
In the aforementioned cases of Chile and Colombia, for example, the systems involve private bidding processes. Moreover, in the case of the Colombian SISBÉN, the state’s bidding contract is part of a strategy to consolidate a data analytics market in Colombia, so that the selected Colombian company would provide a service to the state while receiving training from MIT experts and access to a massive enough database to experiment with (López, 2020).
In Latin America, IBM, Microsoft, NEC, Cisco and Google are commonly involved in AI projects developed by the public sector in the region. Every project feeds the databases and provides intelligence for these companies’ machine learning systems, which can use these less regulated environments, where the enforcement of privacy rights is weak, as laboratories to test and improve their systems, normally without accountability for possible harmful consequences.
Who will own the knowledge and set the epistemologies of the categories running these AI systems? Very likely, the digital welfare hype in Latin America is feeding a cycle in which a foreign agent, unaware of the context and with lived experience far removed from the local culture, will always be bringing what is commonly called “an innovative solution” to a problem treated as external and one-off, even though most of these problems are historical and structural, and are often caused or fed by the actions of these very same corporations.
With the input from these experiments, these companies also commonly become the vectors for spreading experiments from one country to another. This is the case, for example, of the Plataforma Tecnológica de Intervención Social (Technological Platform for Social Intervention), a machine learning experiment to predict teenage pregnancy and school dropout, conducted by Microsoft in partnership with the municipality of Salta, Argentina. “Intelligent algorithms allow us to identify characteristics in people that could end up with these problems and warn the government to work on their prevention,” said a Microsoft Azure representative in an interview for a company publication (News Center Microsoft Latinoamérica, 2018, n.p.). The system was heavily criticised for its statistical errors, for the sensitivity of reporting unwanted pregnancies, and for using data inadequate for reliable predictions but, beyond that, for being used as a tool to discriminate against the poor and to divert the agenda away from effective public policies guaranteeing access to sexual and reproductive rights (Peña & Varon, 2019). Despite this, the programme is now being exported to other parts of Argentina, such as La Rioja and Tierra del Fuego, as well as to Colombia and Brazil (Peña & Varon, 2020).
Consent for control
As Elinor Carmi (2020) affirms, people make decisions according to various parameters, including emotions, health status, gender identity, and financial and family situation, among many others, so it would be simply wrong to think that consent can be given freely. But what is an error, and what is actually an intention to control? As Eubanks (2018) states, there is a long scientific tradition in which data has been used for exploitation and dehumanisation. Even more so when, as Eubanks also shows, old systems of social hierarchy are automated and, disguised as social welfare, used for profiling, policing and punishing the poor.
Therefore, consent in AI systems used for social programmes can be seen as a colonial contract, in the way Mejias and Couldry (2018) refer to the platforms’ terms and conditions. Both are made for domination and subjugation, serving not as a way to reach any agreement, but rather as a warning, a way to claim territory: the data of poor people, treated as a no man’s land ready for exploitation by capital. In this sense, as Carmi (2020) states, consent is clearly a mechanism of control and dominance presented as individual agency, while in fact it gives space for states and companies to redetermine the boundaries of people’s bodies and of the territories in which they live.
Consent for exclusion
Mathematician Cathy O’Neil (2016) says that AI “models are opinions embedded in mathematics” and explains that these models are an abstract representation of some process, a universalisation and simplification of a complex reality in which much information is left out according to the judgement of their creators.
If the creators of these technologies are companies and government representatives, whose vision is being universalised? Governments produce beneficiaries through census categories that crystallise in data and become susceptible to top-down control, and the dehumanisation of the process puts the dignity of the most vulnerable people at real risk (UNGA, 2019; Masiero & Das, 2019). O’Neil (2016) states that several AI systems tend to punish the poor because they are designed to evaluate large numbers of people: while the information of privileged classes tends to be processed by people, the masses are analysed by machines. In this math, the class gap becomes even more explicit. What O’Neil detects is a historical continuity, now automated, as scrutiny, monitoring and surveillance by the state tend to focus on the poor (O’Neil, 2016).
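The point that “models are opinions embedded in mathematics” can be made visible with a deliberately simplified sketch of the kind of risk score O’Neil describes. The variables, weights and threshold below are hypothetical and invented for illustration; they correspond to no actual welfare system, but they show how every design choice, which data counts, how much it counts, and where the cut-off sits, is a judgement made by the system’s creators rather than a neutral property of the people being scored.

```python
# A deliberately simplified, hypothetical risk score of the kind O'Neil describes:
# which variables are included, how they are weighted and where the cut-off sits
# are all judgements made by the system's creators, not neutral facts about the
# person being scored.
WEIGHTS = {
    "lives_in_poor_neighbourhood": 2.0,  # a proxy that imports historical segregation
    "receives_social_benefits": 1.5,     # being poor itself raises the "risk"
    "missed_appointments": 1.0,
}
THRESHOLD = 2.5  # the line between "flagged" and "not flagged" is also an opinion

def risk_score(person: dict) -> float:
    return sum(WEIGHTS[k] for k, v in person.items() if v and k in WEIGHTS)

def flagged(person: dict) -> bool:
    return risk_score(person) >= THRESHOLD

print(flagged({"lives_in_poor_neighbourhood": True,
               "receives_social_benefits": True,
               "missed_appointments": False}))  # True: poverty alone crosses the line
```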
And not only the poor: LGBTQ+ people, Black and indigenous people and, in some cases, simply every woman tend to be targeted by these systems differently than rich, white, cis, hetero men. As Joy Buolamwini and Timnit Gebru have demonstrated, this is the case with facial recognition technologies (Buolamwini & Gebru, 2018). It is the case in Salta, where an anti-abortion, conservative governor presented the technological initiative as a magic solution: “With technology, based on name, surname and address, you can predict five or six years ahead which girl, or future teenager, is 86% predestined to have a teenage pregnancy” (Urtubay, 2018, n.p.). Imagine being a young poor girl flagged by an AI system as someone predestined to pregnancy. A condition-centred framework of consent for data processing would recognise that historically oppressed groups need redressing mechanisms to level the playing field for actually consenting. Or, going even further: while pilots of anti-poverty AI systems are being tested everywhere, why is there no pilot AI system predestining young rich white politicians to corruption? Why is the fiscal secrecy of the rich so well protected, while every single piece of data about the poor, from family to medical and biometric data, is so easily “consented” to processing by both governments and companies?
The idea of being able to affirm who is “predestined” to a complicated future is a cruel one. Similar to the criticism levelled at the programme developed in Salta, Chilean civil society organisations working on children’s rights also criticised Alerta Niñez, the project conceived for the “risk assessment” of children’s development. They declared that the system “constitutes the imposition of a certain form of sociocultural normativity” (Sociedad Civil de Chile Defensora de los Derechos Humanos del Niño et al., 2019, n.p.), as well as “encouraging and socially validating forms of stigmatization, discrimination and even criminalization of the cultural diversity existing in Chile”. In the same document, they stressed: “This especially affects indigenous peoples, migrant populations, and those with lower economic incomes, ignoring that a growing cultural diversity demands greater sensitivity, visibility, and respect, as well as the inclusion of approaches with cultural relevance to public policies”. In cases like this, there is no bias to be corrected nor fairness to be reached; such systems would simply not be developed if subjected to the first question: does this system deserve to be built?
IV. Towards a feminist anti-colonial approach to consent in AI systems
In this article, we have crafted a feminist and anti-colonial framework to question how consent has been approached in the deployment of AI systems of Digital Welfare State programmes—something that has been overlooked, particularly when talking about data from poor communities. This is not by chance, but rather because privacy and data protection for these portions of the population are less likely to be enforced.
While consent is a powerful concept in feminist theories, it has been used to legitimate abuses in the use of our data bodies. This situation is even more worrisome in the implementation of anti-poverty programmes with the emergence of Digital Welfare States that are using our data to, ultimately, automate inequality.
To be coherent with anti-colonial feminist thought, consent needs to be repositioned in the data protection debate and considered a collective matter. Only collectively might it be possible to partially redress power imbalances and actually question the path of some tech developments. The UN Special Rapporteur on extreme poverty and human rights, considering what a real Digital Welfare would be, states in the summary of his report: “Instead of obsessing about fraud, cost savings, sanctions and market-driven definitions of efficiency, the starting point should be how existing or even expanded welfare budgets could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged” (UNGA, 2019, p. 2). So, instead of blindly following how corporations and governments are packaging and selling the AI hype, the primary questions should be: should this tech be built? With whom? Under what kind of continuous accountability processes? And the answers should come from a continuous process of building collective responses, not from a decision by data extractivist companies or government representatives alone, nor from an individualistic, binary and forced consent for control and exclusion.
And this is not a utopian suggestion. Debates around decolonisation offer precedents for considering collective rights and the right to self-determination, which is intrinsically connected with the concept of consent. Back in 1960, the United Nations General Assembly (UNGA) approved a resolution named “Declaration on the Granting of Independence to Colonial Countries and Peoples”, prepared by the Special Committee on Decolonization. At its core was the principle of self-determination. Even within the United Nations, where decisions are consensus-led (which is hard to achieve in a global setting), there was agreement on the central role of self-determination in decolonisation processes. Self-determination is the “free choice of one’s own act”9, “the right or ability of a person to control their own fate”10. Free choice, the ability to say yes or no to control our fate, is also intrinsically related to the power to consent. Beyond individuals, the concept also refers to the determination of the people of a territory.
The right to self-determination, collectively and individually, was also reinforced as a way to redress settler colonialism in the UN Declaration on the Rights of Indigenous Peoples, adopted in 2007 after more than two decades of negotiations. In the preamble, the declaration recognises that “indigenous peoples have suffered from historic injustices as a result of, inter alia, their colonization and dispossession of their lands, territories and resources, thus preventing them from exercising, in particular, their right to development in accordance with their own needs and interests” (UNGA, 2007, p. 2). Addressing both individual and collective rights, the declaration presents itself as an attempt to “outlaw discrimination against indigenous peoples and promote their full and effective participation in all matters that concern them”. In that sense, a collective (and retractable) approach to consent, working as a counterforce to colonisation, is expressly mentioned in article 10: “Indigenous peoples shall not be forcibly removed from their lands or territories. No relocation shall take place without the free, prior and informed consent of the indigenous peoples concerned and after agreement on just and fair compensation and, where possible, with the option of return”. Prior and informed consent is also expressed in several of the declaration’s provisions for consultation and participation in decision-making processes of their concern. In this sense, throughout the declaration, we find the formula “States in consultation and cooperation with indigenous peoples” (A/RES/61/295, pp. 3, 6, 9, 10). This is a clear example of a mechanism, foreseen in an international declaration, in which consent is envisioned as a collective, ongoing and retractable process.11 Why can’t we extend this notion of consent, consultation and participation to data processing, to AI systems and to the deployment of invasive technologies as a whole? How rich could such a process be for the conceptualisation of other kinds of AI systems?
The “Indigenous Protocol and Artificial Intelligence Position Paper” emerged from workshops conducted in Honolulu, Hawaii, which brought together indigenous people from the “Kanaka Maoli, Palawa, Barada/Baradha, Gabalbara/Kapalbara, Gadigal/Dunghutti, Māori, Euskaldunak, Baradha, Kapalbara, Samoan, Cree, Lakota, Cherokee, Coquille, Cheyenne, and Crow communities from across Aotearoa, Australia, North America and the Pacific” (Abdilla et al., 2020, p. 4). It starts from the perception that, “given the long history of technological advances being used against Indigenous people, it is imperative to engage with this latest technological paradigm shift as early and vigorously as possible to influence its development in directions that are advantageous” (Abdilla et al., 2020, p. 6). Challenging what the paper refers to as the “anthropocentrism of Western science and technology”, the group asked: “how can Indigenous epistemologies and ontologies contribute to the global conversation regarding society and A.I.?”12 This is a groundbreaking question, considering that much of the debate around ethical AI refers to a human-centred approach to these technologies as the solution to biases and possible harms, an approach that clashes with “many indigenous epistemologies that refuse to elevate humans” (Abdilla et al., 2020, p. 7) among all living beings.
Without any universalising attempt, and recognising that all knowledge is situated, the position paper recalls that “Historically, scholarly traditions that homogenize diverse Indigenous cultural practices have resulted in ontological and epistemological violence, and a flattening of the rich texture and variability of Indigenous thought” (Lewis, 2020, p. 4) and that, though never extensive or finite, their “aim was to articulate a multiplicity of Indigenous knowledge”. The chapter by Dr. Hēmi Whaanga, entitled “AI: A New (R)Evolution or the New Colonizer for Indigenous Peoples?”, points to the homogenisation that can be brought about by AI systems, revisiting Kenyan writer Ngũgĩ wa Thiong’o (1986) on the need to decolonise our mental universe.
These epistemological analyses of colonisation and AI connect closely with what Paola Ricaurte, in the article “Data Epistemologies, The Coloniality of Power, and Resistance”, points out when analysing the “data-centric rationality” at the core of AI systems. She stresses that it should be understood as a “violent imposition of ways of being, thinking, and feeling that leads to the expulsion of human beings from the social order, denies the existence of alternative worlds and epistemologies, and threatens life on Earth” (Ricaurte, 2019). Once again, this shows that an anti-colonial approach to AI systems implies dismantling violent impositions, such as those reinforced by the way consent is being framed in data protection debates. It requires inclusion from the very beginning of the ideation process of an AI system, consultation, and a willingness to achieve collective consent that reinforces multiplicity and plurality, not a flawed, individual consent to a powerful and limited Silicon Valley-centred vision of what artificial intelligence should be or what the future should look like.
References
Abdilla, A., Arista, N., Baker, K., Benesiinaabandan, S., Brown, M., Cheung, M., Coleman, M., Cordes, A., Davison, J., Duncan, K., Garzon, S., Harrell, D. F., Jones, P.-L., Kealiikanakaoleohaililani, K., Kelleher, M., Kite, S., Lagon, O., Leigh, J., Levesque, M., … Whaanga, H. (2020). Indigenous Protocol and Artificial Intelligence Position Paper. https://doi.org/10.11573/SPECTRUM.LIBRARY.CONCORDIA.CA.00986506
Ahmed, S. (2017, June 30). No. feministkilljoys. https://feministkilljoys.com/2017/06/30/no/
Almeida, M. D. S. (2014). Desumanização da população negra: Genocídio como princípio tácito do capitalismo. Revista Em Pauta, 12(34). https://doi.org/10.12957/rep.2014.15086
Alston, P. (2020). What the “digital welfare state” really means for human rights. Open Global Rights. https://www.openglobalrights.org/digital-welfare-state-and-what-it-means-for-human-rights/
Arora, P. (2016). Bottom of the Data Pyramid: Big Data and the Global South. International Journal Of Communication, 10, 19.
Astromskė, K., Peičius, E., & Astromskis, P. (2021). Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI & SOCIETY, 36(2), 509–520. https://doi.org/10.1007/s00146-020-01008-9
Barocas, S., & Nissenbaum, H. (2009). On Notice: The Trouble with Notice and Consent. Proceedings of the Engaging Data Forum: The First International Forum on the Application and Management of Personal Electronic Information. https://ssrn.com/abstract=2567409
Berinato, S. (2018, September 24). Stop Thinking About Consent: It Isn’t Possible and It Isn’t Right. Harvard Business Review. https://hbr.org/2018/09/stop-thinking-about-consent-it-isnt-possible-and-it-isnt-right
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR, 81, 77–91. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
Carmi, E. (2018). Do you agree? What #MeToo can teach us about digital consent. Open Democracy. https://www.opendemocracy.net/digitaliberties/elinor-carmi/what-metoo-can-teach-us-about-digital-consent
Carmi, E. (2020). Media distortions: Understanding the power behind spam, noise, and other deviant media. Peter Lang.
Cohen, J. E. (2012). What privacy is for. Harvard Law Review, 126. https://cdn.harvardlawreview.org/wp-content/uploads/pdfs/vol126_cohen.pdf
Cohen, J. E. (2018). Turning Privacy Inside Out. Theoretical Inquiries in Law 20.1 (2019 Forthcoming). https://ssrn.com/abstract=3162178
Comissão Econômica para a América Latina e o Caribe (CEPAL). (2021). Afrodescendentes e a matriz da desigualdade social na América Latina: Desafios para a inclusão. Síntese (Report LC/TS.2021/26). Documentos de Projetos. Santiago. https://repositorio.cepal.org/bitstream/handle/11362/46872/1/S2000930_pt.pdf
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.
Couldry, N., & Mejias, U. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632
Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor (First Edition). St. Martin’s Press.
European Parliamentary Research Service. (2020). The impact of the General Data Protection Regulation (GDPR) on artificial intelligence. Panel for the Future of Science and Technology (Study PE 641.530). European Parliamentary Research Service, Scientific Foresight Unit. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf
Fraisse, G. (2011). Del consentimiento.
Gira Grant, M. (2016). Haciendo de puta. La labor del trabajo sexual. Pólvora Editorial.
Hotaling, A. (2008). Protecting Personally Identifiable Information on the Internet: Notice and Consent in the Age of Behavioral Targeting. Commlaw Conspectus, 16.
Jones, M. L., Kaufman, E., & Edenberg, E. (2018). AI and the Ethics of Automating Consent. IEEE Security & Privacy, 16(3), 64–72. https://doi.org/10.1109/MSP.2018.2701155
Kaufman, E. M. (2020). Reprogramming consent: Implications of sexual relationships with artificially intelligent partners. Psychology & Sexuality, 11(4), 372–383. https://doi.org/10.1080/19419899.2020.1769160
Lee, U., & Toliver, D. (2017). Building Consentful Tech. Ripple Mapping Tool. https://www.andalsotoo.net/wp-content/uploads/2018/10/Building-Consentful-Tech-Zine-SPREADS.pdf
López, J. (2020). Experimentando con la pobreza: El Sisbén y los proyectos de analítica de datos en Colombia. https://doi.org/10.13140/RG.2.2.19489.15207
Marling, B. (2017, October 23). Brit Marling on Harvey Weinstein and the Economics of Consent. The Atlantic. https://www.theatlantic.com/entertainment/archive/2017/10/harvey-weinstein-and-the-economics-of-consent/543618/
Masiero, S., & Das, S. (2019). Datafying anti-poverty programmes: Implications for data justice. Information, Communication & Society, 22(7), 916–933. https://doi.org/10.1080/1369118x.2019.1575448
Melgar, L. (2012). Pensar el consentimiento desde la libertad. In G. Fraisse, Del consentimiento.
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8
Muñoz Arce, G. (2019). The neoliberal turn in Chilean social work: Frontline struggles against individualism and fragmentation. European Journal of Social Work, 22(2), 289–300. https://doi.org/10.1080/13691457.2018.1529657
News Center Microsoft Latinoamérica. (2018, April 2). Avanza el uso de la Inteligencia Artificial en la Argentina con experiencias en el sector público, privado y ONGs. Microsoft. https://news.microsoft.com/es-xl/avanza-el-uso-de-la-inteligencia-artificial-en-la-argentina-con-experiencias-en-el-sector-publico-privado-y-ongs/
Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Daedalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). Crown.
Peña, P., & Varon, J. (2019a). Consent to our Data Bodies: Lessons from feminist theories to enforce data protection [Medium Post]. Coding Rights. https://medium.com/codingrights/the-ability-to-say-no-on-the-internet-b4bdebdf46d7
Peña, P., & Varon, J. (2019b). Decolonizing AI: A transfeminist approach to data and social justice [Global Information Society Watch 2019]. Association for Progressive Communications. https://www.giswatch.org/node/6203
Peña, P., & Varon, J. (2020). Teenager pregnancy addressed through data colonialism in a system patriarchal by design. Why Is AI a Feminist Issue? https://notmy.ai/2021/05/03/case-study-plataforma-tecnologica-de-intervencion-social-argentina-and-brazil/
Pérez, Y. (2016). Consentimiento sexual: Un análisis con perspectiva de género. Revista Mexicana de Sociología, 78(4), 741–767. Universidad Nacional Autónoma de México, Instituto de Investigaciones Sociales.
Powles, J. (2021, July 20). The seductive diversion of “solving” bias in artificial intelligence [Medium Post]. OneZero. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53
Ratts, A., Nascimento, M. B., & Carneiro, S. (2007). Eu sou atlântica: Sobre a trajetória de vida de Beatriz Nascimento. Instituto Kuanza: Imprensa Oficial do Estado de São Paulo.
Ricaurte, P. (2019). Data Epistemologies, The Coloniality of Power, and Resistance. Television & New Media, 20(4), 350–365. https://doi.org/10.1177/1527476419831640
Sociedad Civil de Chile Defensora de los Derechos Humanos del Niño et al. (2019). Dia Internacional de la protección de datos. Carta abierta de la Sociedad Civil de Chile Defensora de los Derechos Humanos del Niño. ONG Emprender con Alas. https://www.emprenderconalas.cl/2019/01/28/dia-internacional-de-la-proteccion-de-datos-carta-abierta-de-la-sociedad-civil-de-chile-defensora-de-los-derechos-humanos-del-nin
Solove, D. J. (2013). Privacy Self-Management and the Consent Dilemma. Harvard Law Review, 126. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2171018
Subsecretaria de la Niñez. (2020). Orientaciones Técnicas para la implementación del piloto de la Oficina Local de la Niñez. https://www.crececontigo.gob.cl/wp-content/uploads/2021/04/Orientaciones-Te%CC%81cnicas-para-la-implementacio%CC%81n-del-Piloto-de-la-Oficina-Local-de-la-Nin%CC%83ez-2020.pdf
Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 1–14. https://doi.org/10.1177/2053951717736335
Theodoro, M. (2019). A implementação de uma Agenda Racial de Políticas Públicas: A experiência brasileira. As Políticas Da Política: Desigualdades e Inclusão Nos Governos Do PSDB e Do PT.
Thiong’o, N. wa. (1986). Decolonising the mind: The politics of language in African literature. J. Currey; Heinemann.
Tufekci, Z. (2018, January 30). The Latest Data Privacy Debacle. The New York Times. https://www.nytimes.com/2018/01/30/opinion/strava-privacy.html
United Nations General Assembly. (1960). Declaration on the Granting of Independence to Colonial Countries and Peoples. https://www.refworld.org/docid/3b00f06e2f.html
United Nations General Assembly. (2007). United Nations Declaration on the Rights of Indigenous Peoples. https://undocs.org/A/RES/61/295
United Nations General Assembly. (2012). Resolution 67/97. The rule of law at the national and international levels. https://undocs.org/en/A/RES/67/97
United Nations General Assembly. (2019). Report of the Special Rapporteur on extreme poverty and human rights. https://undocs.org/A/74/493
Urtubey, J. M. (2018, April 21). La inteligencia que no piensa. Pagina 12. https://www.pagina12.com.ar/109080-la-inteligencia-que-no-piensa
Venturini, J. (2019, October 10). Vigilancia, control social e inequidad: La tecnología refuerza vulnerabilidades estructurales en América Latina. Derechos Digitales. https://www.derechosdigitales.org/13900/vigilancia-control-social-e-inequidad/
Footnotes
1. https://www.amazon.com/gp/help/customer/display.html?nodeId=GX7NJQ4ZB8MHFRNJ&language=pt
2. https://www.amazon.com/gp/help/customer/display.html?nodeId=201809740
3. Interview of the UN Special Rapporteur to The Guardian in October 2019: https://www.theguardian.com/technology/2019/oct/16/digital-welfare-state-big-tech-allowed-to-target-and-surveil-the-poor-un-warns
4. Black feminist scholar Patricia Hill Collins, in her book “Black Feminist Thought: Knowledge, Consciousness and the Politics of Empowerment”, describes four interrelated domains that organise power within society: the structural domain, the disciplinary domain, the hegemonic domain, and the interpersonal domain.
5. In Europe, the General Data Protection Regulation establishes in Article 6 the legal bases for lawful processing: consent, performance of a contract, legitimate interest, vital interest, legal obligation, and public interest. Other regulations, such as the Brazilian General Data Protection Law, follow similar requirements.
6. This section draws on the article “Consent to our Data Bodies: Lessons from feminist theories to enforce data protection” (Peña & Varon, 2019).
7. According to the graph “Latin America (6 countries): poverty and extreme poverty by ethnic-racial condition, 2018”, available at https://repositorio.cepal.org/bitstream/handle/11362/46872/1/S2000930_pt.pdf
8. The project notmy.ai is currently mapping AI systems being deployed by governments in Latin America that might have critical implications for gender equality and all its intersectionalities.
9. https://www.merriam-webster.com/dictionary/self-determination
10. https://www.oxfordlearnersdictionaries.com/us/definition/english/self-determination
11. Even this example, though bringing a collective approach to consent into diplomatic arenas, still has some issues. We should note that more critical voices on the UN Declaration on the Rights of Indigenous Peoples would see it as problematic that it was negotiated in a state-based forum and therefore operates on national states’ terms. These are power forces that consider Indigenous peoples as part of (settler colonial) states, subjects to be given rights by them, rather than as sovereign nations in their own right which precede the modern nation states represented at the UN. Wouldn’t it be legitimate to ask why one party is reduced to “giving consent” rather than shaping the modalities of the encounter?
12. https://www.indigenous-ai.net/