“AI for the common good: strengthening alternative systems”, recommendations from the meeting with Timnit Gebru

#algorithm #algorithmic racism #artificial intelligence #Big Tech monopolies #competition law #consumer rights #data protection #Digital Colonialism #elections #events #facial recognition #feminisms #socio-environmental justice #Latin America #misinformation #privacy #racism #speculative futures

On June 6, 2024, in partnership with the Distributed AI Research Network – DAIR, MediaLab.UFRJ, Lavits and Instituto da Hora, Coding Rights was honored to welcome Timnit Gebru, a renowned computer scientist specializing in ethics and Artificial Intelligence and co-founder of Black in AI and the Distributed AI Research Institute – DAIR, for a debate with researchers and policy makers on alternatives for developing Artificial Intelligence designed for the common good.

In addition to Timnit, the panel included Dani Monteiro, State Deputy of Rio de Janeiro and a parliamentarian involved in the “Sai da Minha Cara” initiative, which proposes bills to ban facial recognition in public security; Estela Aranha, member of the UN High-Level Advisory Body on AI; Roberta Eugênio, Executive Secretary of the Ministry of Racial Equality; and Samara Castro, Director of Promotion of Freedom of Expression at the Brazilian Presidency. The event also featured insights, a performance, and moderation by the organizers: Joana Varon, Founder and Co-Executive Director of Coding Rights; Fernanda Bruno, Coordinator of MediaLab.UFRJ; and Nina da Hora, Executive Director of Instituto da Hora.

You can watch the full event here:

We’ve also collected below some of the key recommendations that emerged from the debate – also presented to the T20, the engagement group of research centers and think tanks at the G20:

Key recommendations:

• It is necessary to promote an inflection, a deviation from the hegemonic model of Artificial Intelligence, a model that runs counter to the common good. Concentrated in a few large technology corporations, it relies on extractive practices on a very large scale that have proven violent for collective life, democracy, and socio-environmental, racial and socio-economic justice, making daily life, work, free time, communication and mental health more precarious. The AI model of the big tech corporations carries to full power an anthropocentric, colonial and patriarchal perspective on technology that urgently needs to be questioned not only in the realm of ideas, but also in practices and institutions.

A one-size-fits-all model does not work. It is a fictional argument that feeds a tech monoculture and a tech monopoly, because only a few companies can build ever-larger models with massive amounts of data (so much that it is impossible to keep safe) and huge computational power, which consumes enormous amounts of energy and water. In the end, it is an excuse for exploiting labor, collecting data, destroying the environment, and maintaining monopolies with bad-quality products that do not respond to local needs, because the long tail is not important for big tech companies. What we want is the complete opposite: smaller models and smaller, curated datasets work better, built by small community-rooted organizations that care about their people, know the context and care for their environment. To shift this paradigm, direct investments are needed, because the false claim that big tech has one large model that can do anything causes divestment from smaller local companies.

The foundation is the people. Technology has to be distributed, so that the people who develop it can stay connected to their roots and work on issues they understand, under community standards. “Parachute science” or parachute research practices, which since colonization have been stealing knowledge, especially from indigenous peoples, shall be discouraged and avoided. Tools shall be built from the needs of the people, and by the people they are directed to.

Federation of these small, local organizations working on AI shall be fostered. For Large Language Models, for example, there could be a common API covering many translations, with companies sharing that common infrastructure while adapting it to serve their communities’ needs.

Data is political. The hegemonic system monetizes it, but we need to politicize it. Marginalized communities are too often invisibilized, since the data produced by the public sector for public policies tends not to capture the sensitivities and complexities of their life conditions. There shall be incentives to systematize data on violence, police lethality and sanitation: data collected by and for the territories, citizen data. The so-called “solutions” presented by big tech companies do not emerge from measuring and understanding the local dimensions of the problem; as such, they will probably not solve anything.

• What does it mean to talk about the common good in a society where recognizing inequalities has not led to efficient responses to transform this reality? It is not solely about promoting access: addressing access and permanence within these spaces is not enough. A change within the system is needed, so that the common good is not just a guideline but a reality in the present. It is important to foster incubators and encourage technology projects that respond to the discriminatory biases we find in this field today, and also to invest in tech that helps build horizons of equality in which differences no longer represent violence.

• It is essential to redesign research methods on and with technologies, undoing the disciplinary boundaries that have historically shaped knowledge production in universities in favor of building effectively transdisciplinary centers that involve computer science, engineering and data science, but also the social and human sciences, environmental sciences, the arts, traditional knowledge, and other fields.

Universities and AI research are being funded by big tech; governments need to invest in and foster R&D so that technology is also produced for the public good, beyond market priorities. We need a wider diversity of people and purposes to build alternative socio-technical imaginaries about how technologies are produced, standardized, marketed, used, and integrated into people’s lives, beyond the imaginary that results predominantly from the investment choices of international corporations.

• More conversations are needed in the field of AI and education, particularly regarding children and adolescents, who should take part in this debate too. Screen time is already affecting the cognitive development of kids; will artificial intelligence affect the future of human cognition?

Digital Public Infrastructures shall receive investment to avoid the over-platformization of public services on big tech infrastructures.

AI does not exist without labor, data and mineral extractivism, and countries of the Global South are major providers of these three inputs. We need to make the global production chain of AI visible, centering a debate about both environmental damage and the labor rights of workers in mining, data labeling, content moderation and other activities that have been invisibilized by big tech narratives. The geopolitics of labor and AI is a political-economy debate that shall be further explored in both policies and legislation.

In Brazil, we are highly qualified producers of data and content; this is already a Brazilian talent. No wonder culture and artistic production are also major themes in all the processes of regulating digital spaces. When talking about the production of content and of highly qualified, structured data, remuneration shall be debated. Public policies and legislation shall address models of compensation for big tech’s use of this content, or at least how to ensure that people who do not want their content used by these companies have their rights respected.

• Also in terms of regulation, we should reverse the burden of proof: instead of risk assessments by regulators, companies should have to prove that they are not stealing data or doing harm. Not every technology should exist. Whether or not a technology should be developed is a discussion that has to be had.

The original document sent to the T20 with these recommendations is available here:

The translation to Portuguese is available here: