Artificial intelligence is creating a new colonial world order

We recommend this MIT Technology Review series that “investigates how AI is enriching a powerful few by dispossessing communities that have been dispossessed before.”

Image: Edel Rodriguez

– by Karen Borchgrevink, LA Tech4Good founder & executive director

Karen Hao is a senior AI editor at MIT Technology Review, where her exposés of Facebook and Big Tech have been groundbreaking.

Hao, along with Heidi Swart, Andrea Paola Hernández, and Nadine Freischlad, recently wrote a four-part series on AI colonialism, which we think is an important read. We've captured some highlights here that we hope will whet your appetite to read more. A theme worth highlighting: while the series starts out with outrages, it segues into resistance and initiatives that offer hope and optimism.

Introduction: Artificial intelligence is creating a new colonial world order

❝The AI industry does not seek to capture land as the conquistadors of the Caribbean and Latin America did, but the same desire for profit drives it to expand its reach. The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources—data—it can harvest from their activities, their movements, and even their bodies.❞

Part 1: South Africa’s private surveillance machine is fueling a digital apartheid

❝As firms have dumped their AI technologies into the country, it’s created a blueprint for how to surveil citizens and serves as a warning to the world.
Not only is South Africa the world’s most unequal country, but the gap is deeply racialized, a part of apartheid’s legacy. The latest government reports show that in 2015 half of the country lived in poverty; 93% of those people were Black… As a result, it’s predominantly white people who have the means to pay for surveillance, and predominantly Black people who end up without a say about being surveilled.❞

Part 2 – Venezuela: How the AI industry profits from catastrophe

As the demand for data labeling exploded, an economic catastrophe turned Venezuela into ground zero for a new model of labor exploitation.

❝After two hours of work, which included completing a tutorial and 20 tasks for a penny each … the Venezuela-based reporter on this article earned 0.11 US dollars.❞

Image: Agoes Rudianto

Part 3 – Jakarta: The gig workers fighting back against the algorithms

❝This sense of community is now at the heart of what distinguishes Jakarta’s drivers from other gig workers around the world.

Base camps became the network through which drivers around the city stayed in tight communication… [I]n Jakarta, things have played out differently. Through base camps, drivers don’t just keep each other informed; they support one another and band together to bend Gojek’s system a little more toward their will. It’s opened up new channels of communication with the company and laid the groundwork for lasting policy change.❞

Te Reo Māori Speech Recognition: a story of community, trust, and sovereignty.

Part 4 – Aotearoa (the Māori name for New Zealand): A new vision of artificial intelligence for the people

❝Like many Indigenous languages globally, te reo Māori began its decline with colonization.
Data sovereignty is thus the latest example of Indigenous resistance—against colonizers, against the nation-state, and now against big tech companies. "The nomenclature might be new, the context might be new, but it builds on a very old history," says Tahu Kukutai, a cofounder of the Māori data sovereignty network.❞

We believe that the work of Te Hiku Media, which nurtures te reo Māori, the Māori language, through data, is groundbreaking. In 2018, they recorded over 300 hours of audio of native Māori speakers across New Zealand. It was enough data to build language tech, including automatic speech recognition and speech-to-text. Te Hiku Media considers this a treasure, both in sustaining their language and in resisting corporate attempts to buy it.

In conclusion, Hao writes:

❝That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way.

Now a new generation of scholars is championing a “decolonial AI” to return power from the Global North back to the Global South, from Silicon Valley back to the people. My hope is that this series can provide a prompt for what “decolonial AI” might look like—and an invitation, because there’s so much more to explore.❞

The whole series is worth a read – start here!
