A just transition to an AI future: Pursuing a labour-first AI agenda

As debates over AI safety, ethics and regulation intensify, Ranjitha Kumar and Vinay Narayan examine how Global South actors are carving out their own approaches — not by trying to win the AI arms race but by reimagining the game entirely.

By Ranjitha Kumar and Vinay Narayan

A global “AI arms race” is underway. So far it has been dominated by a handful of technology companies from either China or the US, giving those countries a significant advantage. Tech monopolies — particularly companies like Google, Meta, Amazon, Microsoft and OpenAI — have led the charge in building the large language models behind products like ChatGPT, now embedded seamlessly into everyday sociotechnical realities. We’re being catapulted towards a future featuring the extensive integration of Artificial General Intelligence (AGI).

Not everyone can compete in this race. It takes massive amounts of computational power to build and run AI: one needs a large number of graphics processing units (GPUs), robust cloud infrastructure and numerous energy-intensive data centres. Approaches also vary on the regulation front: while China favours administrative regulation, the EU has taken a more rules-based approach. Meanwhile, AI regulation in the US is subject to executive action, shifting with the leanings of the president. Once in office, President Donald Trump signed executive orders that rolled back many of his predecessor’s AI policies, blasting them as “woke Marxist lunacy”.

Economies in the Global South are generally falling behind — there’s a lack of resources, from GPUs and data centres to the local talent required to develop cutting-edge AI technologies. But this doesn’t mean they’re untouched by the AI arms race: with the presence of cheap labour and poor enforcement of labour regulations, these jurisdictions are seen as perfect for the precarious but vital data work necessary for AI development. Today’s AI value chain paints a clear picture, drawing a stark line between the Global North and the Global South when it comes to AI development, training and deployment — reinforcing colonial hierarchies of knowledge production and ownership.

The play for digital sovereignty

Not everyone wants to compete in the race. Across the Global South, a growing movement is questioning the need to build large models to drive development, preferring instead to advocate for and build smaller, purpose-specific models and applications. Unlike AGI, applied AI places a greater emphasis on applicability and relevance. In the Global South, innovators are building lightweight, frugal AI applications that can not only have immediate impact but also reduce dependency on the tech giants of the Global North.

The idea of digital sovereignty — a nuanced play for digital power — isn’t new. But AI’s momentum has seen this push reframed to also include AI sovereignty. For the EU, this has meant wresting control over European data (through the EuroStack) away from American corporations, strengthening resilience at the infrastructure level and building safe, trustworthy AI within the EU itself.

The notion of sovereignty has also captured the Indian imagination, particularly when it comes to building a just digital economy. Aadhaar, India’s digital identification system, was launched in 2009; it provides the foundational layer for the country’s digital infrastructure, known as India Stack. With the launch of the Unified Payments Interface (UPI) in 2016, which revolutionised real-time money transfers, India’s digital public infrastructure has helped shape a Global South vision for digital sovereignty — one characterised by low-cost innovation, modularity and, on occasion, contradictory goals.

The movement towards indigenously developed, purpose-specific models points in a promising direction for developing economies: investing in low-cost AI solutions with high payoffs while reducing reliance on hegemonic powers. But while this strategy for digital sovereignty might improve a country’s position in the AI race, it doesn’t promise a future of equitable, people-centric AI development. In India, the ousting of international players from its digital ID development bolstered the might of Indian tech giants as both key infrastructure providers and beneficiaries.

In other cases, there might not even be viable local options: many Global South countries struggle to be self-sufficient when it comes to such high-level technology and therefore have to balance their desire for a sovereign digital future with the realities of a skewed resource landscape. This challenge was thrown into stark relief in early 2025, when the Trump administration’s sudden gutting of the US government and freeze on foreign aid funding triggered a crisis of confidence in US-based technology and a renewed desire to seek — or build — alternatives.

A pitch for cooperative AI

In this context, a cooperative model of AI development can help resist the tide of Big Tech dominance: by using community-centric frameworks to set a labour-first AI agenda and by promoting regional cooperation among developing nations, cooperative AI can usher in a more equitable AI future in Asia. Starting from a community-centric, participatory model for voluntary data sharing — where a data cooperative can serve as an ideal economic model — this paradigm can be extended to national and regional frameworks for sharing resources, computational power and talent, bolstering Asia’s ability to navigate and build binding frameworks for AI safety, governance and regulation.

Central to cooperative AI are collective frameworks of data justice, participation and value redistribution. The notion of a cooperative model for AI draws on long-standing investigations and practical engagements with bottom-up data stewardship structures. These intermediary structures mediate data flows on behalf of the communities that constitute them, embodying principles of collective decision-making, equity and fairness.

A cooperative structure is well suited to all stages of AI, from dataset creation and labelling to model training and deployment. Embedding cooperative principles into the AI lifecycle has the potential to redistribute power, value and ownership, and to drive agency over data and infrastructure. Statutory recognition of bottom-up data rights can be transformative for the future of AI in the Global South: it would not only give groups at the margins the power to determine where and how co-generated data is used by Big Tech, but also boost the development of purpose-specific models that address ongoing developmental challenges and meaningfully bridge the digital divide. Additionally, cooperative and federated governance models can enable equitable participation from citizens to ensure that AI tools are deployed with sufficient guardrails — recognising that it’s not just about maximising AI opportunities but also minimising AI harms.

A labour-first AI paradigm can better compensate workers through structural frameworks of redistribution, in the form of payments or royalties, while also enhancing Global South data workers’ collective bargaining power to demand labour and occupational safety standards.

Balancing regulation and innovation

As AI becomes increasingly central to developmental gains, a unique opportunity for multilateral cooperation emerges. If such cooperation is driven by principles of workers’ cooperativism, fostering resource-sharing and establishing regional safety standards, the Global South would be in a better position to resist AI colonialism.

The groundwork for envisioning cooperative paradigms for an equitable Fourth Industrial Revolution has already begun in the Global North: the Council of Europe’s Framework Convention on Artificial Intelligence is a seminal multilateral treaty on AI, with over fifty countries committing to align AI deployment with democratic values and to address the risks of algorithmic discrimination and threats to public institutions.

The Asia-Pacific is ripe for such regional cooperation: it already has countries with frugally designed digital public infrastructure and interoperable data frameworks for secure cross-border financial data flows. Singapore and New Zealand, along with Chile, have also signed a digital trade agreement to create a framework for a shared digital economy. Uneven resource distribution across the region has so far been seen as a disadvantage, but cooperative trade and creative financial assistance policies can enable resource-sharing across the AI value chain. Region-specific GPU trade and assistance could significantly improve the computing capacity of many countries in the Asia-Pacific, while the export of digital infrastructure tech stacks could not only make innovation cheaper but also create opportunities for regional interoperability standards.

Some efforts to cooperate are already underway: the ASEAN Digital Masterplan 2025 is a prominent step towards achieving inexpensive innovation through regional cooperation, with commitments — inspired by the success of the EU’s General Data Protection Regulation (GDPR) — to produce regional guidelines for data-sharing.

Pursuing a new global AI deal

Current regional cooperation frameworks are rife with unequal power dynamics, exacerbating debt and inequity. Despite the formation of key international initiatives — such as the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Policy Observatory — that push for bilateral, cooperative agreements on AI, development and deployment continue to be skewed, prioritising innovation for the Global North, often at the cost of developing economies.

We must envision a renewed form of economic cooperation based on principles of data and economic justice, pursuing instruments such as a binding global AI compact that follows up on the United Nations’ Global Digital Compact and is enacted through principles of cooperativism. Key multilateral instruments must recognise the global digital divide and establish appropriate financial and regulatory frameworks, borrowing from the ethos of the UN Capital Development Fund to offer a level platform for the global majority. Labour justice must be at the centre of all our efforts for a just transition to a future where AI can benefit everyone.

Ranjitha Kumar is a Senior Research Associate at Aapti. Her work focuses on equitable data and AI governance models, and her interests include the political economy of labour and digital capitalism and data flows within the digital value chain. Her current research critically examines data sovereignty as a response to Big Tech and Big AI.

Vinay Narayan is a Senior Manager at Aapti. He leads Aapti’s work on data stewardship and structures for bottom-up data sovereignty, with a current focus on building networks of support for bottom-up data initiatives in the Global South. He also co-leads Aapti’s AI research practice, which examines the impact of AI technologies on livelihoods and the rights frameworks that can address ongoing and potential impacts.
