"Aligning AI with planetary justice requires a radical rethinking of what we build, how we build it, and why"
An interview with Sara Marcucci of the AI + Planetary Justice Alliance
[This article was originally published on Truthdig]
As the artificial intelligence boom continues apace, the huge and growing environmental footprint of AI has become impossible to ignore.
In its efforts to reach “superintelligence,” Meta is planning a giant data center that will cover an area half the size of Manhattan and require five gigawatts of power at peak demand — equal to roughly half the electricity needed by New York City. OpenAI, Microsoft, Google, Anthropic and xAI are building similarly sized data centers. This has sent U.S. electricity prices soaring, while tech and power companies are scrambling to build new carbon-emitting gas power plants as electricity grids buckle under the strain. These massive “AI factories” will also require vast amounts of fresh water, which mostly evaporates, despite many being built in water-stressed regions such as Arizona, Virginia and Spain.
Data centers are just the most visible part of a complex AI supply chain with a rapidly escalating impact on communities and ecosystems across the planet.
Earlier this year, technology researcher Sara Marcucci launched the AI + Planetary Justice Alliance to investigate and bring to wider attention the under-reported impacts of AI.
“I noticed that most conversations about AI ethics and governance focus narrowly on privacy, bias or safety, while ignoring the deep ecological, social and geopolitical injustices embedded in its supply chains,” she says. The Alliance seeks to correct this by bringing together researchers, activists and artists to examine AI across its entire lifecycle — from mineral extraction and chip manufacturing to model deployment and disposal.
Together, this network brings a range of vantage points to bear on a fundamental question:
How much more AI can we and the planet truly afford?

Our conversation has been edited for clarity and length.
Truthdig: What does “Planetary Justice” mean in relation to your work?
Sara Marcucci: For us, planetary justice means looking beyond human-centered rights and considering the interconnected rights, needs and well-being of communities, ecosystems and future generations. It’s about addressing climate, labor and ecological justice together, because the harms of AI are not just abstract or digital; they are physical, localized and planetary in scale.
TD: What resources and processes make up the supply chains for the AI industry?
SM: When we talk about the AI supply chain, we’re talking about one of the most resource-intensive industrial systems on the planet. Every AI model you interact with is the endpoint of a long, complex and often opaque chain of extraction, processing, manufacturing, computation and disposal.
It begins with raw material extraction — lithium, cobalt, rare earths, copper, nickel — mined in places like the Democratic Republic of Congo, Chile, Argentina and Mongolia. These are not just “inputs” in a spreadsheet; their extraction often comes at the cost of polluted rivers, depleted water tables, deforested land and the displacement of Indigenous peoples.
From there, the minerals are refined and processed in industrial hubs like China, Malaysia and South Korea — a stage that is highly energy- and chemical-intensive, producing toxic waste that lingers in surrounding communities and ecosystems. Next comes equipment manufacturing: the chips, servers and cooling systems that form the physical backbone of AI. This stage is concentrated in East Asia and increasingly parts of Southeast Asia and India, where workers face high-pressure, low-wage and sometimes dangerous conditions.

Once the infrastructure is in place, model training begins. Data centers in the U.S., Ireland and India often sit near cheap electricity and abundant water sources, but “abundant” is relative. In practice, this can mean competing with local communities for essential resources.
After training comes inference: running these systems for users demands ongoing flows of energy and data, and every response a model generates carries its own energy cost.
Finally, there’s disposal and end-of-life. When servers and chips are decommissioned, they often end up in the Global South, in countries like Ghana or Pakistan, where informal waste pickers dismantle them without protective equipment, exposing themselves and their environment to toxic materials.
TD: Are there aspects of what you found in the AI supply chain that really stood out to you?
SM: One of the most striking things we’ve found is just how deeply extractive the AI industry is, not just in terms of minerals and energy, but also in terms of human labor and knowledge.
On the material side, the sheer scale of resource use is staggering. Training a single large model can consume millions of liters of water for cooling, and the mining of cobalt, lithium and rare earths often leaves behind toxic waste that will pollute land and water for generations. These impacts are often invisible in the marketing narratives of “clean” or “smart” technology.
But equally striking is that the most profitable stages of the AI supply chain are concentrated in a few countries and corporations, while the social and ecological costs are scattered across some of the most marginalized communities in the world, whose people are often not even aware that their suffering is linked to AI.
The thing that strikes me most in doing this work is how little we question whether AI is as “needed” or beneficial as we are taught to think. Progress, as in “more technology,” is often treated as inherently good. But that assumption goes unchallenged. Innovation doesn’t have to mean more AI models, more data centers, more chips, more resource consumption.

It can also mean developing technologies that are pluriversal (serving many worldviews and needs), frugal (minimizing resource use) and task-specific (designed for local, concrete problems rather than universal deployment).
In other words, innovation can be about different ways of thinking about technology, not just “more, more, more.”
TD: What are the key aspects of this that we should be aware of when we think about when and how we use AI?
SM: First of all, it’s important to specify what AI we’re talking about. Is it a small-scale, locally developed model used for a specific task? Or is it a large-scale model that is highly centralized, resource-intensive, designed for general purposes and controlled by very few people? I think this is an opportunity for each of us individually to reflect on what we care about. Do we care that communities near data centers are being deprived of their water supplies, that energy bills are skyrocketing, or that entire ecosystems are being disrupted to sink data centers into the ocean, to open new mineral mines and expand existing ones, or to discard electronic waste as cheaply as possible? It’s hard for us as humans to conceive of those impacts if we don’t see them.
Regulations and policies need to be made at both national and international levels to weigh the huge costs of these industries against their actual, proven benefits for societies. What are the actual benefits brought about by the use of these tools? Is AI really the best solution to the problem at hand? And if it is, what are the social and environmental costs of using it? Are they worth it?
TD: What has been the AI industry’s approach (both manufacturers and digital platforms) to their supply chain impacts?
SM: AI companies’ sustainability reports often focus on narrow metrics, like data center energy efficiency, without addressing the origin of materials, the labor conditions in mining and manufacturing or the environmental and social costs of disposal.
For example, large-scale data centers in Peculiar, Missouri, have sparked local concerns over water depletion, yet operators provide no transparent water-use reporting. In the Democratic Republic of Congo, cobalt mining — crucial for AI hardware — continues under dangerous labor conditions and is rarely traced back to end users in the AI sector.
When challenged, the industry often responds with green innovation narratives — designing “more efficient” chips, promising renewable-powered data centers or experimenting with new cooling systems. While these can reduce impacts per computation, they don’t address the deeper problem: the size of AI models and AI demand are both growing so fast that total resource use continues to rise.
Until there is independent auditing, public traceability of supply chains and binding environmental and labor standards for all stages of AI production, the industry will continue to treat this as a public relations problem rather than a structural one.
TD: What are the main demands we should make of the AI industry?
SM: We need to focus on the biggest AI companies — OpenAI, Meta, Google, Microsoft, Anthropic, and others at their scale — because they control the infrastructure, use the most resources and set the tone for the rest of the industry.
At the most basic level, governments should require full lifecycle transparency from these companies, with independently audited data on all stages of production, from where minerals are sourced, to how much energy and water is consumed, to the labor conditions in manufacturing and disposal.
Regulators should also introduce enforceable environmental and labor standards for suppliers, monitored by independent bodies, and prohibit sourcing from operations with documented human rights abuses.
Resource use needs to be limited in practice, not just in efficiency metrics. This could mean setting legal caps on energy and water consumption for training and running large models, especially in regions with water scarcity or energy insecurity. Where infrastructure projects such as mines, data centers or disposal sites affect Indigenous lands or local communities, free, prior and informed consent must be a legal prerequisite.
Finally, before training or deploying any high-impact model, companies should be required to publish an environmental and social impact assessment that weighs costs against clear, proven public benefits, with opportunities for independent review and challenge.
None of these steps are impossible. They are already standard practice in other industries with large environmental footprints, from mining to energy to agriculture.
TD: How do you hope to see the Alliance develop over the next few months?
SM: Over the next few months, we’ll be expanding our Observatory project so that the human and ecological costs of AI are made more visible, and growing our two investigative projects: Rooted Clouds and Below the Algorithm.
Rooted Clouds looks at AI’s operational footprint, specifically large-scale data centers, and grounds that in the voices and experiences of the communities that live alongside them. One of the moments I’m most looking forward to is visiting a data center. Not just to see the machines, but to stand in the surrounding community, listen to people’s stories and understand firsthand what it means to live next to one of these sites.
Below the Algorithm is our soon-to-be-launched deep dive into AI’s extractive foundations, tracing the origins of the minerals (lithium, cobalt, nickel, rare earths) that power AI. We’re mapping global mining sites connected to these materials, identifying who is affected and who benefits, and examining how extraction shapes ecosystems and societies.
We want the Alliance to be more than a platform for research and advocacy led by a small team of people. I want it to be a living, breathing coalition that connects different organizations, individuals, experiences and expertise, linking local struggles to global conversations and pushing for a different model of technology altogether.
TD: Do you think AI as a technology can be aligned with planetary justice or is it inherently incompatible?
SM: I think it depends on what kind of AI we’re talking about. Small-scale, locally developed, task-specific AI could be a tool that supports local priorities, works within ecological limits and complements rather than replaces other forms of knowledge and problem-solving.
But the AI industry as it exists today is built on a very different logic. It’s driven by the accumulation of profit and power in the hands of a few corporations, with decision-making concentrated far away from the communities most affected by the industry’s footprint.
Underpinning this is a worldview that assumes humans are separate from — and should dominate — the rest of the living world. You can see echoes of that in high-profile tech bro narratives like Elon Musk’s vision of colonizing Mars, or in the broader tech culture that treats “progress” as an unquestionable good, regardless of ecological or social cost.
In that paradigm, AI is fundamentally misaligned with planetary justice. It draws on the same growth-obsessed mindset that is at the root of the climate and biodiversity crises. Aligning AI with planetary justice would require a radical rethinking of what we build, how we build it, and why — shifting from bigger-is-better to enough-is-enough, and from conquering the planet to caring for it.
If you’re interested in finding out more about the AI + Planetary Justice Alliance or getting involved, they’d love to hear from you. You can reach out through the website: https://aiplanetaryjustice.com/
Thanks for reading this far. Please leave a comment below.



