AI Is the Future. But Whose?

No matter how old you are, it’s hard to have missed one basic fact of modern life: wealth is not evenly distributed. The same, it turns out, goes for artificial intelligence. And by AI I don’t mean ChatGPT, but compute power, data centers, energy, data, skills, rules, and—above all—options. This isn’t just about who gets access to what. It’s about a clean hierarchy: those who get to build, those who are allowed to use, and those who are excluded before the game even begins.

If it’s true that where there’s Barilla, there’s home, it’s also true that where there’s wealth, there’s progress (we’ll come back to that). Could it have gone any other way? Sadly, no. The real problem is that most of us barely grasp what this actually means.

As the global economy pivots, day by day, toward large-scale AI adoption, less developed countries risk falling even further behind, deepening economic and social divides that, if they were already chasms, now threaten to become black holes.

You don’t need to be an expert to notice it. Every day, more and more pieces of our lives migrate from the physical world to the digital one. Human–machine interaction, which until a few decades ago was the stuff of visionary filmmakers and sci-fi writers, has become central to almost everything we do. Whatever hasn’t yet made the jump will likely be sucked in before we even notice.

The problem is that AI, as we are currently building it, is boosting the comfort and privileges of lives, like ours, that were already doing just fine. But what about the others?

Let’s start with energy. According to the International Energy Agency, in 2024 data centers consumed 1.5% of global electricity (about 415 terawatt-hours, roughly the annual consumption of 150 million homes). On current trajectories, that figure could reach around 945 TWh by 2030. What matters here is that this consumption is anything but evenly spread: in 2024, the largest share belonged to the United States (45%), followed by China (25%), with Europe trailing at 15%.
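For a sense of scale, here is a quick back-of-the-envelope sketch of what those figures imply. The per-home consumption and the 2030 equivalence are derived from the numbers above, not stated by the IEA, so treat them as rough illustration only:

```python
# Rough sanity check on the IEA figures cited above. The per-home value is
# implied by the "415 TWh vs. 150 million homes" comparison, not an official number.

data_center_twh_2024 = 415        # IEA estimate, data centers, 2024
data_center_twh_2030 = 945        # IEA projection for 2030
share_of_global_2024 = 0.015      # ~1.5% of global electricity

# Implied global electricity consumption (~27,700 TWh, in the right ballpark)
global_twh = data_center_twh_2024 / share_of_global_2024
print(f"Implied global electricity use: {global_twh:,.0f} TWh")

# Implied average household consumption behind the "150 million homes" comparison
kwh_per_home = data_center_twh_2024 * 1e9 / 150_000_000   # 1 TWh = 1e9 kWh
print(f"Implied consumption per home: {kwh_per_home:,.0f} kWh/year")   # ~2,767

# If the 2030 projection holds, the same comparison scales to roughly this many homes
homes_2030 = data_center_twh_2030 * 1e9 / kwh_per_home / 1e6
print(f"Equivalent homes in 2030: {homes_2030:,.0f} million")          # ~342 million
```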

So the obvious question is this: how are smaller, resource-poor countries (without the infrastructure to sustain energy demands of this scale) supposed to enter the game under rules (needless to say) written by us? And no, this is not Darwinian “natural selection,” despite how casually I’ve heard that phrase tossed around. Because there’s nothing natural about it.

Image via Creative Commons.

Then there’s water, the other elephant in the room. AI doesn’t consume water out of malice; it does so because data centers need cooling and energy has to come from somewhere. A study titled “Making AI Less Thirsty” shows that the water footprint of artificial intelligence (the amount of freshwater used directly and indirectly to produce it) is far from negligible, and largely invisible in public discourse. If this is already putting pressure on the world’s strongest economies, imagine what it means in places where electricity is unreliable and water scarcity is structural.

Let’s be honest. How many times has technology arrived with the best intentions and still caused more harm than good? AI is already rattling the job market. In healthcare, yes, it can save your life, but it can also reduce you to a data point. And what about the human brain? Remember Limitless, the 2011 movie in which Bradley Cooper takes a pill that unlocks 100% of his cognitive potential? AI feels like the reverse pill, one that slowly dulls that potential. The risk isn’t that it will make us stupid; it’s subtler, and more human. If we keep using it as a crutch, if we stop exercising judgment and effort, our brains, like any muscle, will lose tone. Spoiler alert: it’s already happening. In the academic literature, this has an uncinematic but very real name: automation bias, our tendency to trust machine outputs even when they’re wrong. Once you get used to delegating, you eventually stop checking.

And then there’s the question of who controls all this. Without clear rules, AI risks becoming a digital Wild West run by the usual suspects. Maybe it’s time to open our eyes. The future of artificial intelligence will not write itself, and it would be nice if it weren’t shaped exclusively by those who worship at the altar of money.

Over the past few decades, especially in digital technologies, private companies have positioned themselves as the primary engines of transformative innovation, often operating with near-zero social oversight. This stands in stark contrast to the approach taken by CERN in Geneva with the World Wide Web (or, as most of us know it, the www). On April 30, 1993, instead of patenting or privatizing the web’s source code, CERN made it freely available to the public. That decision ensured the web remained an open platform for innovation and global collaboration, free from proprietary restrictions. It enabled, and still enables, the growth of the internet, creating endless opportunities for business, education, and communication worldwide. Spoiler alert: that legacy is dying, undermined by the very development models we see every day.

To be clear: given the enormous social impact of AI, adopting a non-profit approach (yes, really) focused on infrastructure, data, and skills rather than profits could make the difference between genuine progress and yet another upward redistribution. Utopian? In these dark times, probably.

Let’s at least agree on one thing: default demonization of AI makes no sense, since its benefits are enormous. As the OECD puts it, “AI has the potential to address complex challenges, from improving education and healthcare to advancing scientific innovation and climate action.” The question, then, isn’t how to go back, but how to make what comes next better for everyone, not just for us. Right now, we live in a world where (to borrow a data-backed metaphor) one person is drowning, three are swimming against the current, five are barely staying afloat, and one is cruising by on a yacht. If technological progress isn’t handled with care and intelligence, the rich will get richer, those who struggle will keep struggling, and those who are starving will keep starving. Meanwhile, as AI hype promises trillions in economic gains, the pie remains in very few hands. According to UNCTAD, the global AI market could reach $4.8 trillion by 2033, roughly the size of Germany’s economy. So here’s the question: if AI has the potential to create equality, why are we using it in ways that will only deepen old inequalities, and calling that progress? Where, exactly, is the progress?

Image via Creative Commons.

The problem starts upstream. Large parts of the world (call it the Global South or not, it hardly matters here) still face a persistent gap in internet access and use. The digital divide has evolved from a simple “connected vs. unconnected” binary into a multidimensional phenomenon. As early as 2006, it was described as layered rather than binary: access matters, yes, but so do cost, quality, gender, skills, and meaningful opportunities for use.

Today, according to the ITU, around 2.6 billion people (32% of the planet) are still offline. The gap is brutal: 93% connectivity in high-income countries versus 27% in low-income ones. Add the urban–rural divide, shaped by infrastructure and service reliability, and the reality becomes clear: it’s not that the internet is missing, it’s that the ecosystem needed to make being online genuinely usable for everyone was never seriously built.

Throw in a dash of gender inequality, and the picture gets even rosier. In low-income countries, about 90% of women aged 15–24 cannot use the internet, compared to 78% of their male peers. From our comfortable daily lives, we have little sense of how many people are excluded from a world that now runs almost entirely on digital rails, and at a speed that even we struggle to keep up with (we being, by definition, the fully connected elite).

Without targeted, coordinated interventions to close these gaps, AI’s potential to support sustainable development and poverty reduction will remain yet another false promise of our time, while vast segments of the global population are left behind in our digital gold rush.

If we want to try, even just try, to do things differently, there are a few minimum (not heroic, minimum) conditions we can demand. Connectivity and cost: having “the internet” isn’t enough; access must be affordable and reliable, or online life remains a luxury. Compute access: universities, researchers, and public institutions in poorer countries must be able to use powerful computing resources (servers, GPUs, cloud) without being excessively dependent on a handful of private giants. If compute remains too expensive or tightly controlled, only a few countries and companies will shape AI, while everyone else merely uses it on terms set by others. Energy and water: efficiency, transparency, and planning. Without this trio, the game is played where energy and cooling capacity are abundant. And guess who starts ahead. Skills and gender: digital literacy and AI literacy, especially where the gap systematically affects young women. Governance: clear, enforceable rules, with accountability and consequences. Negligence doesn’t become progress just because we rename it innovation.

Whether you read this as optimistic or pessimistic hardly matters. The impact AI is having on society is comparable to only a handful of epochal moments in human history. With one crucial difference: fire, the Industrial Revolution, and the automobile didn’t transform the world in just twenty-four months.
In The Impossible, the 2012 movie about the tsunami that struck Thailand more than twenty years ago, people see the wall of water approaching from afar and try to escape. The feeling I have is that we think we’re standing on the shore, watching the wave roll in, when we’re actually already ten meters underwater and didn’t even notice the moment of impact.

Beatrice Galluzzo