AI May Create a New Aristocracy: Why Progress Must Serve Society, Not Just Capital
Artificial intelligence is arguably becoming the world's most powerful general-purpose technology since electricity. It is hard to argue otherwise when its diffusion is faster than the internet's, its potential economic footprint is measured in trillions, and its risks are drawing attention from regulators and philosophers alike. Yet beyond the buzz of the latest models and stock surges, there is a deeper question we need to confront: what happens to the social fabric when capital flows increasingly to a narrow band of AI firms (and related infrastructure) while demand for human labour declines?
This isn't a futurist's thought experiment. The IMF now estimates that 40% of global jobs are exposed to AI, rising to 60% in advanced economies. Goldman Sachs puts the disruption at the equivalent of 300 million full-time jobs. McKinsey projects that half of today's work tasks could be automated sometime between 2030 and 2060. None of this guarantees mass unemployment, of course, as new roles may well emerge, but it does mean the traditional link between labour and livelihood is fraying. Couple that with Gartner's prediction that $1.5 trillion of private capital will be invested in AI in 2025 alone, and you have a movement of capital on a scale we have never seen before, alongside an uneven and uncertain trajectory for the human workforce.
The Social Contract on Shaky Ground
If productivity growth accelerates but wages stagnate, the result is a world where wealth accrues to the owners of algorithms, data, and compute, not to workers. Daron Acemoglu calls this "so-so automation": technology just productive enough to displace workers, but not productive enough to generate offsetting demand for labour, so the gains enrich capital while eroding labour's share. We have seen this dynamic before, in the rise of "superstar firms" whose profits dwarf rivals' and whose labour intensity is low. AI threatens to supercharge it.
There are, of course, those who point to an "abundance theory": the suggestion that artificial intelligence could usher in a post-scarcity world by drastically lowering the cost of producing goods and services, and by optimising scientific and engineering outcomes at scale and speed. By automating production, optimising resource allocation, and accelerating innovation, AI, they say, has the potential to eliminate historical resource constraints and make nearly all necessities of life (food, energy, education, healthcare) virtually free. But the move to an AI-driven economy is unlikely to be smooth. Managing what is increasingly being termed the "gap phase", in which many jobs are displaced by automation before new systems to uphold fairness, equality, and justice in society have been created, could cause significant economic and social disruption. Left unchecked, the "mass accumulation effect", as capital funnels to AI and its respective owners, is unlikely to produce such radical redistribution on its own. And so experiments have begun into interim measures such as Universal Basic Income (UBI).
Policy experiments show both the promise and the limits of redistribution. Finland's universal basic income trial improved wellbeing but did not meaningfully boost employment. Stockton's pilot gave recipients dignity and breathing room, sometimes enabling them to secure better jobs. Yet at national scale, the fiscal weight of a permanent UBI is daunting. The IMF and OECD both caution that we may need more targeted supports (portable benefits, stronger safety nets, and progressive taxation) before leaping to blanket income guarantees.
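To make that "fiscal weight" concrete, here is a rough back-of-envelope sketch in Python. The figures are illustrative assumptions only (roughly 260 million eligible adults at US scale, a $1,000 monthly payment, and total annual federal spending of about $6 trillion), not official projections, and the gross cost ignores offsets such as replaced benefits or clawback through taxation:

```python
# Back-of-envelope estimate of the gross fiscal cost of a blanket UBI.
# All figures below are illustrative assumptions, not official numbers.

ADULT_POPULATION = 260_000_000  # assumed eligible adults (US-scale)
MONTHLY_PAYMENT = 1_000         # assumed UBI payment, dollars per month
FEDERAL_BUDGET = 6.0e12         # assumed total annual federal spending, dollars

# Gross annual cost before any offsets (replaced benefits, tax clawback)
annual_cost = ADULT_POPULATION * MONTHLY_PAYMENT * 12
share_of_budget = annual_cost / FEDERAL_BUDGET

print(f"Gross annual cost: ${annual_cost / 1e12:.2f} trillion")
print(f"Share of assumed federal budget: {share_of_budget:.0%}")
```

Under these assumptions the gross cost comes to roughly $3.1 trillion a year, around half of the assumed budget, which illustrates why analysts describe a blanket guarantee as daunting even before debating its merits.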
Faith in a Labour-Rich Future?
To be sure, techno-optimists contend that this analysis overlooks AI's ultimate promise. They argue that, like past technological revolutions, AI will ultimately create more jobs than it destroys, unleashing a wave of innovation and entirely new industries we cannot yet imagine. In this view, attempting to steer or redistribute the gains too early is a form of Luddism that risks stifling the very engine of growth that will benefit everyone.
However, this historical analogy is flawed. The pace of AI's diffusion is unprecedented, and its nature as a general-purpose technology for cognition means it can displace swathes of both blue- and white-collar work simultaneously. Furthermore, the capital required to compete at the frontier of AI is so immense that the "new industries" are likely to be born within, or immediately acquired by, the same existing mega-caps, reinforcing the cycle of concentration rather than breaking it. Waiting for trickle-down gains that may never arrive is a dangerous gamble with societal stability.
All this, of course, does little to predict the potential power of wider-reaching Artificial General Intelligence (AGI), let alone Artificial Super-Intelligence (ASI). As Demis Hassabis, CEO of Google DeepMind and Nobel Prize winner, has put it, AGI may prove to be the last human invention. What course might human labour chart in such a world? Certainly an unpredictable one, and certainly one for which policymakers should be prepared.
Capital Markets at a Crossroads
Markets are already tilting heavily toward a handful of AI-linked mega-caps. Their valuations rest on expectations of enduring dominance, but the systemic risk of such concentration is obvious: if a small number of firms become indispensable to both investors and economies, fragility rises (the AI bubble may well be the topic of a future article). Meanwhile, control of scarce assets, from compute infrastructure and proprietary data to foundation models, gives these firms choke-points over the AI value chain.
If we allow this dynamic to run unchecked, capital markets risk becoming a mirror of inequality: a small set of corporations and investors capture extraordinary rents, while the broader workforce loses bargaining power. In such a world, demand itself may wither, forcing society to improvise redistributive fixes. That is not a recipe for stable, healthy growth.
Beyond Money: The Future of Value
The question stretches further: what happens to money and capital if the primary driver of wealth creation is non-human? Central banks are exploring digital currencies, which could in theory enable direct redistribution of AI dividends. Policy thinkers debate “data dividends,” compensating people for the data that trains models. Tokenized markets and “unified ledgers” promise faster financial plumbing, but they also highlight a truth: the way we distribute the fruits of AI is a choice, not a given.
The Broader Imperative
Here lies the crux: if AI progress is pursued solely as a profit-maximizing arms race, we will amplify the very polycrisis we are struggling through: inequality, democratic erosion, fragile capital markets, and social fragmentation. Worse, we risk side-lining the more urgent frame of planetary boundaries. Training large AI models consumes immense resources, but the larger point is that human ingenuity should be marshalled to solve ecological overshoot, not accelerate it.
The purpose of technology is not simply to generate capital; it is to better society. If AI is to be a transformative force, it must be directed toward augmenting human potential, restoring planetary balance, and strengthening the institutions that hold us together. That requires explicit choices: incentivizing labour-augmenting applications rather than labour-displacing ones, ensuring broad access to AI’s infrastructure, taxing excess rents where they accumulate, and embedding sustainability into every AI roadmap.
A Closing Call
We stand at a fork. One path leads to an economy where a few firms, flush with capital and compute, dominate not only markets but the very foundations of work and wealth. The other path requires us to actively redesign our institutions (financial, political, and social) to distribute the dividends of automation fairly, and to keep AI aligned with human and planetary well-being. It could even lead us to an AI-enabled radical abundance in line with planetary flourishing. Wouldn't that be nice.
AI is not destiny. It is a tool, albeit a powerful one. The question is not whether it will transform society, but whether it will do so in ways that strengthen or unravel the social fabric. We cannot afford to leave that outcome to chance, or to the balance sheets of a handful of companies.