The factory worker whose job is automated away by an AI-powered robot arm. The call centre representative replaced by a chatbot. The junior analyst whose research tasks are now handled by AI. These stories dominate public discussions about AI and work, fuelling fears that millions of jobs will simply vanish. While early research suggests a more complex picture, one where AI acts more as a productivity enhancer than a wholesale job destroyer, this apparent good news comes with a significant caveat: the benefits and risks of AI in the workplace are not being distributed equally. This is a clear reflection of what is called the Turing Trap: when AI replaces, rather than augments, human labour, it shifts economic and political power to those who control the technology, undermining workers’ earning power and increasing inequality.
Current academic research on AI’s workplace effects largely supports the idea that AI enhances productivity more than it eliminates jobs. Studies in customer support, software development, and consulting have found productivity gains of 14-35%, with workers across skill levels benefiting from AI assistance. However, this research examines early AI implementations in limited sectors and doesn’t account for long-term effects as AI capabilities advance and costs decrease. Furthermore, AI is not an autonomous being (yet) but something we make use of, so no outcome (positive or negative) is written in stone.
At Broadpeak, we collaborate with industry experts, impact-driven investors, and academic institutions to address urgent global challenges. Through our articles and trilogies, we aim to share the insights we have gained from these projects with our network. Explore all of our published articles and trilogies in the blog section of our website.
The New Economic Divide
The relationship between technology and inequality follows a familiar pattern. For decades, technological advances have increased income gaps between higher-skilled workers and lower-skilled ones—a phenomenon economists call skill-biased technological change (SBTC). While AI differs from previous technologies, it may still increase inequality through mechanisms beyond traditional SBTC.
Unlike previous technologies that primarily automated physical tasks, AI can handle cognitive work previously considered safe from automation. One idea that is gaining traction here is David Autor and Neil Thompson’s expertise framework. In brief, the framework suggests that automation (such as that driven by AI) raises wages when it eliminates ‘inexpert’ tasks, but at the cost of reduced employment; conversely, when it eliminates ‘expert’ tasks, automation lowers wages but increases employment. In the words of the authors: “automation that decreases expertise requirements reduces wages but permits the entry of less expert workers; automation that raises requirements raises wages but reduces the set of qualified workers.”
Young workers face particular challenges: many entry-level positions that traditionally served as career stepping stones involve routine tasks that AI can handle at a fraction of the cost, even for those with elite educations. Dario Amodei, co-founder and CEO of Anthropic, recently claimed that AI may wipe out half of all entry-level white-collar jobs in the next one to five years. Some may argue that the labour-market struggles of incoming university graduates are narrowing income inequality between university-educated and non-university-educated individuals. But this trend precedes AI, even if AI is worsening it, and it does not necessarily mean that non-university-educated individuals will be better off either: as Autor and Thompson’s framework indicates, the expertise bar for better-paying jobs is rising regardless.
The concentration of AI development among a few technology companies amplifies these concerns. Seven major tech companies account for approximately 31% of the S&P 500’s market capitalization, and are paying bonuses of up to $100m to poach staff from rivals. This means AI’s economic benefits (e.g., rising stock prices, increased profits, high-paying jobs) flow disproportionately to shareholders and employees of these companies, while displacement costs spread more broadly across society. Furthermore, recent research from the Centre for Economic Performance indicates that this increased market concentration may, by itself, already be driving greater inequality.
Capital vs. Labour in the AI Economy
The most fundamental divide emerging is between those who own AI-enhanced businesses and those who sell their labour to them. When a law firm uses AI to draft contracts, the firm captures most of the value created while the junior lawyers who used to do that work see their earning power diminish.
This dynamic is pronounced in industries where AI handles tasks requiring significant human expertise. Financial firms can use AI for market analysis and trading. Law firms deploy AI for document review and contract drafting. In each case, without proper governance, firms capture most of the value created while workers whose tasks are automated see reduced earning power. This occurs alongside a degree of automation that is above the socially desirable level, as argued by Acemoglu, Manera and Restrepo.
When speaking to us, Su Cizem, AI Governance Analyst at the Centre pour la Sécurité de l’IA, explains the economic logic driving this level of automation: “Labor is expensive and human capital is expensive. Even if you hire employees that do really good work, they come with costs—they need time off, they have families, there are social things that you have to account for that cost companies money. So, if you can cut costs in any way while also maintaining productivity, I think a lot of companies would prefer to do that.” She warns that without governance interventions, “what’s going to happen is that a few people will get very rich and the rest of the world will, if no governance mitigations are put in place, fall behind—especially places like the global south where there’s limited capacity and people don’t even have access to computers.”
The AI Literacy Divide
AI literacy is becoming the new digital divide. This represents a classic case where new technology creates expertise requirements that determine who benefits and who gets left behind. Workers who can effectively collaborate with AI tools (i.e., using them to enhance productivity and decision-making) become increasingly valuable. Those without access to these tools or training risk being relegated to lower-paying jobs along the skill-expertise dimension.
Lukas Salecker, Co-Founder & CEO at deliberAIde, described to us the widening gap: “Those who are AI literate and have had the time and the resources—you know sometimes you have to pay for a Plus account on ChatGPT to get access to the actually useful tools—are gaining a huge advantage over those without the resources or time to engage with AI and practice, and it’s really all about practice. AI as a tool for people who know how to use it honestly feels like a superpower that’s quadrupling my productivity, my impact, what I’m doing every day.”
This divide doesn’t follow traditional education lines. A construction worker learning AI-powered project management might see improved prospects, while a middle manager struggling with AI-enhanced workflows might find their role diminished. The key factor isn’t education level but adaptability and access to training opportunities, which are unevenly distributed across companies, industries, and regions. Furthermore, many developing countries are lagging in AI investment and education, potentially reinforcing existing income and wealth gaps between countries.
Policy Responses and Their Gaps
Given these widening divides, policymakers are scrambling to respond. The three main approaches (universal basic income, retraining programs, and updated labour laws) each face significant challenges.
Universal Basic Income (UBI) has emerged as the most discussed response to AI-driven displacement. Pilot programs in Finland and Kenya show positive effects on well-being without significantly reducing work incentives. However, meaningful UBI would require unprecedented government spending and taxation, raising questions about sustainability and interaction with existing programs, especially at a time when most countries face constraining debt levels. Furthermore, UBI would not address the potentially growing inequality between countries.
Retraining programs have shown mixed results. Many suffer from mismatches between taught skills and market demand, or fail to provide the adaptability increasingly valuable in AI-augmented workplaces. Workers most at risk (e.g., older workers, those with limited formal education) often face the greatest retraining barriers. Successful programs require significant time investment, ongoing support, and employer connections, expensive elements difficult to scale.
When speaking to us, Julian Jacobs, PhD Candidate in Political Economy at the University of Oxford, expressed a level of scepticism about some retraining solutions: “I’m sceptical that we have enough empirical evidence that upskilling programs are effective at the broad national level, at least in the US. And one challenge that is under-discussed is that retraining programs often result in significant short-run costs for workers because you’re usually out of the labour market basically for a period. During that time, a person is learning new skills but experiences costs in terms of lost income and time, which can be particularly challenging for people already in vulnerable circumstances.”
An interesting idea from Professor Tom Mitchell of Carnegie Mellon University is the creation of a National Employment Data Asset. Public and private entities would jointly build a tool that gives the workforce a real-time, continuously updated register of skill over- and under-supply in different regions and of the changes underway, while also showcasing training and continuing-education opportunities linked to those skills. A collaboration between companies like LinkedIn and Indeed and ministries of labour, with the potential to also incorporate universities, news feeds, local communities, and large companies, would prove tremendously useful for both employers and employees, especially now that AI and other technologies are starting to shift the working paradigm. Innovative approaches like these are needed to ensure that AI’s distributional effects are not left unchecked but are mitigated through a redesign of jobs and a re-engineering of business processes that benefits all.
Existing labour laws struggle with AI-transformed work relationships. Platform worker classification remains contentious, with AI blurring lines between human and algorithmic management. Traditional collective bargaining may be poorly suited to workplaces where job categories shift rapidly and work becomes individualized. Current laws often provide limited protection against biased or opaque algorithmic decision-making in employment, even if certain exceptions exist. Furthermore, supporters of market-led AI development contend that regulatory interventions could slow beneficial innovation and that competitive pressures will naturally drive companies to develop more inclusive AI systems.
The Road Ahead
The gap between expert optimism and public concern about AI’s workplace impact reflects a fundamental truth: while AI may enhance overall productivity, its benefits and costs are not distributed equally, and this is worrying to many. Current research suggesting AI complements rather than replaces workers captures only early deployment stages. As capabilities advance and costs decrease, the balance between augmentation and automation may shift dramatically.
The concentration of AI development among few companies and countries, combined with winner-take-all platform dynamics, suggests AI’s economic benefits may become increasingly concentrated over time. For many workers, AI-enhanced productivity promises feel less relevant than displacement threats or anxiety about being left behind in an AI-driven economy.
This reality helps explain persistent public scepticism despite expert enthusiasm. These concerns reflect real risks that current policies and institutions are poorly equipped to address, though not fundamentally incapable of mitigating. One key challenge may extend beyond technical development to fundamental questions about economic power, worker voice, and democratic participation in technological change.