Artificial Intelligence (AI) dominates headlines every day and is increasingly being introduced into our workplaces. Yet, while we may use it to help us write an email, get a quick medical “diagnosis”, or talk about our mental health, too little attention is being paid, at least outside academic circles, to the effect its introduction may have on inequality. Inequality across most parts of the world has risen tremendously in the past few decades, with the top 1% holding about 35% of the US’ total wealth (as of 2023), nearing pre-WWII levels. And while the drivers of this rise are still debated, we must not ignore the distributional consequences of AI, even if these too are highly unpredictable.
At Broadpeak, we collaborate with industry experts, impact-driven investors, and academic institutions to address urgent global challenges. Through our articles and trilogies, we aim to share the insights we have gained from these projects with our network. Explore all of our published articles and trilogies in the blog section of our website.
The Public vs AI-Expert Gap
AI is likely to increase the productivity of most workers (even if the magnitude of the improvement is debated) and to drive major advances in fields like medicine, biochemistry and engineering, where most progress is deemed positive for human development. Yet, while these productivity improvements have been clear since large language models (LLMs) gained global recognition, public opinion on AI’s impact is deeply fraught and disconnected from that of experts. Only 17% of US adults believe that AI’s effects over the next 20 years will, on the whole, be positive, versus 56% of AI experts. In the EU, 66% of people believe AI will replace more jobs than it creates, while 84% think that AI requires careful management. Stakeholders (developers, the state, and the public) clearly hold diverging views on the trade-offs and expectations behind AI development and deployment, and this creates a gap in how each of them perceives AI.
For grounding context: AI, like other general-purpose technologies before it (e.g., the steam engine, electricity), will not have the immediate impact that some may fear (or hope for). After all, it still requires that companies not only adopt the technology but adapt their business processes to incorporate AI in a way that can do without a human in the loop. In some areas, AI is already being fully integrated into business processes, but in others this may take decades, especially when the tasks being automated are highly consequential. Why, then, are some people so worried about AI’s effects on the economy?
When speaking to us, Omer Bilgin, Co-Founder and Chief Ethics & Research Officer at deliberAIde, points to the concentration of power in AI development as a reason for these concerns: “The stories that tend to come out on top are all created by the ones who hold ultimate power over the trajectory of these technologies. You get big actors like OpenAI, you get big actors like Anthropic who keep claiming that AGI is just around the corner and it’s going to solve all these issues, bring about all these universal benefits. But if you look under the hood, these statements are coming from a very narrow set of individuals who aren’t representative of global populations and who have distinct ideas of what a utopian future should look like.”
Inequality at Stake
Most of these concerns and opinion gaps between experts and citizens can also be explained through the lens of inequality and the distributional effects of technological development. One of the main causes associated with the rise in inequality across various parts of the world has been the introduction of new, high-value technologies and what appears to be inherently skill-biased technological change (SBTC), meaning that new technological advancements benefit high-skilled workers (e.g., those with higher education, experience, or abilities) more than low-skilled ones.
AI is an interesting example arriving at an even more interesting time. Over the past few decades, we have seen a declining wage premium for certain (if not all) university degrees, partly driven by an oversupply of university-educated workers but also by a levelling of the skills one can learn at university versus outside it (e.g., coding). This has reduced the demand for better-paid graduates and weakened the SBTC pattern we have experienced since the 1980s and 1990s.
Some economists argue that inequality concerns are overblown if AI ultimately raises living standards for everyone, even if benefits are unevenly distributed initially. Nonetheless, it is important to understand how AI may exhibit some aspects of a skill-biased technology, and how that could disproportionately benefit those with higher skills, education, or resources. This happens through three key mechanisms: (1) data asymmetries, meaning the lack of data from certain populations and demographics on which to train these LLMs (effectiveness inequality); (2) governance gaps, due to the seeming inability to efficiently produce effective, non-constraining regulation that escapes capture (state capacity issues); and (3) a lack of deliberative input on how AI should be evaluated, developed and deployed, or at least on how its development and deployment should be communicated (part of what can be seen as a democratic crisis).

Furthermore, most of the methods we have designed to ensure AI models do not lead to such negative distributional outcomes still face significant challenges. Aysegül Güzel, AI Auditor at BABL AI and Responsible AI Consultant at AI of Your Choice, spoke to us about why that is the case: “The AI evaluation field is continuously changing. In the generative AI space especially, there’s a strong feeling that evaluation is still more of an art than a science. This is because it’s incredibly difficult to establish firm standards for evaluating these models.”
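To make the first mechanism concrete, the sketch below is a minimal, hypothetical Python example (not drawn from any of the audits or evaluations mentioned above) of what a subgroup-disaggregated evaluation might look like: the same model's outputs are scored separately for each demographic group, and the gap between the best- and worst-served group is reported as a crude measure of effectiveness inequality. The group labels and predictions are invented for illustration.

```python
from collections import defaultdict

def group_accuracy(records):
    """Compute accuracy per demographic group and the best-worst gap.

    `records` is a list of (group, prediction, true_label) tuples --
    purely illustrative data, standing in for a real evaluation set.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, truth in records:
        total[group] += 1
        correct[group] += int(prediction == truth)

    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Hypothetical results for a model evaluated on two groups,
# one well represented in its training data and one not.
records = [
    ("well_represented", 1, 1), ("well_represented", 0, 0),
    ("well_represented", 1, 1), ("well_represented", 0, 0),
    ("under_represented", 1, 0), ("under_represented", 0, 0),
    ("under_represented", 0, 1), ("under_represented", 1, 1),
]

accuracy, gap = group_accuracy(records)
print(accuracy)           # e.g. {'well_represented': 1.0, 'under_represented': 0.5}
print(f"gap: {gap:.2f}")  # the larger the gap, the more unequal the model's effectiveness
```

In practice, as Güzel notes, the hard part is not computing such a number but agreeing on which groups, tasks and thresholds make the evaluation meaningful.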
Trust and AI Agents
Given the wide gap between public and expert opinion on AI, what seems particularly important is a greater inclusion of the public's voice, not only in regulating and moderating AI, but also in developing and deploying it. Yet current democratic processes are neither fast nor capable enough to handle the input required to reach such outcomes, which points to a need for new methods that can channel these inputs. This lack of democratic input leads to a loss of legitimacy and trust, and a loss of trust translates into low public confidence in AI.
In practice, and despite this lack of trust, AI is being developed and deployed with priority given to other maxims, such as global competition and opportunity costs. While the arguments motivating such choices can also be morally defensible, and do not necessarily imply negative distributional outcomes, they do carry a level of risk, especially when speed outranks holistic benefits. Furthermore, those most concerned about AI’s effects on the economy know that companies and governments prioritise such competition and opportunity maxims over democratic input.
These trust deficits are likely to deepen as AI evolves beyond today’s chatbots toward more autonomous ‘AI agents’: systems that can execute multi-step tasks and make decisions with varying (and lower) degrees of human oversight, rather than operating purely in a conversational mode. Early research shows that, in some sandboxed environments, agents may deviate from their instructions or even attempt to blackmail their operators when facing deactivation. Afek Shamir, Analyst in Frontier AI Policy at RAND Europe, highlights the potential risks of advanced AI agents relative to chatbots: “AI chatbots generally work by responding to prompts, but if this were all that AI could do, it would be hard to imagine AI substituting humans in skilled labour roles where decision-making has tangible consequences. However, with the introduction of AI agents, one can visualise how autonomy could be delegated in ways that may present problems. Absent effective guardrails, it is not implausible for us to see agents act in unintended ways simply by their increasing autonomy, capability, and general-purpose usage.”
An Example from Healthcare
The stakes of these distributional concerns become particularly clear in healthcare, where AI has some of the greatest social upside but also carries the greatest risks. AI will speed up the research process behind most medical technologies and medicines, not only by taking care of the administrative aspects of research (acting as a synthetic research assistant) but also by enabling more effective and efficient diagnostic tools, handling day-to-day medical care (e.g., through agents), and even creating synthetic participants for experiments. Yet such advancements carry large-scale distributional risks, particularly given the health data asymmetries we observe in medical research and the importance of healthcare for a country’s development. Any such distributional concern will translate into public concern about AI.
Medical research inequality has long existed, with a consistent lack of data on certain populations, such as women and racial minorities, despite the higher prevalence of certain diseases among them. The distributional risk this lack of high-quality data poses is particularly evident for countries whose populations have different genetic predispositions from those on which the AI models are trained (almost all high-quality health data comes from Western countries). We should not extrapolate the same recommendations and medicines across contexts, or we may, for example, overlook differences in disease prevalence and environmental exposure (e.g., TB and malaria in tropical or low-income settings).
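As a purely illustrative sketch of why this matters (invented data and thresholds, no real medical content), the toy Python simulation below fits a single decision rule on data dominated by a hypothetical "population A" and then evaluates it on a "population B" whose underlying risk profile differs. The rule works well for the group it was trained on and noticeably worse for the other, mirroring the effect of training largely on Western health data and deploying elsewhere.

```python
import random

random.seed(0)

def sample(population, n):
    """Generate toy (biomarker, has_disease) pairs; cutoffs are invented."""
    cutoff = 0.6 if population == "A" else 0.4   # different predispositions
    return [(x, x > cutoff) for x in (random.random() for _ in range(n))]

# Training data skewed towards population A (mirroring Western-heavy health data).
train = sample("A", 950) + sample("B", 50)

def fit_threshold(data):
    """'Model': pick the single decision threshold that best fits the training mix."""
    candidates = [i / 100 for i in range(101)]
    return max(candidates,
               key=lambda t: sum((x > t) == y for x, y in data))

threshold = fit_threshold(train)

# Evaluate the same model separately on each population.
for population in ("A", "B"):
    test = sample(population, 2000)
    accuracy = sum((x > threshold) == y for x, y in test) / len(test)
    print(population, round(accuracy, 3))
# Expected pattern: near-perfect accuracy for A, noticeably lower for B,
# because the learned threshold reflects A's risk profile, not B's.
```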
Significant research has been and is being carried out to explain how exactly AI models may fail to deal with such demographic biases in the data and, more importantly, how these biases may impact minorities and what can be done to mitigate them. Concerns about AI worsening inequality need not lead to deterministic pessimism. Instead, these concerns can be constructive, driving changes that mitigate negative distributional outcomes while preserving the substantial healthcare improvements that AI can deliver. These aspects of AI development and deployment involve a range of ethical considerations, especially given our inability to fully predict how these tools will actually be implemented in the future, and what will be possible with them. The current AI development structure prioritises model improvements despite distributional concerns, and this may explain why some are so wary of it, despite understanding and visualising its positive welfare effects. Through deliberative tools and efficient coordination of key stakeholders, including developers, the state, and the public, some of these concerns may be at least partially mitigated.