Picture this: A team of predominantly male, white engineers in San Francisco designs an AI system to detect skin cancer. They train it on thousands of medical images, nearly all from light-skinned patients. The system performs brilliantly in clinical trials at Stanford. Yet when deployed in hospitals serving diverse populations, it consistently misses melanomas on darker skin, potentially costing lives. The engineers never intended this outcome, but their lived experience didn’t include awareness of how skin cancer presents differently across racial groups.

This scenario illustrates a fundamental challenge in artificial intelligence development: those building AI systems often don’t represent the communities most affected by their deployment. The consequences of this representational gap extend far beyond individual projects, shaping how AI reshapes work, healthcare, education, and social services worldwide. As artificial intelligence becomes increasingly central to social and economic life, the question of who gets to shape these technologies (and how) becomes critical for determining whether AI amplifies existing inequalities or helps address them.

At Broadpeak, we collaborate with industry experts, impact-driven investors, and academic institutions to address urgent global challenges. Through our articles and trilogies, we aim to share the insights we have gained from these projects with our network. Explore all of our published articles and trilogies in the blog section of our website.

The Democratic Deficit in AI Development

The concentration of AI development presents a unique governance challenge. Unlike previous technological revolutions that unfolded over decades, AI advancement happens at breakneck speed within a handful of companies and research institutions. This concentration isn’t just geographic, it’s intellectual, cultural, and economic.

The numbers tell the story: women represent only 22% of AI talent globally, racial minorities remain significantly underrepresented in leading AI labs, and researchers from developing countries are largely absent from the teams building systems that will affect billions of people. When the teams building AI systems lack diversity, they often miss critical perspectives about how these technologies might affect different communities.

These representational gaps often translate into broader participation inequalities throughout AI development. Dr. Shyam Krishna, Research Leader at RAND Europe and Member of UNESCO’s AI Ethics Experts Without Borders, explains the scope of this challenge: “A significant source of risk in AI, particularly as a governance issue, lies in the inequality of participation across the AI lifecycle. While AI systems increasingly influence decisions about infrastructure, jobs, finance, and healthcare, those most affected—often marginalised communities—are rarely involved in shaping how these systems are built, evaluated, or regulated. When engagement does occur, it tends to happen after deployment, once the impact is felt, and intervention is far more difficult. Even mechanisms intended to promote fairness—like model evaluation and benchmarking—are typically opaque, expert-driven, and closed to diverse perspectives. As a result, equity becomes an afterthought rather than a design principle.

Beyond the skin cancer example, we see this in hiring algorithms that discriminate against women, speech recognition systems that struggle with non-American accents, and predictive policing tools that reinforce racial biases in law enforcement.

The global nature of this challenge adds complexity. While AI development concentrates in wealthy countries, its applications affect billions worldwide. An AI system designed for American healthcare might be deployed in Nigeria, where different disease patterns, treatment protocols, and resource constraints make the original training data inadequate or even harmful.

Theodora Skeadas, Chief of Staff at HumaneIntelligence and PhD candidate at King’s College London, explains how evaluation can address some of these systemic issues: “Rigorous and accessible methods for testing and evaluating AI systems can help us to better understand the limitations of these models, and build in appropriate guardrails. In particular, red teaming is a method to stress test AI by pushing it to see where it might fail or cause harm. Red teamers try to trick the AI or reveal its blind spots. This approach allows you to catch problems early—like bias, misinformation, or unsafe behaviour—before AI causes harm.

Emerging Models of Participatory AI

Despite these challenges, innovative approaches to inclusive AI development are emerging worldwide. These models recognize that democracy and technology don’t have to be opposing forces, instead, democratic participation can improve both the quality and legitimacy of AI systems, especially when steps are taken to avoid political capture by special interests.

Omer Bilgin, Co-Founder and Chief Ethics & Research Officer at deliberAIde, advocates for working within existing systems: “It’s all about finding the right ways and right leverage points to not fundamentally change the system—so we’re still living in a representative democracy—but to institutionalize and formalize deliberative democratic inputs to these decision makers. In the UK, for example, there’s been this campaign by the Sortition Foundation for the past few years about replacing the House of Lords with a house of citizens, which would be a permanent chamber.”

Citizen juries represent one promising approach. In Ireland, randomly selected citizens deliberated on AI governance questions regarding healthcare, producing nuanced recommendations that balanced innovation with protection of individual rights. Unlike traditional public consultation, which often attracts only organized interests, citizen juries engage ordinary people in sustained deliberation with expert input.

Taiwan has pioneered digital democracy tools that enable large-scale public participation in technology policy. Their vTaiwan platform allows thousands of citizens to contribute to policy discussions through online deliberation, with over 80% of them leading to decisive government action. When applied to AI governance questions, these tools can produce policies with broader public support and better consideration of diverse perspectives.

Community-based approaches offer another model. Rather than extracting data from communities for external analysis, participatory research involves community members as partners in defining problems, collecting data, and interpreting results. In Australia, Aboriginal communities worked directly with AI researchers to develop culturally appropriate language processing tools, with community members helping define success metrics and identify potential harms.

Making AI Systems Accountable

Democratising AI development also requires making AI more transparent and accountable through robust evaluation mechanisms, though current approaches reveal significant limitations. Independent auditing has emerged as a key accountability tool, with companies hiring external firms to assess their AI systems for bias and safety. Yet, these audits often focus on technical performance metrics rather than broader social impacts.

More promising are algorithmic impact assessments that require comprehensive evaluation of AI systems’ social effects before deployment. Several jurisdictions now mandate such assessments for government AI use, requiring agencies to consider potential harms to different communities. However, implementation remains challenging, as agencies often lack the resources and time for meaningful assessments.

Cyril Birks, an AI ethicist, pushes back on the idea that current AI systems can be made accountable. “We must speak carefully and avoid anthropomorphising. Humans can be accountable; AI cannot. Audits and forward-looking risk assessments must keep human factors in view. At best, obscuring humans in the loop is misguided; we write ourselves out of control. At worst, obscurity may amount to malicious misdirection. Most likely, the accidental abdication of our role will provide cover for the advantageous abnegation of responsibility – ‘Don’t blame us, the AI did it!’.

Building Infrastructure for Inclusive Innovation

Creating more democratic AI development requires institutional changes beyond individual projects. This means building new infrastructure (i.e., funding mechanisms, educational initiatives, and governance frameworks) that systematically prioritizes equity and inclusion.

Funding represents a crucial leverage point. Alternative funding models are emerging that explicitly centre equity concerns. The Ford Foundation’s BUILD programme funds AI research that specifically addresses social justice concerns, while several European initiatives require grant recipients to demonstrate community engagement.

Educational initiatives that build AI literacy across communities represent perhaps the most important long-term investment. Finland’s “AI for Everyone” initiative, which aims to educate 1% of the population about AI basics, offers one model for democratising AI knowledge. This doesn’t mean everyone needs to become a programmer, but rather that people need sufficient understanding to participate meaningfully in discussions about AI’s role in their lives.

Creating democratic AI requires fundamental changes in how we approach technology ownership and control. Lukas Salecker, Co-Founder & CEO at deliberAIde, argues that democratisation goes beyond accessibility: “I don’t think that democratising AI is just making it as cheap as possible or as freely available as possible. It also matters who controls the AI technology—who builds it, trains it, who hosts it, who has the information on its source codes and weights and the data that was used. One important factor is that we use AI technology that’s not centrally controlled. If we want to truly empower decentralized communities, I think we need AI technologies to be collectively controlled and not owned by just ‘Big Tech’.

Community-controlled data initiatives give communities collective control over data generated within their boundaries. Indigenous data sovereignty movements have pioneered this approach, asserting community control over research involving Indigenous peoples. Similar models could apply more broadly, giving communities voice in how their data is used for AI training.

Balancing Speed and Inclusion

The tension between competitive AI development and inclusive processes presents a significant challenge. The argument is familiar: democratic deliberation takes time, but AI development moves at breakneck speed. However, this framing presents a false choice. Inclusive development isn’t necessarily slow development: it’s different development.

Early community engagement can prevent costly mistakes that require later correction. Diverse development teams often produce more innovative solutions than homogeneous ones. Democratic legitimacy reduces implementation resistance that can slow AI deployments. Consumer demand for ethical AI creates market opportunities for companies demonstrating inclusive development practices.

Digital democracy tools can accelerate public input processes rather than slow them down. Rapid prototyping methods can incorporate community feedback throughout development cycles. The key is designing processes that enable meaningful participation without sacrificing innovation. The challenge is making these improved processes scalable.

Cyril Birks talked to us about the essence of the issue at hand: “The bridge between our present and a much brighter future is, in essence, intelligence and its application. We must weigh the credible risks of racing towards advanced AI against the tangible benefits it can unlock – earlier cancer detection, faster drug discovery, sharper climate forecasts, sturdier power grids, and more. Delay kills as surely as recklessness.  The proper calculation is one of risk-adjusted reward, balancing the expected harms of haste against those of inaction. Inclusive design helps, because early, widespread participation reveals hidden risks and unexpected gains. Above all, technology must not be a scapegoat for our choices.”

Toward Democratic AI

The future of AI development stands at a crossroads. The path travelled so far (concentrated, technocratic, and distanced from democratic input) has produced remarkable technical advances alongside growing public concern about AI’s societal impacts. While industry voices often emphasize different priorities, the alternative path explored here (i.e., participatory, inclusive, and democratically accountable), offers hope that AI’s transformative potential can maybe serve broader human development.

The examples highlighted, from citizen juries to community-controlled data initiatives to co-design approaches. demonstrate that more democratic AI development is not only possible but already emerging. These initiatives show that inclusion and innovation can be mutually reinforcing rather than competing values.

Yet, realizing this potential requires sustained commitment to building new institutions, funding mechanisms, and governance frameworks that systematically prioritize democratic participation alongside technical expertise. The question isn’t whether AI will reshape society, but whether that reshaping will reflect democratic values and serve broad human interests and goals, even if perspectives vary on optimal approaches.

The choice remains ours, but the window may not remain open indefinitely. As AI systems become more powerful and entrenched, the costs of retrofitting democratic accountability will only grow. The time for inclusive AI development is now, while there is still agency to shape these technologies rather than simply adapting to their consequences.