From automating mundane tasks to pioneering breakthroughs in healthcare, artificial intelligence is revolutionizing the way we live and work, offering immense potential for productivity gains and innovation. Yet it has become increasingly apparent that the benefits of AI aren’t distributed equally — the technology risks exacerbating social and economic disparities, particularly across demographic characteristics such as race.
Business and government leaders are being called on to ensure the benefits of AI-driven advancements are accessible to all. Yet it seems that with each passing day there is some new way in which AI creates inequality, resulting in a reactive patchwork of solutions — or often no response at all. If we want to effectively address AI-driven inequality, we will need a proactive, holistic approach.
If policymakers and business leaders hope to make AI more equitable, they should start by recognizing three forces through which AI can increase inequality. We recommend a straightforward, macro-level framework that encompasses these three forces while centering the intricate social mechanisms through which AI creates and perpetuates inequality. This framework offers two benefits. First, its versatility ensures applicability across diverse contexts, from manufacturing to healthcare to art. Second, it illuminates the often-overlooked, interdependent ways AI alters demand for goods and services, a significant pathway by which AI propagates inequality.
Our framework consists of three interdependent forces through which AI creates inequality: technological forces, supply-side forces, and demand-side forces.
Technological forces: Algorithmic bias
Algorithmic bias occurs when algorithms make decisions that systematically disadvantage certain groups of people. It can have disastrous consequences when applied to key areas such as healthcare, criminal justice, and credit scoring. Scientists investigating a widely used healthcare algorithm found that it severely underestimated the needs of Black patients, leading to significantly less care. This is not just unfair, but profoundly harmful. Algorithmic bias often occurs because certain populations are underrepresented in the data used to train AI algorithms or because pre-existing societal prejudices are baked into the data itself.
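To make the mechanics concrete, the sketch below shows one common way auditors quantify this kind of disparity: comparing an algorithm’s rate of favorable decisions across demographic groups. The data, group labels, and function names here are our own hypothetical illustration, not drawn from the healthcare study above; the 0.8 threshold is the “four-fifths rule,” a heuristic sometimes used in adverse-impact analysis.

```python
# Hypothetical audit sketch: measuring whether an algorithm's favorable
# decisions are distributed evenly across demographic groups.
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favorable outcomes per group, from (group, favorable) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, got_favorable in decisions:
        totals[group] += 1
        favorable[group] += got_favorable
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact(rates, reference_group, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    reference group's rate (the "four-fifths rule" heuristic)."""
    ref = rates[reference_group]
    return {g: (r / ref, r / ref < threshold) for g, r in rates.items()}

# Invented example data: group A is approved 80% of the time, group B 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
rates = selection_rates(decisions)
print(rates)                          # {'A': 0.8, 'B': 0.5}
print(disparate_impact(rates, "A"))   # {'A': (1.0, False), 'B': (0.625, True)}
```

A real audit would of course use actual model outputs and more careful statistics, and a low ratio is a prompt for investigation rather than proof of bias, since disparities can also trace back to the underrepresentation and baked-in prejudice in training data described above.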
While minimizing algorithmic bias is an important piece of the puzzle, unfortunately it is not sufficient for ensuring equitable outcomes. Complex social processes and market forces lurk beneath the surface, giving rise to a landscape of winners and losers that cannot be explained by algorithmic bias alone. To fully understand this uneven landscape, we need to understand how AI shapes the supply and demand for goods and services in ways that perpetuate and even create inequality.
Supply-side forces: Automation and augmentation
AI often lowers the costs of supplying certain goods and services by automating and augmenting human labor. As research by economists like Erik Brynjolfsson and Daniel Rock reveals, some jobs are more likely to be automated or augmented by AI than others. A telling analysis by the Brookings Institution found that “Black and Hispanic workers … are over-represented in jobs with a high risk of being eliminated or significantly changed by automation.” This isn’t because the algorithms involved are biased, but because some jobs consist of tasks that are easier (or more lucrative) to automate, making investment in AI a strategic advantage. Because people of color are often concentrated in those very jobs, the automation and augmentation of work through AI, and through digital transformation more broadly, have the potential to create inequality along demographic lines.
Demand-side forces: Audience (e)valuations
The integration of AI in professions, products, or services can affect how people value them. In short, AI alters demand-side dynamics, too.
Suppose you discover your doctor uses AI tools for diagnosis or treatment. Would that influence your decision to see them? If so, you are not alone. A recent poll found that 60% of U.S. adults would be uncomfortable with their healthcare provider relying on AI to diagnose and treat diseases. In economic terms, they may have lower demand for services that incorporate AI.
Why AI-augmentation can lower demand
Our recent research sheds light on why AI-augmentation can lower demand for a variety of goods and services. We found that people often perceive the value and expertise of professionals to be lower when they advertise AI-augmented services. This penalty for AI-augmentation occurred for services as diverse as coding, graphic design, and copyediting.
However, we also found that people are divided in their perceptions of AI-augmented labor. In the survey we conducted, 41% of respondents were what we call “AI Alarmists” — people who expressed reservations and concerns about AI’s role in the workplace. Meanwhile, 31% were “AI Advocates,” who wholeheartedly championed the integration of AI in the labor force. The remaining 28% were “AI Agnostics,” who sat on the fence, recognizing both potential benefits and pitfalls. This diversity of views underlines the absence of a clear, unified mental model of the value of AI-augmented labor. While these results are based on a relatively small online survey and do not capture how all of society views AI, they do point to distinct differences in individuals’ social (e)valuations of the uses and users of AI, and in how these inform their demand for goods and services — which is at the heart of what we plan to explore in further studies.
How demand-side factors perpetuate inequality
Despite its significance, this perspective — how audiences perceive and value AI-augmented labor — is often glossed over in the broader dialogue about AI and inequality. This demand-side analysis is an important part of understanding the winners and losers of AI, and how it can perpetuate inequality.
That’s especially true in cases where people’s perceived value of AI intersects with bias against marginalized groups. For example, the expertise of professionals from dominant groups is typically assumed, while equally qualified professionals from traditionally marginalized groups often face skepticism about their expertise. In the example above, people are skeptical of doctors relying on AI — but that distrust may not play out in the same ways across professionals with varying backgrounds. Doctors from marginalized backgrounds, who already face skepticism from patients, are likely to bear the brunt of this loss of confidence caused by AI.
While efforts are already underway to address algorithmic bias, as well as the effects of automation and augmentation, it is less clear how to address audiences’ biased valuations of historically disadvantaged groups. But there’s hope.
Aligning social and market forces for an equitable AI future
To truly foster an equitable AI future, we must recognize, understand, and address all three forces. These forces, while distinct, are tightly intertwined, and fluctuations in one reverberate throughout the others.
To see how this plays out, consider a scenario where a doctor refrains from using AI tools to avoid alienating patients, even if the technology improves healthcare delivery. This reluctance not only affects the doctor and their practice but deprives their patients of AI’s potential advantages, such as early detection during cancer screenings. And if this doctor serves diverse communities, their reluctance might also exacerbate the underrepresentation of those communities and their health conditions in AI training datasets. Consequently, the AI tools become less attuned to the specific needs of these communities, perpetuating a cycle of disparity. In this way, a detrimental feedback loop can take shape.
The metaphor of a tripod is apt: a deficiency in just one leg undermines the stability of the entire structure, limits its ability to adjust angles and perspectives, and inevitably diminishes its value to its users.
To prevent the negative feedback loop described above, we would do well to look to frameworks that enable us to develop mental models of AI-augmented labor that promote equitable gains. For example, platforms that provide AI-generated products and services need to educate buyers on AI-augmentation and the unique skills required for working effectively with AI tools. One essential component is to emphasize that AI augments, rather than supplants, human expertise.
Though rectifying algorithmic biases and mitigating the effects of automation are indispensable, they are not enough. To usher in an era where the adoption of AI acts as a lifting and equalizing force, collaboration among stakeholders will be key. Industries, governments, and scholars must come together through thought partnerships and leadership to forge new strategies that prioritize human-centric and equitable gains from AI. Embracing such initiatives will ensure a smoother, more inclusive, and more stable transition into our AI-augmented future.