AI Ethics & Risks 101: The Landscape of AI Harms

In the world of AI ethics, not all problems are created equal. While a biased movie recommendation might be annoying, a biased medical diagnosis can be catastrophic. Understanding the different types and scales of harm that AI can cause is the first step toward preventing them. These harms aren’t abstract, futuristic scenarios; they are happening now, affecting real people and communities. From an individual being unfairly denied a job to an entire society being destabilized by misinformation, the potential for AI to cause damage is as significant as its potential to do good.

Mapping this landscape of harms allows us to move from a general sense of unease to a specific, actionable understanding of the risks. It gives us a vocabulary to describe what can go wrong and a framework for prioritizing our attention. Just as doctors categorize illnesses to diagnose and treat them effectively, ethicists and technologists categorize AI harms to build safer and more equitable systems. This isn’t about fear-mongering; it’s about responsible engineering and proactive governance.

Building your AIQ (AI Intelligence) means developing the ability to spot these potential harms before they materialize. This guide will provide a high-level map of the AI ethics risk landscape, breaking down the different categories of harm with real-world examples. We’ll explore how AI can harm individuals, groups, and society as a whole. By understanding the terrain, you’ll be better equipped to navigate the complex ethical challenges of the AI-powered world.


    A Framework for Understanding AI Harms

    AI harms can be categorized by the scale at which they occur. While the lines can sometimes blur, it’s helpful to think about three distinct levels: harms to individuals, harms to groups, and harms to society.

    Level of Harm | Who Is Affected | Example
    Individual | A specific person | Wrongful arrest after a facial recognition false match
    Group | A demographic or social group | Far higher facial recognition error rates for darker-skinned women
    Societal | Institutions, trust, and public discourse | Deepfakes and propaganda bots eroding shared reality

    Harms to Individuals: The Personal Cost of Algorithmic Error

    This is the most direct and easily understood category of harm. It occurs when an AI system’s decision has a negative and tangible impact on a specific person’s life, liberty, or opportunities.

    In Practice: In 2020, a man in Michigan was wrongly arrested and held in jail for 30 hours because of a false match from a facial recognition system [1]. The algorithm incorrectly identified him from a blurry surveillance video of a shoplifting incident. This case highlights a direct, individual harm: a loss of liberty based on a probabilistic and flawed piece of technology. The harm is not to a statistical group but to a single person whose life was upended by an algorithmic error.
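    To see why a facial recognition "match" is probabilistic rather than definitive, consider the minimal sketch below. Everything in it is a hypothetical illustration: the embeddings, the noise model, and the 0.7 threshold are assumptions for demonstration, not details of any deployed system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 128-dimensional face embeddings: a probe from blurry
# surveillance footage, and a gallery photo of a DIFFERENT person whose
# embedding happens to land close to the probe's.
rng = np.random.default_rng(seed=0)
probe = rng.normal(size=128)
lookalike = probe + rng.normal(scale=0.6, size=128)

MATCH_THRESHOLD = 0.7  # illustrative; real systems tune this empirically

score = cosine_similarity(probe, lookalike)
if score >= MATCH_THRESHOLD:
    # The system reports a "match," but the score is only a statistical
    # resemblance estimate; this is how a false positive can become an arrest.
    print(f"Match reported (similarity = {score:.2f})")
```

    The design choice hidden in that single threshold matters: lower it and the system flags more true suspects but also more innocent lookalikes; raise it and false matches drop but real ones are missed. Either way, the output is a similarity score, not an identification.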

    Harms to Groups: The Scale of Systemic Bias

    Group harms occur when an AI system’s negative impacts are not random but fall along existing lines of social and demographic inequality. The system may not be designed to discriminate, but it learns and perpetuates biases present in its training data, disproportionately harming already marginalized groups.

    In Practice: The landmark “Gender Shades” study from MIT revealed that commercial facial analysis systems had significantly higher error rates when classifying the gender of darker-skinned women than of lighter-skinned men [2]. For one system, the error rate for lighter-skinned men was less than 1%, while for darker-skinned women it was nearly 35%. This is a group harm because the system doesn’t fail equally; it fails in a way that reinforces existing societal biases and puts a specific demographic group at a higher risk of misidentification.
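    A standard way to surface this kind of group harm is a disaggregated evaluation: instead of reporting one overall accuracy, you break the error rate out by subgroup. The sketch below shows the idea on made-up counts that loosely echo the disparity described above; it is not the study’s actual data.

```python
# Hypothetical per-subgroup evaluation counts (NOT the Gender Shades data);
# the disparity is illustrative, loosely echoing the reported pattern.
results = {
    "lighter-skinned men":   {"errors": 1,  "total": 120},
    "lighter-skinned women": {"errors": 8,  "total": 110},
    "darker-skinned men":    {"errors": 14, "total": 115},
    "darker-skinned women":  {"errors": 35, "total": 100},
}

# A single aggregate number hides the problem...
total_errors = sum(g["errors"] for g in results.values())
total_samples = sum(g["total"] for g in results.values())
print(f"Overall error rate: {total_errors / total_samples:.1%}")

# ...disaggregation reveals it: the same system fails one group
# dozens of times more often than another.
for group, g in results.items():
    print(f"{group:>22}: {g['errors'] / g['total']:.1%} error rate")
```

    An audit of this shape requires demographic labels for the test set, itself an ethically loaded design decision; but without disaggregation, a single headline accuracy figure can certify a system that fails an entire group.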

    Harms to Society: The Erosion of Shared Reality

    Societal harms are the broadest and perhaps most insidious. They affect the fundamental structures and norms of society, such as trust, democracy, and public discourse. These harms are often the result of AI being used at a massive scale to influence public opinion or behavior.

    In Practice: The rise of AI-powered “deepfakes” and sophisticated propaganda bots poses a significant threat to democratic societies. During an election, AI can be used to create highly realistic but entirely fake videos of a candidate saying something they never said, or to deploy millions of automated social media accounts to spread divisive narratives. This erodes the shared sense of reality that is necessary for a functioning democracy and makes it difficult for citizens to make informed decisions [3]. The harm is not just to one person or group but to the integrity of the entire civic process.

     

    Conclusion: From Awareness to Action

    Understanding this landscape—from the individual to the group to the societal level—is the foundational step in building responsible AI. Each level of harm requires a different set of solutions. Individual harms may require better avenues for appeal and correction. Group harms demand a focus on data diversity and algorithmic fairness. And societal harms necessitate broad public policy and new forms of platform governance. 

    As you continue to build your AIQ, learn to view AI systems through this multi-layered lens. When you encounter a new AI tool, ask yourself: How could this harm an individual? How could it disproportionately affect a particular group? And what is its potential impact on society at large? By asking these questions, you move from being a passive consumer of technology to an active and ethical participant in the future of AI.
