Maura R. Grossman and Tara Emory have published a new paper: A Primer on the Different Meanings of “Bias” for Legal Practice.
The primer (available for download here) highlights the critical need for legal professionals to understand the multifaceted meanings of “bias” in the context of AI systems. The term “bias” has varied meanings, often leading to confusion in legal and policy discussions. “Bias” can refer to “an inclination of temperament or outlook,” “an instance of such prejudice,” “bent, tendency,” or statistical deviations and errors (Merriam-Webster). In AI, it can range from a mere “tendency” to “blatantly unfair outcomes.” However, “[d]epending on the context, ‘bias’ in an AI system may also be a good thing, and even essential to achieving a desired result.”
Beyond the introduction, the paper discusses the different forms of bias and related considerations, as follows:
- Positive-Tendency Bias (Productive Bias): This form of bias is inherent to how AI systems function and is often beneficial.
- Statistical or Mathematical Bias (Technical Deviations): These biases refer to quantifiable differences between an AI system’s model/outputs and “real-world” data. They are not inherently prejudicial but can impact performance and lead to discriminatory outcomes.
- “Bias” as a Technical Term in Data Science: Here, bias “is used to calibrate the system; effectively, it determines what thresholds to use to measure the meaningfulness of the weights.”
- Discriminatory Bias (Legally Impermissible Bias): This refers to “unfair bias that discriminates against underrepresented, underprivileged, and protected groups of people.”
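To make the data-science sense of "bias" concrete, here is a minimal sketch (my own illustration, not code from the paper): in a simple linear model, the bias is just the intercept term, a calibration parameter that shifts where the decision threshold falls, with no connotation of prejudice.

```python
def predict(x, w, b):
    """Score an input: weight w scales the feature; bias b shifts the score."""
    return w * x + b

def classify(x, w, b, threshold=0.0):
    """The bias term effectively sets where the decision boundary falls."""
    return predict(x, w, b) >= threshold

# With weight 1.0, a bias of -5.0 means only inputs above 5 cross the threshold.
print(classify(4.0, 1.0, -5.0))  # False
print(classify(6.0, 1.0, -5.0))  # True
```

In this sense, "bias" is simply a tunable number the system needs in order to function, which is why the authors stress that the term is not inherently negative.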
Distinguishing between these various forms is important for evaluating AI systems, mitigating harmful outcomes, and developing effective governance approaches that balance functionality and fairness in legal applications. That’s especially true for legal professionals when distinguishing statistical bias (“Inaccuracies or inefficiencies in the model or data”) and discriminatory bias (“Harmful or unfair treatment of protected groups”). While the two are distinct, “statistical biases can be the technical mechanism through which discriminatory outcomes are produced or amplified.”
Grossman and Emory illustrate their points with scenarios, including two weighted-dice examples (a cheating gambler and a classroom challenge) that distinguish discriminatory bias from positive-tendency bias, and a Rubik's Cube analogy, in which efforts to correct one problematic bias may "disrupt patterns on the others," because "AI systems' interrelated components can mean that efforts to solve one bias problem may create or exacerbate others."
That’s why “calls to ‘de-bias’ AI systems should involve careful analysis of the specific problematic bias at issue, why it may exist, and both technical and other options to mitigate that bias.” That’s also why “De-biasing strategies to improve fairness generally fall into one of three categories of techniques:
- Pre-processing: Cleaning and adjusting the training data to remove biases before the AI system learns from it (e.g., balancing representation of different groups, adjusting data weights, removing features that may cause unfair discrimination);
- In-processing: Teaching the AI system to consider fairness while it learns, so that fair decision-making becomes part of how the system is trained to operate; or
- Post-processing: Adjusting the AI system’s outputs or decisions to ensure fairer outcomes across different groups in real-world applications.”
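As a rough illustration of the first category (pre-processing), one common generic technique is reweighting training examples so that each group contributes equally before the model learns. This is a hedged sketch of that idea, not a method from the paper:

```python
from collections import Counter

def reweight(groups):
    """Pre-processing sketch: give each example a weight inversely proportional
    to its group's frequency, so under-represented groups are not drowned out
    before the AI system ever learns from the data."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Scale so every group's total weight equals n / k (an equal share).
    return [n / (k * counts[g]) for g in groups]

# A toy dataset where group "A" outnumbers group "B" three to one:
groups = ["A", "A", "A", "B"]
weights = reweight(groups)
# Each "A" example gets weight 2/3; the lone "B" example gets weight 2.0,
# so both groups carry equal total weight (2.0 each) during training.
```

In-processing and post-processing work analogously but intervene later, during training or on the model's outputs, respectively.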
As Grossman and Emory note in their conclusion:
“Understanding bias in AI systems requires a common vocabulary. “Bias” has many meanings, including positive-tendency bias, the various forms of statistical or mathematical bias (including selection, data-quality, labeling, overfitting/underfitting, and temporal drift), and harmful discriminatory bias.
Discriminatory bias emerges from systems built on statistical biases. As AI systems increasingly influence critical decisions, lawyers and judges must bridge technical and ethical considerations to effectively address the problems of discriminatory bias. When addressing topics of AI governance, procurement, due diligence, and litigation, understanding the different meanings of the term “bias” and how they interrelate is essential in developing effective legal and policy frameworks.”
A Primer on the Different Meanings of “Bias” for Legal Practice is an easy read at nine pages, and it’s a terrific discussion of a topic that legal professionals must understand, given the growing importance of AI models in our industry (not to mention life itself!). You can download the paper here.
So, what do you think of this primer on the different meanings of “bias”? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.
Image created using Microsoft Designer, using the term “two robot instructors teaching a room full of robot students”.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.