Adverse Impact Measurement Research

There are a number of different approaches to assessing adverse impact. These approaches can be categorized into three groups: statistical significance testing, practical significance measurement, and miscellaneous other approaches.

Statistical significance tests, also called null hypothesis significance tests (e.g., the Z test, Fisher’s exact test), evaluate the probability that a disparity is due to chance. This is the approach endorsed by contemporary courts and enforcement agencies. Many statistical significance tests are available for 2 by 2 table analyses; in practice, the most common are the two-sample Z test (also called the “2 standard deviation rule”) and Fisher’s exact test. In the vast majority of cases, all of the tests produce similar results, diverging only when sample and/or cell sizes are very small.
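As a sketch, both tests can be computed directly from the four cells of the 2 by 2 table. The counts below (50 of 100 selected in one group, 30 of 100 in the other) are hypothetical, and real analyses would typically use an established statistics package rather than this hand-rolled version:

```python
from math import comb, erf, sqrt

def two_sample_z(sel_a, n_a, sel_b, n_b):
    """Two-sample Z test on selection rates (the '2 standard deviation rule')."""
    rate_a, rate_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)          # pooled selection rate
    z = (rate_a - rate_b) / sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tails
    return z, p_two_sided

def fisher_exact(sel_a, n_a, sel_b, n_b):
    """Two-sided Fisher's exact test: sum hypergeometric probabilities of all
    tables (with the same margins) no more likely than the observed table."""
    k, n = sel_a + sel_b, n_a + n_b                  # total selections, applicants
    def hyperg(a):                                   # P(a selections in group A)
        return comb(n_a, a) * comb(n_b, k - a) / comb(n, k)
    p_obs = hyperg(sel_a)
    lo, hi = max(0, k - n_b), min(k, n_a)
    return sum(hyperg(a) for a in range(lo, hi + 1) if hyperg(a) <= p_obs + 1e-12)

# Hypothetical example: 50/100 vs 30/100 selected
z, p = two_sample_z(50, 100, 30, 100)    # |z| > 2 flags the disparity
pf = fisher_exact(50, 100, 30, 100)
```

With these counts both tests agree (|z| is roughly 2.9 and both p-values fall well below .05), illustrating the point above that the tests diverge mainly in small samples.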

A second approach to adverse impact measurement is to assess practical significance, which considers whether a disparity is large enough to be meaningful. From a scientific perspective, practical significance is typically measured with effect sizes, which are then evaluated against some rule of thumb. For example, the impact ratio (one group’s selection rate divided by another group’s selection rate) is an effect size evaluated via the 4/5ths rule of thumb. Other effect sizes include odds ratios, h statistics, phi coefficients, and the percentage of variance accounted for. All of these consider whether the magnitude of a difference in selection rates is large enough to be meaningful. Jacobs, Murphy & Silva (2014) noted the usefulness of assessing the percentage of variance accounted for, and suggested a 1% standard for demonstrating practical significance.
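Two of these effect sizes can be sketched directly from the 2 by 2 table. The counts below (30 of 100 focal-group selections vs. 50 of 100 comparison-group selections) are hypothetical and chosen only to illustrate the 4/5ths and 1% rules of thumb:

```python
def impact_ratio(sel_focal, n_focal, sel_comp, n_comp):
    """Impact ratio: focal group selection rate over comparison group rate."""
    return (sel_focal / n_focal) / (sel_comp / n_comp)

def phi_coefficient(sel_focal, n_focal, sel_comp, n_comp):
    """Phi coefficient for the 2x2 selection table; phi**2 is the
    proportion of variance in outcomes accounted for by group membership."""
    a, b = sel_focal, n_focal - sel_focal    # focal group: selected, rejected
    c, d = sel_comp, n_comp - sel_comp       # comparison group: selected, rejected
    num = a * d - b * c
    den = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return num / den

# Hypothetical example: rates of 0.30 vs 0.50
ir = impact_ratio(30, 100, 50, 100)          # 0.60, below the 4/5ths (0.80) threshold
phi = phi_coefficient(30, 100, 50, 100)
variance_share = phi ** 2                    # about 4.2%, above a 1% standard
```

Here both rules of thumb flag the disparity: the impact ratio (0.60) falls below 0.80, and the share of variance accounted for exceeds the 1% standard suggested by Jacobs, Murphy & Silva.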

Practical significance is particularly useful given recent research noting that statistical significance tests are limited as a stand-alone measure of adverse impact for a number of reasons, including issues related to sample size and the fact that significance tests do not assess the magnitude of the difference in selection rates. A recent Technical Advisory Committee (TAC) recommended that practical significance be considered. More generally, the TAC noted that multiple adverse impact measures should be used in many situations. The TAC generally accepts statistical significance tests, and sees value in practical measures (particularly effect sizes capturing the magnitude of the disparity).

A third approach focuses on the size of the disadvantaged group in the analysis, and includes the shortfall, which refers to the difference between the expected number of subgroup selections and the observed number of subgroup selections. This metric can be expressed in absolute terms or relative to the total number of employment decisions made.
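The shortfall calculation can be sketched as follows, again with hypothetical counts (30 of 100 subgroup applicants selected, against 80 of 200 applicants selected overall). Here "expected" applies the overall selection rate to the subgroup's applicant pool, and the relative form divides by the total number of decisions:

```python
def shortfall(sel_group, n_group, sel_total, n_total):
    """Shortfall: expected minus observed subgroup selections, where
    'expected' applies the overall selection rate to the subgroup."""
    expected = n_group * (sel_total / n_total)   # subgroup's share at the overall rate
    absolute = expected - sel_group              # selections 'missing' from the subgroup
    relative = absolute / n_total                # as a share of all decisions made
    return absolute, relative

# Hypothetical example: overall rate 0.40, so 40 subgroup selections expected
abs_short, rel_short = shortfall(30, 100, 80, 200)
```

With these counts the subgroup is 10 selections short of expectation, or 5% of all decisions made.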

One other approach is worth noting, although it is often not useful in measuring adverse impact. Flip flop rules, which ask whether slight changes to the data could change which group has the highest selection rate or the result of a statistical significance test, have been endorsed by some courts. These approaches are generally redundant with the standard error associated with significance tests. Flip flop rules are often informative only when samples are small, and even then may be redundant with the other approaches described above.
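One illustrative version of a flip flop check asks whether selecting just one more member of the disadvantaged group would reverse the significance conclusion. The sketch below uses the two-sample Z statistic and hypothetical borderline counts (20 of 50 vs. 10 of 50 selected); it is one possible reading of a flip flop rule, not a court-endorsed formula:

```python
from math import sqrt

def z_stat(sel_a, n_a, sel_b, n_b):
    """Two-sample Z statistic for the difference in selection rates."""
    rate_a, rate_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    return (rate_a - rate_b) / sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))

def flips_conclusion(sel_a, n_a, sel_b, n_b, extra=1):
    """Would selecting `extra` more members of the disadvantaged group
    (group B here) reverse the 2-SD-rule conclusion? (illustrative)"""
    before = abs(z_stat(sel_a, n_a, sel_b, n_b)) >= 2
    after = abs(z_stat(sel_a, n_a, sel_b + extra, n_b)) >= 2
    return before != after

# Borderline hypothetical table: 20/50 vs 10/50 clears the 2-SD threshold,
# but a single additional group-B selection drops it back below
flipped = flips_conclusion(20, 50, 10, 50)
```

That a one-person change can reverse the conclusion is exactly the small-sample fragility that flip flop rules surface, which is also why they add little beyond the standard error in larger samples.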

A holistic approach combining the above three approaches to adverse impact measurement allows for 1) a consideration of chance; 2) an evaluation of the size of the actual disparity in selection rates; and 3) a consideration of other contextual factors like the size of the affected class.


As mentioned briefly above, in 2010 a technical advisory committee of EEO experts was organized by the Center for Corporate Equality (CCE) to wrestle with many data, statistical, and legal issues associated with adverse impact analyses. The goal of the TAC was to help provide the EEO community with technical “best practice” guidance on how to conduct adverse impact analyses. Toward this end, the TAC consisted of 70 of the nation’s top experts in adverse impact analyses. With regard to TAC process, CCE created an extensive survey asking TAC members to indicate how they would handle a variety of data, statistical, and legal issues commonly encountered in conducting adverse impact analyses. Together, expert survey results and discussion from TAC focus groups were used to create a best practice document, “Technical Advisory Committee Report on Best Practices in Adverse Impact Analyses”. This report is provided in the link below, along with the biographies of the TAC committee members and the TAC survey questions and responses. We hope you find these documents useful.

The remainder of this section provides (1) a list of relevant articles and book chapters from the personnel selection and EEO statistical scholarly literature and (2) a list of white papers related to adverse impact measurement.

List of Relevant Citations from the Scholarly Literature

Biddle, D. A. (2012). Adverse impact and test validation: A practitioner’s handbook (3rd ed.).
Folsom, CA: Infinity.

Biddle, D. A., & Morris, S. B. (2011). Using Lancaster’s mid-p correction to the Fisher exact test
for adverse impact analyses. Journal of Applied Psychology, 96, 956-965.

Biddle Consulting Group (2009). Adverse Impact Toolkit.

Bobko, P. & Roth, P.L. (2010). An analysis of two methods for assessing and indexing adverse
impact: A disconnect between the academic literature and some practice. In J.L. Outtz (Ed.),
Adverse impact: Implications for organizational staffing and high stakes selection (pp. 29-49).
New York: Routledge.

Cohen, D. B., Aamodt, M. G., & Dunleavy, E. M. (2010). Technical Advisory Committee report
on best practices in adverse impact analyses. Washington, D.C.: Center for Corporate Equality.

Collins, M.W. & Morris, S.B. (2008). Testing for adverse impact when sample size is small.
Journal of Applied Psychology, 93, 463-471.

Dunleavy, E. M., & Gutman, A. (2011). An update on the statistical versus practical significance
debate: A review of Stagi v Amtrak (2010). The Industrial-Organizational Psychologist, 48.

Dunleavy, E. M., & Morris, S. B. (2015). Adverse impact measurement in personnel selection. In
C. Hanvey & K. Sady (Eds.), A Practitioner’s Guide to HR Legal Issues. Springer.

Gastwirth, J.L. (1988). Statistical reasoning in law and public policy (Vol. 1). San Diego, CA:
Academic Press.

Gutman, A., Koppes, L., & Vodanovich, S. (2010). EEO Law and Personnel Practices (3rd Edition).
New York: Routledge, Taylor & Francis Group.

Jacobs, R., Murphy, K. R., & Silva, R. (2012). Unintended consequences of EEO enforcement
policies: Being big is worse than being bad. Journal of Business and Psychology.

Meier, P., Sacks, J. & Zabell, S. (1984). What happened in Hazelwood: Statistics, employment
discrimination, and the 80% Rule. American Bar Foundation Research Journal, 1, 139-186.

Miao, W. & Gastwirth, J. L. (2013). Properties of statistical tests appropriate for the analysis of
data in disparate impact cases. Law, Probability and Risk, 12, 37-61.

Morris, S. B. (2001). Sample size required for adverse impact analysis. Applied HRM Research,
13-32.

Morris, S.B. & Lobsenz, R.E. (2000). Significance tests and confidence intervals for the adverse
impact ratio. Personnel Psychology, 53, 89-111.

Murphy, K., & Jacobs, R. (2012). Using effect size measures to reform the determination of
adverse impact in equal employment litigation. Psychology, Public Policy, and Law.

Office of Federal Contract Compliance Programs. (1993). Federal contract compliance manual.
Washington, DC: U.S. Department of Labor.

Paetzold, R. L., & Willborn, S. L. (1994). Statistics in discrimination: Using statistical evidence
in discrimination cases. Colorado Springs, CO: Shepard’s/McGraw-Hill.

Roth, P. L., Bobko, P., & Switzer, F. S. (2006). Modeling the behavior of the 4/5ths rule for
determining adverse impact: Reasons for caution. Journal of Applied Psychology, 91, 507-522.

Siskin, B. R., & Trippi, J. (2005). Statistical issues in litigation. In F. J. Landy (Ed.), Employment
discrimination litigation: Behavioral, quantitative, and legal perspectives (pp. 132-166). San
Francisco: Jossey-Bass.

U.S. Equal Employment Opportunity Commission, Civil Service Commission, Department of
Labor, & Department of Justice. (1978). Uniform Guidelines on Employee Selection Procedures.
Federal Register, 43 (166), 38295-38309.

U.S. Equal Employment Opportunity Commission, Office of Personnel Management,
Department of Justice, Department of Labor & Department of Treasury (1979). Adoption of
Questions and Answers to Clarify and Provide a Common Interpretation of the Uniform Guidelines
on Employee Selection Procedures. Federal Register, 44, 11996-12009.

Zedeck, S. (2010). Adverse impact: History and evolution. In J. L. Outtz (Ed.), Adverse impact:
Implications for organizational staffing and high stakes selection (pp. 3-27). New York: Routledge.

White Papers Related to Adverse Impact Measurement

A Review of OFCCP Enforcement Statistics For Fiscal Year 2008 (David Cohen, Sr. Vice President, and Eric M. Dunleavy, Ph.D., Senior Consultant, The Center for Corporate Equality)

The Center for Corporate Equality Calls for Transparency in OFCCP Reporting (CCE News Release)

A Review of OFCCP Enforcement Statistics: A Call for Transparency in OFCCP Reporting (David Cohen, Sr. Vice President, and Eric M. Dunleavy, Ph.D., Senior Consultant, The Center for Corporate Equality)

Beauty May Be in the Eye of the Beholder, But Is the Same True of a Validity Coefficient? (Michael Aamodt, Ph.D., Associate Editor, Assessment Council News)

Personnel Testing Council of Metropolitan Washington D.C. Quarterly, 1, 3-4 (Dunleavy, E. M.)

Personnel Testing Council of Metropolitan Washington D.C. Quarterly, 4, 3-4 (Dunleavy, E. M.)

The Industrial-Organizational Psychologist, 46, 77-83 (Dunleavy, E. M., & Gutman, A.)

The Industrial-Organizational Psychologist, 45, 43-53 (Gutman, A., & Dunleavy, E. M.)

A Consideration of International Differences in the Legal Context of Selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 247-254. (Dunleavy, E. M., Aamodt, M. G., Cohen, D. B., & Schaeffer, P. This article is not available online; please contact CCE if you are interested in obtaining a copy.)

A primer on adverse impact analyses (Joanna Colosimo)

Implications of the Internet Applicant Regulation on Adverse Impact Analyses (Keli Wilson)

Statistical Significance Standards for Basic Adverse Impact Analyses (David A. Morgan)

A review of adverse impact measurement in case law (Marcelle Clavette)

Adverse Impact Analysis: Understanding Data, Statistics, and Risk (Eric Dunleavy, Editor)