Federal Contractors Must Understand Their Compliance Obligations For Using Artificial Intelligence in the Workplace

Automated Systems May Contribute to Unlawful Discrimination and Otherwise Violate Federal Law

In April 2024, the US Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) released an Artificial Intelligence (AI) landing page with new guidance and information about the use of AI in federal contractors’ employment processes.

Covered federal contractors are obligated by law to ensure they do not discriminate in employment and that they take affirmative action to ensure employees and applicants are treated without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. These equal employment opportunity (EEO) obligations extend to a federal contractor’s use of AI when making employment decisions.

While some federal contractors may use AI systems to increase productivity and efficiency in employment decision-making, these systems also have the potential to perpetuate unlawful bias and automate unlawful discrimination, among other harmful outcomes. OFCCP’s guidance seeks to help federal contractors navigate these emerging technologies in employment.

OFCCP’s AI landing page includes these new resources:

Federal Contractor Guide – Artificial Intelligence and Equal Employment Opportunity for Federal Contractors – This new guide answers questions and shares promising practices to clarify federal contractors’ legal obligations, promote equal employment opportunity, and mitigate the potentially harmful impacts of AI in employment decisions.

Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems – OFCCP signed this joint statement, committing to protect the public from unlawful bias in automated systems, including AI.

We pledge to vigorously use our legal authorities to protect workers’ rights. As part of that commitment, OFCCP recently updated its Combined Scheduling Letter and Itemized Listing to clarify the documentation contractors must provide about the systems they use to recruit, screen, and hire, including AI, algorithms, automated systems, and other technology-based selection procedures. This will ensure that we and federal contractors are evaluating whether these systems are creating barriers to equal employment opportunity.

America’s commitment to the core principles of fairness, equality, and justice is deeply embedded in the federal laws that our agencies enforce to protect civil rights, fair competition, consumer protection, and equal opportunity. These established laws have long served to protect individuals even as our society has navigated emerging technologies. Responsible innovation is not incompatible with these laws. Indeed, innovation and adherence to the law can complement each other and bring tangible benefits to people in a fair and competitive manner, such as increased access to opportunities as well as better products and services at lower costs.

Today, the use of automated systems, including those sometimes marketed as “artificial intelligence” or “AI,” is becoming increasingly common in our daily lives. We use the term “automated systems” broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. Private and public entities use these systems to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services. These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.

Many automated systems rely on vast amounts of data to find patterns or correlations, and then apply those patterns to new data to perform tasks or make recommendations and predictions. While these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination. Potential discrimination in automated systems may come from different sources, including problems with the following (a brief illustration of the data concern follows this list):

• Data and Datasets: Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.

• Model Opacity and Access: Many automated systems are “black boxes” whose internal workings are not clear to most people and, in some cases, even the developer of the tool. This lack of transparency often makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.

• Design and Use: Developers do not always understand or account for the contexts in which private or public entities will use their automated systems. Developers may design a system on the basis of flawed assumptions about its users, relevant context, or the underlying practices or procedures it may replace.
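To make the data-and-datasets concern concrete, the sketch below shows one way an outcome audit might flag skewed selection rates from an automated screening tool. It applies the EEOC’s well-known “four-fifths rule” as a rough screening heuristic. This is a minimal, hypothetical Python illustration: the group labels and numbers are invented, and the rule is a flag for further review, not a legal determination or OFCCP-endorsed tooling.

```python
# Hypothetical audit sketch: compare selection rates across applicant groups
# using the EEOC's "four-fifths rule" heuristic. All data here is invented.

from collections import Counter

# Hypothetical screening outcomes: (applicant_group, was_selected)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selections = Counter(group for group, selected in outcomes if selected)

# Selection rate per group = number selected / number of applicants in that group.
rates = {group: selections[group] / applicants[group] for group in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    # Four-fifths rule: a group's selection rate below 80% of the highest
    # group's rate is a common flag for potential adverse impact.
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate={rate:.0%}, impact ratio={ratio:.2f} -> {flag}")
```

In this invented example, group_b’s impact ratio falls well below 0.8, which would prompt a closer look at the screening tool and the data it was trained on.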

Need assistance understanding this? Contact us today.

Wendy Sellers
Wendy Sellers, known as “The HR Lady®,” is a dedicated HR consultant and business partner to businesses of all sizes, a conference speaker, and a management trainer who specializes in understanding the unique culture and goals of organizations in order to improve business outcomes.

