Using ChatGPT or Other AI to Help In Recruiting?


In today’s technologically advanced environment, it’s increasingly common for employers to use tools like ChatGPT and other artificial intelligence systems to streamline parts of the hiring process. As a consultant, I recommend using AI to assist with updating job descriptions, crafting job advertisements, and formulating interview questions. These are areas where AI can improve accuracy and efficiency, ensuring that materials are comprehensive and up to date, which is crucial for attracting the right candidates.

However, it is imperative that AI is not used to make final hiring decisions. The use of AI in the selection of candidates can inadvertently introduce biases, despite the intention to achieve fairness and objectivity. AI systems are only as good as the data they are trained on, and if this data reflects historical biases, these prejudices can be perpetuated in the hiring decisions made by the AI. This could lead to discrimination, which is not only unethical but also illegal under U.S. employment laws.

The United States has a robust framework of employment discrimination laws designed to protect job applicants and employees from biases based on race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age (40 or older), disability, or genetic information. Notable among these are Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act of 1967, and the Americans with Disabilities Act of 1990. These laws prohibit discrimination in any aspect of employment, including hiring, firing, pay, job assignments, promotions, layoff, training, benefits, and any other term or condition of employment.

Employers must also be wary of the “disparate impact” theory of discrimination, which involves employment practices that apply to everyone regardless of race or sex but that have a negative effect disproportionately on one race, sex, or other protected group and are not justified by business necessity. Relying on AI without understanding its decision-making process and ensuring it is free of discriminatory biases could easily result in violations of these laws.

Furthermore, in my advisory role, I emphasize that human judgment plays a crucial role in assessing candidate qualities that AI might overlook, such as interpersonal skills, cultural fit (tread lightly with this terminology), and adaptability. While AI can handle data efficiently, it lacks the human capability to perceive nuances in human interactions and personalities, which are often critical to determining the right candidate for a role. I repeat: AI should not be used to make final hiring decisions!

In conclusion, while AI can be a valuable tool for certain tasks in the recruitment process, direct human involvement is essential, particularly in making the final hiring decisions. This approach not only helps mitigate the risk of bias and discrimination but also ensures compliance with employment laws. Employers should also invest in training for their hiring managers to recognize and eliminate unconscious biases, thereby safeguarding the integrity of their hiring processes.

PS: US federal contractors have their own compliance obligations. More about that in my previous blog post.

Wendy Sellers can help HR and managers learn how to use AI in recruitment AND understand the ethical and legal risks. Contact: www.thehrlady.com

Wendy Sellers
Wendy Sellers, known as “The HR Lady®,” is a dedicated HR consultant and business partner to businesses of all sizes, a conference speaker, and a management trainer who specializes in understanding the unique culture and goals of organizations in order to improve business outcomes.

Sign up for email updates from Wendy Sellers, The HR Lady LLC.
