The Implications of AI-Based Selection Systems for DEIA


[Image: a cybernetic hand touching a human hand]

In recent years, artificial intelligence (AI) has rapidly infiltrated industries across the globe, influencing business operations and transforming organizational dynamics. By streamlining processes, extracting insights from vast datasets, and revolutionizing decision-making, AI has transformed the way we work and reshaped the way organizations interact with employees and customers. This integration has ushered in a new period of innovation and efficiency, enabling businesses to automate tasks, reduce costs, improve decisions, and boost productivity. However, while the adoption of AI technologies presents many advantages, it is important to exercise caution and consider AI’s limitations and implications for diversity, equity, inclusion, and accessibility (DEIA). One area that warrants particular caution is the use of AI in employment selection systems, where algorithms assist in resume screening, candidate evaluation, and the prediction of applicants’ job performance (The Institute for Workplace Equality, 2022). Although AI holds promise for mitigating human biases and streamlining hiring processes, its use also raises concerns about fairness, transparency, and potential negative impacts on certain demographic groups.

Concerns About the Use of AI in Selection and Assessment 

Although AI can benefit society and organizations in many ways, it may also produce negative consequences that practitioners are only beginning to uncover. The application of AI in selection and assessment processes has drawn scrutiny from researchers, practitioners, lawyers, and government officials because of its potential to perpetuate biases and inequalities.


First, AI systems may inadvertently perpetuate biases present in the data used to train them (Chen, 2023). In other words, bias can enter an algorithm when the training dataset contains biases or inaccuracies, or fails to represent the intended populations. For example, if an organization previously hired applicants with a specific background, the system may favor applicants with that background, even when doing so creates adverse impact for certain groups (Angwin et al., 2016).
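As a concrete illustration, the sketch below shows one simple check a practitioner might run on historical hiring data before using it for training. It is a minimal example with invented column names and data, not a substitute for a full data audit.

```python
import pandas as pd

# Hypothetical historical hiring records; the column names and values
# are invented for illustration only.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
})

# Representativeness: what share of the training data does each group make up?
print(df["group"].value_counts(normalize=True))

# Historical disparity: if selection rates differ sharply by group, a model
# trained on these labels may learn to reproduce that disparity.
print(df.groupby("group")["hired"].mean())
```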


Second, the complexity of AI algorithms makes their decision-making processes difficult to predict and explain. For example, if certain skills were overrepresented among previously selected applicants, the system may weight those skills heavily, even when doing so produces disproportionate representation of certain groups (Reuters, 2018). Indeed, several organizations have faced scrutiny over resume screening algorithms that overweight keywords in ways that lead to biased hiring decisions. In one instance, a resume screening tool at Amazon penalized resumes that included words like “women’s” and favored resumes containing keywords more likely to appear on men’s resumes (e.g., “executed,” “captured”; Reuters, 2018).
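To see how this kind of keyword weighting can arise, the sketch below trains a toy resume screener on invented data and inspects its most heavily weighted terms. This is a hypothetical illustration using scikit-learn, not a reconstruction of any vendor’s tool; in practice, proprietary systems often do not expose their weights at all, which is part of the transparency concern.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented resume snippets and historical hiring labels.
resumes = [
    "executed product launches and captured new markets",
    "captain of the women's chess club, led community outreach",
    "executed migration plan, captured key enterprise accounts",
    "president of the women's engineering society",
]
hired = [1, 0, 1, 0]

# Fit a simple keyword-based screener on the historical labels.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect which terms the model weights most strongly in each direction;
# terms correlated with past hires dominate, regardless of job relevance.
terms = np.array(vectorizer.get_feature_names_out())
order = np.argsort(model.coef_[0])
print("Most negatively weighted terms:", terms[order[:3]])
print("Most positively weighted terms:", terms[order[-3:]])
```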


Third, many organizations implement AI tools with little focus on validating them and little guidance on how to deploy the tools or interpret the scores they generate (SIOP, 2023a). This can lead to biased decision-making as well as the selection of applicants who are not well suited for the role.
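As a starting point for such validation, the sketch below checks two basic psychometric properties of a hypothetical AI assessment: whether its scores correlate with later job performance (criterion-related validity) and whether repeat administrations yield consistent scores (test–retest reliability). All values are invented; a defensible validation study would require job analysis, adequate samples, and professional oversight.

```python
import numpy as np
from scipy import stats

# Hypothetical data: AI assessment scores at hire and later performance ratings.
ai_scores = np.array([62, 71, 55, 80, 90, 65, 74, 58, 83, 69])
performance = np.array([3.1, 3.8, 2.9, 4.0, 4.5, 3.2, 3.9, 2.7, 4.2, 3.4])

# Criterion-related validity: do the tool's scores actually predict performance?
r, p = stats.pearsonr(ai_scores, performance)
print(f"Criterion-related validity: r = {r:.2f} (p = {p:.3f})")

# Test-retest reliability: do repeat administrations yield consistent scores?
retest_scores = np.array([60, 73, 57, 78, 88, 66, 72, 60, 85, 67])
r_tt, _ = stats.pearsonr(ai_scores, retest_scores)
print(f"Test-retest reliability: r = {r_tt:.2f}")
```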


Best Practices When Using AI in Selection and Assessment 

To mitigate the risks associated with AI-based selection, organizations may benefit from adhering to best practices and guidelines outlined by subject matter experts (e.g., Google AI, 2023; Landers & Behrend, 2023; SIOP, 2023b; The Institute for Workplace Equality, 2022). Below are some recommendations that organizations may consider when implementing their own AI-driven tools.


  • Examine the raw data used to train the system to identify errors, ensure representativeness across demographic groups, and document dataset limitations (Google AI, 2023; Hajian et al., 2016).

  • Assess each AI tool’s utility and freedom from bias (Landers & Behrend, 2023).

  • Evaluate the validity and reliability of the data by assessing the quality of the predictors and criteria used (American Psychological Association, 2024; Landers & Behrend, 2023).

  • Ensure that AI selection assessments predict job performance, produce consistent scores, are tied to job-related characteristics, and avoid bias (SIOP, 2023a).

  • Re-weight training data so that AI models do not prioritize certain skills or keywords in ways that produce biased decisions (Hajian et al., 2016); a minimal re-weighting sketch follows this list.

  • Establish and conduct regular auditing procedures to ensure that all applicants have equitable access and that the algorithm is not producing biased outcomes (The Institute for Workplace Equality, 2022).

  • Apply the Uniform Guidelines on Employee Selection Procedures (EEOC et al., 1978) to AI-based selection tools to mitigate adverse impact and establish the tools’ validity (The Institute for Workplace Equality, 2022); see the four-fifths-rule audit sketch after this list.

  • Foster transparency in AI systems, including their decision-making processes and any adverse impact (Chen, 2023), and communicate to applicants that AI is part of the selection process (The Institute for Workplace Equality, 2022).

  • Request that vendors document the processes and choices involved in training and developing a selection algorithm (The Institute for Workplace Equality, 2022).
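The re-weighting recommendation above can be made concrete. The sketch below is a minimal, hypothetical illustration in the spirit of classic reweighing approaches from the fairness literature, which the sources cited here do not prescribe specifically; the column names and data are invented. Each (group, label) combination receives the weight expected under independence divided by its observed frequency, so that group membership and the historical hiring label are decoupled in the weighted training data.

```python
import pandas as pd

# Hypothetical training data: "group" is a protected attribute and
# "hired" is the historical label a model would learn from.
df = pd.DataFrame({
    "group": ["A"] * 4 + ["B"] * 6,
    "hired": [1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
})

# Expected-over-observed frequency weights for each (group, label) cell.
p_group = df["group"].value_counts(normalize=True)
p_label = df["hired"].value_counts(normalize=True)
p_joint = df.groupby(["group", "hired"]).size() / len(df)

weights = df.apply(
    lambda row: (p_group[row["group"]] * p_label[row["hired"]])
    / p_joint[(row["group"], row["hired"])],
    axis=1,
)
print(weights)
# These weights could then be passed to a learner, for example:
# LogisticRegression().fit(X, y, sample_weight=weights)
```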
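Similarly, the auditing and Uniform Guidelines recommendations can be illustrated with the four-fifths (80%) rule of thumb from the Guidelines, under which a selection rate for any group that is less than four-fifths of the highest group’s rate is generally regarded as evidence of adverse impact. The sketch below uses invented numbers; a real audit would use actual applicant-flow data and appropriate statistical tests.

```python
import pandas as pd

# Hypothetical selection outcomes from an AI screening stage.
df = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "selected": [1] * 30 + [0] * 20 + [1] * 18 + [0] * 32,
})

rates = df.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"Impact ratio: {impact_ratio:.2f}")

# Four-fifths rule of thumb: a ratio below 0.8 is generally treated as
# evidence of adverse impact and warrants closer review.
if impact_ratio < 0.8:
    print("Potential adverse impact -- investigate further.")
```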


The integration of AI has quickly transformed the way organizations evaluate and hire talent. While AI offers novel opportunities to enhance hiring practices and organizational outcomes (American Psychological Association, 2024; Harvard Business Review, 2019), its use still demands a high level of caution, and we face unique challenges as we navigate these uncharted waters. By adopting the best practices outlined above, however, organizations can leverage the transformative potential of AI while minimizing the risks of bias and discrimination. In an era where AI is available at our fingertips, it is crucial to remember that thoughtful use of AI holds promise for improving organizations, whereas careless implementation may exacerbate current challenges and perpetuate bias.


Sources


Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10, Article 567. https://doi.org/10.1057/s41599-023-02079-x

Equal Employment Opportunity Commission (EEOC), Civil Service Commission, Department of Labor, & Department of Justice. (1978). Uniform guidelines on employee selection procedures. Federal Register, 43, 38290–38315.

Google AI. (2023). Responsible AI practices. Retrieved February 2024, from https://ai.google/responsibility/responsible-ai-practices/

Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic bias: From discrimination discovery to fairness-aware data mining. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Harvard Business Review. (2019, October). What do we do about the biases in AI? https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

Landers, R. N., & Behrend, T. S. (2023). Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models. American Psychologist, 78(1), 36–49. https://doi.org/10.1037/amp0000972
