Where does AI bias stem from, and where does it lead?
Many leading organisations have pioneered the use of AI to streamline their hiring processes, but at what cost? With automated tools now deployed at every step to lighten the workload of HR teams, it is pertinent to examine the extent to which fair is foul and foul is fair in recruitment.
Tracing the origin of AI bias, research broadly agrees that it grows mainly from two components:
Historical Data: The stored records used to train screening algorithms are replete with bias. AI trained on this data carries forward, almost like a legacy, the flaws that once amplified mismanagement and ethically questionable recruitment decisions. Historical bias distorts present-day screening, and it is steeped so deeply into these systems that organisations are now chalking out explicit strategies to build fairness back in.
Poor Proxies: By default, an AI system latches onto proxy variables that are already encoded with demographic bias, and in this way the bias is perpetuated time and again. These proxies play a decisive role in how the models contribute to decision-making. One of the key steps towards mitigating bias, therefore, is to detect the proxies, remove the variables that drove biased decisions, and repair the algorithms; the sketch after this list illustrates both checks.
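As a concrete illustration of what such an audit might look like, here is a minimal Python sketch on a hypothetical applicant table. Every column name, value, and the correlation threshold is invented purely for illustration, not drawn from any real system. It runs two of the checks described above: a selection-rate comparison against the EEOC "four-fifths" rule of thumb, and a simple correlation scan for proxy variables.

```python
import pandas as pd

# Hypothetical historical screening data; every column name and value here
# is invented purely for illustration.
df = pd.DataFrame({
    "protected_group": [1, 1, 0, 0, 0, 1, 0, 1, 0, 0],  # 1 = protected class
    "zip_prestige":    [2, 1, 8, 9, 7, 1, 8, 2, 9, 6],  # candidate feature
    "years_exp":       [4, 6, 5, 7, 3, 5, 6, 4, 8, 5],  # candidate feature
    "advanced":        [0, 1, 1, 1, 1, 0, 1, 0, 1, 0],  # past screening outcome
})

# Check 1: disparate impact. The selection rate of the protected group divided
# by that of the reference group; values below 0.8 fail the EEOC "four-fifths"
# rule of thumb.
rates = df.groupby("protected_group")["advanced"].mean()
ratio = rates.loc[1] / rates.loc[0]
print(f"Selection rates: {rates.to_dict()}, impact ratio = {ratio:.2f}")

# Check 2: proxy detection. A feature strongly correlated with the protected
# attribute can smuggle group membership into a model even after the attribute
# itself is dropped. The 0.4 cut-off is an arbitrary illustrative threshold.
for col in ["zip_prestige", "years_exp"]:
    corr = df[col].corr(df["protected_group"])
    if abs(corr) > 0.4:
        print(f"Potential proxy: {col} (correlation with group = {corr:.2f})")
```

In a real pipeline, dedicated toolkits such as Fairlearn or IBM's AI Fairness 360 offer far more robust versions of these checks than hand-rolled correlations.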
The screening process shows most clearly what happens when human judgment is swapped outright for AI.
AI may parade as a paradigm of equality and the future of the workplace, yet in its nooks and crannies lurk murky practices that undermine diversity, equity, and inclusion. The workings of both conscious and unconscious bias result in the arbitrary rejection of deserving candidates. Since historical datasets are conveniently devoid of any measure of diversity, it is unrealistic to expect AI models trained on them to exercise better judgment; the simulation below makes this loop concrete. These mechanisms erode the fair representation of underrepresented groups and undo decades of struggle against workplace injustice.
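To see the feedback loop in action, consider this self-contained simulation (synthetic data and scikit-learn; every number is invented for illustration). A classifier is trained on historically biased accept/reject labels with the protected attribute withheld, yet it reproduces the old disparity because a correlated proxy feature carries the group signal in anyway.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicant pool: group membership plus one proxy feature that
# correlates with it (think neighbourhood or school signals).
group = rng.integers(0, 2, n)              # 0 = privileged, 1 = underrepresented
skill = rng.normal(0, 1, n)                # true, group-independent ability
proxy = 1.5 * group + rng.normal(0, 1, n)  # demographically loaded feature

# Historical labels: past decisions rewarded skill but directly penalised
# membership of group 1.
label = (skill - 1.0 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train WITHOUT the protected attribute: the model sees only skill and proxy.
X = np.column_stack([skill, proxy])
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: historical accept rate {label[group == g].mean():.2f}, "
          f"model accept rate {pred[group == g].mean():.2f}")
```

The point is that dropping the protected column is not enough: the disparity survives through the proxy, which is exactly why the audits described earlier matter.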
AI also has a propensity to weed out candidates because the language it has learned to reward skews towards privileged groups, and this bias persists throughout the recruitment pipeline. Candidates with a proven record of excelling at extracurricular activities, the polish to perform in video interviews and sustain steady productivity scores, and the bearing of a ‘confident leader’ mostly belong to a specific stratum of society. The screening phase therefore acts more favourably towards classes with greater generational wealth, cultural capital, and privilege, as the toy scorer below illustrates.
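Here is a deliberately naive sketch of how such language bias can operate. The keyword list and weights are hypothetical, not taken from any real screening product: a scorer that rewards ‘leadership’ signals mined from past hires ends up rewarding markers of class rather than competence.

```python
# A deliberately naive keyword scorer of the kind a screening model might in
# effect learn from past hires. Keywords and weights are hypothetical: several
# track access to costly activities rather than job-relevant ability.
LEARNED_WEIGHTS = {
    "rowing": 2.0, "sailing": 2.0, "debate club": 1.5,  # class-correlated signals
    "unpaid internship": 1.5,                           # requires a financial cushion
    "confident leader": 1.0,
    "customer service": 0.2, "night shift": 0.1,        # equally demanding, underweighted
}

def score_cv(text: str) -> float:
    """Sum the weights of all learned keywords present in the CV text."""
    text = text.lower()
    return sum(w for kw, w in LEARNED_WEIGHTS.items() if kw in text)

cv_a = "Captain of the rowing team, debate club president, unpaid internship at a bank."
cv_b = "Four years of customer service while working night shifts to fund my degree."
print(f"{score_cv(cv_a):.1f} vs {score_cv(cv_b):.1f}")  # 5.0 vs 0.3
```

Whatever scoring function a real product uses, the failure mode is the same: weights learned from a privileged pool of past hires encode privilege as merit.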
It is no secret that women’s roles across many industries have historically been marginalised, verging on invisibility. That discrimination is already baked into the pipelines from which AI algorithms learn, and AI is now notorious for downgrading the CVs of women candidates. This systematic discrimination against women undoes the original intention behind bringing AI into hiring: to do away with the prejudices that cloud human minds. At present, the algorithm is stuck in a loop that amplifies rampant sexism.
The bottom line: humans need to engage more proactively to eradicate the loopholes and glitches embedded in the workings of AI, and to invest in closing the chinks and gaps before they widen to monstrous proportions. Perhaps this is the only way to ensure that a faster, cost-effective process is not executed at the cost of fairness and justice.