Pitfalls to avoid in using AI for recruiting
HR Tech
April 12, 2023


The use of artificial intelligence in recruitment is becoming increasingly popular. In recent months, ChatGPT has gained enormous attention, drawing the interest of a large group of companies to the possibilities of using artificial intelligence at work.

Although artificial intelligence can undoubtedly streamline our work, eliminate repetitive tasks and free us to focus on those that require human sensitivity and typically human skills, it also raises challenges in the areas of ethics, sociology, psychology and privacy protection.

Therefore, it is particularly important to pay attention to several issues when choosing an AI-based tool.

Detailed feedback

As we already know, the biggest hurdle in teaching AI is obtaining properly prepared data. However, it is also important to watch for the pitfalls lurking in the management of the recruitment process itself. In this area, the biggest sources of uncertainty are the human factor and knowledge of the processes. For example, we can teach a machine to rank candidates against a given query, but it is much harder to teach it to distinguish between the ranked candidates and determine which one is a better fit for the job.

To create a good ranking tool, we need a person to give our system feedback on whether its suggestion is helpful. The people doing this evaluation should be professionals in their industry, or the program may not learn to rank accurately.

A recruitment specialist, for example one working for the construction industry, can easily assess whether the system's suggestions are satisfactory when it comes to construction workers' applications. On the basis of dozens of such feedback rounds, we can quickly produce better rankings. If we have several specialists in a given field, the effectiveness of our tool increases significantly.
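Below is a minimal sketch of how such feedback could feed a ranking model. The features, sample data and the pairwise approach are illustrative assumptions only, not a description of any particular product.

```python
# Hypothetical sketch: learning to rank candidates from recruiter feedback.
# Features and data are invented for illustration; a real system would use
# richer features and a dedicated learning-to-rank library.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each candidate is described by a few numeric features for a given query,
# e.g. years of experience, skill-match score, distance to the site (km).
candidates = np.array([
    [5.0, 0.9, 12.0],   # candidate A
    [2.0, 0.7, 40.0],   # candidate B
    [8.0, 0.4, 5.0],    # candidate C
])

# Recruiter feedback as pairwise preferences: (preferred, other).
# Here a construction-industry specialist judged A > B and C > B.
feedback_pairs = [(0, 1), (2, 1)]

# Turn each preference into a training example on feature differences
# (a RankNet-style pairwise approach): label 1 means "first is better".
X, y = [], []
for better, worse in feedback_pairs:
    X.append(candidates[better] - candidates[worse]); y.append(1)
    X.append(candidates[worse] - candidates[better]); y.append(0)

model = LogisticRegression().fit(np.array(X), np.array(y))

# The learned weights score each candidate; sorting gives the ranking.
scores = candidates @ model.coef_[0]
print("Ranking (best first):", np.argsort(-scores))
```

With only a handful of preference pairs the weights are crude, which is exactly why feedback from many specialists in the given field improves the ranking so much.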

Proper data systematization

Another obstacle is the systematization and normalization of data. Although the number of professions in each industry is finite, it is almost impossible to predict every variation of a job title or every skill a candidate may have. You can therefore be sure that some of the information will simply not be considered or processed. This is not necessarily a bad thing, because a human being is also unable to consider everything. The point is to build a model and systematize the data in such a way that the most important factors influencing the assessment of a given candidate are taken into account.
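As a simple illustration of what such normalization can look like, here is a sketch that maps job-title variants onto a canonical vocabulary. The canonical list, synonyms and matching threshold are hypothetical; production systems typically combine curated dictionaries with embedding-based matching.

```python
# Minimal sketch of normalizing job-title variants to a canonical vocabulary.
# All titles and synonyms below are invented examples.
from difflib import get_close_matches

CANONICAL_TITLES = {
    "construction worker": ["builder", "construction labourer", "site worker"],
    "site manager": ["construction site manager", "site supervisor"],
    "electrician": ["electrical fitter", "electrical technician"],
}

# Flatten synonyms into a lookup table pointing at the canonical form.
LOOKUP = {canon: canon for canon in CANONICAL_TITLES}
for canon, variants in CANONICAL_TITLES.items():
    for variant in variants:
        LOOKUP[variant] = canon

def normalize_title(raw: str):
    """Map a raw job title from a CV to a canonical title, or None if unknown."""
    cleaned = raw.strip().lower()
    if cleaned in LOOKUP:
        return LOOKUP[cleaned]
    # Fall back to fuzzy matching for typos and close variants.
    match = get_close_matches(cleaned, LOOKUP.keys(), n=1, cutoff=0.8)
    return LOOKUP[match[0]] if match else None

print(normalize_title("Construction Labourer"))  # -> construction worker
print(normalize_title("Site Managr"))            # -> site manager
print(normalize_title("Astronaut"))              # -> None (outside the model)
```

Titles that fall outside the vocabulary are simply not matched, which mirrors the point above: some information will inevitably be left out, so the vocabulary should cover the factors that matter most for the assessment.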

Data protection

When data is involved, especially sensitive data such as candidates' personal details, financial expectations, salaries, place of residence or contact information, it is crucial to protect it. This includes, among other things, protection against unauthorized access, data leakage and the mixing of data between different databases.

Artificial intelligence models learn from huge amounts of data collected from various sources. Nowadays, data is a currency, which is why free tools often provide interesting functions in exchange for the data we give them. When choosing a tool to support processes where data security is particularly important, check whether the company offering it has a transparent data management policy, ensures appropriate separation of data and holds certificates confirming that it does not compromise on data protection.

For example, our tool ensures that candidates' data is encrypted, completely separated and additionally anonymized. Moreover, Onwelo undergoes a thorough data security audit at least once a year, which includes an analysis of data protection quality in the Hello Astra system, and each time it is awarded a certificate of compliance with the ISO/IEC 27001 standard.
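To make the idea of anonymization more concrete, here is a minimal sketch of field-level pseudonymization of a candidate record. It is not the Hello Astra implementation; the field names, salt handling and token length are illustrative assumptions only.

```python
# Hypothetical sketch: replacing direct identifiers with salted hashes
# before a candidate record is passed on for further processing.
import hashlib
import os

SALT = os.urandom(16)  # in practice the salt/key would live in a secrets vault
PII_FIELDS = {"name", "email", "phone", "address"}  # hypothetical field names

def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with direct identifiers replaced by hashes."""
    safe = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode("utf-8")).hexdigest()
            safe[key] = digest[:16]  # shortened token, enough to link records
        else:
            safe[key] = value
    return safe

candidate = {
    "name": "Jan Kowalski",
    "email": "jan.kowalski@example.com",
    "phone": "+48 600 000 000",
    "years_of_experience": 7,
    "skills": ["bricklaying", "scaffolding"],
}
print(pseudonymize(candidate))
```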

Avoiding biases

In the context of AI "bias" refers to any case when AI and data analytics tools can perpetuate or amplify various cognitive biases. Among the most common biases are  increasing the ratio of a particular sex, race, religion or sexual orientation on recommended candidates lists. Another form may be increasing the disproportion in the remuneration of women and men, and thus generating the gender pay gap. The most common form of bias in AI comes from the historical data used to train the algorithms. When artificial intelligence is trained on incorrectly prepared data that does not take this phenomenon into account, it can exacerbate the problems in applying the principles of diversity, inclusion and equality faced by the company.

Amazon was one of the pioneers in using AI to optimize its hiring process. Considerable effort went into neutralizing information about protected groups, yet Amazon still chose to terminate the program in 2018 after discovering bias in its candidate recommendations.

Since then, better technologies for data preparation have been developed, including data cleaning, removal of information noise and neutralization of sensitive data (such as gender or origin). In addition, technologically responsible systems can use attention mechanisms such as Graph Attention Networks (GAT), thanks to which they focus on important information and ignore what is less relevant or should be omitted.
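As a simple illustration of the neutralization step, here is a sketch that strips protected attributes and obvious proxy fields from a candidate record before it reaches a training set. The field names are hypothetical, and in practice removing explicit attributes is only a first step, since subtler proxies can still leak protected information.

```python
# Hypothetical sketch: neutralizing sensitive attributes before model training.
SENSITIVE_FIELDS = {"gender", "age", "nationality", "religion", "photo_url"}
PROXY_FIELDS = {"first_name", "date_of_birth"}  # often correlate with the above

def neutralize(candidate: dict) -> dict:
    """Drop fields that directly or indirectly encode protected characteristics."""
    return {
        key: value
        for key, value in candidate.items()
        if key not in SENSITIVE_FIELDS and key not in PROXY_FIELDS
    }

raw = {
    "first_name": "Anna",
    "gender": "F",
    "date_of_birth": "1990-04-12",
    "years_of_experience": 9,
    "skills": ["project management", "budgeting"],
}
print(neutralize(raw))  # only experience and skills reach the training data
```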

Look to the future

Very soon, we can expect increasingly frequent implementation of existing solutions in the field of AI. Currently, the recruitment management process uses AI for candidate support, analysis of applications, and predicting market fluctuations. In the future, we will see algorithms taking over more responsibilities and using generated forecasts about the job market to propose possible solutions to upcoming recruitment problems.

Certainly, there are many simpler ideas that have not yet been fully realized, for example generating job postings from a pool of demand. After enough time and data have been collected, it will be possible to model a company's demand and plan ahead for when to recruit which candidates. Thanks to this, companies will be able to take on much larger and more complex projects, knowing that they will find the appropriate staff. In addition, it will be possible to estimate how much a new employee will cost at a given time, not to mention the possibility of fully automating the selection of candidates, with the final decision obviously left in human hands.

The possibilities are endless, and the AI sector is developing so rapidly that recruitment departments can expect many improvements in the coming years at virtually every stage of recruitment process management. The key point in your development plan as a recruiter should now be to get comfortable with these technologies and with the world of data unfolding before your eyes.

Are you looking for more AI-related content? See our previous articles:

Where does the HR data for machine learning come from?

How can machines learn to support recruiters?

Labor market – a mine of knowledge

Hello Astra
HR Expert
Tags
artificial intelligence
future of HR
HR tech
recruitment