Incorporating artificial intelligence (AI) into recruitment processes: ethical considerations

Zuzana Sýkorová (School of Business, University of New York in Prague, Prague, Czech Republic)
Dana Hague (School of Business, University of New York in Prague, Prague, Czech Republic)
Ondřej Dvouletý (School of Business, University of New York in Prague, Prague, Czech Republic)
David Anthony Procházka (School of Business, University of New York in Prague, Prague, Czech Republic)

Vilakshan - XIMB Journal of Management

ISSN: 0973-1954

Article publication date: 25 June 2024


Abstract

Purpose

This study aims to explore the implementation of artificial intelligence (AI) into recruitment by considering its potential to maximise the effectiveness of the human resources (HR) processes, challenges associated with the implementation and ethical concerns.

Design/methodology/approach

A qualitative research approach was used to reach the stated objectives within the context of a small open economy – the Czech Republic. Interviews were conducted with four participants, Czech-based recruiters, each with five or more years of experience in their field. The interviews were conducted in Autumn 2023 via an online platform. The answers were transcribed and thematically analysed.

Findings

The interviewed participants heavily emphasised the importance of the human factor in recruitment, yet several observations and insights were obtained. In particular, some interviewees indicated that a chatbot could be used for the first round of candidate selection, but they consider it problematic for the final hiring decision, where the human factor is not yet replaceable. The key ethical challenges of broader AI implementation in the respondents' recruitment practices remain the risks to privacy and data protection, especially compliance with the General Data Protection Regulation (GDPR).

Originality/value

This article delivers pertinent insights for recruiters on using AI in recruitment, bringing forth a more nuanced understanding of the multifaceted subject of AI-based recruitment.

Citation

Sýkorová, Z., Hague, D., Dvouletý, O. and Procházka, D.A. (2024), "Incorporating artificial intelligence (AI) into recruitment processes: ethical considerations", Vilakshan - XIMB Journal of Management, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/XJM-02-2024-0039

Publisher


Emerald Publishing Limited

Copyright © 2024, Zuzana Sýkorová, Dana Hague, Ondřej Dvouletý and David Anthony Procházka.

License

Published in Vilakshan - XIMB Journal of Management. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode.


1. Introduction

Recruitment has existed in one form or another for millennia. What is considered the modern recruitment format was popularised during the Second World War, and its procedures have evolved significantly since. The recruitment industry has migrated from the classified section of the newspaper to the websites and online advertisements of today (Yu and Cable, 2014), and it is still subject to continuous development and change (Basu et al., 2023).

All fields, including human resources management (HRM) and recruitment in particular, are influenced by the advancement of technology (Patil and Priya, 2024). Artificial intelligence (AI), one of the most impactful and life-changing technologies of modern times, represents one example of how technology is being integrated into recruitment (Koechling et al., 2023; Kambur and Yildirim, 2023).

The utilisation of AI in recruitment offers many benefits, including cost savings and increased effectiveness. The cost savings result from the (partial) automation of the hiring process. This can be accomplished, for example, by incorporating AI into the analysis of resumes, saving human recruiters time and allowing them to focus their efforts on more value-adding recruitment activities (Lisa and Talla Simo, 2021). However, these advantages and opportunities do not come without a price. The use of algorithms in the hiring process can have ethical implications involving bias, data privacy, and the elimination of the human factor (Koechling et al., 2023; Slaughter and Allen, 2024).

This work explores the implementation of artificial intelligence (AI) into recruitment by considering its potential to maximise the effectiveness of the human resources (HR) processes, the challenges associated with the implementation, and the ethical concerns. The study pursues a qualitative research approach within the context of a small open economy – the Czech Republic, a European Union member country (Mammadova, 2022) with over 5.275 million employees in the labour market and over 282,000 active enterprises with more than a single employee (Czech Statistical Office, 2024a; 2024b). Interviews were conducted with four participants, Czech-based recruiters, each with five or more years of experience in their field. The interviews were conducted in Autumn 2023 via an online platform. The answers were transcribed and thematically analysed. The article contributes to the existing discussion and offers practical lessons for business practitioners considering the implementation of AI tools in recruitment, as well as for the scholarly community investigating challenges associated with implementing AI tools into HRM.

The article proceeds by reviewing the existing literature documenting the experience with AI implementation in recruitment. Then, the qualitative research methodology and the findings from the interviews are introduced. In the concluding part of the article, the study addresses its limitations and discusses its implications for the community and future research.

2. Literature review

We begin by recalling that recruitment can be described as the process companies use to identify, attract, and hire the candidates best suited for open positions. The recruitment process has several steps, beginning with attracting candidates. The next steps involve screening and interviewing these individuals and selecting the one best suited for the position based on the information gathered from the person's resume and the conclusions of the screening and interview phases (Slaughter and Allen, 2024).

Technological innovation in the form of AI, i.e. algorithms that use data to make predictions and conduct tasks at a human or near-human level (Tambe et al., 2019), has spread into the recruitment area. The unsupervised form of AI used in recruitment today has been linked to the “black-box effect”. A candidate's data (e.g., completed education, experience in the field, language skills) is inputted into the system, and the system identifies trends and reaches conclusions that have notoriously not been explainable, leading to a less sound recruitment process (Lacroux and Martin-Lacroux, 2022; Islam et al., 2022).

Albert (2019) discusses 11 possible areas of AI usage in recruitment. According to Albert (2019), vacancy prediction software, for example, predicts the probability of an employee leaving based on the employee's behavioural data inputted into the software. Job description optimisation software is used to enhance existing job descriptions. Targeted job advertising optimisation uses machine learning to target the right group with a specific job advert. AI can also be incorporated into the outreach stage of recruitment through multi-database candidate sourcing to comb through platforms such as LinkedIn in search of potential candidates, as Hunkenschroer and Kriebitz (2023) note. AI-based recruitment has been found to increase accuracy and efficiency within recruitment processes, along with reducing company expenses on recruitment and yielding a set of long-term benefits for the company, such as the elimination of human bias in the hiring process (Kot et al., 2021; Lisa and Talla Simo, 2021). AI can help fill vacant positions quickly through the analysis of strengths and experience at a rate impossible for a human employee, but its broader implementation is context-specific (London, 2019; Pan et al., 2022).

London (2019) explains that decisions yielded through the hiring process must be both transparent and explainable. The foundation for why something was decided must be clear if asked, and it must have a valid explanation. The presence of these factors in AI-based recruitment has been questioned. If a recruitment process is not transparent or explainable, the company's reputation can be endangered, as its processes do not stand up to high standards and candidates have poor experiences with the hiring process. One identified weakness of AI concerns the possible occurrence of biases, discrimination, and prejudice in the evaluation of candidates during selection (Köchling and Wehner, 2020). One example of discriminative decisions made by algorithms in recruitment is the case of Amazon (Dastin, 2022). Amazon's recruitment algorithms were found to discriminate against female applicants. The algorithm was fed historical employment data, in which male employees dominated. The AI system identified this trend and disadvantaged female applicants during the AI-based hiring process, who scored much lower than male applicants. Consequently, this finding led to the abandonment of these algorithms, as there was no method to fix the issue inherent in the system, and it negatively affected the company's reputation (Dastin, 2022).

A frequently cited ethical worry over AI's incorporation into the recruitment process has been the issue of “dehumanisation” (Fritts and Cabrera, 2021). Gonzalez et al. (2022) assessed candidates' attitudes towards different hiring models. The study participants reacted to hiring decisions made by either a recruiter or an AI/ML (machine learning) system. Gonzalez et al. (2022) found that participants reacted more positively to the decisions (and interactions) of human-based recruitment. According to Fritts and Cabrera (2021), incorporating algorithms into the hiring process can damage the employer-employee relationship because human values are abstract and multifaceted in nature, while algorithms impose a structure that dictates impartiality. Also, if the candidate is accepted for a position by algorithms rather than a human recruiter, the “victory” may be lessened. Notably, the extent to which human participation is eliminated depends on how extensively a company deploys AI, whether only to cover the first steps of attracting and screening candidates or also further steps such as interviews or even the final selection decisions (Hunkenschroer and Kriebitz, 2023).

The broader implementation of AI in recruitment is linked with the data privacy and protection legislative frameworks of the country where the business operates. In New York, Local Law 144 of 2021 allows the use of AI hiring processes under the conditions that the system is audited for possible bias and that the resulting data is made available to the public. Along with these requirements, a company utilising algorithms in its hiring process must be transparent about this fact with candidates applying for positions (Bell et al., 2023). In the European Economic Area, the General Data Protection Regulation (GDPR) is an example of an existing regulation pertaining to AI and personal data protection (Lukács and Váradi, 2023). However, GDPR is not alone, as the EU Charter of Fundamental Rights also pertains partly to privacy and data protection, which could be endangered by AI incorporated in recruitment (Viljanen and Parviainen, 2022). The AI Act is a prime example of quite novel AI-specific legislation (Lukács and Váradi, 2023). The European Commission proposed the AI Act in 2021, and it has become the centre of debate today as its progress along the validation chain continues. The European Parliament supports the AI Act as a legislative restriction on AI systems that aims to limit the impact of risks caused by AI. The AI Act sets different rules for different levels of risk. Systems found to be most detrimental are to be banned completely, while those presenting high risk are severely restricted. As AI-based recruitment falls into the high-risk category, it will be heavily affected by the restrictions set in this section of the AI Act, which further requires that any practical uses of AI-based recruitment be registered in the EU database (European Parliament, 2024).

Guidelines and policies are being set at the company and government levels. Organisations strive to find the balance between supporting innovation and setting effective regulations. Companies must have adequate security policies in place that account for the use of AI in their organisation and must conduct regular audits of their AI-based recruitment systems (Van Dijk et al., 2021). Where there could be risks to individuals' privacy or freedoms, the company should conduct a data protection impact assessment (DPIA). Chen (2023) further recommends that if an AI system draws conclusions used to make decisions about candidates, this fact be presented to the candidate before the hiring process begins, in adherence to the transparency generally set forth by GDPR.

It is also important to mention that artificial intelligence systems absorb and analyse massive amounts of data, and the places where this data is collected can infringe upon data privacy. The data can be drawn from social media accounts and other websites, and some of it may be personal, financial, or health information. Such personal data can include information regarding a person's sexual orientation, political views, or religious beliefs. Though the data itself is not seen as harmful, who has access to it, to whom it is sold, what it is used for, and how it can be exploited are the matter of the ethical implications of AI for privacy and data protection. Furthermore, AI can be used during the recruitment process in video interviews. Applicants may answer questions in a video interview, which are analysed through algorithms with the help of cameras, microphones, and other sensor devices. Algorithms then analyse several components of the candidate's behaviour, such as the tone of their voice, facial expressions, and speech. Natural language processing (NLP) and facial expression processing (FEP) can analyse the tone of one's voice, intonation, rhythm, and pace of speech, or facial expressions, smiles, and movement. Some authors liken the use of this AI format during interviews to polygraph tests, which attempt to establish what is true and what is false. It is thus not surprising that using polygraph-based tests during recruitment is considered unethical and is forbidden by most international laws (Hinkle, 2021; Abulibdeh et al., 2024). With that said, we again emphasise accountability in practice, record-keeping with AI, and regular checks that the system is doing what it was designed to do.
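As a purely illustrative sketch of how such interview scoring can work in principle, the toy example below assigns a tone score to a transcript from hand-picked word lists. The lexicon entries and the scoring rule are entirely invented for this sketch; real systems use trained NLP models over audio and video rather than word counts, which is precisely why their conclusions are hard for candidates and regulators to audit.

```python
# Illustrative sketch only: a toy lexicon-based tone scorer, NOT a real
# NLP system. All lexicon entries below are invented; production tools
# use trained models over audio/video, whose judgements are opaque.

POSITIVE = {"confident", "motivated", "collaborative", "curious"}
NEGATIVE = {"unsure", "hesitant", "unwilling"}

def tone_score(transcript: str) -> float:
    """Return a naive tone score in [-1, 1] from lexicon word counts."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(tone_score("I am confident and motivated, though a bit hesitant."))
```

Even this trivial scorer shows the ethical point: the choice of which words (or expressions, or intonations) count as "positive" is an arbitrary, embedded judgement that the scored candidate never sees.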

3. Methodology

A primary research approach was chosen to study the ethical implications of AI in recruitment and the potential mitigations that most benefit both candidates and companies. The primary research consisted of qualitative interviews, which were analysed based on their transcripts. We focused on recruiters from the Czech Republic to keep the business and regulatory framework within one particular country. The research team developed the initial interview protocol, discussed the ethical aspects of conducting interviews, and followed established methodological standards (Taylor et al., 2015).

The interviews took place in November 2023. The sampling method used was nonprobability convenience sampling. Participants were contacted through the LinkedIn (2024) platform. The first participant was found through a group titled VIP: AI Recruitment CZ/SK (LinkedIn, 2024). The first participant suggested several personal contacts, one of whom became the second participant. Both the third and the fourth participants were found through networking. Despite the nonprobability convenience sampling used, the individuals had to fulfil a set of criteria: they must currently work in recruitment, have at least five years of experience in the field, and have basic knowledge of AI-based recruitment. The final number of participants interviewed was four, and the research team concluded that this number was sufficient, as the proportion of new insights was significantly decreasing, i.e. the research was reaching its saturation point. The study also followed the ethical principles for conducting qualitative research (Taylor et al., 2015).

Before the interview, each participant was informed of how the interview would be conducted, its average duration, and its purpose and aim. Each participant was made aware that their participation in the study was voluntary, and each gave consent to be interviewed and to have the interview transcribed afterwards. The interviewer also informed each participant that they would be kept anonymous and would be sent a copy of the final study if they opted for this option. The interviews were conducted through online Microsoft Teams meetings. The average duration of an interview was one hour. Once each interview was completed, it was saved and transcribed for analysis. The names have been changed to protect the identity and privacy of the interviewees.

Participant A has experience in internal recruitment but now manages his recruitment firm as its co-founder. Participant B has been a senior recruiter for the last six years. Both Participants A and B specialise in the IT recruitment sector. Participant C has been working in a recruitment agency for the last four years, and Participant D started his career in HR, where he gained the foundational knowledge of recruitment before joining an international recruitment firm.

The interviews were all structured and did not allow for any diversion from the preset questions. All four participants were asked the same set of questions. The questions were created to cover an introduction of each participant's professional experience in recruitment and AI, their knowledge of the ethical implications of AI recruitment, and potential mitigations. The selection of topics was based on the literature review, i.e. the most recent studies, but the general approach selected for pursuing this empirical study was exploratory, as this is still a relatively novel topic in the current literature, and the body of knowledge is emerging (Kalu and Bwalya, 2017).

Specifically, the interview consisted of nine questions, translated into both English and Czech to provide better access for participants in case a question posed in English was too complex. The translation was checked by two individuals fluent in both English and Czech to compare the two language versions and ensure that both have the same meaning and ask the same thing. Below is a detailed explanation of what each question aims to cover. The interview included the following nine questions (with explanatory notes):

Q1.

Can you please introduce yourself and your background in recruitment?

This question introduces the participant. It asks the participant to state the nature of their experience in recruitment. Participants shared whatever information about themselves they chose to.

Q2.

What would you say are the core factors that guide your decisions in the recruitment process?

This question asks for a framework or set of factors that each participant utilises while making their decisions within recruitment activities. This question was included in the interview to help further fill out each participant's recruitment background. In question one, the participants are asked to introduce themselves and their professional experience. This question aims to uncover a bit about each participant's recruitment strategy.

Q3.

Do you use AI in your recruitment activities? If so, how has it impacted the effectiveness of the process? If not, would you consider its use? What would your expectations be for the update process?

Question three covers the extent of each participant's experience with AI. It is significant to establish not only whether the participants have used AI but also how they perceive its incorporation into recruitment. It was assumed during the development of the interview questions that most, if not all, of the participants would not have had any personal or professional experience with AI, as it truly is a novel addition to recruitment. For this reason, the participants were also asked about their expectations if AI were to be incorporated into their recruitment activities.

Q4.

Ethical implications of AI in recruitment mentioned in existing literature include biases and discrimination against potential candidates and privacy and data protection. In your opinion or experience, how could these concerns be mitigated?

The first three questions lay the foundation; with the fourth question, the participants begin to be asked about the specific topic of the ethical implications of AI's incorporation into recruitment. The question mentions the ethical concerns explored in the literature review and asks the participants to draw on their experience as recruiters in suggesting potential mitigations for the issues presented.

Q5.

Can you describe any other risks of using AI in recruitment other than those mentioned?

Question five aims to cover the possibility that the participants may have experience with or knowledge of other ethical implications concerned with AI's incorporation into recruitment.

Q6.

In your opinion, could AI's incorporation into recruitment potentially mean that human reasoning and intuition are to be phased out of the recruitment process eventually?

Question six focuses specifically on the connection between the human factor within recruitment and AI-based recruitment. The recruiters are asked their opinion on how these two recruitment foundations (human and AI) can potentially coexist. Though the assumption stands that the participants have little to no experience with AI-based recruitment, their professional expertise exists. Their knowledge of their field and of its necessary tools allowed them to be reliable sources of information on this question.

Q7.

To what extent do you think that the human factor is a risk or benefit to the recruitment process?

The participants are asked to evaluate the worth of the human factor within recruitment. It is true that, as human recruiters, they are likely to be biased on this topic. However, AI's incorporation into recruitment has been justified partly by its utility in minimising human bias, and thus it is important to ask those experienced in the field about their experience with the human factor and its costs and benefits.

Q8.

One of the reasons for introducing AI into recruitment is to eliminate human bias. In your experience or opinion, does AI fulfil this aim?

Question eight further focuses on what question seven introduced. This question is designed to explore the level of the recruiter's knowledge of the benefits of AI in recruitment and potentially its practical use as well.

Q9.

Do you think that the rapid innovation and incorporation of AI can negatively impact its ethical use in recruitment?

The ninth and final question can be perceived as leading. However, it serves to evaluate the participants' views on the future of their field and its potential evolution and adaptation to new technology from an ethical angle. The speed of AI's incorporation into recruitment is vital in conversations surrounding the development of legislation and regulation.

4. Findings and discussion

4.1 Findings from the interviews

This section presents the findings from the structured interviews. The analysis was conducted in two layers. The first is the comparison of the interviewees' answers to each interview question amongst each other. This is done to identify similarities and differences, which are further described. The trends identified between participants' answers are then used in the next layer of analysis, which consists of comparing these patterns with the previously reviewed and presently available literature on the topic (Taylor et al., 2015).
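The cross-participant comparison in the first layer can be imagined schematically as coding each answer against a set of themes. The snippet below is a hypothetical sketch: the answers, theme names, and keywords are all invented, and real thematic analysis relies on a researcher's interpretive coding of full transcripts rather than keyword matching.

```python
# Hypothetical sketch of the cross-participant comparison step in a
# thematic analysis. Answers, themes, and keywords are invented; real
# coding is interpretive, not keyword-based.
from collections import defaultdict

answers = {
    "Participant A": "GDPR protects candidate data; the human factor matters.",
    "Participant B": "Chatbots may help screening, but GDPR compliance is key.",
}
themes = {
    "data protection": ["gdpr", "privacy"],
    "human factor": ["human", "intuition"],
}

def code_answers(answers, themes):
    """Map each theme to the participants whose answer mentions it."""
    coded = defaultdict(list)
    for participant, text in answers.items():
        lowered = text.lower()
        for theme, keywords in themes.items():
            if any(k in lowered for k in keywords):
                coded[theme].append(participant)
    return dict(coded)

print(code_answers(answers, themes))
```

The resulting theme-by-participant map is the kind of structure the second analytical layer then compares against the literature.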

In question two, the interviewees were asked about the core factors that guide their recruiting decisions regarding a candidate. All four respondents agreed that the client's requirements were the most important. Participants A, B, and D all described a compatibility they seek to achieve between the candidate and the client. Participant A stated that they take parameters such as the age of the client's team, their level of extraversion, and other characteristics as guidelines for finding a suitable candidate. Participant B balances the wants and needs of their client when searching for the perfect candidate. All recruiters spoke about the candidate's past experiences along with other competencies that contribute to their level of compatibility with an employment position. As seen in L'Oréal's practical use of AI in recruitment via the chatbot Mya, compatibility is tested by such systems as well (Sharma, 2018). In this concrete example, Mya questions the applicants and labels them as either best-fit or non-fit. This is the AI-based recruitment equivalent of external recruiters judging their candidates on their potential compatibility with their client's company.

Regarding the first portion of question three, all four recruiters gauged their experience with AI as low to moderate. Participant A uses ChatGPT to better understand positions they may not have encountered before. Though ChatGPT is an AI tool, its function as a generative AI is not tailored to recruitment. Participant B has briefly seen the use of chatbots in the first round of interviews, though they have not been involved in the process. Participant C has tested AI by generating job adverts and comparing them with those they have on file. This function has been reviewed by Zhang (2023), who lists among its benefits the alleviation of recruiters' stress and cost savings in tasks such as language processing, as experienced by Participant C. Participant D also has minimal experience with the technology. AI recruitment tools such as chatbots are not yet the norm in the Czech Republic. For this reason, it is understandable that none of the participants have extensive experience with them yet.

The second portion of question three asks whether the participant would consider the use of AI and what their expectations for the system would be. Here, each participant presented a different perspective on the potential effectiveness of using AI in recruitment. Participant A found that his mild use of ChatGPT can save him up to two hours' worth of work each day. For this reason, Participant A said they would consider its implementation in their recruiting process. However, as previously stated, ChatGPT is not an AI developed for recruitment, so it is impossible to accurately assess Participant A's perception of true AI-based recruitment, as the foundation they use to justify their opinion is not completely valid. The effect of AI implementation in recruitment on improving recruiters' efficiency and saving their time has been documented in several academic studies (Nawaz, 2019; Pan et al., 2022; Zhang, 2023). Though inspired by the generated AI job adverts, Participant C could not see further implementation of AI in their recruitment process. Participant D reviewed the benefits that AI-based recruitment could bring to his company, for example, improving efficiency through the automation of basic tasks. However, Participant D cautioned that AI's implementation would need to be monitored or kept out of certain areas, such as cultural fit assessment or soft-skills evaluation, where Participant D finds human judgement irreplaceable.

In question four, participants were presented with a list of the ethical issues this work is concerned with and asked for their opinions on mitigating these concerns. Participants A and B mentioned GDPR as a mechanism that mitigates privacy and data protection risks. GDPR is a regulation closely tied to data protection concerns surrounding AI-driven recruitment processes. GDPR clearly states that an applicant must give a hiring company or agency explicit consent to collect, store, and/or use the applicant's data (Hunkenschroer and Kriebitz, 2023). Along with GDPR, the AI Act has also been introduced recently (Lukács and Váradi, 2023).

In question five, each recruiter listed different risks. Participant A found that the most significant risk of AI's implementation in recruitment could be the elimination of the human factor. From their experience, it is the perception of facial expressions and gestures that enriches Participant A's knowledge base regarding a certain client. Participant A finds that, as these details cannot yet be perceived by AI, the recruitment process would be of lower quality were the human factor eliminated. The loss of the human factor as a consequence of incorporating AI in recruitment is a previously explored ethical concern, though the terminology shifts partially from “dehumanisation” (Fritts and Cabrera, 2021) to the loss of human-centeredness (Hunkenschroer and Luetge, 2022). Participant A characterised this human factor as an experienced human recruiter's skills, such as reading facial cues. The human factor and its connection with AI recruitment have also been tied to the concept of autonomy, in that hiring decisions previously made by humans are now made by algorithms (Hunkenschroer and Kriebitz, 2023). Autonomy has also been explored in connection with the candidate's lessened ability to fully participate in the hiring process when faced with only an AI system in the screening or interview phase. The candidate is judged solely on their response, not on their emotional intelligence and soft skills (Giermindl et al., 2022). Participant B found that a potential risk of incorporating AI in recruitment could be the shrinking of recruitment teams, as jobs done by humans would be given to machines. Participant B focused on how this would affect the diversity of a recruitment office and the damage this could do to a company and its recruitment decisions in the long run.

An example of the potential negative impact of AI on a company's recruitment process has been explored in the literature section in the form of the Amazon incident (Dastin, 2022). Though the case of Amazon's AI hiring was rooted in prejudiced historical training data, the decisions of the AI hiring system had a detrimental effect on Amazon's reputation, and the system was immediately scrapped. Comparing this practical example with Participant B's suggested risk produces a similar consequence: the eroded reputation of a company based on its recruitment process. Of course, the conclusion of this comparison must be cautioned, as it was not AI's incorporation into recruitment per se that negatively affected the company's reputation; rather, it was the training data. The significance of the training and the quality of the input data have been explored in the prejudice portion of the literature review. Participant B's suggested risk and the Amazon scandal further underscore the necessity of the presence and quality of these aspects of AI recruitment.

Participant C viewed AI as a tool for analysing potential candidates' CVs. In terms of this use, Participant C reasoned that incorporating it into their recruitment process could lead to a lower quality of candidates. According to Participant C, not all candidates have elaborate CVs containing the right keywords, and it is necessary to contact them and discuss their competencies and expertise. This step would be skipped in a recruitment process in which AI handled the CV analysis. Consequently, some candidates who could be a perfect fit would be lost in the AI CV analysis. This proposed risk was researched; however, no formal literature was found to support or contradict the concern.

Participant D listed many risks, ranging from the loss of the human factor, as mentioned by Participant A, to implementation costs, potential inaccuracies in the recruitment predictions made by the system, negative candidate experiences and data security concerns. The risks mentioned by Participant D have already been partially covered. Participant D's concerns, such as inaccurate predictions and negative candidate experiences, touch on transparency and explainability, two critical factors that establish a valid and quality recruitment process. The decisions made by recruiters must be explainable: if a candidate inquires, the recruiter who interacted with them must be able to explain why the candidate was or was not hired. This ability to explain a hiring decision is not commonly present in an AI-based hiring process (London, 2019; Kot et al., 2021). Not only do hiring decisions need to be explainable, but the recruitment process itself must also be fair, with equity among the candidates; these are basic principles that underlie standard and accepted recruitment practices. Nevertheless, explainability can be achieved in an AI-based recruitment process through other means, such as the FAT-CAT model demonstrated by Lee and Cha (2023).

In question seven, the interviewed recruiters heavily emphasised the necessity of the human factor in recruitment. The results of the study by Gonzalez et al. (2022) can serve as a vantage point for this comparison: the attitudes of Participants A, B, C and D, who partook in the interviews for this work, are similar to those described in that study, in that the participants of our study tend to favour the human factor in the recruitment process. Participants B and D weighed the pros and cons of an entirely AI-run recruitment process, both mentioning the financial benefit of alleviating the cost of human capital and eliminating human bias. Participant D further described issues that the human factor in recruiting may inflict, such as mistakes in processing large data sets and inconsistencies. Overall, both Participants B and D regarded the human factor as irreplaceable and necessary in the recruitment process. Participant B believed that AI cannot perceive and judge what a recruiter can, while Participant D found that, as a recruiter, they can assess emotional intelligence, make context-aware judgments and build rapport with their candidates and clients, skills that AI does not possess. However, these beliefs can be compared with the existing literature on the full extent of what AI in recruitment is capable of. AI has moved beyond CV scanning to offer other services such as face and speech recognition and has even been used to evaluate a candidate's degree of honesty (Albert, 2019; Fraij and László, 2021). Participant D was concerned about the accuracy of these systems.

In question eight, two of the four interviewees suggested a similar path. Participant D mentioned that whether implementing AI into recruitment eliminates human bias depends on the quality of the data used to teach the system, algorithm fairness and ongoing system monitoring. This point has already been discussed: the quality of the training data and the way the system is taught form the foundation of ethically sound AI-based hiring practices. Participant A suggested that AI-based recruitment can be fair and eliminate human bias to a degree; however, it can also do the opposite. Again, the ethical standing of an AI-based recruitment system depends on how it is trained.

4.2 Discussion and practical implications

The interviews helped us explore ethical concerns present in AI's incorporation into recruitment and how these concerns can be mitigated to ensure fairness and effectiveness in the recruitment process while minimising the negative consequences for job seekers and the implementing organisations. The derived findings gravitated towards a larger emphasis on the human factor and its role in AI-based recruitment. It is possible that the participants' lack of extensive hands-on experience with AI-based recruitment could be an obstacle to the wider implementation of AI in their day-to-day business practice. Though the human factor in AI-based recruitment may not wholly qualify as an ethical concern of AI's integration in recruitment, its presence or elimination does prove to be an apprehension regarding the fairness and effectiveness of recruitment activities.

The analysis of the participants' answers found that neither the apprehensions of prejudice and discrimination nor those of privacy and data protection posed the biggest concern to the participants. The emphasis on the human factor within AI-based recruitment was one of the main trends in the analysis of data gathered through the interviews. The human factor has already been described in the existing literature as a quality of recruitment that can only be offered by a human recruiter (Fritts and Cabrera, 2021; Gonzalez et al., 2022). Participants A and D provided concrete examples of the human factor at work. Participant A described it as their ability to perceive and analyse facial expressions, an ability cultivated from experience in the field. Participant D found that the human factor was best exemplified by context-aware judgments and the ability to build rapport with one's candidates. The interviewed participants shared the belief that recruitment cannot be successful without the presence of the human factor. These views can be considered testimonies to the necessity of the human factor within recruitment. Nevertheless, what are the consequences of the loss of such a necessity?

Lacroux and Martin-Lacroux (2022) mentioned the black-box effect, which raises concerns about explainability, and how this decrease in the quality of a recruitment process can negatively impact a company's reputation. Gonzalez et al. (2022) shared results on candidates' perceptions of an AI-based recruitment process. Fritts and Cabrera (2021) noted that eliminating the human factor means eliminating nuance within a recruitment process and further illustrated how this can negatively impact a company's reputation and even its worth. It would be a complex task to evaluate whether the loss of the human factor, and the damage it could potentially do to a company, would be outweighed by the cost savings that drive the incorporation of AI into recruitment.

The findings and patterns further identified the need to focus on possible mitigation. The use of AI in recruitment has such impactful benefits that its popularity will grow in the near future (Michailidis, 2018; Fraij and László, 2021). Because of these advantages, the ethical concerns of AI-based recruitment must be mitigated. There has been an avid push for more research on the ethical concerns of AI recruitment, and indeed, the focus has shifted from the performance of these systems to ensuring their equality (London, 2019; Otoo, 2024). Raghavan et al. (2020) note the rising interest in automated hiring systems that are flawed in their practical use, making high-risk decisions in the workplace that could be threaded through with bias. Furthermore, Ajunwa (2022) urges companies to instil regular audits and mandatory data retention to protect candidates further. Very recently, new systems and technologies that aim to mitigate the ethical concerns of incorporating AI into hiring activities have been developed and introduced to the recruitment sector. One such new framework is Sapia.ai, which is employed by recruitment agencies and companies looking to hire new talent (Sapia, 2024). Programs such as InterpretML or FairLearn, developed by Microsoft, can be applied to AI systems and used to correct for biases, as demonstrated by Harris (2023) in a study that used these technologies to mitigate the age bias often present in the AI-conducted screening phase of recruitment.
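To make the bias-correction idea concrete: fairness toolkits such as FairLearn typically start from group fairness metrics computed over a model's decisions. The following is a minimal, self-contained sketch, not taken from the study or from FairLearn itself, of one such metric, the demographic parity difference, i.e. the gap in selection rates between candidate groups; the data, group labels, and thresholds are entirely hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Selection rate (share of positive decisions) per candidate group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        selected[group] += int(decision)
    return {g: selected[g] / total[g] for g in total}

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rates between any two groups.
    A value near 0 suggests group-level parity; large values flag bias."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening decisions (1 = shortlisted) for two age groups.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["under40"] * 4 + ["over40"] * 4

print(selection_rates(decisions, groups))                 # {'under40': 0.75, 'over40': 0.25}
print(demographic_parity_difference(decisions, groups))   # 0.5
```

In practice, libraries such as FairLearn compute this and related metrics (e.g. equalised odds) and pair them with mitigation algorithms that retrain or post-process the model; the sketch above only illustrates the underlying measurement that auditing, as urged by Ajunwa (2022), relies on.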

The push for research focusing not on AI's performance in recruitment activities but on the ethical components that threaten transparency, equality and explainability, the recently published literature on the ethical concerns of AI's incorporation into recruitment, and the technologies being developed to mitigate bias within AI systems are all signs of movement in the right direction. One could discuss to what extent, and how, AI could be incorporated to handle routine tasks that can be automated, saving recruiters time to focus on more demanding tasks that require a human way of thinking and decision-making. As previously mentioned in the research, AI can combine natural language processing (NLP) and facial expression processing (FEP) to process information; however, it is important to note that technology cannot yet fully detect an employee's personality. Movement is needed not only towards awareness of the risks that could occur as a result of an unethical AI recruitment system but also towards the innovation of mitigation tools and legislation, as mentioned previously in relation to the AI Act (Hocken and King, 2023; Tharkude, 2023).

5. Conclusions, limitations and recommendations for future research

This study brings to the wider audience and the scholarly community the perspectives of human resources professionals who will be using AI recruitment systems in the future; their views on the presently known ethical concerns of these systems are crucial pieces of knowledge. The qualitative insights from the four skilled recruiters, each with five or more years of experience in their field, interviewed in November 2023, offer a window into the opinions of those in charge of these AI systems, their audits and their training potential.

Though the generalisability of this study's small sample is low and the chosen qualitative method yields subjective data, the study provides novel observations from the perspective of a small open European economy, a member state of the European Union. It needs to be acknowledged that qualitative findings cannot be generalised or used to predict the results of a greater population; rather, they serve as a guide for future quantitative surveys distributed to a statistically representative sample of recruiters with knowledge about AI (Taylor et al., 2015). Furthermore, we acknowledge that three of the four interviewees did not feel comfortable answering the questions in English and chose the Czech translation of the questions instead. Subtle linguistic differences may have occurred during the translation of the questions; any resulting inconsistencies were mitigated to the best of the authors' abilities.

Future research should focus not on the newest innovations linking AI and recruitment but on studying the relationships and trends already present in AI-based recruitment today. As can be seen from the answers of the four participants interviewed, recruiters have opinions about the use of AI in recruitment, even though many may not yet have been put in a position where they must utilise it. There is thus a call for further research specifically concentrating on recruiters' perceptions of AI-based recruitment. The attitude of those at the helm of these new hiring formats is a worthwhile angle that could inform several different topics simultaneously. How does a recruiter's disapproving or distrusting attitude affect AI's integration into recruitment, and how can it be overcome? Can a recruiter's stance towards AI-based recruitment give rise to further ethical concerns? The answers to these crucial questions can further bolster and inform those who participate in regulating AI's presence in recruitment.

One recommendation for future research, suggested in the discussion, is a cost-benefit study comparing the risks and advantages of both scenarios: implementing AI in recruitment and not doing so. Such a study could consider the ethical issues presented in this work and weigh them against the cost savings companies strive for when incorporating AI into their recruitment activities. Its results would form a foundational set of evidence that could better serve the recruitment sector and legislators.

Modern advancements in AI recruitment are also significant, and they, too, should draw the attention of researchers. One recommendation would be to study the newest technologies available for assessing candidates through facial recognition and how candidates perceive this recruitment format. New legislation is being discussed, and new technology aiming to build ethical AI into recruitment is being developed. These new facets of the topic should be reviewed and analysed for their efficiency and validity in protecting the candidate and the company.

References

Abulibdeh, A., Zaidan, E. and Abulibdeh, R. (2024), “Navigating the confluence of artificial intelligence and education for sustainable development in the era of industry 4.0: challenges, opportunities, and ethical dimensions”, Journal of Cleaner Production, Vol. 437 No. 2024, p. 140527.

Ajunwa, I. (2022), “Race, labor, and the future of work”, in Houh, E., Bridges, K. and Carbado, D. (Eds), Oxford Handbook of Race and Law, Oxford University Press, Oxford, pp. 1-18.

Albert, E.T. (2019), “AI in talent acquisition: a review of AI-applications used in recruitment and selection”, Strategic HR Review, Vol. 18 No. 5, pp. 215-221, doi: 10.1108/SHR-04-2019-0024.

Basu, S., Majumdar, B., Mukherjee, K., Munjal, S. and Palaksha, C. (2023), “Artificial intelligence–HRM interactions and outcomes: a systematic review and causal configurational explanation”, Human Resource Management Review, Vol. 33 No. 1, p. 100893.

Bell, A., Nov, O. and Stoyanovich, J. (2023), “Think about the stakeholders first! Toward an algorithmic transparency playbook for regulatory compliance”, Data and Policy, Vol. 5, p. e12.

Czech Statistical Office (2024a), “Employment and unemployment as measured by the LFS – 4. quarter of 2023”, available at: www.czso.cz/csu/czso/ari/employment-and-unemployment-as-measured-by-the-lfs-4-quarter-of-2023 (accessed 12 February 2024).

Czech Statistical Office (2024b), “Business demography time series”, available at: www.czso.cz/csu/czso/res_cr (accessed 12 February 2024).

Dastin, J. (2022), “Amazon scraps secret AI recruiting tool that showed bias against women”, in Kirsten, M. (Ed), Ethics of Data and Analytics, Auerbach Publications, Boca Raton, pp. 296-299.

European Parliament (2024), “AI act”, available at: www.digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed 12 February 2024).

Fraij, J. and László, V. (2021), “A literature review: Artificial intelligence impact on the recruitment process”, International Journal of Engineering and Management Sciences, Vol. 6 No. 1, pp. 108-119.

Fritts, M. and Cabrera, F. (2021), “AI recruitment algorithms and the dehumanisation problem”, Ethics and Information Technology, Vol. 23 No. 4, pp. 791-801, doi: 10.1007/s10676-021-09615-w.

Giermindl, L.M., Strich, F., Christ, O., Leicht-Deobald, U. and Redzepi, A. (2022), “The dark sides of people analytics: reviewing the perils for organisations and employees”, European Journal of Information Systems, Vol. 31 No. 3, pp. 410-435, doi: 10.1080/0960085X.2021.1927213.

Gonzalez, M.F., Liu, W., Shirase, L., Tomczak, D.L., Lobbe, C.E., Justenhoven, R. and Martin, N.R. (2022), “Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes”, Computers in Human Behavior, Vol. 130, p. 107179.

Harris, C. (2023), “Mitigating age biases in resume screening AI models”, The International FLAIRS Conference Proceedings, Vol. 36 No. 2023, pp. 1-8, doi: 10.32473/flairs.36.133236.

Hinkle, C. (2021), “The modern lie detector: AI-Powered affect screening and the employee polygraph protection act (EPPA)”, Georgetown Law Journal, Vol. 109 No. 5, pp. 1201-1263.

Hocken, E. and King, G. (2023), “AI in the hiring process”, Strategic HR Review, Vol. 22 No. 3, pp. 81-84.

Hunkenschroer, A.L. and Kriebitz, A. (2023), “Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring”, AI and Ethics, Vol. 3 No. 1, pp. 199-213, doi: 10.1007/s43681-022-00166-4.

Hunkenschroer, A.L. and Luetge, C. (2022), “Ethics of AI-enabled recruiting and selection: a review and research agenda”, Journal of Business Ethics, Vol. 178 No. 4, pp. 977-1007.

Chen, Z. (2023), “Collaboration among recruiters and artificial intelligence: removing human prejudices in employment”, Cognition, Technology and Work, Vol. 25 No. 1, pp. 135-149.

Islam, M., Mamun, A.A., Afrin, S., Ali Quaosar, G.A. and Uddin, M.A. (2022), “Technology adoption and human resource management practices: the use of artificial intelligence for recruitment in Bangladesh”, South Asian Journal of Human Resources Management, Vol. 9 No. 2, pp. 324-349.

Kalu, F.A. and Bwalya, J.C. (2017), “What makes qualitative research good research? An exploratory analysis of critical elements”, International Journal of Social Science Research, Vol. 5 No. 2, pp. 43-56.

Kambur, E. and Yildirim, T. (2023), “From traditional to smart human resources management”, International Journal of Manpower, Vol. 44 No. 3, pp. 422-452.

Koechling, A., Wehner, M.C. and Warkocz, J. (2023), “Can I show my skills? Affective responses to artificial intelligence in the recruitment process”, Review of Managerial Science, Vol. 17 No. 6, pp. 2109-2138.

Köchling, A. and Wehner, M.C. (2020), “Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development”, Business Research, Vol. 13 No. 3, pp. 795-848.

Kot, S., Hussain, H.I., Bilan, S., Haseeb, M. and Mihardjo, L.W. (2021), “The role of artificial intelligence recruitment and quality to explain the phenomenon of employer reputation”, Journal of Business Economics and Management, Vol. 22 No. 4, pp. 867-883.

Lacroux, A. and Martin-Lacroux, C. (2022), “Should I trust the artificial intelligence to recruit? Recruiters' perceptions and behavior when faced with algorithm-based recommendation systems during resume screening”, Frontiers in Psychology, Vol. 13, p. 895997.

Lee, C. and Cha, K. (2023), “FAT-CAT—explainability and augmentation for an AI system: a case study on AI recruitment-system adoption”, International Journal of Human-Computer Studies, Vol. 171, p. 102976, doi: 10.1016/j.ijhcs.2022.102976.

LinkedIn (2024), “VIP: AI recruitment CZ/SK”, available at: www.linkedin.com/groups/12775862/ (accessed 12 February 2024).

Lisa, A.K. and Talla Simo, V.R. (2021), “An in-depth study on the stages of AI in recruitment process of HRM and attitudes of recruiters and recruitees towards AI in Sweden”, Umeå, Sweden, available at: www.urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184521

London, A.J. (2019), “Artificial intelligence and black-box medical decisions: accuracy versus explainability”, Hastings Center Report, Vol. 49 No. 1, pp. 15-21, doi: 10.1002/hast.973.

Lukács, A. and Váradi, S. (2023), “GDPR-compliant AI-based automated decision-making in the world of work”, Computer Law and Security Review, Vol. 50, p. 105848, doi: 10.1016/j.clsr.2023.105848.

Mammadova, R. (2022), “The contextual analysis of HRM practices in multinational companies in the Czech Republic”, Organization and Human Capital Development, Vol. 1 No. 2, pp. 1-15.

Michailidis, M.P. (2018), “The challenges of AI and blockchain on HR recruiting practices”, Cyprus Review, Vol. 30 No. 2, pp. 169-180.

Nawaz, N. (2019), “Artificial intelligence interchange human intervention in the recruitment process in Indian software industry”, SSRN Scholarly Paper 3521912, doi: 10.2139/ssrn.3521912.

Otoo, F.N.K. (2024), “Assessing the influence of financial management practices on organisational performance of small-and medium-scale enterprises”, Vilakshan-XIMB Journal of Management, doi: 10.1108/XJM-09-2023-0192.

Pan, Y., Froese, F., Liu, N., Hu, Y. and Ye, M. (2022), “The adoption of artificial intelligence in employee recruitment: the influence of contextual factors”, The International Journal of Human Resource Management, Vol. 33 No. 6, pp. 1125-1147.

Patil, B.S. and Priya, M.S.R. (2024), “HR data analytics and evidence based practice as a strategic business partner”, Vilakshan-XIMB Journal of Management, doi: 10.1108/XJM-07-2023-0148. (online first).

Raghavan, M., Barocas, S., Kleinberg, J. and Levy, K. (2020), “Mitigating bias in algorithmic hiring: evaluating claims and practices”, Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 469-481. doi: 10.1145/3351095.3372828.

Sapia (2024), “An AI hiring firm says it can predict job hopping based on your interviews”, available at: www.sapia.ai/resources/blog/chat-interview-predicts-job-hopping/ (accessed 12 February 2024).

Sharma, A. (2018), “How AI reinvented hiring practice at L'Oréal”, People Matters, available at: www.peoplematters.in/article/techhr-2018/how-the-worldslargest-cosmetic-company-transformed-its-hiring-practicewith-ai-19006

Slaughter, J.E. and Allen, D.G. (Eds) (2024), Essentials of Employee Recruitment: Individual and Organisational Perspectives, Taylor and Francis, Oxfordshire.

Tambe, P., Cappelli, P. and Yakubovich, V. (2019), “Artificial intelligence in human resources management: challenges and a path forward”, California Management Review, Vol. 61 No. 4, pp. 15-42.

Taylor, S.J., Bogdan, R. and DeVault, M. (2015), Introduction to Qualitative Research Methods: A Guidebook and Resource, John Wiley and Sons, Hoboken, NJ.

Tharkude, D. (2023), “Challenges of adopting AI technology with special reference to HR practices and employees' acceptability and accountability”, in Tyagi, P., Chilamkurti, N., Grima, S., Sood, K. and Balusamy, B. (Eds), The Adoption and Effect of Artificial Intelligence on Human Resources Management, Part B, Emerald Publishing Limited, Leeds, pp. 45-64.

Van Dijk, N., Casiraghi, S. and Gutwirth, S. (2021), “The ‘ethification’ of ICT governance: artificial intelligence and data protection in the European Union”, Computer Law and Security Review, Vol. 43, p. 105597.

Viljanen, M. and Parviainen, H. (2022), “AI applications and regulation: mapping the regulatory strata”, Frontiers in Computer Science, Vol. 3 No. 2022, p. 779957.

Yu, K.Y.T. and Cable, D.M. (Eds) (2014), The Oxford Handbook of Recruitment, Oxford Library of Psychology, Oxford.

Zhang, Y. (2023), “The impact of ChatGPT on HR recruitment”, Journal of Education, Humanities and Social Sciences, Vol. 19, pp. 40-44.

Corresponding author

Ondřej Dvouletý can be contacted at: odvoulety@unyp.cz