AI governance and ethics: using AI responsibly in recruiting


Written by

Udo Wichmann


Imagine you are the head of HR at a company. Applications are piling up and time is short, so the use of artificial intelligence (AI) in recruiting sounds tempting. In fact, around 70% of HR departments are planning to use AI in applicant selection or are already doing so. At the same time, there is uncertainty: according to a recent survey, almost half of applicants believe that AI selection tools make more biased decisions than human recruiters. So how can AI be used without jeopardizing fairness and humanity? This blog article provides answers: we look at the opportunities and risks, shed light on AI governance and ethics in HR processes, and give you specific tips on how to use AI responsibly, and in line with the Diversity Charter, in recruiting and applicant selection.

Opportunities and challenges: AI in recruiting between hype and skepticism

AI can work wonders in recruiting. It completes routine tasks such as screening CVs, scheduling appointments or matching candidates to jobs in seconds. This gives you more time for interpersonal relationships 😊. Many companies are already reporting efficiency gains: AI tools speed up processes and help identify suitable talent in large applicant pools. This can be an advantage, especially in highly competitive fields such as IT and finance. As an HR consultancy and career advisor, we see in these sectors that AI can help to identify the right specialists and managers more quickly. But there is a downside to the hype: if the algorithms are built incorrectly, there is a risk of bias and errors.

A double-edged sword: on the one hand, AI takes the pressure off the recruiting team; on the other hand, there is a risk of reinforcing unwanted biases. Studies show that while AI-supported recruiting can increase quality and speed, algorithmic biases can sometimes lead to discriminatory decisions, for example based on gender or ethnic origin (nature.com). A well-known example is Amazon's experimental recruiting tool, which systematically disadvantaged women due to data bias and was therefore scrapped in 2018. More recent tests with generative AI are also alarming: in one experiment, GPT-3.5 preferred certain names on CVs and thereby disadvantaged entire groups to an extent that would clearly fall short of employment-law anti-discrimination standards. Such cases show that uncontrolled use of AI can mean automated discrimination on a large scale. No wonder skepticism remains: according to organizational psychologist Adam Grant, while many professionals are fascinated by how AI can take monotonous work off their hands, there is also concern about job loss. It is up to us to take these concerns seriously and use AI in a way that relieves the burden without disempowering people.

Algorithmic fairness: diversity as a guiding principle 🤝🌈

Recruiting determines who gets career opportunities, so AI must be held to a particularly high standard of fairness here. As a recruitment consultancy for managers in IT and finance (and a member of the Diversity Charter), we know this: diversity and equal opportunities are a must. Algorithmic fairness means that AI systems should make objective and unbiased judgments. But this only works if we actively train them to do so. AI is not neutral per se; it reflects the values and data with which it is fed. AI ethics expert Kate Crawford warns that many systems cement existing power structures, as they are primarily shaped by a homogeneous group of developers (often white, male perspectives). Leading researcher Timnit Gebru also emphasizes that without external controls, companies will hardly ensure on their own that their AI does not perpetuate prejudice: "We need regulation, and we need more than just profit as an incentive," says Gebru; internal ethics guidelines alone are not enough.

What does this mean in practice? First of all: identify and eliminate bias! Take a close look at your AI tools' training data. Are all groups adequately represented? Are stereotypical patterns unintentionally reinforced? A recent review study found that algorithmic biases often stem from unbalanced data sets or developer bias. The good news: there are technical solutions. For example, data can be cleaned and diversity checks performed before an algorithm is allowed to influence career decisions. One tip from research is to train AI systems with balanced data sets and conduct regular audits. Some organizations use so-called fairness forensics: they specifically test AI for disadvantages before live deployment, for example by simulating different applicant profiles. This approach also aligns with the principles of the Diversity Charter: diversity as a standard in the data ultimately means diversity in the results.

Secondly, human judgment is needed as a corrective. AI can support a recruiter, but should not completely replace them. On the one hand, because humans contribute skills such as empathy and contextual knowledge; on the other, because applicants feel they are treated more fairly if a human ultimately has the last word. Recent studies show something striking: female candidates in particular feel a lack of respect when an algorithm is the sole judge of them. They want to be seen as a whole person, not just a data set. The "human touch" therefore remains worth its weight in gold. Why not use AI as a career companion that makes suggestions, while the final decision is discussed as a team? By combining human and machine, you increase both fairness and acceptance among applicants.

Transparency and traceability: Creating trust 🤔✨

Alongside fairness, transparency is the second key word for ethical AI governance in recruiting. Applicants have a right to know whether and how AI is used in the selection process. Many candidates can sense when an algorithm is involved anyway; surveys show that 79% of employees want to know when AI is used in the application process. The reasons are obvious: applicants want to be sure they aren't falling victim to a black-box decision, and they want to be able to understand the selection criteria. As an HR manager, you should actively meet this need. Communicate openly when you use AI tools, for example by posting a note on the job portal ("Note: The AI software used supports the pre-selection process; the final decision is made by our team.").

Transparency has a direct impact on employer branding. By explaining how the AI works, you signal respect and build trust. It is important that you understand exactly what the AI does. So ask your providers: What criteria does the tool use to sort CVs? Can you explain why candidate X was selected? If a system cannot do this, caution is advised. Traceability also means that you can subsequently justify why a person was hired or rejected. Ideally, the decision can be explained step by step, either because the AI uses explainable models or because you have incorporated human feedback. Remember: in sensitive areas such as personnel decisions, regulatory transparency requirements are already emerging (think of the EU AI Act). It is therefore doubly worthwhile to be a proactive pioneer here.

Another aspect of traceability is accountability. AI governance also means defining clear responsibilities: Who regularly checks the results of the AI selection? Who intervenes if something goes wrong? One principle applies here: technology must never be used as an excuse. Even if an algorithm has suggested rejecting someone, you and your team are ultimately responsible for the decision. This conscious approach creates trust both internally and externally. Sam Altman, the CEO of OpenAI (the company behind ChatGPT), said in a hearing before the US Senate that when AI goes wrong, it can go really wrong, which is why oversight is absolutely critical. Take this plea to heart: create mechanisms of control within the company before they become an external obligation.

Ethical use of AI: 5 recommendations for HR

So how can you use AI ethically and effectively in recruiting? Finally, we have put together a compact list of best practices, from algorithmic fairness to traceability:

  1. Ensure diversity in data and development: Make sure that the training data of your AI system is diverse and up-to-date. Include different perspectives, for example by involving employees from different areas or with different backgrounds in the selection and testing of AI tools. This will reduce bias right from the start.

  2. Conduct bias checks and audits: Regularly test your AI for biases. For example, you can run fictitious profiles to see if certain groups systematically perform worse. Document these tests and adjust the algorithm if necessary. External audits or certifications (similar to a TÜV for AI) can provide additional assurance.

  3. Put people in the loop (human-in-the-loop): Automate tasks, not complete decisions, or in the words of AI thought leader Andrew Ng: "Govern applications, not technology. Automate tasks, not jobs." Let AI do the preliminary work (e.g. make pre-selections or take over routine communication), but always involve human decision-makers before final judgments are made. This improves the quality of results and gives candidates the feeling that they are being considered fairly and personally.

  4. Promote transparency and education: Develop guidelines on how and when applicants are informed when AI is involved in the process. Train your recruiting team so that everyone understands the basics of AI and can communicate confidently. Clear internal AI governance (including guidelines on its use) helps to ensure that everyone is pulling in the same direction. Remember to be transparent about what criteria the algorithm uses and make sure that these criteria are job-relevant and non-discriminatory.

  5. Ethics and compliance as decision criteria: Choose AI tools and service providers wisely. Go for providers that demonstrably pay attention to AI ethics, disclose their models and adhere to regulatory requirements. It is better to test a tool a little longer (or run a pilot) than to hastily introduce something that later makes negative headlines. Keep up to date with the latest studies, laws and industry initiatives (e.g. working groups on AI ethics or the Diversity Charter) so that your recruitment consultancy in IT and finance is always up to date when it comes to the responsible use of AI.
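To make point 2 more concrete: a bias check with fictitious profiles can be sketched in a few lines. The following is a minimal, hypothetical example in Python; `score_candidate` is a stand-in for your actual AI tool's scoring API (the function name and fields are our assumptions, not a real vendor interface). It generates paired fictitious profiles that are identical except for the group label, then compares selection rates per group, a simple demographic-parity check.

```python
import random

# Hypothetical stand-in for the AI tool's scoring function. In a real
# audit you would call your vendor's API with a full CV. Here the score
# depends only on job-relevant fields, so the audit should come out fair.
def score_candidate(profile: dict) -> float:
    return 0.1 * profile["years_experience"] + 0.5 * profile["skills_match"]

def audit_selection_rates(profiles, threshold=1.0, group_key="gender"):
    """Compare selection rates per group (demographic parity check).

    Returns (ratio, rates): the ratio of the lowest to the highest
    group selection rate, plus the per-group rates. A common rule of
    thumb (the "four-fifths rule") flags ratios below 0.8.
    """
    selected, totals = {}, {}
    for p in profiles:
        g = p[group_key]
        totals[g] = totals.get(g, 0) + 1
        if score_candidate(p) >= threshold:
            selected[g] = selected.get(g, 0) + 1
    rates = {g: selected.get(g, 0) / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Paired fictitious profiles: identical qualifications, varied group label.
random.seed(42)
profiles = []
for _ in range(200):
    base = {"years_experience": random.randint(0, 15),
            "skills_match": random.random() * 2}
    for gender in ("female", "male"):
        profiles.append({**base, "gender": gender})

ratio, rates = audit_selection_rates(profiles)
print(f"selection-rate ratio: {ratio:.2f}")
```

Because the pairs are identical and the scorer ignores the group label, the ratio here is 1.0; with a real tool, a ratio well below 1.0 would be the signal to investigate and document, as recommended above.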

With these measures, you can turn AI into a real career companion for you as an HR professional and for candidates. You use technology to make career planning and personnel decisions smarter without losing sight of the values that define good HR work: Fairness, transparency and appreciation.

Conclusion: Stay human – with AI as support 🤗

AI in recruiting is no longer a sci-fi topic of the future, but a reality. The question is not whether, but how you use it. Today, you have the opportunity to shape change proactively and ethically. Approach AI with a curious openness, but also with common sense. As Adam Grant has aptly observed, many people see AI primarily as an opportunity to offload tedious tasks; take advantage of this opportunity, but keep the important decisions under human control. This will keep your recruiting efficient and empathetic.

Finally, a practical call to action: Start now! For example, check an ongoing selection process to see whether unconscious bias is occurring. Get feedback from applicants: Do they feel they are being treated fairly, especially when AI tools are involved? Raise awareness in your team, perhaps with a short workshop on AI ethics. And if you would like support with this, we will be happy to assist you 😃. As an experienced recruitment consultancy, we can help you integrate AI responsibly into your recruitment process. Contact us today and let's work together to build a recruitment process that is both innovative and inclusive. Your team and the talent of tomorrow will thank you! 🚀👫
