AI governance and ethics: using AI responsibly in recruiting 🤔

Written by

Udo Wichmann

Published on

May 28, 2025

Imagine you're the head of HR at a company. Applications are piling up, time is short, so the use of artificial intelligence (AI) in recruiting sounds tempting 🚀. In fact, around 70% of HR departments are planning to use AI in applicant selection or are already doing so. But at the same time, uncertainty prevails: according to a recent survey, almost half of applicants believe that AI selection tools make more biased decisions than human recruiters. So how can you use AI without compromising fairness and humanity? You'll find answers in this blog article: we look at opportunities and risks, examine AI governance and ethics in HR processes, and give you concrete tips on how to use AI responsibly, in line with the Diversity Charter, in recruiting and applicant selection.

Opportunities and challenges: AI in recruiting between hype and skepticism

AI can work wonders in recruiting. It completes routine tasks like resume screening, scheduling appointments, or matching candidates to jobs in seconds. This gives you more time for interpersonal interactions 😊. Many companies are already reporting efficiency gains – AI tools accelerate processes and help filter suitable talent from large applicant pools. This can be a particular advantage in highly competitive fields like IT and finance. As a recruitment consultancy and career coach, we see in these industries that AI can help identify the right specialists and managers more quickly. But the hype has a downside: If the algorithms are built incorrectly, there's a risk of bias and errors.

A double-edged sword 🔪: on the one hand, AI relieves the burden on recruiting teams; on the other, it risks reinforcing unwanted bias. Studies show that while AI-supported recruiting can improve quality and speed, algorithmic distortions can sometimes lead to discriminatory decisions, for example based on gender or ethnic origin. A well-known example is Amazon's experimental recruiting tool, which, as became known in 2018, systematically disadvantaged women due to data bias and was therefore discontinued. More recent tests with generative AI are also alarming: in one experiment, GPT-3.5 favored certain names on resumes, thereby disadvantaging entire groups, to an extent that clearly fails employment discrimination benchmarks. Such cases show that uncontrolled use of AI can lead to automated discrimination on a large scale. No wonder skepticism remains: according to organizational psychologist Adam Grant, many professionals are fascinated by how AI relieves them of monotonous work, yet they also fear job loss. It's up to us to take these concerns seriously and use AI in a way that relieves people without disempowering them.

Algorithmic fairness: diversity as a guiding principle 🤝🌈

Recruiting is particularly important in determining who gets career opportunities – and this is where AI must be particularly fair. As a personnel consultancy for executives in IT and finance (and a member of the Diversity Charter), we know that diversity and equal opportunities are a must. Algorithmic fairness means that AI systems should make objective and unbiased judgments. But that only works if we actively train them to do so. AI is not neutral per se – it reflects the values and data it is fed. AI ethics expert Kate Crawford warns that many systems cement existing power structures because they are primarily shaped by a homogeneous group of developers (often white, male perspectives). Leading researcher Timnit Gebru also emphasizes that without external controls, companies are unlikely to ensure on their own that their AI is free from prejudice: "We need regulation, and we need more than just profit as an incentive," says Gebru – internal ethics guidelines alone are not enough.

What does this mean in practice? First of all: identify and eliminate bias! Take a close look at your AI tools' training data. Are all groups adequately represented? Are stereotypical patterns unintentionally reinforced? A recent review study found that algorithmic biases are often due to unbalanced data sets or developer bias. The good news: There are technical solutions. For example, data can be cleaned and diversity checks can be performed before an algorithm becomes a career planner. One tip from research is to train AI systems with impartial data sets and conduct regular audits. Some organizations use so-called fairness forensics – they specifically test AI for disadvantages before live deployment, for example by simulating different applicant profiles. This approach also aligns with the principles of the Diversity Charter: Diversity as a standard in the data ultimately means diversity in the results.
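One way to make such a "fairness forensics" check concrete is the four-fifths (80%) rule commonly used in adverse-impact analysis: compare each group's selection rate to that of the best-performing group and flag any ratio below 0.8. The minimal sketch below is illustrative only; the group labels and outcomes are invented, and a real audit would use your tool's actual screening results for simulated or historical applicant profiles.

```python
# Minimal sketch of a fairness check using the four-fifths (80%) rule.
# Groups and outcomes below are fictitious, purely for illustration.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of booleans (True = selected)."""
    return {group: sum(sel) / len(sel) for group, sel in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Values below 0.8 are a common red flag for adverse impact."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Simulated screening outcomes for fictitious applicant profiles
outcomes = {
    "group_a": [True, True, False, True, True],    # 4/5 selected -> 0.8
    "group_b": [True, False, False, False, True],  # 2/5 selected -> 0.4
}

ratios = adverse_impact_ratios(outcomes)
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
print(ratios)   # group_b's ratio is 0.4 / 0.8 = 0.5 -> below threshold
print(flagged)  # ['group_b']
```

A check like this is deliberately simple: it won't prove an algorithm is fair, but run regularly and documented, it catches the most obvious disadvantages before live deployment.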

Secondly, human judgment is needed as a corrective. AI can support a recruiter, but it should not replace them completely. On the one hand, because humans bring skills such as empathy and contextual knowledge, and on the other, because applicants feel more fairly treated when a human has the final say. New studies show something striking: candidates feel a lack of respect when an algorithm alone judges them. They want to be seen as a whole person, not just as a data set. The "human touch" therefore remains invaluable. Why not use AI as a career coach that makes suggestions, while the final decision is discussed with the team? By combining humans and machines, you increase both fairness and acceptance among applicants.

Transparency and traceability: Creating trust 🤔✨

Alongside fairness, transparency is the second key word for ethical AI management in recruiting. Applicants have a right to know whether and how AI is used in the selection process. Many candidates can sense when an algorithm is involved anyway – surveys show that 79% of employees want to know when AI is used in the application process. The reasons are obvious: applicants want to be sure they aren't falling victim to a black-box decision and want to be able to understand the selection criteria. As an HR manager, you should actively meet this need. Communicate openly when you use AI tools: for example, by posting a note on the job portal ("Note: The AI software used supports the pre-selection process; the final decision is made by our team.").

Transparency directly contributes to employer branding . By explaining how the AI works, you signal respect and build trust. It's important to understand exactly what the AI does. So ask your providers: What criteria does the tool use to sort resumes? Can you explain why candidate X was selected? If a system can't do that, caution is advised. Traceability also means that you can subsequently explain why a person was hired or rejected. Ideally, the decision-making process can be explained step by step – either because the AI uses explainable models or because you've incorporated human feedback. Remember: In sensitive areas like personnel decisions, regulatory requirements for transparency could soon even be introduced (think of the EU AI Regulation). So it's doubly worthwhile to be a proactive pioneer here.

Another aspect of traceability is accountability. AI governance also means clearly defining responsibilities: Who regularly reviews the results of AI selection? Who intervenes if something goes wrong? One principle applies here: technology should never serve as an excuse. Even if an algorithm has suggested rejecting someone, ultimately you and your team bear responsibility for the decision. This conscious approach builds trust both internally and externally. Sam Altman, CEO of OpenAI (the company behind ChatGPT), said in a hearing before the US Senate: when AI goes wrong, it can go really wrong – that's why oversight is absolutely critical. Take this plea to heart: create internal control mechanisms within your company before they become a requirement from outside.

Ethical use of AI: 5 recommendations for HR

So, how can you use AI ethically and effectively in recruiting? Finally, we've compiled a compact best-practice list for you – from algorithmic fairness to traceability:

  1. Ensure diversity in data and development: Make sure that your AI system's training data is diverse and up-to-date. Incorporate different perspectives, for example by involving employees from different departments or backgrounds in the selection and testing of AI tools. This way, you can reduce bias right from the start.

  2. Conduct bias checks and audits: Regularly test your AI for biases. For example, you can run fictitious profiles to see if certain groups systematically perform worse. Document these tests and adjust the algorithm if necessary. External audits or certifications (similar to a TÜV for AI) can provide additional assurance.

  3. Human-in-the-loop: Automate tasks, not entire decisions – or in the words of AI pioneer Andrew Ng: "Govern applications, not technology. Automate tasks, not jobs." Let AI do the legwork (e.g., pre-screening or handling routine communication), but always involve human decision-makers before final decisions are made. This improves the quality of the results and gives candidates the feeling of being considered fairly and personally.

  4. Promote transparency and education: Develop guidelines on how and when applicants are informed that AI is involved in the process. Train your recruiting team so everyone understands the basics of AI and can communicate confidently. Clear internal AI governance (including guidelines for use) helps ensure everyone is on the same page. Remember to be transparent about the criteria the algorithm uses – and ensure these criteria are job-relevant and non-discriminatory.

  5. Ethics and compliance as decision criteria: Choose AI tools and service providers carefully. Rely on providers who demonstrably adhere to AI ethics, disclose their models, and comply with regulatory requirements. It's better to test a tool for a while (or conduct a pilot) than to hastily introduce something that will later generate negative headlines. Stay up to date on current studies, laws, and industry initiatives (e.g., working groups on AI ethics or the Charta der Vielfalt community) – this way, your recruitment consultancy in IT and finance will always be up to date on responsible AI use.
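The human-in-the-loop recommendation above can be sketched as a simple routing rule: the AI score only prioritizes the review queue and never issues a final rejection on its own. The thresholds, step names, and the matching score itself are illustrative assumptions, not a recommendation for specific values.

```python
# Sketch of a human-in-the-loop routing rule: the AI pre-sorts candidates,
# but every final decision involves a human. Thresholds and step names
# are hypothetical, purely for illustration.

def route_candidate(ai_score, auto_reject_allowed=False):
    """Return the next process step for an AI matching score in [0, 1].
    The AI never decides alone; it only prioritizes the review queue."""
    if ai_score >= 0.75:
        return "priority_human_review"   # strong match: reviewed first
    if ai_score >= 0.40:
        return "standard_human_review"   # unclear match: normal queue
    # Even weak matches reach a human unless auto-rejection is explicitly
    # permitted - and documented - in your AI governance policy.
    return "rejected_logged" if auto_reject_allowed else "human_spot_check"

print(route_candidate(0.9))   # priority_human_review
print(route_candidate(0.5))   # standard_human_review
print(route_candidate(0.1))   # human_spot_check
```

The design point is that the default path for every score band ends at a person; automated rejection is an opt-in exception that must be justified by policy, not the silent default of the tool.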

With these measures, you'll turn AI into a true career companion —for you as an HR professional and for candidates. You'll use the technology to make career planning and personnel decisions smarter, without losing sight of the values that define good HR work: fairness, transparency, and appreciation.

Conclusion: Stay human – with AI as support 🤗

AI in recruiting is no longer a sci-fi future topic, but reality. The question is not whether, but how you use it. Today you have the opportunity to shape change proactively and ethically. Approach AI with curious openness, but also with common sense. As Adam Grant aptly observed, many see AI primarily as an opportunity to outsource tedious tasks – seize this opportunity, but keep the important decisions under human control. This way, your recruiting remains efficient and empathetic.

Finally, a practical call to action: start now! For example, review an ongoing selection process to see if unconscious bias is occurring. Get feedback from applicants: do they feel they're being treated fairly, especially when AI tools are involved? Raise awareness within your team – perhaps through a short workshop on AI ethics. And if you need support with this, we're happy to help 😃. As an experienced HR consultancy, we can help you responsibly integrate AI into your recruitment process. Contact us today, and let's build a recruiting process together that's both innovative and inclusive. Your team and the talents of tomorrow will thank you! 🚀👫
