Building ethical AI governance in HR: From policy to practice
30 Apr, 2026

 

Mandisi Dube, Client Executive at 21st Century

 

AI systems learn from historical data. If that data reflects past bias in hiring, performance ratings or pay decisions, the algorithm will simply amplify it. A poorly governed recruitment model, pay structure, or grading system will not only replicate existing inequities but also hardcode them into future decisions, creating a self-reinforcing loop in which flawed input metrics produce flawed output recommendations. Over time, systemic errors become masked by apparent algorithmic precision, making them harder to detect and even harder to correct for regulatory compliance.

 

The Draft National AI Policy identifies several specific risks that make governance urgent. Fairness risk means models can reproduce past bias or create new unfair outcomes. Privacy risk arises because HR data is sensitive and must not be overused, misused, or exposed. Transparency risk means managers and employees may not understand how a model reached a recommendation. Accountability risk blurs responsibility when a bad decision is made. Data quality risk allows poor or inconsistent data to produce misleading outputs. Finally, governance risk emerges when tools are introduced faster than the organisation’s policies, controls and oversight structures can keep pace.

 

Thus, AI governance in HR must begin not with the algorithm itself but with a critical audit of the historical data that feeds it. Without this foundational step, even the most sophisticated analytics will deliver flawed outcomes and expose the organisation to POPIA liability, EEA non-compliance and misalignment with the national AI policy’s emphasis on fairness, non-discrimination and human-centred values.

 

Eight Building Blocks of a Practical HR AI Governance Framework

 

Drawing on the Draft National AI Policy’s Strategic Pillar 3 (Responsible Governance) and Strategic Pillar 4 (Ethical and Inclusive AI), responsible intent becomes practical control through eight connected building blocks.

 

1. Governance principles: Fairness, transparency, accountability, privacy, security, explainability, human oversight, and data quality form the foundation, aligning with the policy’s six key principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

 

Two principles in particular, explainability and data quality, reflect the specific demands of HR decision-making, where employees are entitled to understand how a recommendation about them was reached.

 

2. Use-case classification: Separate low, medium and high-risk use cases so that controls are proportional to the seriousness of the people decision, reflecting the policy’s risk-based approach inspired by international frameworks such as the EU AI Act (a simple illustration of such a classification appears after the eighth building block).

 

3. Data governance: Define approved data sources, ownership, access, retention, lineage and quality controls, consistent with POPIA and the policy’s emphasis on data protection by design and default.

 

4. Model governance: Document purpose, input variables, validation, fairness testing, monitoring and retirement rules, incorporating the policy’s call for regular algorithmic audits and bias testing.

 

5. Human oversight: Decide what the system can recommend and what a human must always review or approve, directly applying the policy’s Human-in-the-Loop (HITL) approach and the principle of human control of technology.

 

6. Policy and compliance alignment: Translate the framework into practical rules, standards and approval checkpoints, ensuring alignment with the proposed AI Ethics Board, AI Regulatory Authority and sectoral strategies.

 

7. Monitoring and audit: Track bias, drift, complaints, exceptions, overrides and performance over time, as required by the policy’s monitoring processes and mandatory reporting frameworks (an illustrative drift and override check also appears after the eighth building block).

 

8. Communication and trust: Explain what the tool does, what data it uses and how employees can question an outcome, supporting the policy’s goals of sufficient transparency and sufficient explainability.
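
To make building blocks 2 and 5 concrete, the short Python sketch below shows one way a team might record use-case risk tiers and the minimum controls attached to each. It is a minimal illustration only: the tier names, use cases and control values are assumptions for this example, not requirements taken from the Draft National AI Policy.

from dataclasses import dataclass

@dataclass
class RiskTier:
    name: str
    human_review_required: bool    # building block 5: human oversight
    fairness_testing: str          # building block 4: model governance
    audit_frequency_months: int    # building block 7: monitoring and audit

# Hypothetical tiers; a real register would be agreed by the governance forum.
TIERS = {
    "low": RiskTier("low", False, "annual", 12),
    "medium": RiskTier("medium", True, "per release", 6),
    "high": RiskTier("high", True, "per release plus independent audit", 3),
}

# Hypothetical classification of HR use cases by the seriousness of the people decision.
USE_CASE_TIER = {
    "training-course suggestions": "medium",
    "candidate CV screening": "high",
    "performance-rating recommendation": "high",
}

def controls_for(use_case: str) -> RiskTier:
    # Unclassified use cases default to "high" so that controls fail safe.
    return TIERS[USE_CASE_TIER.get(use_case, "high")]

for case in USE_CASE_TIER:
    tier = controls_for(case)
    print(f"{case}: tier={tier.name}, human review required={tier.human_review_required}")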
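
Building block 7 can be made similarly tangible. The sketch below, again illustrative only, checks two of the signals it names, manager overrides and score drift, against limits that an organisation would have to agree internally; the threshold values and field names here are assumptions for the example.

from statistics import mean

def override_rate(decisions):
    # Share of recommendations that a human reviewer overrode.
    # Each decision is a dict with a boolean "overridden" flag.
    return sum(d["overridden"] for d in decisions) / len(decisions)

def mean_score_shift(baseline_scores, current_scores):
    # Absolute shift in the average model score since the baseline period;
    # a large shift can signal drift in the data or the population being scored.
    return abs(mean(current_scores) - mean(baseline_scores))

def needs_review(decisions, baseline_scores, current_scores,
                 max_override_rate=0.25, max_shift=0.10):
    # Flag the model for review when overrides or drift breach the agreed limits.
    return (override_rate(decisions) > max_override_rate
            or mean_score_shift(baseline_scores, current_scores) > max_shift)

# Hypothetical data: three reviewed recommendations and two scoring periods.
decisions = [{"overridden": True}, {"overridden": False}, {"overridden": False}]
print(needs_review(decisions,
                   baseline_scores=[0.52, 0.48, 0.50],
                   current_scores=[0.68, 0.70, 0.64]))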

 

Responsible analytics design in practice

 

Responsible design means building fairness and control into the solution from the start, not discovering problems after the tool is already affecting people decisions.

  1. Start with the business problem, not the technology.
  2. Define what the model will do: inform, recommend, prioritise, or automate.
  3. Test whether available data is complete, current, relevant, and consistent.
  4. Remove or control variables that may act as direct or indirect bias factors.
  5. Test outcomes across employee or candidate groups before deployment (see the sketch after this list).
  6. Make outputs explainable — managers must understand what a score means.
  7. Keep a human in the loop, especially for high-impact employment decisions.
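
As a concrete illustration of step 5, the sketch below compares selection rates across candidate groups using the widely cited four-fifths (adverse impact ratio) rule of thumb. It is a simplified example: the 0.8 threshold and the sample outcomes are assumptions for illustration, and a real pre-deployment test would use the organisation's own data and whatever fairness metrics its governance framework specifies.

def selection_rates(outcomes):
    # outcomes maps each group name to a list of 1 (selected) / 0 (not selected).
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def adverse_impact_ratios(outcomes):
    # Ratio of each group's selection rate to the highest group's rate.
    # Ratios below 0.8 are conventionally treated as a warning sign to investigate.
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening outcomes per group (1 = shortlisted).
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "investigate" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")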

 

This approach directly supports the policy’s Strategic Pillar 6 (Human-Centred Deployment) and its emphasis on non-maleficence, the principle that AI should not harm individuals, society or the environment.

 

South Africa’s Draft National AI Policy makes one thing clear: the question is no longer whether HR should use AI but how AI must be governed. The eight building blocks and the principles of responsible analytics design provide a practical framework for HR leaders. When applied faithfully, they transform national policy into daily practice.

 

For HR professionals this means conducting AI inventories, classifying risk, auditing for bias and keeping humans in the loop for high-impact decisions. AI governance is not a barrier to innovation; it is the foundation for fair, transparent, and accountable people decisions. Governing AI properly today is what determines whether AI can be trusted tomorrow.

 

ENDS

 

Author

Mandisi Dube, 21st Century