Nathalie Burrows, Editor at EBnet
I think it’s worth spending some time on Commissioner Unathi Kamlana’s opening address at the 2026 FSCA Conference – which surfaced many of Day 1’s topics and issues.
Opening address by the FSCA Commissioner
The Commissioner asked: what will define the next phase of South Africa’s financial sector, and how must both regulators and industry evolve to respond? He framed this against the FSCA leadership’s first five‑year term, using the end of that term as a moment to reflect not only on regulatory achievements, but on how the sector itself has been changing.
Kamlana positioned the South African financial system in a world undergoing what Mark Carney has called a “rupture in the world order” – rising geopolitical tensions, economic fragmentation and weakening international cooperation. Shocks in global politics and trade quickly transmit into financial markets, capital flows and investment decisions, and ultimately into the retirement outcomes and financial resilience of South African savers and investors.
He balanced this sober view with a reminder of some of our recent wins. South Africa’s exit from the FATF grey list, after sustained effort by government, regulators, enforcement agencies and the private sector, and the November 2025 upgrade of the sovereign credit rating from BB‑ to BB, both demonstrate that coordinated action can restore confidence and strengthen the system.
Looking ahead, Kamlana identified three forces that will shape the sector’s next phase: rapid technological change, especially artificial intelligence; evolving regulatory frameworks; and the foundational importance of trust and integrity. On technology, he highlighted a joint FSCA–Prudential Authority study showing that AI is already widely adopted across the financial sector and is likely to be one of the defining technologies of the coming years. And this was a topic that took up much of Day 1’s conversation.
Crucially, Kamlana stressed that the FSCA supports innovation. Technology can improve efficiency, reduce costs and expand access, which are all positive for financial inclusion and retirement adequacy. But he drew a clear line: innovation must be pursued responsibly. As decisions about credit, advice, claims, underwriting or risk management are increasingly supported or even made by algorithms, questions of governance, accountability and fairness become sharper, not weaker. Institutions must be able to explain how automated decisions are reached and ensure they remain fair and transparent.
Ruminating on AI both during and after the day's conferencing, I drew up some practical questions that retirement fund trustees and mancos might want to put to their service providers to surface these issues: Do the fund’s service providers use AI in areas that affect member outcomes? What controls, model‑validation processes and data‑governance frameworks are in place? Can the board get clear, comprehensible explanations of how models work and how bias is managed?
For advisers, it raises similar issues around advice tools, suitability assessments and product recommendations: if AI is embedded in the tools you use, you remain responsible for understanding their limitations and ensuring outcomes are fair.
Kamlana also tied AI to the broader theme of trust. In a more volatile and contested world, the credibility of financial institutions – and of the regulatory framework that governs them – becomes an essential asset. Conduct failures, opaque algorithms or unfair outcomes can rapidly erode that trust.
His keynote signalled that the FSCA’s intention is to balance openness to technological progress with a firm conduct lens, keeping customer and member outcomes at the centre of supervision. For trustees and advisers, this is a clear prompt: the direction of travel is towards demonstrable fairness, explainability and accountability across the value chain, including where AI is involved. And he captured the FSCA’s intention perfectly with his comment: “It’s not about regulating more, it’s about regulating more effectively”.
AI in finance: Fatos Koc’s global lens
Fatos Koc, Head of the Financial Markets Unit in the OECD’s Directorate for Financial and Enterprise Affairs, brought an international perspective to “AI in Finance: Policies, Practices and Challenges”, with lessons trustees and advisers can apply immediately. She anchored her remarks in the OECD AI Principles, first adopted in 2019 and updated in 2024, which set out a framework for trustworthy, human‑centric AI. The principles rest on five values‑based pillars – inclusive growth and well‑being; respect for the rule of law, human rights and democratic values (including fairness and privacy); transparency and explainability; robustness, security and safety; and accountability – supported by recommendations on investment, skills, governance and international cooperation.
Drawing on a 2024 OECD survey of 49 jurisdictions, Koc showed that AI is already deeply woven into financial services. Major use cases include customer relations and personalisation, process and claims automation, fraud detection, data and text analysis, risk management and compliance, AML/CFT, portfolio management, cyber security and onboarding. The benefits are significant: productivity gains, cost reduction, operational efficiencies, better customer experience and more sophisticated products and services, with many institutions now experimenting with advanced machine‑learning models and generative‑AI tools. For trustees and advisers, this explains why firms are so interested in AI-driven personalisation and efficiency – and why these tools are appearing in administration, investment and advice offerings.
Koc was equally clear about the risks. Cybersecurity, market manipulation, bias and discrimination, data‑quality and privacy concerns, governance and model‑risk issues, explainability gaps, consumer and investor protection risks, as well as new operational and third‑party vulnerabilities all feature prominently in supervisory feedback. In simple terms, the same characteristics that make AI powerful – speed, complexity, autonomy and the ability to learn from data – also make it harder to oversee, validate and explain.
Most jurisdictions, she noted, already have applicable regulation in place, typically through a technology‑neutral approach: existing conduct, prudential, data‑protection and market‑integrity rules apply irrespective of whether AI is used. These frameworks usually cover risk management, data protection, model risk management, investor and consumer protection, disclosure, cyber‑risk, governance, ethics and human rights, outsourcing and operational resilience. The challenge is no longer the complete absence of rules, but their interpretation and application to AI. Supervisors and firms alike wrestle with questions such as: what level of explainability is sufficient; how should “human in the loop” work in practice; how fair is “fair enough”; and how should boards oversee complex, often third‑party AI models.
Koc’s call was for clearer supervisory expectations and practical guidance, both internally and publicly, to reduce ambiguity and support consistent oversight while giving firms the legal certainty they need to invest in AI safely.
Looking ahead, she pointed to the need to prepare for more advanced forms of AI, including agentic AI and open‑finance ecosystems, and for supervisors to build their own SupTech capabilities so they can keep pace with industry.
Her message dovetailed neatly with Kamlana’s: AI offers material opportunities to improve outcomes, but it raises demanding questions about governance, explainability and fairness. The practical takeaway is that oversight of service providers, product solutions and advice processes now needs to include an informed view of where and how AI is used – and whether the safeguards match the promises being made to members and clients.
COFI Bill Framework – Navigating and embracing change
Three points sum up this session.
First, COFI is positioned as a strategic reset in how the sector serves South Africans, not simply another compliance layer. Fair outcomes across the value chain – from product design and pricing to distribution, advice and post‑sale servicing – are meant to become the organising principle.
Second, accountability shifts upward: boards and key persons are explicitly responsible for conduct, culture and outcomes, with value‑chain transparency becoming a regulatory expectation.
Third, COFI will be phased in with reasonable lead times, but the direction is clear: conduct becomes a system embedded in governance, MI, risk management and culture, rather than a project bolted onto existing structures.
Where is COFI, I hear you ask? Eugene Du Toit, Divisional Executive for Regulatory Policy at the FSCA, confirmed that he “is fairly certain that the COFI Bill will see Parliament this year”.
The second big conversation of the day was around financial health. But I’ll include my thoughts on that in my reflections on Day 2.
ENDS
Ed’s note: The theme of the 2025 EBnet Evolutionaries Conference was AI – you can catch the latest thoughts on the impact of AI on investing, decision making, service provision and cybersecurity here.