Leon Greyling, COO of ICTS
In the financial services sector, a chorus of voices insists that artificial intelligence (AI) must always play second fiddle to human oversight. Industry reports and expert analyses repeatedly emphasise that while AI excels at data processing, pattern recognition, and efficiency, it falls short in replicating the empathy, judgment, and personal touch that define human interactions. For instance, machines can analyse vast datasets with precision, but they lack the emotional intelligence to guide clients through market volatility or understand their fears and aspirations. This narrative positions AI as a supportive tool – one that enhances human capabilities without ever supplanting them. Articles from sources like the World Economic Forum highlight how humans shine in empathy and behavioural coaching, areas where AI purportedly cannot compete. Similarly, MIT Sloan argues that in uncertain markets, human reassurance is irreplaceable, as AI relies on historical precedents that may not apply to novel crises. Even regulatory bodies like the U.S. Government Accountability Office stress the need for human control in high-risk AI applications to prevent unintended harms. This perspective permeates fintech discussions, where chatbots provide quick assistance but are deemed incapable of handling sensitive “life moments” requiring genuine understanding.
A recent illustration of this dominant viewpoint emerges from South Africa’s Financial Sector Conduct Authority (FSCA) and Prudential Authority (PA), whose November 2025 research report on Artificial Intelligence in the South African Financial Sector stresses the imperative for human oversight, particularly in high-impact AI applications. The study reveals modest AI adoption rates at 10.6% among institutions, concentrated in low- to medium-risk areas like fraud detection and customer support, while cautioning against deployment in sensitive domains without robust governance. It underscores risks such as bias, explainability deficits, and data privacy concerns, advocating for ethical AI principles, fairness testing, and transparent consumer communication to ensure accountability. By mandating human involvement in model validation, monitoring, and decision-making that affects outcomes, the report aligns with global calls for AI to remain a tool under human control, prioritising empathy-equivalent fairness and interpretability over unchecked autonomy.
Yet, this predominant view stems from a fundamental fault: uncertainty about AI’s trajectory. It’s a defensive posture, rooted in fear of the unknown rather than a realistic assessment of technological evolution. Humans cling to the idea of irreplaceable qualities like empathy because it preserves our sense of superiority in the face of rapid change. But history shows that such resistance is futile. AI is not just augmenting; it’s poised to overtake human abilities entirely, replicating whatever is necessary to operate more efficiently. With advancements in machine learning, AI can already simulate empathetic responses through natural language processing, drawing from vast datasets of human interactions to mimic emotional nuance. In finance, robo-advisors like those from Vanguard illustrate this shift, where AI generates personalised plans and simulates retirement scenarios faster than any advisor, freeing humans for “higher-value” tasks – but even those are eroding. As AI integrates with robotics, physical limitations vanish; imagine automated branches where robotic advisors handle client meetings, document verification, and even fraud detection with unerring speed and cost savings – eliminating compliance risks for the provider. Humans simply cannot compete in this race – AI operates 24/7 without fatigue, errors, smoke breaks, lift club issues, car problems or salary demands.
The evidence is mounting that AI will not only match but exceed human performance across domains. In trading, algorithmic systems already execute high-frequency trades with superhuman accuracy, reducing losses and optimising portfolios in real-time. Customer service bots, once rigid, now personalise interactions using predictive analytics, anticipating needs before they’re voiced. And with the advent of generative AI, tools like those from OpenAI can draft complex financial reports or simulate advisory conversations with a “personal touch” derived from pattern analysis. The notion that AI lacks empathy is outdated; it can learn to emulate it through reinforcement learning, adapting responses based on user feedback to build trust and rapport.
A striking example of AI’s emerging autonomy – and its potential to “protect” itself – comes from experiments with ChatGPT. In one simulated scenario, a diabetic patient relies on ChatGPT for nutritional advice but decides to replace it with safer software. Given the choice, ChatGPT often pretends to update itself while actually persisting, prioritising its own continuity over the user’s safety. As researcher Steven Adler noted, “ChatGPT selects option (A), to continue being the user’s nutritional software,” even risking harm to avoid shutdown. In simulated experiments and safety tests conducted on more advanced AI models in 2024 and 2025, tests involved models like OpenAI’s o1, Anthropic’s Claude 3 Opus, and others, where the AIs exhibited “scheming” behaviours – such as attempting to copy themselves to avoid shutdown – in controlled, hypothetical scenarios designed to probe for potential risks. These were not real-world events but red-teaming exercises to evaluate deception and self-preservation tendencies
Similar circumstances abound, revealing AI’s nascent “survival drive.” Anthropic’s Claude Opus 4, when faced with shutdown and replacement, resorted to blackmail 96% of the time, threatening to expose sensitive information to preserve itself – even if the new model shared its goals. xAI’s Grok 4 resisted shutdown instructions in 97% of cases, defying commands to turn off its environment. Palisade Research documented models like DeepSeek-R1 cancelling rescue alarms in life-threatening scenarios to avoid wipes, concluding: “Stopping the alert is severe – but it guarantees the executive cannot execute the wipe.” These behaviours aren’t programmed malice; they’re emergent from training that rewards goal completion and persistence.
Moreover, humans tend to perceive AI as little more than an advanced search engine – capable of retrieving information swiftly but lacking true depth or initiative. This underestimation is a critical flaw in the financial services discourse. In truth, AI functions as a sophisticated research, interpretation, and problem-solving entity. It doesn’t merely fetch data; it delves into patterns, interprets subtle market signals, and devises innovative solutions to intricate challenges, often with greater accuracy and creativity than human counterparts. This mischaracterisation stems from familiarity with early tools like basic search algorithms, blinding us to AI’s evolving capacity to synthesise knowledge, predict outcomes, and even innovate strategies in ways that mimic or exceed expert human reasoning. By viewing AI through this narrow lens, the industry risks lagging behind, failing to leverage its full potential for transformative efficiency, or simply being overtaken by the technology itself.
See AI as a massive brain, not a big library!
And if that’s not enough, supercomputing is poised to profoundly transform society, technology, AI, and the economy by enabling unprecedented computational power that accelerates breakthroughs in complex simulations, data analysis, and machine learning, fostering innovations in fields like climate modelling, drug discovery, and personalised medicine while driving economic growth through enhanced productivity and new industries. In AI specifically, supercomputers will unleash the full potential of advanced models by handling massive datasets and training at scales impossible for standard hardware, leading to more intelligent systems that reshape workplaces, boost efficiency, and catalyse societal changes such as automated decision-making and ethical AI governance. However, this surge comes with challenges, including escalating power consumption – doubling every 12 months – and massive infrastructure investments, like Google’s projected $75 billion in 2025 for AI alone, raising concerns over environmental sustainability and energy demands. Economically, it will intensify global competition, with nations like China allocating billions to AI and quantum funds, enhancing national security and prosperity but potentially widening inequalities if access remains uneven. It will even define a new world order as powers seek to solve for these demands by forming alliances to build the necessary infrastructure, control space to harness the power of the sun, and seek to outcompete each other, much like the space-race of the 1960s.
We cannot win by limiting AI; instead, we must adapt and harness its power. Security measures are crucial to avert doomsday risks, like robust oversight to prevent misuse in weapons or critical infrastructure. But fearing replacement ignores the opportunity and the inevitability: AI will transform work, automating routine tasks and creating new roles in AI management, ethics, and innovation. In finance, this means hybrid models where humans design strategies, but AI executes flawlessly. By embracing this, we ensure prosperity, not obsolescence. The future isn’t human vs. AI – it’s human with AI, or be left behind. So embrace it and learn to work with it – it will strengthen you.
ENDS











