Anna Collard, Evangelist and SVP of Content Strategy at KnowBe4 Africa
Artificial intelligence can enhance our lives. Yet, there is a dark side to this technology evolution. Criminals, scammers, and disinformation drivers are using AI to attack our instincts and trust, undermining the benefits of digital civilisation. How do we fight back?
It doesn’t matter if it’s AI content
Cybercrime is a massive problem infesting the digital world. Criminals have been using the same manipulations for centuries, harnessing new technologies to advance their goals. AI is no different. However, it is a significant amplifier, says Cameron Losco, DR Insight’s General Manager of Cybersecurity.
“Cybercriminals are using AI to scale and fine-tune their attacks. For example, they create scam messages in many more languages and scrape online sources to build profiles of victims. It’s the same bag of tricks, except the bag is much bigger and the tricks are faster and more targeted. The average user is not prepared to counter these attacks.”
This topic is close to the heart of Anna Collard, Evangelist and SVP of Content Strategy at KnowBe4 Africa. She studies the impact of technology on our wellbeing and safety and has trained many people and companies to rebuff cyber attacks.
“More and more criminals are using or benefitting from AI. We’ve seen that a lot of phishing kits have deepfake generation tools built in. It’s getting less complicated and more accessible. You don’t have to be an expert in AI in order to make use of it.”
Yet, criminals and grifters have been using fake news, fabricated materials, and emotionally charged messages for decades to con online users. While AI broadens their abilities and provides them with more options, the problem isn’t necessarily whether something was generated by AI but whether it intends to do harm.
“It doesn’t matter so much if something is AI-generated or not. What matters is the intent behind the message. Is it being used maliciously? That’s what we need to teach people to look for,” says Collard.
We want to believe
When a criminal sends a message saying you’ve won the lottery or you are in deep trouble with tax authorities, the goal is to trigger an emotion so that you act without thinking critically.
Criminals exploit the fact that we want to believe certain things, especially when they are presented to us in an emotional way. Seeing videos of grave markers on roadsides or photos of children stuck on boats during a flood triggers fear or empathy. We respond reactively, often based on our personal world views and experiences—natural ways for our brains to respond.
“We have cognitive biases that help us to make sense of the world,” Collard explains. “If you used your critical thinking brain for everything that triggers you, it would take up too much energy. So, we develop these heuristics, shortcuts, to conserve energy. It makes sense from a survival point of view, but it also means that we can be easily manipulated.”
Some of our reactions are instinctual. But criminals also manipulate us through repetition. This is called the mere exposure effect: we start believing false information if it is repeated often enough. The effect is particularly potent when it is something we want to believe—common in get-rich-quick or romance scams. AI is incredibly useful for perpetuating such techniques.
Protection through mindfulness
How do we protect ourselves from these attacks? The answer isn’t simple, nor is there a single solution. Protecting against modern cyber threats—especially those exploiting human vulnerabilities—requires a layered defence strategy, much like how we approach technical cybersecurity: with defence in depth. This means combining robust technical controls, clearly defined processes, and meaningful, ongoing training to build human resilience. One such essential layer is digital mindfulness. In the context of cybersecurity, it is about cultivating what Collard refers to as meta-awareness.
“Meta-awareness is about slowing down and enabling your critical thinking when it matters. You can’t walk around thinking critically all the time—it would be exhausting. But you can train yourself to recognise the moments that matter and consciously switch on that deeper attention.”
For instance, you can read an email on autopilot. But when your meta-awareness detects something odd, it’s time to slow down and think critically. This skill is more potent than most security measures, says Losco:
“The absolute majority of cyberattacks involve targeting people and fooling them into making a mistake. Often, all they want is for you to click on a link. Then they can steal account details and infiltrate systems. Cybercriminals want you to not think, just react.”
Digital mindfulness won’t replace technical controls or organisational policies—but it strengthens the human firewall, helping individuals recognise manipulative cues, resist impulsive reactions, and make better decisions in real time.
ENDS