What is all this about DeepSeek and how does this affect me as a board member?
5 Feb, 2025

 

Zeldeen Muller, CEO of inSite Connect, creator of AgendaWorx.com with AI

 

There’s a new player in the AI world called DeepSeek, a company from China that’s changing the way AI models are trained. And, because it’s open source (see our explanation at the end of the article), it’s likely free to use.

 

Think of AI as a super-smart assistant that can help with everything from writing reports to crunching financial numbers for retirement fund boards.

 

Normally, teaching these AI models takes loads of time and money, but DeepSeek says they’ve found a cheaper, faster way to do it—and they’re getting a lot of attention.

 

But here’s the catch—while their AI is efficient, it has some serious security flaws that could put sensitive data at risk.

 

Why is everyone talking about it?

 

DeepSeek’s AI model, called DeepSeek R1 (sounds a bit like R2-D2 from the Star Wars days doesn’t it?), is like the underdog team that shows up with surprising skills. It performs as well as (or better than) some of the biggest names in AI, like OpenAI’s models, but with a fraction of the budget. While other companies spend billions training their models, DeepSeek claims they did it for only $6 million. That’s like winning a race in a car that costs way less than everyone else’s.

 

But here’s the twist: their cost-cutting methods may have come at a price—possible security flaws.

 

Is there a problem with the security of DeepSeek’s AI?

 

The jury’s still out on this, and you can read an interesting article on it at the bottom of this page. AI security researchers from Robust Intelligence, now part of Cisco, and the University of Pennsylvania tested DeepSeek R1 using 50 challenging prompts designed to trick AI into saying or doing harmful things (like spreading misinformation or suggesting illegal activities). The results? DeepSeek R1 failed every test. Yes, it couldn’t block a single harmful request, giving it a 100% failure rate. Compare that to other AI models, which at least managed to block some harmful prompts.

 

Here’s the leaderboard according to the AI researchers:

 

  • DeepSeek R1: 100% failure rate (ouch!)
  • OpenAI O1: Only 26% failure rate (much safer)
  • Others (like GPT-4o and Claude-3.5 Sonnet): Somewhere in between

 

Why is this happening?

 

DeepSeek trains its AI models using different methodologies, which are great for making models fast and efficient, but they might not yet have strong enough “guardrails” to prevent the AI from being tricked into bad behaviour. See the different methodologies at the bottom of this page.

 

What’s the lesson for retirement funds using AI?

 

Let’s bring this back to something practical—like retirement fund boards considering AI tools to help them review financial statements, analyse reports, write member communication and explain investments. When choosing an AI provider, boards shouldn’t just be looking at the price tag.

 

Boards should double-check their board portals to see which AI providers they’re using and ensure those providers meet the necessary security standards.

 

Why does this matter?

 

Just like you wouldn’t put all your hopes on an untested team in a championship match, retirement funds shouldn’t rely solely on cost-saving AI tools without thinking about risks. By balancing affordability with security, they can avoid surprises and keep their funds safe while still benefiting from the power of AI.

 

Difficult terms explained and more information

 

Open source means that the code or design of a product, like software or an app, is shared openly for anyone to see, use, or improve. Think of it like sharing a recipe online—anyone can try it, tweak it, or add their own twist to make it better.

 

Famous examples of open-source projects include Linux (an operating system) and Firefox (a web browser). The main idea is collaboration, where the community helps improve and maintain the product for everyone.

 

DeepSeek trains its AI models using methods like:

  • Chain-of-thought reasoning: The AI breaks problems into smaller steps, kind of like showing your work in maths.
  • Reinforcement learning: The AI rewards itself for getting steps right, even if the final answer isn’t perfect.
  • Distillation: They take a big, powerful AI model and shrink it down into a smaller, more affordable version.

 

If you’re curious and want to dive deeper, just click the link below.

https://blogs.cisco.com/security/evaluating-security-risk-in-deepseek-and-other-frontier-reasoning-models#comments

 

ENDS

Author

@Zeldeen Muller, inSite Connect
+ posts
Share on Your Socials

You May Also Like…

Budget 2025

Budget 2025

  Helen Littlewood, Sygnia Umbrella Retirement Fund Head of Product at Sygnia   This document provides an overview of changes proposed in the 2025 national Budget that are relevant to retirement funds.   Background   Finance Minister Enoch Godongwana’s...

Share

Subscribe to the EBnet Daily Newsletter and WhatsApp Community for the latest retirement funding, financial planning, and investment news, along with market updates and special announcements.

Subscribe to

Thank You. You have been subscribed. Please check your emails for a confirmation mail.