AI policies for companies | What you need to know
5 Feb, 2025

 

Deike Tamm, Trainee at Michalsons

 

Your employees are probably already using AI to complete their tasks. It is your decision whether you want to close your eyes and hope for the best, or empower them to use AI in a skilled and responsible manner. To ensure your company is leveraging AI lawfully, we recommend setting up a well-structured AI policy that promotes ethical practices, aligns with legal requirements, and minimises risks. Here’s a guide on what to consider when preparing the policy.

 

1. Types of AI policies

 

Before we dive into what you should consider, let’s take a quick look at the different types of AI policies out there:

 

  • Acceptable use of AI. Governs how employees and stakeholders can use AI tools and systems responsibly within the organisation.
  • AI development. Outlines guidelines and standards for designing, training, and deploying AI systems to ensure they align with ethical, legal, and organisational objectives. Microsoft’s AI safety policies are one example.
  • AI governance. Provides a framework for oversight, accountability, and risk management across all AI-related activities.
  • AI risk management. Addresses risk identification, assessment, mitigation, and monitoring in the context of AI use and development.
  • AI incident response. Defines protocols for detecting, reporting, and addressing AI-related incidents or failures.
  • AI security. Focuses on safeguarding AI systems from cyber threats, unauthorised access, and vulnerabilities.

 

Many of these policies can stand on their own, but you don’t always have to start from scratch. Existing policies can often simply be updated to include AI-specific aspects. For example, your Acceptable Use Policy could cover how employees use AI tools, or your Privacy Policy could address how AI handles personal data. If you’ve already got a Cybersecurity Policy, just tweak it to include AI security measures. It’s all about finding what works best for your organisation, whether that’s a standalone policy or weaving AI into what you already have.

 

2. Determine the purpose of the policy

 

Defining the purpose is a good first step. Consider the scope of AI systems: will the policy focus on generative AI (GenAI), other AI systems, or both? For instance, GenAI may raise unique concerns like copyright or content generation ethics, while other AI systems might involve decision-making or automation challenges.

 

Clearly articulate why the policy is being introduced so the audience understands its importance. Also check its alignment with your organisational goals and company values, ensuring it supports innovation while maintaining ethical standards.

 

3. Identify the target audience

 

Specify who the policy is for. That could be employees developing AI systems, teams deploying or managing AI tools, or even external contractors or vendors supplying AI solutions. Tailoring the policy to these groups ensures relevance and enforceability.

 

4. Refer to related policies

 

As mentioned above, there is no need to reinvent the wheel. Avoid redundancy and contradictions by referencing existing organisational policies, such as data protection policies, cybersecurity protocols, and ethics and compliance guidelines. Cross-referencing ensures consistency and provides a comprehensive governance framework.

 

5. Structure of the AI policy

 

Think of an AI policy as your organisation’s guidebook for using AI responsibly and effectively. A clear structure isn’t just about ticking boxes; it’s about making sure everyone knows why AI is being used, what it applies to, and how it fits into your operations. It sets the tone by explaining the purpose and goals of the policy, highlights key principles like fairness, transparency, and security, and gives a framework for navigating the legal landscape, like the EU AI Act or the GDPR. It’s also a way to build trust and accountability, showing your team and stakeholders that you’ve thought through the risks and responsibilities. A well-structured policy helps keep everyone aligned and ready to adapt as technology or regulations change.

 

Actions You Can Take Next

  • Ensure your executives and governing body are aware of the need for an AI policy.
  • Evaluate your employees’ current AI usage and identify areas where policies and training are necessary. On that basis, implement customised AI policies and training programmes. Michalsons is happy to assist you in that process.
  • Download our free generative AI acceptable use policy template.

 
