
AI Governance: Why Your Company Needs Rules for AI Now

Franco T.

Too Long; Didn't Read

More than half of AI users in companies use tools that are not officially approved. A pragmatic AI governance framework can be established within four weeks.


Imagine this: Your sales team uses ChatGPT to generate proposal text with customer data. Your HR department has applications pre-screened by an AI tool. And your marketing team uses Copilot to create content that no one checks for hallucinations.

Are you aware of this? Did you authorize it? Are there rules for it?

If you hesitate on at least one of these questions, you are not alone. We see this in practically every Swiss SME we advise. AI tools are here. Employees are using them. But governance? It is missing.

Shadow AI: The Invisible Risk

"Shadow AI" is the little brother of shadow IT—only with greater damage potential. Employees use AI tools on their own initiative because they want to be more productive. Understandable. But the consequences are real.

According to a Salesforce study (2024), more than half of enterprise AI users rely on tools that are not officially approved. And a 2024 Cyberhaven analysis found that around 11% of the data employees paste into ChatGPT is confidential. Customer data, financial figures, strategy documents: all of it ends up with a third-party provider.

This is not a theoretical risk. Samsung banned ChatGPT internally in 2023 after engineers had entered proprietary source code and internal meeting notes into the chatbot. And a New York law firm team was sanctioned in 2023 because it cited AI-generated, non-existent court decisions in a proceeding.

(We do not know any Swiss SME that intentionally sends customer data to OpenAI. But we know many where it still happens—simply because there is no rule against it.)

Why now? Regulatory pressure is increasing

Two developments are turning AI governance from a "nice to have" into a "must have now":

The EU AI Act is reality. The world’s first comprehensive AI law came into force in August 2024. Implementation deadlines are staggered: prohibited practices since February 2025, rules for general-purpose AI from August 2025, full application for high-risk systems from August 2026. The penalties? Up to €35 million or 7% of global annual turnover.

"It does not affect us as a Swiss company"—we hear that often. But it is not true. The EU AI Act has extraterritorial effect. If your AI system is used in the EU market or its output is used in the EU, you are affected. And almost every Swiss company with EU customers falls into this category.

The nFADP requires transparency. Switzerland’s new data protection law (in force since September 2023) requires transparency about automated decision-making processes and gives affected individuals the right to human review. If your HR tool pre-screens applications with AI support and you do not disclose that, that is a problem.

Taken together: the regulatory landscape has shifted. AI without governance is not only risky—it is increasingly unlawful.

What an AI governance framework includes (without a bureaucracy monster)

Here is the good news: AI governance for an SME does not have to be a 200-page rulebook. We work with a three-pillar model that is pragmatic enough to set up in four weeks—and robust enough to cover the key risks.

Pillar 1: AI usage guidelines

The foundation. A clear, understandable policy that defines:

  • Which AI tools are allowed? Define an "approved list" of authorized tools. Everything else is off-limits until it has been reviewed.

  • What data may be entered? Clear categorization: Public data—yes. Internal data—only with approved enterprise versions. Customer data and confidential information—never in third-party tools without a data processing agreement.

  • How is output reviewed? Every AI-generated output that feeds into decisions, communication, or documents requires human review. Four-eyes principle for everything that goes outside the company.

  • Who is responsible? Define an AI owner (it does not have to be a full-time role) and clear escalation paths.
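The approved list and the data categories above lend themselves to a machine-checkable form. Here is a minimal sketch in Python; the tool names and category labels are purely illustrative assumptions, not a recommendation of specific products:

```python
# Minimal sketch of a machine-checkable AI usage policy.
# Tool names and data categories are illustrative examples only.

APPROVED_TOOLS = {
    # tool -> data categories it is cleared for
    "copilot-enterprise": {"public", "internal"},  # enterprise licence with DPA
    "chatgpt-free": {"public"},                    # public data only
}

def is_use_allowed(tool: str, data_category: str) -> bool:
    """Unknown tools are off-limits until reviewed; customer data is
    never cleared for a third-party tool without a data processing
    agreement, so it simply never appears in an allowed set."""
    return data_category in APPROVED_TOOLS.get(tool, set())
```

Even if you never automate the check, writing the policy down in this shape forces the clarity the guideline needs: every tool is either on the list with explicit data categories, or it is not allowed.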

Sounds doable? It is. Most companies can have this in place within a week—if someone initiates it.

Pillar 2: AI risk assessment

Not every AI tool carries the same risk. A text generator for marketing copy is different from an AI-supported system that assists with credit decisions. Risk assessment sorts your AI applications by risk category:

  • Low risk: Internal productivity tools without sensitive data (e.g., meeting summaries, draft texts without customer reference)

  • Medium risk: Tools with internal data or customer interaction (e.g., chatbots, proposal assistants, analytics tools)

  • High risk: Systems that feed into decisions about individuals (e.g., HR screening, credit checks, security monitoring)

For each category, you define appropriate controls. Not everything requires the same level of effort—and that is exactly what makes this approach pragmatic.
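The tier-to-controls mapping can be sketched just as compactly. The three tiers follow the list above; the specific controls are illustrative assumptions, not a complete catalogue:

```python
# Sketch: mapping risk tiers to required controls.
# Tier names follow the three-tier model; controls are example placeholders.

RISK_CONTROLS = {
    "low": ["policy acknowledgement"],
    "medium": ["human review of output", "no confidential input data"],
    "high": [
        "human review of output",
        "documented human final decision",
        "disclosure to affected individuals",
        "regular bias checks",
    ],
}

def required_controls(tier: str) -> list[str]:
    """Look up the controls for a risk tier; unknown tiers are an error."""
    if tier not in RISK_CONTROLS:
        raise ValueError(f"unknown risk tier: {tier}")
    return RISK_CONTROLS[tier]
```

The point of the structure is the asymmetry: a low-risk tool gets one lightweight control, a high-risk system gets several. Effort scales with risk instead of being uniform.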

Pillar 3: Monitoring and control

Governance without monitoring is like a seatbelt without a buckle. You need:

  • An AI inventory: Which AI tools are being used in the company? (You will be surprised how many there are.)

  • Regular reviews: Quarterly checks to ensure guidelines are being followed and to identify whether new tools have appeared

  • Incident process: What happens if confidential data still ends up in an unauthorized tool? A clear process prevents panic reactions

  • Training and awareness: The best guidelines are useless if no one knows them. Short, regular training sessions (not an annual 3-hour compliance training everyone clicks through)
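The inventory and the quarterly review cycle can also be captured in a lightweight record; the field names and the 90-day interval here are illustrative assumptions:

```python
# Sketch of a lightweight AI inventory record with a quarterly-review check.
# Field names and the 90-day interval are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    name: str
    team: str
    data_categories: list[str]  # e.g. ["public", "internal"]
    risk_tier: str              # "low" | "medium" | "high"
    approved: bool
    last_review: date

def overdue_for_review(record: AIToolRecord, today: date, days: int = 90) -> bool:
    """Flag tools whose quarterly review is overdue."""
    return (today - record.last_review).days > days
```

A spreadsheet does the same job; what matters is that every tool in use has exactly one row, an owner, and a review date that someone actually checks.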

An AI governance framework in 4 weeks

Sounds ambitious? It is feasible. Here is the roadmap we use with our clients:

Week 1: Take stock. Inventory all AI tools currently in use. Ask proactively—not just the IT department, but all teams. (A tip: Do not ask "Do you use AI?" but "Which AI tools do you use?" The answers are more revealing.)

Week 2: Draft guidelines. Create an AI usage policy based on the inventory. Define allowed tools, data categories, and review processes. Keep it to 3–5 pages—no one reads more than that.

Week 3: Conduct risk assessment. Categorize the identified tools by risk level. Define controls per category. Prioritize: high-risk applications first.

Week 4: Set up monitoring and communicate. Establish the review process, define the incident workflow, and—most importantly—communicate governance to all employees. Short onboarding, clear expectations.

This is not a perfect framework. But it is a working one. And a working framework in four weeks beats a perfect one in twelve months.

Governance is not a brake—it is an enabler


Here is the point many people miss: Companies that take AI governance seriously innovate faster, not slower.

Why? Because clear rules eliminate uncertainty. When employees know which tools they may use and which data is allowed, they use AI more productively. Without guidelines, there is either uncontrolled sprawl (risky) or hesitation out of fear (missed opportunities). Both are costly.

We have seen with our clients that a clear AI policy increases productive AI use by a factor of two to three—simply because the barrier is lower when the framework is clear.

The next step

You do not need to have a perfect AI governance framework tomorrow. But you should start with one step tomorrow:

Find out which AI tools are being used in your company. Ask your team. The answers will surprise you—and show you where the most urgent need for action lies.

(We help Swiss companies build AI governance pragmatically and vendor-neutrally—not as a compliance exercise, but as a foundation for secure innovation.)

Join us on the journey

Effortlessly schedule a conversation and discover how we bring success in the digital world to your company.

Contact us!

Grabenstrasse 15a

6340 Baar

Switzerland

+41 43 217 86 70

Copyright © 2026 ODCUS | All rights reserved.
