
Franco T.
Too Long; Didn't Read
The EU AI Act does not apply only to EU companies. Swiss companies with ties to the EU fall under the regulation, with fines of up to EUR 35 million. The high-risk obligations take effect from August 2026.

You are based in Switzerland. An EU law? Doesn't concern you.
Wrong.
If your company uses AI systems whose results end up in the EU — whether it’s a CV screening tool for your EU branch, a credit scoring model for EU customers, or a chatbot on your website serving EU visitors — then you are in scope. Article 2 of the EU AI Act is crystal clear: the regulation applies to any provider and deployer whose AI output is used in the EU. Company headquarters? Irrelevant.
We see this constantly with our Swiss clients: awareness is lacking. And the clock is ticking.
Why the EU AI Act also affects you
Its extraterritorial effect is not a side note — it is a core component of the regulation. Similar to the GDPR, which affects Swiss companies handling EU customer data, the EU AI Act (Regulation (EU) 2024/1689) extends across national borders.
Specifically, you fall under the EU AI Act if you:
develop or offer AI systems that are used in the EU
use AI tools whose output affects people in the EU (HR screening, credit decisions, insurance assessments)
sell AI-based products or services to EU customers
have subsidiaries or branches in the EU that use AI
This applies to more Swiss companies than most people think. Financial service providers with AI-supported credit scoring, pharma and medtech companies using AI in clinical trials, industrial firms with predictive maintenance, HR tech providers with automated applicant screening, insurers with AI risk assessment — AI is used everywhere. And EU connections are everywhere.
(And no, Switzerland does not have its own AI law. The Federal Council is still assessing whether existing laws are sufficient. Until then, the rule is: if you do business in the EU, you comply with the EU AI Act.)
The four risk categories — and why they are decisive
The EU AI Act follows a risk-based approach. Not every AI system is treated the same. The category determines your obligations:
Unacceptable risk — prohibited. Since February 2025, certain AI practices have been completely banned. Social scoring by authorities, manipulative AI using subliminal influence, real-time mass biometric surveillance, emotion recognition in the workplace. Anyone using such systems risks fines of up to EUR 35 million or 7% of global annual turnover.
High risk — strictly regulated. This is the category that affects most Swiss companies. And most underestimate the scope. High-risk AI systems include, among others: biometric identification, AI in critical infrastructure, education and examination systems, CV screening and personnel decisions, creditworthiness assessment, insurance risk evaluation. The obligations are extensive — risk management system, technical documentation, logging, human oversight, conformity assessment, EU database registration, and post-market monitoring. Sounds like a lot? It is. But it is manageable if you start early enough.
Limited risk — transparency obligations. Chatbots must disclose that the user is interacting with AI. Deepfakes must be labeled. Emojis and nice words are not enough — the information must be clear and unmistakable.
Minimal risk — no obligations. Spam filters, AI in video games, simple recommendation systems. Here, the EU recommends voluntary codes of conduct, but there are no binding requirements.
The question you should ask yourself: which category do your AI systems fall into? And do you even have an overview of where AI is running across your company?
The timeline: what applies when
The regulation entered into force in August 2024, but its obligations apply in stages. This is both helpful and dangerous — helpful because you do not have to implement everything at once. Dangerous because the phased rollout creates a false sense of security.
February 2025 (already in force): Prohibited AI practices apply. Social scoring, manipulative AI, mass biometric surveillance — anyone still using these is already in violation.
August 2025: The rules for General-Purpose AI (GPAI) models apply: transparency obligations for GPAI providers, technical documentation, compliance with EU copyright law. Additional obligations apply to GPAI models with systemic risk, defined by training compute above 10^25 FLOPs (the sketch at the end of this section puts that threshold in perspective). The EU AI Office and national supervisory authorities begin operations.
August 2026: Full application. All high-risk AI systems must meet all requirements. Conformity assessments, EU database registration, post-market monitoring — everything must be in place.
August 2027: High-risk AI in regulated products (medical devices, machinery, lifts) is integrated into existing sectoral regulation.
For Swiss companies, this means: less than six months remain until August 2026. Anyone operating a high-risk AI system needs 6–12 months of preparation time. The math is simple — and it does not work if you only start planning now.
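For a sense of scale on the 10^25 FLOP threshold mentioned under August 2025: a widely used back-of-the-envelope estimate puts training compute at roughly 6 × parameters × training tokens. The Python sketch below applies that approximation to invented model sizes; none of the numbers come from the regulation itself.

    # Rough training-compute estimate using the common C = 6 * N * D
    # approximation (N = parameters, D = training tokens).
    # All model sizes below are illustrative assumptions.
    SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the EU AI Act's GPAI rules

    def training_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    for params, tokens in [(7e9, 2e12), (70e9, 15e12), (2e12, 15e12)]:
        c = training_flops(params, tokens)
        over = c > SYSTEMIC_RISK_THRESHOLD
        print(f"{params:.0e} params x {tokens:.0e} tokens -> {c:.1e} FLOPs, systemic risk: {over}")

Only frontier-scale training runs cross this line; for most Swiss companies, the GPAI obligations therefore concern the models they buy, not the ones they build.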
The fines: no minor offense
A quick rundown of the consequences:
Violation of prohibited AI practices: up to EUR 35 million or 7% of global annual turnover (whichever is higher)
Violation of high-risk requirements: up to EUR 15 million or 3% of global annual turnover
Providing false information to authorities: up to EUR 7.5 million or 1% of global annual turnover
For SMEs and startups the rule flips: the same caps apply, but whichever amount is lower counts. Still: these are existentially threatening sums, even in the reduced version.
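If you want the "whichever is higher" and "whichever is lower" mechanics spelled out, here is a minimal Python sketch; the turnover figures are invented for illustration.

    # Fine cap for prohibited-practice violations: EUR 35 million or 7% of
    # global annual turnover - whichever is HIGHER; for SMEs, whichever is LOWER.
    def max_fine(turnover_eur: float, sme: bool = False) -> float:
        fixed = 35_000_000.0
        pct = 0.07 * turnover_eur
        return min(fixed, pct) if sme else max(fixed, pct)

    print(max_fine(2_000_000_000))         # -> 140000000.0 (7% of turnover is higher)
    print(max_fine(20_000_000, sme=True))  # -> 1400000.0 (the lower amount applies)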
(For context: GDPR fines were also “only” theoretical at first. Meta has now paid over EUR 2 billion in GDPR penalties. The EU is serious.)
What you can do right now
This is where it gets practical. Six steps we recommend to our clients:
1. Create an AI inventory. Sounds trivial? We have never seen a company that immediately knew all of its AI systems. Not only the obvious ones (ChatGPT, Copilot), but also the AI embedded in purchased tools — your CRM, your HR system, your ERP. These often ship with AI features that nobody actively switched on but that run anyway.
2. Perform a risk categorization. Assign each identified AI system to a category based on the EU AI Act's criteria. High risk? Limited? Minimal? This classification determines everything that follows (a minimal data sketch follows this list).
3. Check the EU nexus. Clarify whether and how your company falls under the extraterritorial scope. EU customers, EU branches, EU users — the connection can be surprisingly direct.
4. Conduct a gap analysis against existing compliance. If you are already GDPR compliant, have ISO 27001, or meet NIS2 requirements — then you have a foundation. In our experience, an integrated compliance approach reduces effort by 40–60% compared to isolated implementation. Data protection impact assessments, risk management processes, documentation obligations — much of it overlaps.
5. Create a compliance roadmap. Prioritize: high-risk systems first. Define milestones, responsibilities, and budget. August 2026 is your deadline for high-risk compliance.
6. Build governance structures. Who is responsible for AI compliance? Which processes ensure new AI systems are classified before deployment? AI compliance is not a one-time project; it is an ongoing process.
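As referenced in step 2, here is what steps 1–3 can look like when reduced to a minimal data sketch. The field names and example entries are our illustrative assumptions, not prescribed by the Act; a spreadsheet with the same columns works just as well.

    # Minimal sketch of an AI inventory with risk category and EU nexus per system.
    from dataclasses import dataclass
    from enum import Enum

    class Risk(Enum):
        UNACCEPTABLE = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AISystem:
        name: str
        vendor: str       # also capture AI embedded in purchased tools
        purpose: str
        eu_nexus: bool    # is the system's output used in the EU?
        risk: Risk

    inventory = [
        AISystem("CV screening", "HR suite", "applicant ranking", True, Risk.HIGH),
        AISystem("Website chatbot", "in-house", "customer support", True, Risk.LIMITED),
        AISystem("Spam filter", "mail provider", "inbox filtering", False, Risk.MINIMAL),
    ]

    # High-risk systems with an EU nexus set your August 2026 deadline.
    todo = [s for s in inventory if s.eu_nexus and s.risk is Risk.HIGH]

The point is not the code but the discipline: every system gets a classification and an EU-nexus check before it goes live.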
The integrated approach: not another compliance silo
Here’s the thing... most companies already have GDPR programs, ISO certifications, and perhaps NIS2 preparations underway. Now the EU AI Act comes on top.
The mistake we see: each regulation is treated as a separate project. Separate teams, separate documentation, separate audits. The result? Double the work, triple the cost, zero visibility.
We recommend an integrated compliance approach. EU AI Act, GDPR, NIS2, ISO 27001 — in one framework. The overlaps are substantial: risk management, data protection impact assessment, technical documentation, incident response, monitoring. Those who bring this together instead of handling it in isolation not only save effort, but actually gain visibility over their compliance landscape.
Practical example: a data protection impact assessment that you already need for GDPR covers a large part of the risk assessment required by the EU AI Act for high-risk systems. Your ISO 27001 documentation framework? It can be directly expanded for the technical documentation of AI systems. These are not theoretical synergies — this is measurably less work.
The next step
Forget the panic. Forget the 200-page legal texts. Do one thing tomorrow morning: create a list of all AI systems in your company. All of them. Even the ones you "believe" are only minimal.
This list is the beginning. Everything else builds on it.
(We help Swiss companies implement the EU AI Act pragmatically — with an integrated compliance framework that combines EU AI Act, GDPR, NIS2, and ISO 27001. Vendor-neutral, without panic, with a clear roadmap. View compliance consulting or Discover AI advisory)


