
Franco T.
Too Long; Didn't Read
Most business continuity plans fail in a crisis—not because they are poorly written, but because they solve the wrong problem. Traditional disaster recovery (DR) plans focus on IT systems: backups, failover, RTO. However, if your cloud provider fails or ransomware cripples your production, a DR plan alone won't bring your business back. The solution? Resilience instead of recovery—the ability to continue working at 70% capacity while you resolve issues.

Let's be honest... nobody likes reading 80-page BC plans.
And that's exactly the problem.
Over the past few years, we have reviewed dozens of Business Continuity concepts at Swiss companies. The pattern is always the same: impressive documentation, detailed process diagrams, neatly numbered escalation chains.
And then the real incident happens.
Suddenly it turns out: the documented contact person left the company two years ago. The recovery steps refer to servers that were migrated to the cloud long ago. The escalation chain? No longer matches the current org structure.
(This is not an isolated case. This is the rule.)
The fundamental problem: BC as a document instead of a capability
Here is the underlying mistake: most companies treat Business Continuity as a documentation project. Create it once, store it in SharePoint, done.
That feels good. You have "done" something. The auditor is satisfied. Management can tick the box.
But operational resilience doesn't work that way.
Why traditional DR plans fail
Problem 1: IT focus instead of business focus
Classic Disaster Recovery thinks in systems: "We have backups for server X, failover for database Y, RPO of 4 hours, RTO of 8 hours."
That sounds technically solid.
But if your ERP system fails, your customers don't care about your RTO. They want to know: Can I place an order? When will my delivery arrive? Will I get support?
We saw this at an industrial company. Perfect IT DR plan. But when their main supplier for a critical component went down due to a cyberattack, production stopped. The DR plan was no help at all—because the problem wasn't their own IT, but a lack of business process redundancy.
Problem 2: The multi-cloud illusion
"We're running multi-cloud, so we're covered."
We hear that often. Reality is more complicated.
If several major cloud providers have issues at the same time (and this has already happened multiple times), your multi-cloud strategy won't help much. Especially if your application was never truly designed to be provider-agnostic.
(Spoiler: most aren't.)
The major cloud providers experience dozens of service outages per year collectively. That's the new normal. The question is not "if," but "when" the next one will happen.
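One way to make "provider-agnostic" concrete: the application codes against a small storage interface instead of calling a vendor SDK directly. Below is a minimal sketch in Python, assuming boto3 on the AWS side; the class names and the in-memory fallback are illustrative, not taken from any particular framework.

```python
from abc import ABC, abstractmethod

import boto3  # only the AWS implementation needs the vendor SDK


class BlobStore(ABC):
    """The interface the application codes against; no vendor SDK leaks through."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3BlobStore(BlobStore):
    """AWS implementation, a thin wrapper around boto3."""

    def __init__(self, bucket: str):
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class InMemoryBlobStore(BlobStore):
    """Degraded-mode fallback: keeps the process alive while the provider is down."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]
```

An Azure or Google Cloud implementation plugs in behind the same interface, so switching providers becomes a configuration change instead of a rewrite. If your code calls the vendor SDK directly in a hundred places, it is not provider-agnostic, whatever the architecture diagram says.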
Problem 3: Backups that don't work when it counts
"We have backups" is the most common answer when we ask about Business Continuity.
But here's the catch: modern ransomware specifically targets backup repositories. Attackers know that backups are your last line of defense. If the backups get encrypted too, recovery becomes... difficult.
The bigger problem: many companies run daily backups for years, but no one ever tests an actual recovery.
Then when the real incident hits? The backups can't be restored: outdated encryption keys, changed infrastructure, incompatible software versions.
Recovery then takes weeks instead of hours.
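What a real recovery test can look like in its simplest automated form: restore yesterday's backup into a throwaway environment and check that the data is actually usable, not just that the backup job exited successfully. Below is a minimal sketch assuming a PostgreSQL dump and the standard dropdb/createdb/pg_restore/psql tooling; the file path, the scratch database name, and the row-count check are placeholders for your own acceptance criteria.

```python
"""Nightly restore drill: prove the backup can actually be restored and queried."""
import subprocess
import sys

BACKUP_FILE = "/backups/erp_latest.dump"   # placeholder path
SCRATCH_DB = "restore_drill"               # throwaway database, never production
MIN_EXPECTED_ROWS = 10_000                 # example acceptance criterion


def run(cmd: list[str]) -> str:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"FAILED: {' '.join(cmd)}\n{result.stderr}")
    return result.stdout


# 1. Recreate the scratch database.
run(["dropdb", "--if-exists", SCRATCH_DB])
run(["createdb", SCRATCH_DB])

# 2. Restore the most recent dump into it.
run(["pg_restore", "--no-owner", "--dbname", SCRATCH_DB, BACKUP_FILE])

# 3. Verify the data is usable, not merely present.
rows = int(run(["psql", "-d", SCRATCH_DB, "-t", "-A",
                "-c", "SELECT count(*) FROM orders;"]).strip())
if rows < MIN_EXPECTED_ROWS:
    sys.exit(f"FAILED: only {rows} rows in orders; backup may be incomplete")

print(f"OK: restore drill passed, {rows} rows verified")
```

Run something like this on a schedule and alert when it fails: a restore that is exercised every night is far less likely to take weeks when it matters. It only covers the database, of course; a real business recovery test also involves the people and the manual workarounds.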
Problem 4: Plans become outdated faster than you can update them
A BC plan from three years ago is usually worth little more than the paper it's printed on.
Your IT landscape is constantly changing: new SaaS tools, cloud migrations, remote work infrastructure, new suppliers, organizational restructuring.
But your BC plan might get skimmed once a year. If at all.
The hidden costs no one tracks
When we speak with CFOs, we find that most only calculate the direct costs of IT outages. That massively underestimates the real damage.
What is often forgotten:
Opportunity costs: Which deals could you not close because your systems were down for 3 days? Which customers bought from competitors?
Reputational damage: How many customers permanently switch to competitors after you were unable to deliver for 2 weeks? Which deals fell through?
Regulatory consequences: In Switzerland, reporting obligations become enforceable from October 2025; failing to report can cost up to CHF 100'000 per day. Similar regulations are coming across the EU with NIS2.
Recovery resources: How many person-hours go into manual workarounds, crisis management, and restoration? What else could those resources have created?
For a typical Swiss SME with CHF 20-100M in annual revenue: a major incident without preparation can quickly cost CHF 200K-2M. Directly and indirectly.
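One rough way to make those hidden costs tangible is to write the estimate down as a simple model rather than a gut feeling. All figures below are illustrative placeholders, not benchmarks; the point is that the indirect items can easily rival or exceed the direct revenue loss.

```python
# Back-of-the-envelope incident cost model (all figures in CHF, placeholders only).
daily_revenue = 180_000           # e.g. CHF 45M annual revenue / 250 business days
outage_days = 3

direct_revenue_loss = daily_revenue * outage_days

# Hidden costs that rarely show up in the first estimate:
lost_deals = 200_000              # opportunity cost: pipeline that went to competitors
churned_customers = 6 * 25_000    # reputational damage: customers lost for good, at lifetime value
regulatory_exposure = 100_000     # e.g. one day of reporting-obligation penalties, worst case
recovery_effort = 1_000 * 150     # person-hours of workarounds and crisis management x loaded rate

total = (direct_revenue_loss + lost_deals + churned_customers
         + regulatory_exposure + recovery_effort)

print(f"Direct revenue loss : CHF {direct_revenue_loss:>9,.0f}")
print(f"Hidden costs        : CHF {total - direct_revenue_loss:>9,.0f}")
print(f"Total exposure      : CHF {total:>9,.0f}")
```

Even with these modest placeholder values the example lands just above CHF 1M, well inside the CHF 200K-2M range mentioned above, and more than half of it never appears in a pure IT-downtime calculation.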
The paradigm shift: From recovery to resilience
The question must be framed differently.
Instead of:
"How long does restoring our servers take?"
"What is our Recovery Time Objective?"
"Do we have enough backup capacity?"
Ask instead:
"Which business processes generate our revenue and must never fail?"
"Can we continue operating at 70% capacity while we resolve issues?"
"Where are our real dependencies—suppliers, payment service providers, critical employees?"
"Who makes decisions in a crisis—and is that person reachable?"
That is the difference between Disaster Recovery and Resilience Engineering.
What you can do differently tomorrow
Here are four questions you should ask this week:
Which 3 business processes generate 80% of our revenue? Start there. Not with IT infrastructure.
What happens if our most important cloud service fails for 48 hours? Not theoretically. Concretely. Who does what? Can we keep working manually?
Have we conducted a recovery test in the last 12 months? Not a backup test. A real business recovery test with everyone involved.
Who makes decisions in a crisis—and is that person reachable? Also on weekends? On vacation? At 3 a.m.?
The answers show you where you stand.
The 70% rule
Here is the most practical takeaway from this article:
Perfect redundancy is unaffordable. But 70% capacity within 30 minutes protects more revenue than 100% capacity after 48 hours.
We call this Minimum Viable Operations (MVO). For each critical business process, you define: what is the absolute minimum needed to keep operating?
A real-world example:
An e-commerce company with CHF 45M in annual revenue (~CHF 180K daily revenue). If the main shop on AWS fails:
Option 1 (activation: 30 min): Emergency landing page on Azure with "Order by phone now" + increased hotline staffing
Option 2 (activation: 2 hrs): Basic shop on Azure with reduced features
Result: Instead of CHF 180K daily loss, only CHF 15K during the switchover. 92% of revenue protected.
The investment in this fallback infrastructure paid for itself during the first major outage.
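Written out as arithmetic, the 70% rule and the example above look like this. The sketch assumes the CHF 180K daily revenue from the example; the share of revenue each fallback can still capture is an assumption for illustration, and the exact figure depends on how much business the degraded mode really carries.

```python
# Worked arithmetic behind the 70% rule, using the example above (CHF 180K per day).
HOURLY_REVENUE = 180_000 / 24  # CHF 7'500 per hour


def revenue_lost(outage_hours: float, activation_hours: float, capacity: float) -> float:
    """Full loss until the fallback is live, then partial loss at reduced capacity."""
    blackout = min(activation_hours, outage_hours)
    degraded = max(outage_hours - activation_hours, 0)
    return HOURLY_REVENUE * (blackout + degraded * (1 - capacity))


# (outage hours, activation hours, share of revenue the fallback still captures)
scenarios = {
    "Wait for full recovery after 48 h":      (48, 48.0, 1.00),
    "70% capacity live after 30 min":         (48, 0.5, 0.70),
    "Landing page + phone orders (30 min)":   (24, 0.5, 0.90),
    "Basic shop on second provider (2 h)":    (24, 2.0, 0.95),
}

for name, (outage, activation, capacity) in scenarios.items():
    lost = revenue_lost(outage, activation, capacity)
    total = HOURLY_REVENUE * outage
    print(f"{name:40s} lost CHF {lost:>9,.0f} of {total:>9,.0f} "
          f"({1 - lost / total:.0%} protected)")
```

With these placeholders, the 48-hour comparison comes out at roughly 69% of revenue protected versus 0%, which is the whole point of the 70% rule; the two fallback options land in the high 80s, in the same region as the 92% figure from the example above.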
Conclusion
BC plans fail not because of missing documents, but because of the wrong focus
IT-focused DR solves the wrong problem—your customers want business continuity, not server recovery
The cost of "good enough" is higher than most CFOs calculate
The paradigm shift: From "How do we restore system X?" to "How do we keep business function Y running?"
The 70% rule: Fast degraded operations beat perfect late recovery
What next?
Start small. Take one critical business process this week and answer honestly:
What happens if the technology supporting it fails for 24 hours?
Do we have a Plan B?
Who decides when we switch to Plan B?
If you stumble on these questions, you're not alone. But now you know where to start.
(And if you realize you need support to tackle this systematically—that's exactly what we do here.)
Further reading
Why most risk analyses fail – A pragmatic approach
NIS2 Directive for Swiss companies – The ultimate implementation guide
Is it cheaper to recover after a ransomware attack? – The real costs of being unprepared


