
20 AI ideas, but no idea where to start? Prioritizing AI use cases with the ODCUS framework.
Yannick H.

TL;DR
Most companies leave their first AI workshop with a list of 20+ ideas — and no idea which to tackle first. Those who prioritize by "coolness" instead of business value burn time and budget. With a simple 2x2 matrix (Business Value vs. Implementation Complexity), the list can be reduced to 2-3 concrete quick wins. Start there. The rest can wait.
After an AI workshop, the same thing always happens.
The whiteboard is full. Everyone is enthusiastic. Someone has suggested "Predictive Analytics," someone else "Automated Customer Communication," and a third person proposes training a custom language model. The list grows. So does the energy in the room.
And then Monday comes.
Who starts where? Which idea truly has priority? Who is responsible? What does it actually cost? And what happens if the pilot project fails?
We see this pattern with almost all customers who genuinely want to start using AI. The ideas are rarely the problem. The problem is the lack of a method for deciding which ideas are worth implementing — and in what order.
The expensive misunderstanding: Coolness instead of value
The most common mistake is not ignorance. It's enthusiasm without a framework.
When teams prioritize AI use cases, they tend to tackle the most exciting first. The project with the best story. The use case mentioned in the latest Forbes article. The idea everyone nodded at during the workshop.
That is human. And expensive.
We worked with a Swiss industrial company that wanted to start directly with predictive maintenance. Good idea, honestly — for a company with clean sensor data, an ML-experienced team, and a CTO who had already planned six months of lead time. For an SME with 180 employees and no unified data foundation, that was the wrong first step. Eighteen months and a lot of money later, they had a pilot project that did not scale.
What would have worked? Automated translation of supplier communications. Implementation time: three weeks. Annual time savings: over 400 hours. No data engineer required.
The framework: two axes, four quadrants
When we help companies evaluate AI use cases, we work with a simple 2x2 matrix. Two axes. Four fields.
Axis 1: Business Value
Here you ask: What concrete benefit does this use case bring to the company? Not "improves efficiency" or "optimizes processes" — but measurable: How many hours does it save per week? What revenue contribution can it generate? What risk does it reduce? What becomes cheaper, faster, or more reliable?
Business value must be expressible in francs or hours. If it can't be, the idea is not concrete enough yet.
Axis 2: Implementation Complexity
Here you ask: How difficult is it to implement? Four factors determine this: data readiness, technical complexity, organizational change, and regulatory requirements.
The two axes create four quadrants.
The four quadrants in detail
Quick wins: high value, low complexity
These are your first three projects. Full stop.
Typical quick wins for Swiss SMEs: Automated email categorization and forwarding (easily saves two hours per day), correspondence and translation automation (ROI within weeks), document classification (no in-house AI infrastructure required).
Strategic bets: high value, high complexity
These projects are worth it — but not now. They require preparation, resources, and usually a quick win as a foundation. Predictive maintenance for manufacturing companies belongs here. So does AI-assisted demand forecasting in procurement. Plan these projects. Don't start them before the first quick win.
Low-hanging fruit: low value, low complexity
Meeting summaries, automatic scheduling, simple FAQ chatbots. Do these things on the side when you have bandwidth. Don't actively prioritize them.
Avoid: low value, high complexity
Training your own LLMs. Developing custom AI models from scratch. Experimental architectures for use cases that an Excel macro would solve. We've seen companies put 18 months and six-figure budgets into projects that ultimately did less than ChatGPT plus two hours of prompt optimization.
The three use cases that almost always work
We have carried out this analysis with dozens of Swiss companies. Three use cases almost always emerge as quick wins — regardless of industry or company size.
1. Translation and correspondence
Bilingual communication is everyday business for many Swiss companies. AI-supported translation with company-specific context and terminology not only saves time, it also reduces errors. Implementation: a few days. ROI: immediately measurable.
2. Document classification and routing
Incoming documents (invoices, contracts, inquiries, complaints) today often end up in a generic inbox, from which someone manually forwards them. This is a perfect task for AI. No complex infrastructure. No data privacy problems if it is set up correctly.
3. Process automation for repetitive tasks
Every company has processes where someone repeats the same ten steps every day. Transferring data from one system to another. Compiling reports. Sending status updates. It is not glamorous. But it saves real hours.
How to carry out the evaluation in practice
The framework is simple. The application requires discipline.
Step 1: Collect use cases (without evaluation) – First gather everything, no filtering during brainstorming. Goal: 15-30 candidates.
Step 2: Quantify business value – For each use case: What is the concrete savings potential or revenue contribution? Not "large" or "medium" — but CHF or hours per year. If you can't estimate that, the use case is not mature enough for prioritization.
Step 3: Assign a complexity score – For each use case, give a simple 1-3 rating across the four dimensions.
Step 4: Fill in the matrix – Where does each use case land? Which quick wins are clear?
Step 5: Select the top 2-3 and start – Not five. Not ten. Two to three. Focus is the difference between a pilot project that is still a pilot a year later, and one that goes live after three months.
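The five steps above can be sketched in a few lines of code. This is a minimal illustration, not a tool the article describes: the use-case names, CHF figures, and the two thresholds (what counts as "high value" and "low complexity") are assumptions chosen for the example; in practice you would calibrate them in the workshop.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value_chf_per_year: float    # quantified business value (Step 2)
    complexity_scores: tuple     # four 1-3 ratings: data readiness, technical
                                 # complexity, org change, regulation (Step 3)

def quadrant(uc, value_threshold=50_000, complexity_threshold=8):
    """Place a use case in the 2x2 matrix (Step 4).

    Thresholds are illustrative assumptions, not values from the framework.
    """
    total_complexity = sum(uc.complexity_scores)  # range: 4 (easy) to 12 (hard)
    high_value = uc.value_chf_per_year >= value_threshold
    low_complexity = total_complexity <= complexity_threshold
    if high_value and low_complexity:
        return "quick win"
    if high_value:
        return "strategic bet"
    if low_complexity:
        return "low-hanging fruit"
    return "avoid"

# Step 1: collected candidates (hypothetical examples and numbers)
cases = [
    UseCase("Supplier translation", 60_000, (1, 1, 1, 1)),
    UseCase("Predictive maintenance", 200_000, (3, 3, 3, 2)),
    UseCase("Meeting summaries", 8_000, (1, 1, 1, 1)),
    UseCase("Custom LLM training", 10_000, (3, 3, 3, 3)),
]

# Step 5: keep only the top 2-3 quick wins, highest value first.
quick_wins = sorted(
    (uc for uc in cases if quadrant(uc) == "quick win"),
    key=lambda uc: uc.value_chf_per_year,
    reverse=True,
)[:3]

for uc in cases:
    print(f"{uc.name}: {quadrant(uc)}")
```

The point of the sketch is the discipline it enforces: a use case without a CHF or hours figure simply cannot be constructed, which mirrors the rule in Step 2.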
What companies experience with this approach
Companies that prioritize by business value instead of excitement factor reach their first measurable results faster. That sounds obvious. It isn't, though, when you're in the middle of a workshop and everyone is voting for the most exciting idea.
Specifically: an initial quick win — even a small one — creates something that AI initiatives often lack: internal trust. The team sees that AI works for more than just keynote presentations. The board sees a number. The employees who automated the process see free time.
That changes how the next project is evaluated. And the one after that.
By contrast, companies that start with a highly complex strategic bet often spend a year dealing with data problems, organizational resistance, and disappointed expectations — before they can even think about value creation.
The honest question
You have had a list of AI ideas since your last workshop. Maybe longer.
The question is not which idea sounds the most exciting.
The question is: Which of them can you implement in the next 90 days and then say "this saved us X hours per month"?
If you draw a blank, or if three ideas all seem equally important, the problem is not the ideas. It is the missing evaluation framework.
And developing that — that is often the most useful work you can do before the first AI project.
If you want to understand how we carry out this process with customers and which use cases could have the greatest leverage for your company, take a look at our AI consulting services or get in touch directly.


