Regulatory
4 Min.
FLARE Deep Dive (2/6): Why AI Validation Starts with Intended Use & Overall Risk

Tibor Zechmeister
Jul 15, 2025
At Flinn, we like to say that AI validation in MedTech is like your first cup of coffee: absolutely essential. Miss it, and everything that follows is guaranteed to unravel, because without a solid start, nothing runs as it should.
But where do you start? Not with the algorithm. Not with model performance. And certainly not with regulatory checklists.
The first and arguably most critical step is this: clearly defining your software’s intended use and assessing its overall risk. This isn’t just a formality; it’s the cornerstone upon which every other part of the validation process depends.

So, let’s dive into the foundation of our FLARE validation framework!
Why This Must Be Priority #1
Today, it's not enough to say a tool "uses AI". In MedTech, context is everything. AI that supports a decision is not the same as AI that makes one. And when patient safety is on the line, that distinction is critical.
Most issues we see in AI validation can be traced back to this first step, either because it was underestimated or simply left undocumented.
Understanding where, how, and by whom an AI-based system is used is the only way to responsibly evaluate the risks it introduces.
Key Takeaways: What Intended Use + Overall Risk Really Mean in Practice
Turning insight into action: So what does it actually look like to define the intended use and assess the overall risk of an AI tool? Let’s break it down:
1. Start With the Intended Use
Just like with any medical device, the intended use must be clearly described:
Who will use the software? Is it designed for Regulatory Affairs? Quality Management? Maybe even production teams?
What exactly does the tool do? Does it generate draft content for technical documentation? Flag duplicates in a dataset? Recommend actions in Post-Market Surveillance?
Which regulatory processes does it impact? Some tools touch a single workflow. Others may affect processes across departments, from regulatory to HR. The intended use needs to reflect that.
Depending on the complexity, this might be documented in a few lines or require several pages. Either way, it should be crystal clear!
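To make this more tangible, here’s a minimal sketch of what a structured intended-use record could look like. The IntendedUse class, its field names, and the example tool are illustrative assumptions for this article, not a FLARE template or a regulatory requirement:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IntendedUse:
    """Illustrative intended-use record; field names are assumptions, not a FLARE template."""
    tool_name: str
    intended_users: List[str]       # who will use the software (e.g., Regulatory Affairs, QM)
    description: str                # what the tool actually does
    affected_processes: List[str]   # which regulatory processes it touches
    decision_support_only: bool     # supports a human decision vs. acts autonomously

# Hypothetical example: a duplicate-flagging helper
duplicate_checker = IntendedUse(
    tool_name="Duplicate Checker",
    intended_users=["Quality Management"],
    description="Flags potential duplicate entries in a complaint spreadsheet",
    affected_processes=["Complaint handling"],
    decision_support_only=True,
)
```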
2. Then Assess the Overall Risk
Once the intended use is documented, it’s time to ask: How risky is the software in this context?
We suggest a pragmatic, context-driven classification:
Low-risk example: A tool that flags duplicate entries in a spreadsheet. Worst case? Some extra cleanup work.
High-risk example: An AI-powered PMS tool that autonomously identifies death-related incidents from vigilance reports. Inaccuracies here could have real-world consequences for patient safety.
What’s important: the risk class is not dictated externally. You, as the user and organization, define it based on your use case, product, and regulatory context.
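Building on the sketch above, the overall risk can then be recorded right next to the intended use. The RiskLevel values and the example classifications below are hypothetical; the point is simply that the classification is driven by your own context, not handed down by an external authority:

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"       # worst case: some extra cleanup work
    MEDIUM = "medium"
    HIGH = "high"     # worst case: real-world consequences for patient safety

# Hypothetical classifications, derived from each tool's context
overall_risk = {
    "Duplicate Checker": RiskLevel.LOW,       # flags duplicates; errors only cost rework
    "PMS Incident Screener": RiskLevel.HIGH,  # autonomously flags death-related incidents
}
```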
Let’s Wrap This Up: What Should You Do Next?
After reading this, ask yourself:
Do we have a documented intended use for our AI-powered tools?
Have we assessed and classified the overall risk on a feature level?
If not, that’s the place to start!
And if you’re unsure how to define or structure that assessment, this is exactly the kind of foundational work FLARE was built to support. We’re here to help. Let’s talk!
So… What Comes Next in the FLARE Framework?
Once you’ve defined the intended use and assessed the overall risk of your AI tool, FLARE helps you decide where to go from there, as sketched after the two paths below:
If the risk is low, the next step is to assess the vendor. Can you trust the provider behind the tool?
If the risk is medium or high, a deeper feature-level risk analysis is needed before moving forward.
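As a rough illustration of that branching, here’s a tiny routing sketch that reuses the hypothetical RiskLevel from above; the step names are taken from this article, the function itself is just an assumption:

```python
def next_flare_step(risk: RiskLevel) -> str:
    """Illustrative routing only: which FLARE step follows the overall-risk assessment."""
    if risk is RiskLevel.LOW:
        return "Assess the vendor: can you trust the provider behind the tool?"
    return "Run a deeper feature-level risk analysis before moving forward."

print(next_flare_step(RiskLevel.LOW))
print(next_flare_step(RiskLevel.HIGH))
```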
In our next article, we’ll explore what happens in the low-risk path:
How to evaluate your vendor’s AI maturity, transparency, and quality practices.