Regulatory

6 Min.

FLARE Series (4/6): From Overall Risk to Feature-Level Insight

Tibor Zechmeister

Sep 15, 2025

FLARE Framework visual showing AI software features (A, B, C) with different risk levels – high, medium, low – for risk-based validation in regulated environments.

In our last FLARE article, we explored how Vendor Assessment helps you build confidence in a low-risk AI tool before it ever touches your regulated workflows.

Now, we’re turning up the heat! What happens when your AI tool, or one of its features, is high risk? The kind of functionality where a wrong output isn’t just inconvenient, but could directly impact patient safety?

That’s where the next step of FLARE comes in: Risk Analysis per Stakeholder Requirement.

FLARE framework flowchart highlighting the Risk Analysis per Stakeholder Requirement step in the AI validation pathway.

Why This Step Exists: AI Feature Risks 101

Not all parts of a software product are created equal. In reality, most AI-enabled tools are a mix of features:

  • Some use no AI at all (and therefore fall outside the scope of this framework).

  • Some use AI but have minimal safety impact (e.g. keyword highlighting in literature review).

  • Others use AI in ways that are mission-critical, such as automatically classifying whether a case report describes a patient death.

When the stakes are this high, we need to go beyond assessing the software as a whole. We need to dissect it, identify the AI-powered features, and evaluate each one based on its potential risk to patients, compliance, and product safety.

These risks often align with specific features, but what we’re really assessing is whether the software reliably fulfills stakeholder requirements: the specific expectations or functions that carry regulatory or safety impact.

How It Works: AI Risk Analysis in Action

The Risk Analysis per Stakeholder Requirement step is essentially software triage:

  1. Break the tool into individual features: ideally from the vendor’s feature list, or from your own review

  2. Filter for those that use AI: non-AI features aren’t assessed here

  3. For each AI feature, evaluate both the probability of failure and the severity of potential harm
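To make the triage tangible, here is a minimal Python sketch. The Feature record and the feature names in it are illustrative assumptions, not part of FLARE itself; the point is simply that steps 1 and 2 boil down to listing features and filtering for the AI-powered ones.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """One software feature, from the vendor's feature list or your own review."""
    name: str
    uses_ai: bool           # step 2: only AI features are assessed in this step
    severity: str = ""      # step 3: assigned during the risk analysis
    probability: str = ""   # step 3: assigned during the risk analysis

# Hypothetical feature list for an AI-enabled tool (names are made up)
features = [
    Feature("Keyword highlighting", uses_ai=True),
    Feature("PDF export", uses_ai=False),
    Feature("Death-case classification", uses_ai=True),
]

# Step 2: non-AI features fall outside the framework, so filter them out
ai_features = [f for f in features if f.uses_ai]
print([f.name for f in ai_features])
# ['Keyword highlighting', 'Death-case classification']
```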

We don’t reinvent the wheel here. Instead, FLARE borrows from ISO 14971 — the international standard for risk management in medical devices — and applies its logic to AI features:

  • Severity: How bad is the outcome if the AI fails?

  • Probability: How likely is it to fail in real-world conditions?

This produces a risk level for each feature, typically visualized in a risk matrix.

FLARE AI Risk Acceptability Matrix visualizing risk levels based on severity and probability of AI failure in MedTech applications.

Let’s look at an example:

  • Severity: Critical (e.g. misclassification of a death case)

  • Probability: Occasional

Combined in the risk matrix, this yields a risk level of Unacceptable → the feature requires further, in-depth validation.
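To show how such a matrix can be encoded, the sketch below combines the two inputs into a risk level. The five-point scales and the acceptability thresholds are assumptions made for illustration; in practice they come from your own ISO 14971 risk acceptability policy, not from this article.

```python
# Illustrative scales, ordered from least to most severe / least to most likely.
SEVERITY = ["negligible", "minor", "serious", "critical", "catastrophic"]
PROBABILITY = ["improbable", "remote", "occasional", "probable", "frequent"]

def risk_level(severity: str, probability: str) -> str:
    """Combine severity and probability into a risk level (illustrative thresholds)."""
    score = SEVERITY.index(severity.lower()) + PROBABILITY.index(probability.lower())
    if score >= 5:
        return "unacceptable"   # requires further, in-depth validation
    if score >= 3:
        return "ALARP"          # as low as reasonably practicable
    return "acceptable"

# The example above: Critical severity, Occasional probability
print(risk_level("Critical", "Occasional"))  # -> 'unacceptable'
```

An index sum with fixed thresholds is just one way to encode this; a direct (severity, probability) → level lookup table works equally well and maps one-to-one onto the visual matrix.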

Why ISO 14971 Fits Perfectly Here

ISO 14971 already describes a structured process for:

  • Identifying hazards

  • Estimating and evaluating associated risks

  • Controlling those risks

  • Monitoring the effectiveness of the controls

By using a familiar and regulator-accepted framework, we avoid overcomplication and ensure that the AI risk evaluation will be understood and trusted by auditors.

And What Happens Next?

Well, once each feature is classified:

  • Low-risk AI features loop back to the Vendor Assessment stage (Step 3 of FLARE) for a general qualification review.

  • High-risk AI features move forward in the FLARE pathway to the next step: Required Sample Size, where we determine how much data we need to validate the feature with statistical confidence.

This keeps your validation resources focused where they matter most, rather than spread thin over low-impact features.
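Expressed as code, that routing is a single decision on the risk level. The sketch below assumes the two routes named above; how intermediate (e.g. ALARP) results are routed is a policy choice the article doesn’t prescribe.

```python
def next_flare_step(risk: str) -> str:
    """Route a classified AI feature to its next FLARE step (sketch)."""
    if risk == "unacceptable":
        return "Required Sample Size: in-depth validation"
    # low-risk features loop back for a general qualification review
    return "Vendor Assessment (Step 3 of FLARE)"

print(next_flare_step("unacceptable"))
# -> 'Required Sample Size: in-depth validation'
```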

Final Thoughts!

Risk Analysis per Stakeholder Requirement is where precision meets pragmatism. By separating the “nice-to-have” features from those with true safety implications, you can prioritize validation efforts intelligently.

Because in regulated AI, it’s not about validating everything equally; it’s about validating the right things thoroughly!

Curious how this might apply to your AI system? Let’s talk.

What’s Next in the FLARE Series?

Once you’ve classified your AI features and identified which ones are high risk, the next question is:

How much evidence is enough to validate them?

In our next article, we’ll explore the next step in the FLARE framework:
the required sample size, and why it’s the statistical backbone of robust AI validation.

Stay tuned!



© 2025, 1BillionLives GmbH, All Rights Reserved
