FLARE Series (6/6): When Validation Fails, What Comes Next? From Risk Assessment to Real-World Decisions

Tibor Zechmeister

Nov 13, 2025

Decision flow for handling an AI-related risk in medical devices, showing three response options under a warning icon: "EXIT – Find a new vendor" "CORRECT – Take corrective actions" "AI OFF – Deactivate the feature"

We’ve reached the season finale of our FLARE Series: the last chapter in our journey through the framework that helps make AI validation practical, structured, and regulator-ready.

So far, we’ve explored how to qualify AI tools step by step, from defining their intended use all the way to testing high-risk features with statistical confidence.

But let’s be honest: (sadly) not every validation story has a happy ending.

Sometimes, despite careful vendor evaluation and rigorous testing, the results are disappointing. The AI feature underperforms. The vendor can’t provide solid evidence. Or the data scientists struggle to explain what’s gone wrong.

FLARE framework diagram with Step 6 highlighted, showing how failed validation leads to stakeholder or vendor failure decisions.

So, what happens when validation fails?

Three Paths Forward: How to Respond When AI Doesn’t Deliver

As a user, you have several options, and which one you choose depends on your risk tolerance, the available alternatives, and how critical the feature is to your process.

  1. The Break-up: Choose a Different Provider

If there are comparable providers offering similar functionality, sometimes the simplest option is also the smartest:

“Your software doesn’t meet our quality requirements. We’ll look elsewhere.”

Switching providers is often the cleanest route when multiple solutions exist and the risk of non-performance outweighs the benefit of staying.

  2. Request Corrective Actions: Give the Vendor a Chance

But what if the provider is unique or strategically important to your operations? In a relationship, you might say: “We see so much potential here!” In that case, it might be better to collaborate rather than cut ties.

You can request corrective actions:

“We’d like to continue working with you, but these AI features need improvement. Please fix these issues and provide new validation evidence.”

This path emphasizes partnership over punishment and can be particularly effective when non-AI components already perform well.

  3. Cutting Some Ties: Disable Problematic AI Features

In some cases, only part of the software causes problems. The non-AI features might be robust, while the AI modules fall short.

In those situations, you can ask the vendor to deactivate or isolate the AI features until they meet acceptable quality standards.

That way, you can continue benefiting from the stable parts of the system while maintaining regulatory safety.

Decision by Context: Balancing Pressure, Risk, and Reality

We know, it’s a tough call! The right decision, though, depends on a few key factors:

  • How urgently you need the software in your workflow

  • How severe the identified issues are

  • How high the overall regulatory and business risk is

FLARE doesn’t prescribe a single answer; it helps you make a structured, transparent decision by putting all the facts right in front of you.
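
To make those trade-offs a bit more tangible, here is a deliberately simplified sketch in Python. The factor names and the logic are hypothetical illustrations, not part of the FLARE framework itself, and in practice this call stays with your quality and regulatory team.

def recommended_path(issue_severity: str, comparable_vendors_exist: bool, feature_urgently_needed: bool) -> str:
    # Illustrative only: maps context factors to the three response options.
    # The thresholds are hypothetical; FLARE structures the facts, you make the call.
    if issue_severity == "high" and comparable_vendors_exist:
        return "EXIT - find a new vendor"
    if issue_severity == "high" and feature_urgently_needed:
        return "AI OFF - deactivate or isolate the AI feature until it meets your quality standards"
    return "CORRECT - request corrective actions and fresh validation evidence"

# Example: severe findings, no comparable vendor, feature needed in the daily workflow
print(recommended_path("high", comparable_vendors_exist=False, feature_urgently_needed=True))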

And as we circle back from the final step to where it all began — well, we do love a good full-circle moment, don’t we?

The Full FLARE Journey: From Start to Finish

As this is the final article in our FLARE Series, let’s take one last look back at the path we’ve travelled together.

  • We began by defining intended use and assessing overall risk. Validation only makes sense if you know what you’re validating and how much risk it carries. This first step laid the groundwork for everything that followed, because even the most ambitious projects can only succeed on a solid foundation.


  • Low-risk features led us to Vendor Assessment. We evaluated the provider’s maturity, transparency, and adherence to regulatory standards. Check!


  • For high-risk features, we went deeper. We dissected the AI tool feature by feature and assessed each one based on its potential harm and failure probability, following the logic of ISO 14971.


  • Then we tackled the Required Sample Size. Here, we asked the key question: How much data do you actually need to validate a feature with statistical confidence? Depending on the level of risk and how rare the event was, we determined when hands-on user testing was sufficient and when it was time to rely on solid vendor evidence instead (a short sample-size sketch follows this recap).


  • And now, the last step: When validation fails. Because not every feature will meet expectations, and not every vendor will be ready. This final step ensures that even when things go wrong, your process stays structured, traceable, and defensible. And yes, we’ve got your back!
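
As a quick refresher on the arithmetic behind the sample-size step, here is a minimal sketch of the standard zero-failure ("success run") attribute-sampling formula. Treat it as an illustration under simplified assumptions; the thresholds FLARE actually applies are described in the earlier article of this series.

import math

def zero_failure_sample_size(max_failure_rate: float, confidence: float) -> int:
    # Smallest n such that observing zero failures in n independent test cases
    # demonstrates, at the given confidence level, that the true failure rate
    # is below max_failure_rate.
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_failure_rate))

# Example: demonstrating a failure rate below 5% with 95% confidence
# requires roughly 59 error-free test cases.
print(zero_failure_sample_size(max_failure_rate=0.05, confidence=0.95))  # 59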

Flowchart of the FLARE framework for validating regulatory AI. Shows six steps: Software Risk & Intended Use, Vendor Assessment, Stakeholder Risk Analysis, Required Sample Size, Human Testing or Vendor Review, and failure-handling options.

Final Thought: The Essence of FLARE

FLARE was designed to bring clarity to one of the most complex challenges in modern MedTech: validating AI systems in regulated environments.

It’s less a checklist and more a mindset:

  • Structured thinking instead of assumptions.

  • Evidence instead of opinions.

  • And transparency instead of black boxes.

At Flinn.ai, this philosophy guides everything we build.

Because quality management is really not only about perfection; it’s also about control, clarity, and the confidence to make the best of every situation.

And FLARE helps you do exactly that!

Curious how FLARE could apply to your AI validation process?
Contact us – we’re happy to talk!

Did we miss a step? See something in the framework that doesn’t fit your experience?
We’d love to hear your perspective. Reach out and let’s improve it together.


© 2025, 1BillionLives GmbH, All Rights Reserved