Introducing FLARE: A Practical Framework for Validating AI in Regulatory Affairs for MedTech

Tibor Zechmeister

Jun 16, 2025

Framework for Levelled AI Risk Evaluation. 01/06

Let’s be honest: Just a few years ago, the idea of using AI in regulatory processes would’ve raised more eyebrows than interest. But it’s 2025 and the conversation has changed.

The regulatory landscape in MedTech is shifting. Fast.
And it’s not just the companies themselves. Regulatory professionals, too, have become much more open to the role of AI in their daily work.

Because let’s face it: AI isn’t some futuristic “maybe” anymore; it’s here, and it’s everywhere. And with that, the regulatory workload is growing. New tools are emerging for literature review, automated vigilance screening, and even LLM-powered content creation for technical documentation. But while the range of use cases keeps expanding, so does the pressure: documentation demands are rising, audits are getting tougher, and all of this is happening in a context where skilled experts are in increasingly short supply.

In short: We need AI.
But most importantly: We need a way to trust it.

That’s where this new series begins.

Why We Built FLARE (Framework for Levelled AI Risk Evaluation)

At Flinn, we often hear the same concern from customers and from auditors, and frankly, in our own internal discussions:
How do we validate AI tools reliably?

It’s a fair question.
We’re dealing with powerful systems that promise a lot, but as anyone who's worked with AI knows: they don’t always deliver predictable results. Sometimes they “hallucinate”. Sometimes they simplify too much. And in the medical device industry, “good enough” simply isn’t good enough, especially when patient safety is on the line.

FLARE is a practical, risk-based validation framework designed specifically for the use of AI in regulatory affairs, with a focus on real-world use in fields like MedTech. Not a standard and not a silver bullet, but a methodical approach to help QA/RA teams assess AI features with confidence.

In this first article, we’ll give you a brief overview of the framework. In the coming months, we’ll publish a 6-part deep dive series, with one article dedicated to each step of FLARE.

Let’s Zoom Out: Why Validation is the Blocker

AI tools are no longer the bottleneck; validation is.

Even the best-performing features, like automated literature scans or LLM-generated summaries, face the same challenge once they’re ready for market use:

How do we prove they’re safe, accurate, and reliable?

There’s no clear standard yet. No ISO norm tailored to LLMs. No MDR annex for generative models. While general guidance exists (like ISO 14971 or IEC 62304), it doesn’t cover the specific risks of non-deterministic AI. Which means QA/RA teams are often left to figure it out on their own (or worse: put blind trust in what they don’t fully understand).

That's why we were keen to bring FLARE to life.

A First Look at the FLARE Framework

Flowchart of the FLARE framework for validating regulatory AI, showing six steps: Software Risk & Intended Use, Vendor Assessment, Stakeholder Risk Analysis, Required Sample Size, Human Testing or Vendor Review, and failure-handling options.

Brace yourself for what’s coming, because FLARE guides you through six key validation steps. Below is a quick overview; each step will be unpacked in more detail in future posts:

1. Software Overall Risk & Intended Use

Start with clarity: What exactly will the AI do? Who will use it? And how critical is the outcome?

An AI that flags duplicate entries is one thing. An AI that auto-classifies adverse events? Entirely different risk class. FLARE starts by categorizing the overall intended use, because validation without context is meaningless.
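
To make this step concrete, here is a minimal sketch of how a team might record what an AI feature does, who uses it, and how critical the outcome is. The field names and risk buckets are our own illustration (written in Python for brevity), not terminology prescribed by FLARE:

    from dataclasses import dataclass
    from enum import Enum

    class RiskClass(Enum):
        # Illustrative buckets only; FLARE's actual categories may differ.
        LOW = "low"        # e.g. flagging duplicate entries
        MEDIUM = "medium"  # e.g. drafting text that a human always reviews
        HIGH = "high"      # e.g. auto-classifying adverse events

    @dataclass
    class IntendedUse:
        # Minimal record of what the AI does, for whom, and how critical it is.
        feature: str
        users: str
        decision_impact: str
        risk_class: RiskClass

    literature_screening = IntendedUse(
        feature="Automated literature screening",
        users="RA specialists preparing clinical evaluations",
        decision_impact="Pre-filters candidate papers; a human makes the final call",
        risk_class=RiskClass.MEDIUM,
    )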

2. Vendor Assessment to Receive Confidence Index

Here’s where you become the auditor.

Ask questions like: What QA processes are in place? How does the vendor test and monitor their model? Are results explainable?

FLARE helps you assess whether the provider is qualified, not just technically, but methodologically and organizationally.
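
This article doesn’t spell out how the confidence index is calculated, so treat the following as a rough illustration only: a weighted checklist of vendor answers, rolled up into a single score. The criteria, weights, and 0-to-1 scoring scale are assumptions, not FLARE’s actual scheme:

    # Hypothetical vendor questionnaire: criterion -> (weight, score from 0.0 to 1.0).
    vendor_answers = {
        "documented QA process for model releases": (0.3, 1.0),
        "ongoing monitoring of model performance":  (0.3, 0.5),
        "explainability of individual results":     (0.2, 0.5),
        "regulatory domain expertise on the team":  (0.2, 1.0),
    }

    total_weight = sum(weight for weight, _ in vendor_answers.values())
    confidence_index = sum(weight * score for weight, score in vendor_answers.values()) / total_weight
    print(f"Vendor confidence index: {confidence_index:.2f}")  # 0.75 in this example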

3. Risk Analysis per Stakeholder Requirement (SR)

Not every feature carries the same weight.

FLARE encourages granular risk analysis of specific stakeholder requirements using tools like ISO 14971. It’s about isolating the SRs where failure would matter most and aligning the validation effort accordingly.
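
As a simplified picture of what granular risk analysis per SR can look like, here is a severity-times-probability scoring in the spirit of ISO 14971. The 1-to-5 scales and the example SRs are assumptions for demonstration, not values taken from the standard or from FLARE:

    from dataclasses import dataclass

    @dataclass
    class StakeholderRequirement:
        # Risk estimate for the case where the AI gets this requirement wrong.
        requirement: str
        severity: int     # 1 (negligible) .. 5 (catastrophic)
        probability: int  # 1 (improbable) .. 5 (frequent)

        @property
        def risk_score(self) -> int:
            return self.severity * self.probability

    srs = [
        StakeholderRequirement("Detect duplicate literature entries", severity=1, probability=3),
        StakeholderRequirement("Flag complaints that qualify as reportable events", severity=5, probability=2),
    ]

    # Spend the validation effort where a failure would matter most.
    for sr in sorted(srs, key=lambda s: s.risk_score, reverse=True):
        print(f"{sr.risk_score:>2}  {sr.requirement}")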

4. Required Sample Size

How many data points are enough?

Some features can be validated with a handful of test cases. Others, like rare but critical adverse events, demand large-scale validation.

FLARE supports probabilistic thinking to define what “reliable” really means for each case.
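
One way to put numbers on “how many is enough” is the classic success-run calculation: how many consecutive passing test cases you need to claim a given reliability at a given confidence level. FLARE itself may use a different statistical model, so take this as one possible sketch rather than the framework’s prescribed method:

    import math

    def zero_failure_sample_size(reliability: float, confidence: float) -> int:
        # Success-run formula: all n test cases must pass to claim the
        # stated reliability at the stated confidence level.
        return math.ceil(math.log(1 - confidence) / math.log(reliability))

    print(zero_failure_sample_size(0.95, 0.95))  # 59 passing cases
    print(zero_failure_sample_size(0.99, 0.95))  # 299 passing cases

The jump from 59 to 299 test cases just to move the reliability claim from 95% to 99% shows why rare but critical failure modes quickly demand large-scale validation.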

5. Expert Testing / Full Review with Vendor

Depending on the sample size, the testing approach changes.

Sometimes your internal team can validate the tool. In other cases, you’ll need structured reviews with the vendor’s data science team.

Either way, FLARE helps ensure the right people are involved and that domain knowledge is built into the process.
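
In pseudocode terms, the decision in this step might boil down to something like the sketch below. The threshold of 100 cases is an arbitrary placeholder of ours, not a number defined by FLARE; the point is simply that the required sample size drives who does the testing:

    # Illustrative decision rule only; the threshold is a made-up placeholder.
    def choose_testing_approach(required_sample_size: int) -> str:
        if required_sample_size <= 100:
            return "expert testing by the internal QA/RA team"
        return "full review together with the vendor's data science team"

    print(choose_testing_approach(59))   # internal expert testing
    print(choose_testing_approach(299))  # structured review with the vendor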

6. … What If It Fails?

Not every feature passes and that’s okay!

FLARE outlines three clear paths when an AI component doesn’t meet your expectations:

  • Request corrective actions

  • Use the product without the AI feature

  • Find an alternative vendor

The goal isn’t to block AI; it’s to integrate it responsibly.

The Start of This Journey & the Big Question: What's Next?

We are beyond excited about this new chapter!
In the second article of this series, we’ll take a deep dive into Step 1: Understanding Risk & Intended Use, and why that step alone can make or break your validation journey.

FLARE is a practical tool built for the real-world challenges we and our customers face.
And like all good frameworks, it’s not set in stone.

That’s why we want to hear from you!
What’s missing? What works in your organization? What doesn’t?

Let’s build this together - let’s talk.



© 2025, 1BillionLives GmbH, All Rights Reserved
