AI is making its way into every corner of work, including quality management. That isn’t surprising: quality teams are buried in documentation, training upkeep, investigations, and reporting, which are exactly the kinds of workflows AI can help streamline.
But in regulated environments, the questions about AI quality management tools shouldn’t be “Can AI help?” or even “Is it impressive?” The questions should be risk-based: What happens if the output is wrong? What happens to our data? And who controls when, where, and how AI is used?
To help QA leaders as they begin evaluating AI quality management tools, we spoke with Karin Ashkenazi, VP of Quality Assurance at ZenQMS, to better understand the questions leaders should be asking – and what “good” looks like when AI is introduced into regulated workflows.
As it turns out, AI adoption is more about governance than it is about capability. Below are the core questions QA leaders should ask before implementing an AI-driven quality tool, examples of what good controls look like in practice, and how we’re approaching compliant AI at ZenQMS.
1. What’s the risk level of the use case?
Start here, because this one question determines how every other question should be answered.
AI in quality management spans a wide range of risk. Some use cases are assistive – helping users find information faster or draft training quiz questions they’ll still review. Others can influence decisions that carry downstream compliance impact.
A useful way to frame it – and the way Karin often frames it – is simple: “If the AI output is wrong, what’s the consequence?” In some workflows, an incorrect result from AI is inconvenient. In others, it can affect product quality or patient safety.
That’s why “AI in quality assurance” isn’t one category. It’s a spectrum. Building search filters or suggesting training questions typically sits on the lower-risk end. Risk scoring, investigation conclusions, trend analysis, or regulatory reporting sit higher.
At ZenQMS, we’ve intentionally started with lower-risk workflows where users can immediately validate results, like AI-assisted filter building that shows you the logic it created before anything is applied.
Quick risk lens for AI tools in quality management:
- If the output is wrong, what’s the consequence – an inconvenience, or an impact on product quality or patient safety?
- Can a user immediately see and verify what the AI did before anything is applied?
- Does the output feed a decision or record with downstream compliance impact?
2. Will the AI be trained on our data?
This is usually one of the first questions QA leaders ask. And for good reason.
Quality data can include proprietary manufacturing and process information, deviations, CAPAs, supplier details, and regulated records. If that data is used to train public or third-party models, the risk can be unacceptable.
Karin also highlighted a practical nuance many teams miss: data protections can vary with the AI tool’s subscription tier. “Am I going to use the free version of ChatGPT or do I want to pay for a subscription with more data protection options because I’m going to upload my customer data?” The right question isn’t just “Is the vendor secure?” It’s also “What protections come with the version we’re using?”
In regulated environments, trust isn’t a control. Documentation is. As you evaluate AI features, ask for written commitments (terms, agreements, and, when applicable, a Data Processing Agreement (DPA)) that spell out:
- whether your data is used to train the vendor’s or any third party’s models
- how your data is processed, retained, and deleted
- which protections apply to the specific product tier or subscription you’re buying
ZenQMS’s stance is simple: customer data is not used to train third-party models. We treat “no training on customer data” as a baseline expectation for compliant AI adoption in regulated workflows.
3. Can we control when and how AI is enabled?
In regulated quality workflows, the safest AI adoption is deliberate adoption: you decide when AI is enabled, where it’s used, and who can access it. That control matters even more because, as Karin pointed out, AI doesn’t only show up when you buy a new “AI tool.” It can appear inside tools you already use as vendors roll out new AI features. When that happens, QA teams often need to reassess the tool’s risk, intended use, and validation impact – just as they would with any significant new capability.
That’s why QA leaders should be wary of AI quality management features that are enabled by default, especially when they touch regulated workflows. If AI “arrives” silently through an update, teams can lose control over when and how it becomes part of the quality process.
What to look for:
- AI features that stay off until you explicitly enable them
- clear documentation of what each AI feature does and which workflows it touches
- advance notice of new AI capabilities in releases, so you can reassess risk, intended use, and validation impact before adopting them
ZenQMS AI features are off by default and opt-in – customers explicitly enable AI features so adoption can align with internal procedures and validation plans.
4. How is our data processed, stored, and accessed?
Even if a vendor doesn’t train on your data, your data is still being processed, and QA leaders need transparency on how.
This is similar to supplier qualification, but AI raises the stakes because it can introduce additional processing layers. Karin’s advice is straightforward: these are questions you should ask any vendor, but especially those with AI features.
A vendor should be able to answer these clearly:
- Where is our data stored and processed?
- Who can access it, including any sub-processors or additional AI processing layers?
- Is it isolated from other customers’ data?
- What happens to it when it’s deleted?
If you’re evaluating AI tools for quality assurance, look for the fundamentals Karin emphasized: clear answers on where your data is stored, who can access it, whether it’s isolated, and what happens when it’s deleted – backed by documentation you can rely on.
5. Is the AI explainable?
One of the biggest adoption barriers for AI in regulated environments is the “black box” problem.
If an inspector asks, “How did you get this result?” the answer can’t be “The AI said so.” Explainability matters, not as a preference, but because quality systems have to be defendable.
A good AI feature should make it clear:
- what information the AI used to generate its output
- what it actually produced, before anything is applied
- what the human reviewer did with it – accepted, rejected, or edited – and where that decision is recorded
This is also where “human in the loop” becomes non-negotiable. Especially as risk increases, AI should recommend and assist, but humans should review, accept, reject, or edit before results become part of the quality record.
This principle is embedded in how ZenQMS is approaching compliant AI. For example, with AI-assisted filtering, the system shows the filters it generated so users can validate accuracy before acting. And when AI is used, activity is logged so organizations can show how AI was used in context, alongside the human decisions made afterward.
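To make the pattern concrete, here’s a minimal sketch of what a human-in-the-loop review gate with an audit trail can look like. It’s illustrative Python, not ZenQMS’s implementation, and every name in it (AIProposal, review_and_apply, audit_trail) is hypothetical; the point is simply that the AI’s suggestion is surfaced, a human decides, and both are recorded together.

```python
# Hypothetical sketch of a human-in-the-loop review gate with an audit trail.
# Not ZenQMS's implementation - the names and structures are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIProposal:
    """What the AI suggests, surfaced to the user before anything runs."""
    summary: str        # plain-language description of the generated logic
    filter_logic: dict  # e.g. {"field": "status", "operator": "=", "value": "Open"}


@dataclass
class AuditEntry:
    """Ties the AI suggestion to the human decision made afterward."""
    timestamp: str
    user: str
    proposal: dict
    decision: str                  # "accepted" | "rejected" | "edited"
    applied_logic: Optional[dict]


audit_trail: list[AuditEntry] = []


def review_and_apply(proposal: AIProposal, user: str, decision: str,
                     edited_logic: Optional[dict] = None) -> Optional[dict]:
    """Apply nothing until a human accepts (or edits) the AI's proposal."""
    if decision == "accepted":
        applied = proposal.filter_logic
    elif decision == "edited":
        applied = edited_logic
    else:  # "rejected" - the AI output never becomes part of the record
        applied = None

    # Every AI-assisted action is logged next to the human decision,
    # so "how did you get this result?" has a documented answer.
    audit_trail.append(AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user=user,
        proposal=vars(proposal),
        decision=decision,
        applied_logic=applied,
    ))
    return applied


# Example: the AI drafts a filter, the reviewer checks the logic, then accepts it.
draft = AIProposal(summary="Open records assigned to QA",
                   filter_logic={"field": "status", "operator": "=", "value": "Open"})
review_and_apply(draft, user="j.smith", decision="accepted")
```

However a vendor implements it, the essentials are the same: nothing is applied without a human decision, and that decision is traceable.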
6. What guardrails are in place when the AI gets it wrong?
AI can hallucinate. It can be biased. It can drift over time. And depending on how a system is designed, it can even produce inappropriate outputs in response to bad inputs.
Karin’s framing here is especially helpful: vendors should be able to define the “borders” of acceptable behavior and show how they detect when the AI moves outside those boundaries. In other words, the question isn’t just “Do you have guardrails?” It’s “How do you know when outputs aren’t reliable – and what happens next?”
In a regulated environment, trust can’t be assumed. It has to be engineered through safeguards like output controls, monitoring, and clear pathways to flag poor results.
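As a rough illustration of what “borders” and a flag pathway can mean in practice, here’s a small, hypothetical sketch – again Python, not tied to any specific product: the acceptable output space is defined up front, outputs are checked against it, and reviewers get a simple way to flag anything that slips through.

```python
# Hypothetical guardrail sketch: define the "borders" of acceptable output,
# detect when an AI-generated filter drifts outside them, and give reviewers
# a pathway to flag poor results. Illustrative only.
ALLOWED_FIELDS = {"status", "owner", "due_date", "severity"}     # the defined border
ALLOWED_OPERATORS = {"=", "!=", "<", ">", "contains"}


def outside_borders(filter_logic: dict) -> list[str]:
    """Return the reasons an output falls outside acceptable bounds (if any)."""
    issues = []
    if filter_logic.get("field") not in ALLOWED_FIELDS:
        issues.append(f"unknown field: {filter_logic.get('field')!r}")
    if filter_logic.get("operator") not in ALLOWED_OPERATORS:
        issues.append(f"unsupported operator: {filter_logic.get('operator')!r}")
    return issues


flagged_outputs: list[dict] = []   # a monitoring queue for reported problems


def flag_output(output: dict, reason: str) -> None:
    """Record a reviewer-flagged output so it can be monitored and corrected."""
    flagged_outputs.append({"output": output, "reason": reason})


# Example: the AI invents a field that doesn't exist, and the check catches it.
suggestion = {"field": "patient_name", "operator": "="}
problems = outside_borders(suggestion)
if problems:
    flag_output(suggestion, "; ".join(problems))
```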
This is why ZenQMS treats guardrails and output controls as essential. The goal isn’t to pretend AI never makes mistakes. It’s to reduce the likelihood of unsafe outputs and ensure users can identify and correct problems before they become risks.
7. How is the AI validated and change-controlled?
QA leaders often ask whether AI can be validated the same way as traditional software. Karin’s view is yes – but with extra considerations.
AI qualification still needs the same fundamentals you expect from any regulated software: defined intended use, validation evidence, and controlled change management. But it’s different because AI systems can evolve – models may be recalibrated or even swapped – so you need to understand how changes are governed and documented.
A vendor should be able to explain:
- the intended use of each AI feature
- what validation evidence accompanies releases that include AI
- how model changes – recalibration, retraining, or swapping models – are governed, documented, and communicated to customers
Karin referenced ISO/IEC 42001 (an AI management system standard) as a helpful benchmark for whether a vendor understands the control expectations around AI. It’s not the only signal, but if a vendor can show alignment (or certification), it can indicate stronger governance maturity.
At ZenQMS, AI features follow our SDLC and release controls, and validation evidence is included in release documentation.
8. What’s our responsibility as the customer?
One of Karin’s strongest points is also the easiest to overlook: AI adoption is a shared responsibility. Even if you trust your vendor, you still need a plan for how your organization will use AI responsibly.
That means internal clarity on:
- which workflows AI is approved for, and which it isn’t
- who reviews and approves AI outputs before they become part of the quality record
- how AI use is documented, covered in training, and reflected in your procedures
In many organizations, AI risk isn’t created by the tool alone. It’s created when teams incorporate AI without shared rules for how it’s used.
AI adoption in quality management isn’t a matter of “if.” It’s a matter of how.
For QA leaders, responsible AI starts with a risk-based approach: define intended use, match controls to risk, hold vendors accountable, and build workflows that are explainable and auditable. The organizations that adopt AI well won’t be the ones who move fastest. They’ll be the ones who started by asking the right questions.
And for many teams, the best place to start is the same place ZenQMS has started: low-risk, human-verifiable AI features that reduce toil without shifting decision-making authority – then scale adoption as governance, validation maturity, and confidence grow.