Video Q&A from our Part 11/Annex 11 webinar

Questions and Answers on 21 CFR Part 11 and Annex 11
Welcome to another video question and answer from our most recent webinar, "Continuous monitoring system compliance with Part 11 and Annex 11: Easier than you think," on 21 CFR Part 11 and Annex 11 and how they apply to environmental monitoring systems. In this video, our Senior GxP Regulatory Expert Paul Daniel answers questions we didn't get to during the webinar.
Find two new application notes on 21 CFR Part 11 and Annex 11 below and a link to a webinar on Cybersecurity in GxP systems.
If you have further questions, please post them in the comment section below the transcript of the video.
Why aren't accuracy checks required in a monitoring system?
In the context of this webinar, the requirement for accuracy checks is narrowly defined by Annex 11 as:

“For critical data entered manually, there should be an additional check on the accuracy of the data. This check may be done by a second operator or by validated electronic means. The criticality and the potential consequences of erroneous or incorrectly entered data to a system should be covered by risk management.”

In a monitoring system, I would argue that no critical data is entered manually.  Since there is no manual entry of critical data, accuracy checks as defined in Annex 11 would not apply.

Now, if we entertain the idea of “accuracy checks” more broadly, it sounds reasonable to check the accuracy of data. And of course, a monitoring system like viewLinc has several ways to automatically check data for accuracy. First, the sensors are all calibrated. Second, when data is transmitted into the system, the data packets are checked for completeness to ensure they haven’t been changed or altered during transmission. Third, when data such as parameters or thresholds are entered into viewLinc manually, they are verified for format. For instance, an email address must contain an @ sign and end with .com, .net, or another valid suffix, and all thresholds must be numeric. However, these are not necessarily critical data, even though they are entered manually. That may seem counter-intuitive, but it is what the Annex 11 guideline stipulates. We could do an entire webinar on things that are missing from Part 11 and Annex 11.
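To make the idea of format verification concrete, here is a minimal sketch in Python of the two checks described above: an email-address format test and a numeric-threshold test. This is purely illustrative and is not viewLinc's actual implementation; the function names and the simplified email pattern are our own assumptions.

```python
import re

# Simplified pattern: something@domain.suffix (real email validation
# is considerably more involved than this illustration).
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

def is_valid_email(value: str) -> bool:
    """Check that an entered notification address looks like an email."""
    return bool(EMAIL_PATTERN.match(value))

def is_valid_threshold(value: str) -> bool:
    """Check that an entered alarm threshold is numeric."""
    try:
        float(value)
        return True
    except ValueError:
        return False

print(is_valid_email("qa@example.com"))   # True
print(is_valid_email("not-an-email"))     # False
print(is_valid_threshold("25.0"))         # True
print(is_valid_threshold("warm"))         # False
```

Checks like these catch malformed entries at the point of input, but as noted above, they verify format, not whether the entered value itself is correct or critical.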
Audit trails need to be available and convertible to a generally intelligible form and regularly reviewed. Do you have any experience of what the authorities consider to be “regularly”? For example, annually, or before batch release?
The audit trail should be reviewed at an interval that makes sense. Usually, this is not a fixed time period, but a time frame that coincides with a review or approval event involving the data in question.

For example, if I am approving a manufacturing process step that requires a manufacturing suite to be maintained between 10 °C and 25 °C, this would be a good time to check the audit trail and ensure no one has changed the data.  If I wait a year before reviewing the data, that batch of product would be long gone and there would be nothing to do but start testing my retain samples, or maybe even recall the batch. 

Maybe a LIMS collecting data from a chromatograph would be a better example. Chromatograph data needs to be adjusted and manipulated to be understood and evaluated. If I were in Quality Control and had just tested a sample with a chromatograph, the relevant audit trail records should be reviewed at the same time as the chromatograph report, so that the approver can verify that the correct changes were made.
So, given the two choices in your question, which were yearly (a time period) or at batch release (a defined event), I would go with the defined event. But just what defined event is appropriate will depend on the computerized system you are dealing with. A well-designed system should have a way to make sure that record reviewers and approvers can easily access the audit trail for the defined event under review.
Does viewLinc monitor data when the system is off-line and inform users when the system comes back on-line?
The viewLinc system is designed to be on-line all the time.  The viewLinc server should run 24/7 and the viewLinc services should be running on the server at all times.
If for some reason, the viewLinc server has gone off-line, and comes back on-line, the following events will occur: 
  • The Event Log will record the recovery of the system. 
  • On the next system scan event, the system will backfill any data that was collected by the data loggers but not transmitted to viewLinc while it was off-line. 
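The backfill step above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not viewLinc's actual code: the data loggers keep their own record of samples, and after an outage the server pulls in any samples newer than the last timestamp it stored.

```python
def backfill(server_records, logger_records):
    """Return the samples the server is missing, in time order.

    Each record is a (timestamp, value) tuple. The logger's local
    record is the source of truth for what was measured during
    the outage.
    """
    # Latest timestamp the server actually received (None if empty).
    last_seen = max((ts for ts, _ in server_records), default=None)
    missing = [rec for rec in logger_records
               if last_seen is None or rec[0] > last_seen]
    return sorted(missing)

# The logger kept recording while the server was off-line:
server = [(1, 21.0), (2, 21.1)]
logger = [(1, 21.0), (2, 21.1), (3, 21.3), (4, 21.2)]
print(backfill(server, logger))  # [(3, 21.3), (4, 21.2)]
```

The key design point is that no data is lost during server downtime, because recording happens at the logger and transmission is reconciled afterward.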
Do you limit a validation process to your system, or enlarge it to the whole facility? 
Yes.  Both. I think 21 CFR Part 11 is speaking directly to validation of a single system.  But, before we can really begin validating single systems, we should have a Validation Master Plan in place that defines the systems that need to be validated. The plan will include the criticality or priority order in which systems should be validated.  We should also have validation procedures in place that define how we perform validations at our facility. 
The side effect of this is that we can’t really validate a single system without having a validation strategy in place for the facility. It's like building a wall in that we build one row of bricks at a time.  Validation of a system according to 21 CFR Part 11 is like a brick in the fourth row up, sitting on top of the procedure, policy, and planning bricks below.  Part 11 is focused on the system only, but by implication it must include the whole facility to some extent.  And this would necessarily include the related systems, such as network infrastructure, and the IT backup processes. Annex 11 simply says all this more directly.  But the expectations in terms of validation are similar in both Part 11 and Annex 11.

New Webinar

Cybersecurity in your GxP monitoring system

Cybersecurity is an increasing concern in GxP-regulated applications that use computerized systems. In this webinar, we discuss how the Vaisala viewLinc Continuous Monitoring System (CMS) uses advanced security features to reduce system exploitability. We look at common software, network, and user vulnerabilities and threats to data integrity. Learn how to ensure you have controls and procedures that secure your computerized systems.
Learning objectives:

  • Security procedures
  • Vulnerabilities in Transport Layer Security (TLS)
  • Modes of attack: Replay, Man-in-the-Middle, SSL, JavaScript, Brute force, etc.
  • IT support & vendor development
  • Software security features

Learn more and register
