7 Data Integrity Hacks for GxP Compliance

Paul Daniel, Vaisala
Senior GxP Regulatory Compliance Expert
In this blog we answer customer questions on data integrity for GxP monitoring applications. 


Question: Was there a specific event that prompted a new data integrity guidance?

Data Integrity is an emerging area of concern for the FDA, as you can see in the part of this webinar where we look at recent warning letters.  In 2014 we saw 10 FDA warning letters with data integrity violations; by 2017 we saw 56 such letters.
There was a string of events at multiple sites in Asia involving improper calibration and the manipulation of raw data on HPLCs. These obvious data integrity failures galvanized the increase in regulatory scrutiny. The events were so similar in so many ways that they have given rise to an apocryphal story that there was one specific event that started it all. But all the stories share certain elements: high-tech computerized systems, globalized supply chains, and an immature quality culture.

In one of these true events, the lab in question had an "off" practice. Instead of using a standard to calibrate the device, they simply used the sample of the batch they were analyzing. If that sounds confusing, a simple analogy helps: what they did was like deciding you are two meters tall by taking a string as long as you are tall and calling the string two meters. You then measure your height with the string and conclude that you are, in fact, two meters tall. Circular reasoning.
Again, many of these cases involved newer facilities without a mature, quality-focused culture, staffed with newly trained, presumably well-meaning employees using high-tech computerized equipment. And, crucially, they used that technology incorrectly, albeit in ways that weren’t obvious.

The US Congress passed the FDA Safety and Innovation Act in 2012, requiring the agency to inspect facilities in foreign countries that manufacture drugs or drug components for distribution in the U.S. In 2014, the FDA sent warning letters to companies with plants in India, Australia, Austria, Canada, China, Germany, Japan, Ireland, and Spain. Observations ranged from data integrity failures to inadequate quality testing and contamination incidents.

Question: Regarding access controls, will more system administrators be an issue that could bring an observation during an inspection? What is an acceptable number? 3 to 4? 8 to 10?

Generally, having extraneous levels of administrative access is a problem. But it isn’t really possible to put an exact number on it without knowing more about the kind of system you are dealing with. What does the system do? How many system users are there? Is the system in use 24/7, or just during working hours?

At the very least, you will have two admins for any system. One admin is too few because that person may quit one day. Ten admins… that sounds like a lot if you only have 20 users on a system. But if you have 2,000 system users, 10 administrators may not be enough. The key is simply making sure that the only people with administrative access are those who need administrative access to do their jobs.
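One practical way to keep that promise is a periodic access review. The sketch below is purely illustrative; the user names, roles, and approved-administrator list are invented, not taken from any particular system. It simply compares who actually holds administrative rights against who is documented as needing them.

```python
# Hypothetical access review: flag anyone holding administrative rights
# who is not on the documented, approved administrator list.
# The user names, roles, and approved list below are made up for illustration.

system_users = {
    "jsmith": "Administrator",
    "mlee": "Administrator",
    "tkhan": "Supervisor",
    "rdiaz": "View Only",
}
approved_admins = {"jsmith", "mlee"}

actual_admins = {user for user, role in system_users.items() if role == "Administrator"}

unapproved = actual_admins - approved_admins
unused_approvals = approved_admins - actual_admins

if unapproved:
    print(f"Unapproved administrative access: {sorted(unapproved)}")
if unused_approvals:
    print(f"Approved admins without an account: {sorted(unused_approvals)}")
if not unapproved and not unused_approvals:
    print("Administrative access matches the approved list.")
```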

Question: What constitutes "adequate control" to prevent changes or unauthorized access to data?

The standard approach is to limit access to the system with unique user credentials. In most software, this is done with unique usernames and passwords. That is how we control access to the system. Next, we decide which users can make changes. Typically this is done with a user profile to which rights are assigned, which determines what actions a particular user can take in the system. Classically, this is defined by different types of users, such as:
• View Only
• Supervisor
• Manager
• Administrator

The key concern is to make sure that the abilities of each user class match their job roles. Giving people unnecessary access creates opportunities for error.
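To make the idea of rights assigned to a user profile concrete, here is a minimal, hypothetical sketch. The role names follow the list above, but the permission names and the mapping itself are illustrative assumptions, not a description of viewLinc or any specific product.

```python
# Hypothetical mapping of user roles to permissions in a monitoring system.
# The roles mirror the classes listed above; the permission names are invented.
ROLE_PERMISSIONS = {
    "View Only":     {"view_data"},
    "Supervisor":    {"view_data", "acknowledge_alarms"},
    "Manager":       {"view_data", "acknowledge_alarms", "edit_thresholds"},
    "Administrator": {"view_data", "acknowledge_alarms", "edit_thresholds",
                      "manage_users", "configure_system"},
}

def can_perform(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A Supervisor matches their job role: they can acknowledge alarms,
# but they cannot manage user accounts.
print(can_perform("Supervisor", "acknowledge_alarms"))  # True
print(can_perform("Supervisor", "manage_users"))        # False
```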

Question: How do we provide controls to prevent data omission/deletion?

This is very close to the question about changes and unauthorized access. In the case of omissions and deletions, the standard control is the audit trail. A properly designed audit trail will capture events that involve record creation, modification, and deletion. But an audit trail is only useful if we review it to detect when data has been modified or deleted. Omission is harder to detect, but it should be caught either through review of the audit trail or through adequate review of the data during reporting processes, making sure the original data record is attached.
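As a sketch of what that review might look like in practice, the example below assumes the audit trail has been exported to a CSV file with columns named timestamp, user, event, and record_id. Those column names, the file name, and the "Record deleted" event text are assumptions for illustration, not the export format of any specific system.

```python
import csv

def find_deletions(audit_trail_csv: str) -> list[dict]:
    """Return every audit-trail row whose event indicates a record deletion."""
    deletions = []
    with open(audit_trail_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["event"].strip().lower() == "record deleted":
                deletions.append(row)
    return deletions

# Example usage with a hypothetical export file name:
for entry in find_deletions("audit_trail_export.csv"):
    print(f"{entry['timestamp']}  {entry['user']}  deleted record {entry['record_id']}")
```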

Question: At our facility we have a shared login for our monitoring system. Is that going to raise a flag under audit?

It will definitely raise a big red flag.  Any shared login is going to stand out because it creates a situation where changes or actions in the system won’t be attributable to a specific person.  Remember the ALCOA acronym?  One of those A’s stands for attributable.  In Vaisala’s viewLinc system, we can create a user with view-only access, who can’t make any changes.  It might be easy to think that having a shared view-only account would not create a data integrity problem because changes can’t be made, but…

At the very least, it’s going to catch an auditor’s attention and you’ll need to explain it. You really want to avoid ever having to explain anything to an auditor, because it’s never guaranteed the auditor will agree, and it just gives the auditor more questions to ask. So, in general, a shared login is a dangerous practice. Ideally, each user has a unique, non-shared account. Access control is the natural next step; just because a user can’t change the data doesn’t mean it’s okay for them to be looking at it.


Question: How should we handle a case where we suspect some data may have been deleted accidentally?

There are two questions here:
One is that you think data has been deleted. The other is that you think it might have happened accidentally. My first step would be to review the audit trail. Part 11 requires that audit trails capture the creation, modification, and deletion of records, so you should be able to determine whether data was deleted. If the data was deleted (and you want it back in the system), you would then start a deviation process in which you recover the deleted data. Exactly how you recover the data will depend on the system.

Now let’s deal with the fact that the data was deleted accidentally. In this case, there might be a problem in your workflow design, or perhaps in the employee’s access level. So there should be a CAPA, probably as part of the deviation, to evaluate this. At the very least, you’ll need to retrain the employee, review their access level, and possibly reconfigure your workflows.

Question: If we copy raw data to a CD and erase that data from the hard drive, how can we ensure that all raw data is copied to the CD before deleting it permanently from the drive?

What you are describing here is a data migration process.
You need to build in verification steps so you can ensure that all the data was migrated and that the data wasn’t changed during the migration process. You could verify that the number of files is identical before and after the migration, and that the size of the data folders hasn’t changed. If you are doing a data migration like this regularly, you might want to consider validating the process.
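For example, a simple verification step (sketched here with hypothetical folder paths) could compare file names, counts, and sizes between the source drive and the copy before anything is deleted.

```python
from pathlib import Path

def verify_migration(source_dir: str, dest_dir: str) -> bool:
    """Compare file names and sizes between source and destination.
    Returns True only if the destination contains every source file
    at the same size -- check this before deleting anything."""
    source = {str(p.relative_to(source_dir)): p.stat().st_size
              for p in Path(source_dir).rglob("*") if p.is_file()}
    dest = {str(p.relative_to(dest_dir)): p.stat().st_size
            for p in Path(dest_dir).rglob("*") if p.is_file()}

    missing = set(source) - set(dest)
    size_mismatch = {name for name in source.keys() & dest.keys()
                     if source[name] != dest[name]}

    if len(source) != len(dest):
        print(f"File count differs: {len(source)} source vs. {len(dest)} destination")
    if missing:
        print(f"Missing from destination: {sorted(missing)}")
    if size_mismatch:
        print(f"Size mismatch: {sorted(size_mismatch)}")

    return not missing and not size_mismatch and len(source) == len(dest)

# Hypothetical paths -- only proceed with deletion if verification passes:
# if verify_migration("D:/hplc_raw_data", "E:/cd_staging"):
#     print("Safe to delete the originals per your approved procedure.")
```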

Further Reading on this Topic:

Articles on GxP standards & regulations

Visit our repository of articles, white papers, application notes and eBooks...