Video Q&A: Switching monitoring system servers & mapping study know-how

Life Science

Scroll down to see the edited transcript


[00:00:14] Hi, my name's Janice Bennett-Livingston. I'm Vaisala’s global life science marketing manager. Today we're going to do something a little different. For many years now, we've had webinars and blogs where our senior regulatory expert, Paul Daniel, answers questions and shares information about how to manage environments for GxP compliance. In today’s video, Paul is going to answer three questions he’s received by email. So, Paul, how are you today?

[00:00:58] I'm good. It's fun to be working with you remotely today.

[00:01:06] Yes! Do you want to just go right into questions?

[00:01:18] Yeah. Let's see if we can bring my 23 years of GxP and validation experience to bear and maybe help some people with their problems during this time of working from home.

[00:01:29] OK. So question one. This is from a customer in Sweden running a validated instance of viewLinc. The plan is to replace the computer approximately every three years. The question is, when there's no software version change or service update, but only a move from an old to a new computer, do they need to perform a validation?

viewLinc continuous monitoring system software welcome tour

[00:02:01] Yes, there definitely has to be some validation involved because the computer is new. That likely means a new operating system, a new CPU, and new components like the hard drive and NIC (network interface controller) cards. This creates an entirely new environment for viewLinc to operate in. So they need to prove that the software operates as expected in that new environment.

Given that some level of validation is needed, this would probably be a really good opportunity to update to the latest version of viewLinc. We could then treat it all as a new system, but with the existing viewLinc configuration imported and the data migrated. In some ways, updating viewLinc when you update hardware is really the easiest approach, and the protocol documents are already available for this practice.

Now, if you aren't upgrading the software but are staying with your existing version of viewLinc, we could make the argument that the software itself was validated on the old computer. Therefore, all that needs to be done on the new computer is to show that viewLinc is operating as expected. This would require some basic tests: log in, verify the services are running, verify that alarms and reports can still be generated. You’d also want to verify that all sensors are still connected and sending data and that critical configuration details haven't changed. This list [of things to verify] gets longer and longer.

All of this would need to be done under a change control process, and at minimum we would need to draft some test documents to capture this information. So this process is going to take time, energy, and money. In the end, you might find that it's easier to update viewLinc and spend about three days validating the new system using existing protocol documents from Vaisala. It's always up to the customer to make decisions about how to validate. But for moving to a new company computer or doing an upgrade, the easiest path, in terms of risk and effort, is to simply use the template protocols we offer.

[00:04:04] OK, next question: “We are planning a room temperature mapping using the DL1416, a 4-probe temperature logger, and possibly also the DL1000, another temperature data logger model. We’d also like to do a humidity mapping for a room with the DL2000 data loggers. Is there a recommended effective radius for these units?”

[00:04:45] I think you're on the wrong path here if you're trying to determine an effective radius, or a distance between loggers, based on the data logger itself. Far more important than the data logger is the parameter you're measuring, in this case temperature. Focus on the details of the environment you're trying to map, rather than on the data logger.

You can easily have a hotspot located just one meter away from a cold spot, especially when you take into account how HVAC units cycle. You wouldn't want to use some hard number based only on the type of logger you have. There are guidelines for logger placement in multiple guidances. As an example, the ISPE good practice guides recommend eight loggers for a unit smaller than two cubic meters and fifteen loggers for a unit between two and 20 cubic meters. Unfortunately, they don't go much further for bigger spaces; they don't offer specific guidance.

So here's what I would do in a large cold room: I'd keep my loggers no more than eight meters apart. And in large environments at ambient temperature, I’d want to see those loggers no more than 20 meters apart. Remember, this isn't going to be found in a guidance anywhere. It’s based on my years of mapping experience and the nature of how control and HVAC systems work in different sized spaces.

The question also mentions humidity. Humidity mapping is very different from temperature mapping. I still remember the first time I did a humidity mapping, because it failed miserably. I didn't understand that relative humidity is relative to temperature. If you're in a room with no source of new air, or of any air with a different humidity, the concentration of gaseous-phase water molecules will almost instantly equilibrate across the entire room, even across a huge warehouse. That’s simply how a gas behaves. The same would be true for any other gas: oxygen, nitrogen, CO2, you name it.

So let's imagine we're trying to map the concentration of oxygen in a room. You already know from high school science class that the concentration would be the same everywhere in the room except right in front of your face as you breathe on the sensor. The same thing occurs with gaseous water if we are measuring the gas concentration or absolute humidity. But, in measuring relative humidity, remember it’s dependent on temperature.

Let's say we have a room with one percent gaseous water vapor. That's the gas concentration of water. Now, relative humidity is relative to temperature. Take some spot in this room, say right beside my head: if it's a 20 degree Celsius spot, it’s going to have a relative humidity of 43 percent. But another spot over here at 22 degrees C is two degrees hotter, and it's going to have a lower relative humidity of 38 percent.

The relative humidity is lower because the temperature is higher, even though the concentration across the entire space is exactly the same, at one percent. As long as absolute humidity is stable in your room, any differences in relative humidity are caused only by differences in temperature.
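To make that arithmetic concrete, here is a minimal Python sketch (not from the transcript) that reproduces Paul's numbers. It uses the Magnus approximation for saturation vapor pressure and assumes sea-level atmospheric pressure; function names are illustrative.

```python
import math

def saturation_vapor_pressure_kpa(temp_c: float) -> float:
    """Saturation vapor pressure of water in kPa (Magnus approximation)."""
    return 0.61094 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def relative_humidity(vapor_fraction: float, temp_c: float,
                      total_pressure_kpa: float = 101.325) -> float:
    """RH (%) for a given molar fraction of water vapor at temp_c.

    Assumes the air is at total_pressure_kpa (sea level by default).
    """
    partial_pressure_kpa = vapor_fraction * total_pressure_kpa
    return 100.0 * partial_pressure_kpa / saturation_vapor_pressure_kpa(temp_c)
```

With one percent water vapor, `relative_humidity(0.01, 20.0)` comes out near 43 percent and `relative_humidity(0.01, 22.0)` near 38 percent: the same gas concentration, two different relative humidities, driven only by temperature.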

Well, that was a long story to basically tell you that if you want to map humidity, you should use the same data logger placement that you use for temperature mapping. This has nothing to do with an effective measurement radius of the sensor; it's simply because relative humidity depends on temperature.

[00:09:15] OK. Question three. And this is three questions wrapped into one, again on mapping.
“We want to perform a door opening study for warehouse temperature mapping. Can we do this study without acceptance criteria? For example, when we open the door, what should the maximum duration be for the temperature to be out of specification? Do we have to define the time when the first sensor goes out of spec? And should we recommend this time as a reference time, the duration for a typical door opening when product is loading?”

[00:10:07] In answer to the first question: Yes, you can do a door opening study without acceptance criteria. This is a little pet peeve of mine, because technically it isn't validation or qualification if there are no acceptance criteria. But I'll let go of that. Thinking logistically, the easiest time to perform an open-door test is while we're doing temperature mapping, because we already have all the data loggers in place. So typically, we perform open-door tests during validation, even though there aren't any acceptance criteria.

Some companies avoid this problem and simply create some acceptance criteria. But really, these acceptance criteria are just based on experience from earlier studies. And those earlier studies were used to create a policy that might say, for example, all doors should be open no longer than two minutes, or temperature should return to within specifications within 10 minutes after closing the door. But for this question, it sounds like they need to determine how long their door can be open, or how long a storage area can be out of temperature specs. For this, I would start by looking at the products that are stored in the warehouse.

So the question is: what are the storage criteria of the products? How long can these products be out of spec? We need to focus on the product, not the space. It sounds like the questioner was focusing on the time when the first temperature sensor goes out of spec. I wouldn't focus on that. I would focus on the length of the excursion, and on the total time it takes the warehouse to get back into spec after the doors have been closed.

There are two parts to door opening. The first part is the loss of equilibrium when the conditions go out of spec. The second part is the recovery time it takes the environment to get back into specification. These two parts work together. I work backwards on things like this. I determine the maximum amount of time that the environment can be out of spec during an excursion, say, caused by a door opening. And then I figure out how long the door can be open and still have the environment returned to specification before we exceed the maximum time allowable. So work backwards.
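Paul's "work backwards" approach can be sketched as simple arithmetic, under one loudly stated assumption that is not in the transcript: that recovery time scales roughly linearly with how long the door was open, with a slope you would estimate from your own mapping data. The function name and numbers are hypothetical.

```python
def max_door_open_minutes(max_excursion_min: float,
                          recovery_min_per_open_min: float) -> float:
    """Work backwards from the allowable excursion.

    Total excursion = door-open time + recovery time. Assuming
    recovery is roughly linear in door-open time (slope estimated
    from your own mapping data), solve:
        open + slope * open <= max_excursion
    """
    return max_excursion_min / (1.0 + recovery_min_per_open_min)
```

For example, if the products tolerate at most 30 minutes out of spec and earlier studies suggest about two minutes of recovery per minute of door opening, the door could stay open for at most 10 minutes.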

[00:12:28] OK, we'll end this video with an invitation: send your questions to [email protected]. As you can see, Paul likes answering questions. Paul, should we give a little plug for the next webinar you're doing June 24?

[00:13:20] Absolutely! We're doing a webinar about a thing we call “continuous mapping”. It's a new model for temperature mapping of warehouses in GxP environments for good distribution practice. This method is more effective, improves compliance, requires less labor, and doesn't cost any more money. So join us for that webinar and we will share this idea with you.

[00:13:57] Stay tuned! We will email details on this new webinar soon!

