Meeting Regulatory Requirements for Periodic Remapping... Where is the wiggle room?
Recently a contact wrote to Paul with a bunch of questions on remapping temperature distribution.
When determining hot/cold spots, do you look at average temperatures or single temperature peaks? Do you assign a location as a hot spot based on a single registered peak during a week's mapping? In other words, how do you determine when a location qualifies as a hot/cold spot, and when are you certain the designation is statistically significant? Does the mapping study help ensure that multiple fixed sensors cover the distribution (and its extremes) continuously? Finally, can we then justifiably avoid the requirement to remap periodically?
Thanks for any advice you can give!
First off, be assured that you are not alone in your questions (six by my count!). We often hear similar questions, so I have some experience with this...
Regarding mapping hot/cold spots: You are on the right path. There is obviously a difference between a hot spot represented by a single peak value and a hot spot that is consistently hot. Our usual approach to such a situation is best described in two parts.
- Part 1: Identify the cause of the single peak, and ideally, the frequency of its occurrence. A classic version of this single-peak question is the hot spots caused by defrost cycles in refrigerated areas.
- Part 2: Provided you can identify the source of the single peak, you can then treat the "average" hot spot as your hot spot with regard to monitoring.
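The two-part logic above can be sketched in code. This is a minimal, hypothetical illustration — the 8 °C limit, the 5% excursion threshold, and the data are invented placeholders, not regulatory criteria — showing how a week of readings might separate a location with a consistently high average from one with an explainable transient spike (such as a defrost cycle):

```python
# Hypothetical sketch: classify one sensor location from a week of hourly
# temperature readings. Limit and excursion fraction are illustrative
# assumptions only; substitute your own acceptance criteria.

def classify_location(readings, limit=8.0, peak_fraction=0.05):
    """Classify a location from a list of temperature readings (degrees C)."""
    mean_temp = sum(readings) / len(readings)
    excursions = sum(1 for t in readings if t > limit)
    if mean_temp > limit:
        return "consistent hot spot"       # average itself is out of limits
    if excursions == 0:
        return "within limits"
    if excursions <= peak_fraction * len(readings):
        return "transient spike"           # e.g. a defrost cycle; identify the cause
    return "frequent excursions"           # investigate before accepting the average

# Example: a location near a defrost coil shows two brief spikes in a week
# of hourly readings, but its average stays well under the 8 degree limit.
defrost_profile = [5.0] * 166 + [9.5, 9.8]
print(classify_location(defrost_profile))  # transient spike
```

The point of the split is that a "transient spike" result sends you back to Part 1 (find the cause and its frequency) before you are entitled to use the average under Part 2.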
Another approach would be to tune the controller or adjust the set-point to improve the performance of the hot spots. If that failed to produce an improvement (and I could not explain my transient hot spikes), I would consider quarantining the area of the spikes as not suitable for storage.

Second, we have several customers who practice "continuous mapping." People want to know if this is an option to avoid periodic remapping without compromising compliance with best practice. So, here's a scenario: We have a freezer room with width/length/height = 20 m/10 m/10 m. We perform an initial mapping study with 30 external sensors and 22 fixed sensors (connected to the facility monitoring system) to analyze the temperature distribution in the area. We then alter the placement of the fixed sensors according to the outcome of the initial study, so that the 22 fixed sensors cover all the extremes, keeping symmetry in all three dimensions in mind.
Can we claim that we continuously measure the distribution in the room via these 22 fixed sensors AND does this fulfill a requirement to remap the area periodically?
In short: Yes, absolutely. We've seen this done in regulated monitoring applications and it can produce significant labor savings in terms of repeated mappings.
But - there could be a downside, and that is the increased cost for initial sensor deployment with the attendant maintenance costs, including calibration.
So, I ran a financial analysis on the idea. (Editor's note: because he's a nerd.) As far as I can determine, continuous mapping is most financially viable in two scenarios:
- Scenario 1: When you need to capture the winter and summer temperature extremes.
- Scenario 2: For areas that need frequent remapping due to changes to the HVAC or reorganizations of the space for productivity enhancements.

If areas don't need to be mapped frequently because of these factors (extreme weather, changing conditions), continuous mapping may not be worth it. Spend the money on remapping instead.
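A simple break-even comparison makes the trade-off concrete. All the figures below are invented placeholders (per-sensor deployment and calibration costs, campaign costs, and the two-campaigns-per-year remapping frequency are my assumptions, not data from the scenario); the shape of the result, not the numbers, is the point:

```python
# Hypothetical break-even sketch: continuous mapping (higher up-front sensor
# cost plus ongoing calibration) versus periodic remapping campaigns.
# Every cost figure here is an illustrative assumption.

def cumulative_cost(years, upfront, annual):
    """Total cost after a given number of years."""
    return upfront + annual * years

# Assumed figures (placeholders only):
continuous = dict(upfront=22 * 800, annual=22 * 150)  # 22 fixed sensors + calibration
periodic = dict(upfront=0, annual=2 * 6000)           # two remapping campaigns per year

for year in range(1, 6):
    c = cumulative_cost(year, **continuous)
    p = cumulative_cost(year, **periodic)
    winner = "continuous cheaper" if c < p else "periodic cheaper"
    print(f"year {year}: continuous={c}, periodic={p} -> {winner}")
```

Under these assumed numbers, periodic remapping wins for the first couple of years and continuous mapping wins thereafter — which matches the observation above: the more often an area must be remapped, the sooner continuous mapping pays for itself.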
But, as you might expect, there are some other caveats: Continuous mapping is absolutely dependent on the number of sensors. However, since you can monitor for a longer time period, an auditor would likely accept fewer sensors. (Yes, we've assisted customers in inspections and seen this play out positively.)
Also, it is a fairly typical practice to perform remapping with fewer sensors than the original mapping.
Continuous mapping works best with sensors that can be swapped out rather than individually collected, calibrated, and replaced. It suits budgets that can support the ongoing cost of calibration but are not well-funded for repeated validation exercises.
For our specific scenario of a 2,000 cubic meter cold room, mapping with 52 sensors sounds like overkill; you could probably do it with 25. However, you must be sure to actually perform the remapping study: you must still generate a validation protocol and collect and analyze the data. You are really only saving the time spent on sensor deployment and calibration. The remapping validation work still needs to be done.
Also, I must mention that while this idea makes a lot of sense to me, and we have seen it done, we haven't seen it done very often. This is because most project budgets are constrained by up-front costs, rather than being weighed against long-term savings.
As a result, a continuous mapping approach may be novel to your auditor/regulator, and you may find yourself having to explain your method. And there is always the chance that a stubborn auditor won't like your solution just because they are not familiar with it. We live with the reality that there is a measurable increase in regulatory risk anytime we implement an innovative solution.
Here I offer a success story from a customer who implemented this method to great effect (the inspection went swimmingly!).
Here is a quote from this company's VP of Quality:
"We wondered if we wouldn't be better off maintaining the higher density of sensors for long-term monitoring and decided to leave all sensors in place. This essentially resulted in 'continuous mapping' of these environments. We have used this system for three years and found it very successful."
- Gary Swanson, Senior Vice President of Quality for Herbalife International
Read this story here:
Please let me know if I can be of further assistance!
Sr. Regulatory Compliance Expert