Calibration of gas detectors is vital to ensure that they are in good working order and able to alert the user to gas hazards in the vicinity. A calibration "resets" the detector's response against a known concentration of target gas, in a balance of synthetic air or nitrogen, establishing the relationship between the detector's reading and the actual concentration of the gas of interest. Adjustment involves modifying the detector's response to bring the reading into line with what is expected while the instrument is exposed to the known source. This is fundamentally different from bump testing, which is a brief exposure to gas to verify that the sensors respond within specified limits and that the detector's alarms function properly.
Why do we calibrate?
Calibration is required for a number of reasons. If an instrument has been subjected to adverse conditions, its response to a given gas concentration can change; for example, the detector may read 46% LEL when the true level is 50% LEL. These conditions include environmental factors such as extreme temperatures or humidity, sensor poisoning through exposure to contaminants like silicones and solvents, and exposure to high gas concentrations. Mechanical shock or stress and the age of the sensor can also affect performance.
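The 46% vs 50% LEL example above amounts to a simple gain error. As a minimal sketch (assuming the sensor's response is linear, which real calibration procedures verify rather than assume), the correction factor a calibration would apply is:

```python
# Hypothetical illustration: the span correction implied by sensor drift.
# A detector reading 46% LEL in a known 50% LEL test gas has drifted low;
# calibration adjusts the gain so future readings are scaled to true values.

def span_correction_factor(reading, known_concentration):
    """Return the gain multiplier that maps the drifted reading onto the true value."""
    if reading <= 0:
        raise ValueError("reading must be positive to compute a span correction")
    return known_concentration / reading

factor = span_correction_factor(46.0, 50.0)  # about 1.087
corrected = 46.0 * factor                    # scales the reading back to 50.0 %LEL
```

In practice the instrument applies this adjustment internally during calibration; the sketch only shows the arithmetic behind it.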
Additionally, there is a need to demonstrate regulatory compliance: a record is required to show that an instrument has been calibrated and detects gas within the required tolerances. Instruments keep a log of their calibration date, which shows not only when a calibration was performed but also when the next one is due, and a calibration certificate is produced as a record.
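A calibration record of the kind described above can be sketched as a simple data structure. The 6-month interval below is an assumption for illustration only; the actual interval must come from your risk assessment and the manufacturer's guidance:

```python
# Hypothetical sketch of a calibration log entry. The 182-day interval is an
# assumed placeholder, not a recommendation.
from datetime import date, timedelta

def next_due(calibrated_on: date, interval_days: int = 182) -> date:
    """Return the date the next calibration falls due."""
    return calibrated_on + timedelta(days=interval_days)

entry = {
    "calibrated_on": date(2024, 3, 1),
    "next_due": next_due(date(2024, 3, 1)),   # 2024-08-30 with the assumed interval
    "within_tolerance": True,                  # recorded on the calibration certificate
}
```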
How do we calibrate?
When calibrating, it is important at the outset to account for factors such as flow paths and flow rates, pressure, temperature, humidity, gases used, cross sensitivities, sensor response times, and exhaustion of waste gas, as well as following any additional requirements specified by the detector manufacturer.
Calibration is usually a two-step procedure. First, the instrument is zeroed in fresh air, synthetic air, or a nitrogen background, so that the readings equal those expected in clean air. Second, the detector is exposed to calibration gas containing known concentrations of the gases the sensor is designed to measure, and any deviation is adjusted to the correct reading. Alternatively, you may cross-calibrate, using a different gas type and applying a cross-calibration factor to derive the required response to the target gas.
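The two-step procedure above can be sketched in code. This is a minimal illustration assuming a linear sensor model; the class, raw-count values, and the cross-calibration factor of 0.6 are all invented for the example and do not describe any particular instrument (real factors come from manufacturer tables):

```python
# Minimal sketch of zero/span calibration for an assumed linear sensor model.

class GasSensor:
    def __init__(self):
        self.zero_offset = 0.0   # subtracted so clean air reads zero
        self.span_gain = 1.0     # scales raw counts to gas concentration

    def reading(self, raw):
        return (raw - self.zero_offset) * self.span_gain

    def zero(self, raw_in_clean_air):
        """Step 1: set the zero point in fresh/synthetic air or nitrogen."""
        self.zero_offset = raw_in_clean_air

    def span(self, raw_in_cal_gas, known_concentration):
        """Step 2: adjust the gain so the known calibration gas reads correctly."""
        self.span_gain = known_concentration / (raw_in_cal_gas - self.zero_offset)

sensor = GasSensor()
sensor.zero(3.0)         # raw counts observed in clean air
sensor.span(49.0, 50.0)  # raw counts observed in 50 %LEL calibration gas
sensor.reading(49.0)     # now reads 50.0 %LEL

# Cross-calibration: calibrate on one gas, then scale by a factor for another.
cross_factor = 0.6       # assumed factor for illustration; consult the manufacturer
target_reading = sensor.reading(49.0) * cross_factor
```

The design point is that zeroing must happen first: the span gain is computed relative to the zero offset, so a drifted zero would corrupt the span adjustment.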
How frequently should we calibrate?
The frequency at which the instrument should be calibrated can vary; it is recommended that you combine information from the application and environment with guidance from the user, the manufacturer, and the service provider. A risk assessment is generally required to confirm that the calibration period is adequate. And remember: regular bump testing is recommended between calibrations.