What’s the difference between a pellistor and an IR sensor?

Sensors play a key role when it comes to monitoring flammable gases and vapours. Environment, response time and temperature range are just some of the things to consider when deciding which technology is best.

In this blog, we’re highlighting the differences between pellistor (catalytic) sensors and infrared (IR) sensors, why there are pros and cons to both technologies, and how to know which is best to suit different environments.

Pellistor sensor

A pellistor gas sensor is a device used to detect combustible gases or vapours that fall within the explosive range, warning of rising gas levels. The sensor consists of a coil of platinum wire with a catalyst inserted inside to form a small active bead, which lowers the temperature at which the surrounding gas ignites. When a combustible gas is present, the temperature and resistance of the active bead increase relative to those of an inert reference bead. The difference in resistance can be measured, allowing the concentration of gas present to be determined. Because of the catalysed beads, a pellistor sensor is also known as a catalytic or catalytic bead sensor.
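
The measurement principle can be illustrated with a simple half-bridge model. This is only a sketch: the resistance values, supply voltage and sensitivity figure below are hypothetical, and real instruments are factory-calibrated.

```python
# Illustrative model of a pellistor (catalytic bead) measurement.
# All numeric values are hypothetical, chosen only to show the principle.

def bridge_output_mv(r_active, r_reference, supply_mv=2000.0):
    """Half-bridge output: the voltage offset caused by the active bead's
    resistance rising relative to the inert reference bead."""
    v_active = supply_mv * r_active / (r_active + r_reference)
    return v_active - supply_mv / 2.0  # a balanced bridge reads 0 mV

def concentration_lel(delta_mv, sensitivity_mv_per_lel=0.5):
    """Convert the bridge offset to %LEL, assuming a linear response."""
    return delta_mv / sensitivity_mv_per_lel

# Clean air: both beads at 100 ohms, so the bridge is balanced
print(bridge_output_mv(100.0, 100.0))  # 0.0
# Combustible gas heats the active bead, raising its resistance to 105 ohms
delta = bridge_output_mv(105.0, 100.0)
print(round(concentration_lel(delta), 1))
```

The key point is that the inert reference bead cancels out ambient temperature effects, so any remaining imbalance is attributable to combustion on the active bead.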

Originally created in the 1960s by the British scientist and inventor Alan Baker, pellistor sensors were designed as a replacement for the long-used flame safety lamp and canary techniques. Today, the devices are used in industrial and underground applications such as mining and tunnelling, oil refineries and oil rigs.

Pellistor sensors are relatively low in cost because the underlying technology is simpler than that of IR sensors; however, they may need to be replaced more frequently.

Because a pellistor's output is linear with gas concentration, correction factors can be used to estimate its response to flammable gases other than the calibration gas. This can make pellistors a good choice when multiple flammable vapours are present.
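
A minimal sketch of how a correction factor is applied, assuming hypothetical factor values; real factors are published by the instrument manufacturer and vary between sensor models.

```python
# Hypothetical correction factors relative to a methane-calibrated pellistor.
# Real values come from the manufacturer's documentation for each sensor.
CORRECTION_FACTORS = {
    "methane": 1.0,
    "propane": 1.6,
    "pentane": 2.0,
}

def corrected_lel(reading_lel, gas):
    """Scale a methane-referenced %LEL reading for a different target gas."""
    return reading_lel * CORRECTION_FACTORS[gas]

# A 20 %LEL indication on a methane-calibrated unit exposed to pentane
print(corrected_lel(20.0, "pentane"))  # 40.0
```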

In addition, pellistors within fixed detectors with mV bridge outputs, such as the Xgard type 3, are well suited to hard-to-reach areas, because calibration adjustments can be made at the local control panel.

On the other hand, pellistors struggle in environments with little or no oxygen, as the combustion process by which they work requires oxygen. For this reason, confined space instruments that contain catalytic pellistor type LEL sensors often include an oxygen sensor as well.

In environments containing compounds of silicon, lead, sulphur or phosphates, the sensor is susceptible to poisoning (irreversible loss of sensitivity) or inhibition (reversible loss of sensitivity), which, if undetected, can leave people in the workplace exposed to hazards.

If exposed to high gas concentrations, pellistor sensors can be damaged. In such situations, pellistors do not 'fail safe': no warning is given that the sensor is no longer working. A fault can only be identified through bump testing prior to each use, confirming that performance has not degraded.

 

IR sensor

Infrared sensor technology is based on the principle that infrared (IR) light of a particular wavelength is absorbed by the target gas. Typically, there are two emitters within a sensor generating beams of IR light: a measurement beam with a wavelength that is absorbed by the target gas, and a reference beam that is not. Each beam is of equal intensity and is deflected by a mirror inside the sensor onto a photo-receiver. The resulting difference in intensity between the reference and measurement beams in the presence of the target gas is used to measure the concentration of gas present.
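
The dual-beam calculation can be sketched with a Beer-Lambert absorption model. The absorption coefficient and path length below are hypothetical placeholders; real sensors are calibrated per target gas.

```python
import math

# Illustrative dual-beam NDIR calculation, not a real sensor's firmware.
# k (absorption coefficient) and path_cm are hypothetical values.

def absorbance(i_measurement, i_reference):
    """Beer-Lambert absorbance from the two detected beam intensities."""
    return -math.log(i_measurement / i_reference)

def concentration(i_measurement, i_reference, k=0.05, path_cm=2.0):
    """Concentration proportional to absorbance (k in per %vol per cm)."""
    return absorbance(i_measurement, i_reference) / (k * path_cm)

# No target gas: both beams arrive with equal intensity, so the reading is 0
print(concentration(1.0, 1.0))  # 0.0
# Target gas absorbs 10% of the measurement beam; reference is unaffected
print(round(concentration(0.9, 1.0), 2))
```

Because the reference beam is affected by dirt, ageing and temperature in the same way as the measurement beam, taking the ratio of the two cancels these common-mode effects.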

In many cases, infrared (IR) sensor technology has advantages over pellistors, and it is more reliable in areas where pellistor performance can be impaired, including low-oxygen and inert environments. Only the infrared beam interacts with the surrounding gas molecules, so IR sensors are not at risk of poisoning or inhibition.

IR technology also offers fail-safe operation: if the infrared beam fails, the user is notified of the fault.

An example of one of our IR-based detectors is the Crowcon Gas-Pro IR. Ideal for the oil and gas industry, it can detect methane, pentane or propane in potentially explosive, low-oxygen environments where pellistor sensors may struggle. The Gas-Pro TK uses a dual-range %LEL and %Volume IR sensor that switches between the two measurements, so it always operates to the correct parameter. This makes it the best technology for specialist environments, such as tank purging or gas freeing, where standard gas detectors just won't work.

However, IR sensors aren't perfect: their output is linear only for the target gas, and the response to flammable vapours other than the target gas is non-linear.

Just as pellistors are susceptible to poisoning, IR sensors are vulnerable to severe mechanical and thermal shock and are strongly affected by gross pressure changes. Additionally, infrared sensors cannot be used to detect hydrogen gas, so we suggest using pellistor or electrochemical sensors in this circumstance.

The prime objective for safety is to select the best detection technology to minimise hazards in the workplace. We hope that by clearly identifying the differences between these two sensor types, we can raise awareness of how various industrial and hazardous environments can remain safe.

For further guidance on pellistor and IR sensors, you can download our whitepaper which includes illustrations and diagrams to help determine the best technology for your application.

You won’t find Crowcon sensors sleeping on the job

MOS (metal oxide semiconductor) sensors have been seen as one of the most recent solutions for detecting hydrogen sulphide (H2S) in fluctuating temperatures, from the mid-twenties up to 50°C, and in humid climates such as the Middle East.

However, users and gas detection professionals have realised MOS sensors are not the most reliable detection technology. This blog covers why this technology can prove difficult to maintain and what issues users can face.

One of the major drawbacks of the technology is the sensor's tendency to "go to sleep" when it doesn't encounter gas for a period of time. This is, of course, a huge safety risk for workers in the area: no-one wants a gas detector that ultimately doesn't detect gas.

MOS sensors require a heater to stabilise before they can produce a consistent reading. When first switched on, the heater takes time to warm up, causing a significant delay between turning on the sensor and it responding to hazardous gas. MOS manufacturers therefore recommend allowing the sensor to equilibrate for 24-48 hours before calibration. Some users may find this a hindrance to production, as well as extending servicing and maintenance time.

The heater delay isn't the only problem. The heater also draws a lot of power, so dramatic temperature changes along the DC power cable cause voltage changes at the detector head and inaccuracies in the gas level reading.

As the name metal oxide semiconductor suggests, the sensors are based on semiconductors, which are known to drift with changes in humidity, something far from ideal in the humid Middle Eastern climate. In other industries, semiconductors are often encased in epoxy resin to prevent this; in a gas sensor, however, such a coating would defeat the detection mechanism, as the gas could not reach the semiconductor. The device is also exposed to the acidic environment created by the local sand in the Middle East, affecting conductivity and the accuracy of the gas read-out.

Another significant safety implication of MOS sensors is that their output at near-zero levels of H2S can produce false alarms. The sensor is therefore often used with a level of "zero suppression" at the control panel, which means the panel may show a zero read-out for some time after H2S levels have begun to rise. This late registration of low-level gas can delay the warning of a serious leak, reducing the opportunity for evacuation and putting lives at extreme risk.
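
The effect of zero suppression can be sketched in a few lines. The suppression threshold and the readings below are hypothetical, purely to show how rising low-level gas is masked until it crosses the threshold.

```python
# Sketch of "zero suppression": readings below a threshold are displayed as
# zero, so a slowly rising H2S level registers late. Values are hypothetical.

def displayed(reading_ppm, suppression_ppm=3.0):
    """Control-panel display value with zero suppression applied."""
    return reading_ppm if reading_ppm >= suppression_ppm else 0.0

leak = [0.5, 1.5, 2.5, 3.5, 5.0]  # actual ppm as a leak develops
print([displayed(r) for r in leak])  # [0.0, 0.0, 0.0, 3.5, 5.0]
```

In this sketch the panel shows zero for the first three readings even though gas is already rising, which is exactly the delayed warning described above.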

MOS sensors excel at reacting quickly to H2S, but the need for a sinter counteracts this benefit: because H2S is a "sticky" gas, it adsorbs onto surfaces, including those of sinters, slowing the rate at which gas reaches the detection surface.

To tackle the drawbacks of MOS sensors, we have revisited and improved on electrochemical technology with our new High Temperature (HT) H2S sensor for XgardIQ. The new sensor operates at up to 70°C and 0-95% RH, a significant advance over other manufacturers' claims of detection up to 60°C, especially in the harsh Middle Eastern environment.

Our new HT H2S sensor has proven to be a reliable and resilient solution for detecting H2S at high temperatures, a solution that doesn't fall asleep on the job!

Click here for more information on our new High Temperature (HT) H2S sensor for XgardIQ.