It is well understood that risk is a two-dimensional concept. We cannot understand risk if we do not have the means to predict the consequences of an incident, or the likelihood of it occurring. What is not quite as well considered, however, is when exactly in the hazard assessment process we should be trying to put a number on these parameters.
Hazard identification exercises, including HAZOP studies, are designed to determine where there is potential in a design or operation for an issue to occur. It sounds obvious that the purpose is to identify hazards, but often the lines get blurred and these studies include some sort of assessment of the ‘how bad’ and ‘how often’. Although there is no requirement to assess each of the hazards identified by these studies, some sort of prioritisation will often be carried out, as even high-level risk assessments at this stage can allow operators to understand which hazards they should focus their attention on. How accurate can we really be when assessing risks at this stage?
A lot of the time, a risk matrix is used to determine the severity and frequency of the events identified during the HAZOP or HAZID. While severity is a relatively easy concept to grasp and to assign broad categories to, frequency is more abstract, and the more unlikely the event, the more any judgement becomes a ‘guesstimate’. It is reasonable to make a judgement on the frequency of events where we have personal experience, most likely the higher frequency incidents such as slips and trips. How can we begin to make a reasonable judgement for low frequency events such as major hazards, though? Thankfully, very few of us will experience these in our lifetimes, but that lack of experience can result in wildly different subjective judgements about their likelihood.
No matter how fine or coarse the categories on a risk matrix are, they will always require an estimation, which could be orders of magnitude out. The consequence could be that the risk is either underestimated, with insufficient control measures being implemented, or overestimated, leading to resources being focussed in the wrong places.
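To make the mechanics concrete, the severity/frequency lookup that a risk matrix performs can be sketched in a few lines. The bands and ratings below are hypothetical examples, not drawn from any real study; in practice a matrix is calibrated to each operator's own risk criteria.

```python
# Illustrative sketch of a simple risk-matrix lookup. The severity and
# frequency bands, and the ratings, are hypothetical examples.

SEVERITY_BANDS = ["negligible", "minor", "moderate", "major", "catastrophic"]
FREQUENCY_BANDS = ["improbable", "remote", "occasional", "probable", "frequent"]

# Rating rows are indexed by frequency band, columns by severity band.
RATINGS = [
    ["low",    "low",    "low",    "medium", "medium"],
    ["low",    "low",    "medium", "medium", "high"],
    ["low",    "medium", "medium", "high",   "high"],
    ["medium", "medium", "high",   "high",   "high"],
    ["medium", "high",   "high",   "high",   "high"],
]

def risk_rating(severity: str, frequency: str) -> str:
    """Look up the qualitative risk rating for a severity/frequency pair."""
    return RATINGS[FREQUENCY_BANDS.index(frequency)][SEVERITY_BANDS.index(severity)]

print(risk_rating("major", "remote"))  # -> medium
```

The point of the sketch is the weakness the article describes: the result is only as good as the frequency band chosen, and a one-band misjudgement shifts the rating by a whole row.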
This isn’t a new idea, and often the frequency predictions we do make play no real part in prioritising actions. Consider how long you have deliberated over the likelihood of low frequency events identified in HAZID or HAZOP studies, only to ignore the end result. So, when is the right time to carry out a risk assessment?
Rather than applying a one-size-fits-all approach to each hazard identified during the HAZID or HAZOP, hazards can be screened using a tiered approach, allowing resource to be allocated to risk assessments more appropriately.
By starting at a high level, using judgements made on severity alone, the low risks can be identified with some confidence, and the list of events requiring assessment outside of the HAZID or HAZOP can be refined.
Depending on the magnitude of the severity, a semi-quantified method might be employed. Techniques such as LOPA can use a set of rules to allow an estimate of frequency to be made.
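The core arithmetic behind a LOPA-style estimate can be sketched briefly: the mitigated event frequency is the initiating-event frequency multiplied by the probability of failure on demand (PFD) of each independent protection layer. All of the numbers below are hypothetical order-of-magnitude values chosen for illustration, not data from any real assessment.

```python
# Sketch of the core LOPA calculation: mitigated event frequency equals the
# initiating-event frequency times the PFD of each independent protection
# layer. All values are hypothetical order-of-magnitude placeholders.

from math import prod

initiating_event_frequency = 1e-1  # per year, e.g. a control loop failure (hypothetical)
protection_layer_pfds = [
    1e-1,  # operator response to alarm (hypothetical PFD)
    1e-2,  # safety instrumented function (hypothetical PFD)
    1e-2,  # relief valve (hypothetical PFD)
]

mitigated_frequency = initiating_event_frequency * prod(protection_layer_pfds)
print(f"Mitigated event frequency: {mitigated_frequency:.0e} per year")  # 1e-06
```

The rule-based nature of LOPA lies in how those order-of-magnitude values are selected and which layers may be credited as independent, which is why it sits between a matrix judgement and a fully quantified study.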
For the events with the highest magnitude consequences, it is appropriate to carry out a fully quantified assessment in order to understand the risk in detail. Predicting frequency is complex, relying on databases of historical failures and detailed calculations that also incorporate conservative assumptions. It is worth carrying out this screening phase to avoid spending unnecessary time and resource on the wrong events.
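The tiered screening described above amounts to routing each hazard to an assessment depth based on severity alone. A minimal sketch, assuming hypothetical severity bands and tier boundaries (real screening criteria would be set by the operator):

```python
# Sketch of severity-based tiered screening. The severity bands and the
# tier boundaries are hypothetical illustrations only.

def assessment_tier(severity: str) -> str:
    """Route a hazard to an assessment tier based on its severity band."""
    if severity in ("negligible", "minor"):
        return "screen out at HAZID/HAZOP (qualitative judgement only)"
    if severity == "moderate":
        return "semi-quantified assessment, e.g. LOPA"
    return "fully quantified risk assessment"

for s in ["minor", "moderate", "catastrophic"]:
    print(s, "->", assessment_tier(s))
```

The screening costs little because it needs only the severity judgement, which the article notes is the easier of the two dimensions to assign with confidence.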
In using this approach, the HAZID or HAZOP study is kept to its intended purpose, and little time is wasted quantifying events where doing so would be disproportionate.
Carolyn Nicholls & Jenny Hill