During the monsoon season of 2011, torrential rains in Thailand resulted in flooding across the Central Plains. Manufacturing plants in the major industrial parks around Bangkok suffered heavily from the floods that persisted for 2–3 months. Was this flood risk in Thailand rare and uncontrollable?
More than 800 of the affected plants were Japanese manufacturers of auto parts, semiconductor parts and various other products. They were forced to suspend production, severely disrupting the supply of spare parts in Japan and around the world.
Many insurance claims were filed worldwide following the catastrophic incident, based on contingent business interruption (CBI) coverage. The total flood loss exceeded US$15.7 billion, ranking as the 10th-largest catastrophe loss in the world before 2018; it slipped to 12th after the 2018 US wildfires and Typhoon Jebi in Japan were added to the record.
Risk management negligence
Extensive research on the floods in Thailand from 2006 to 2013 shows that flooding in the Central Plains is an annual occurrence, albeit with varying inundation areas. In 2006 and 2010, the inundated areas were larger than usual. (See Figure 1, with blue showing the inundated areas.)
You might question why any manufacturer would build a plant in such a flood-prone region, as comprehensive risk assessment is a key process requirement prior to building any plant.
It became clear that the flood risk had not been taken seriously, despite the Thai Research Institute having released a useful flood map. The significant flood risk was weighed less heavily than the relatively low cost of the industrial land. This is a typical catastrophic outcome of negligence in risk management.
When discussing this issue, it is important to reiterate the concept of the return period. The 2011 flood in Thailand is generally considered a 50-year event. A return period, however, represents a 2 percent (1/50) chance of a similar catastrophe occurring in any given year, not a recurrence interval of once every 50 years. So even though the 2010 flood was itself a 30-year or 50-year event, the chance of encountering a comparable flood in 2011 was essentially unchanged.
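The arithmetic behind this point can be sketched briefly. A minimal illustration (the 50-year return period and 30-year horizon are taken from the discussion above; the independence assumption between years is a simplification):

```python
def annual_probability(return_period_years: float) -> float:
    """Annual chance of an event with the given return period."""
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """Chance of at least one such event within the horizon,
    assuming years are independent."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

print(f"{annual_probability(50):.0%}")     # 2% chance in any single year
print(f"{prob_at_least_one(50, 30):.0%}")  # roughly 45% over a 30-year plant life
```

The second figure is the one negligence overlooks: a "50-year" event is close to a coin flip over the working life of a plant.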
Negligence is therefore one of the primary causes of poor decisions and disastrous consequences. This is true even for a market-wide event that tests the level of risk control of every company. For example, Hurricane Andrew in 1992, which intensified to a Category 5 hurricane with a central pressure of 922 mbar at landfall in the United States, resulted in eight American insurance companies shutting down. Those that survived were nonetheless left with fragile balance sheets. A subsequent survey revealed that the adoption of advanced IT and the early use of catastrophe models were what saved them. The risk management teams of the bankrupt firms were clearly unprepared for a hurricane of such intensity, and their awareness of catastrophes and their impacts was inadequate. Hurricane Andrew awakened the industry to the significance of employing catastrophe models in risk management.
The frequency and intensity of an event are objective realities
Complete awareness of the risk helps a company set a more appropriate catastrophe premium rate and marketing strategy for a given area during the new-business planning process. There is a saying that “nothing is absolute”. Hence, claims such as “there is no earthquake risk in this area” are obviously at odds with the laws of nature and likely to lead to undesirable outcomes.
The importance of data
Business must sometimes be conducted in “vulnerable” areas. It is therefore essential to capture the profile of the risk in as much detail as possible, so that the catastrophe risk can be managed within a company’s risk appetite.
There are no enforceable rules in existing regulations for the data dimensions that need to be gathered through property insurance underwriting. Although specifications regarding the data collection for risks with catastrophe exposure were issued many years ago, they are merely guidelines. They are not widely adopted for various reasons, including fears that they “significantly complicate the underwriting process, require more time for data input, and erode business efficiency” and that it is “unrealistic to obtain that much data, and difficult to split risk data”.
The discrepancy between the modelled experience and the actual experience can often be traced back to the data collection process.
It is common to find a fair number of blank fields in risk data. When such data is fed into the model, the model fills in the blanks with assumptions of its own. The more blanks there are, the more uncertain the results become, and the less they reflect the risk’s actual vulnerability. When results generated from flawed input data are compared with actual losses, it becomes difficult to determine which factors dominate the gap; the more blanks there are, the harder the gap is to explain.
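The effect of a blank field can be sketched with a toy simulation. This is purely illustrative: the construction classes, damage ratios, and the `modelled_loss` function are hypothetical stand-ins for what a real catastrophe model does when a field is missing.

```python
import random

# Hypothetical mean damage ratios by construction class (illustrative only).
DAMAGE_RATIO = {"concrete": 0.05, "steel": 0.10, "wood": 0.30}

def modelled_loss(sum_insured: float, construction, n: int = 10_000):
    """Return the (min, max) range of simulated losses.
    A blank construction field (None) forces the model to sample
    over every plausible class, widening the range."""
    rng = random.Random(42)
    losses = []
    for _ in range(n):
        cls = construction or rng.choice(list(DAMAGE_RATIO))
        losses.append(sum_insured * DAMAGE_RATIO[cls])
    return min(losses), max(losses)

print(modelled_loss(1_000_000, "concrete"))  # narrow range: the field is known
print(modelled_loss(1_000_000, None))        # wide range: the model guesses
```

With the field populated, the loss range collapses to a point; left blank, the same risk produces a sixfold spread, and any gap against actual losses becomes unexplainable.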
The rule of thumb for data collection is “Don’t leave out requisite data; don’t simplify what should not be simplified”.
It is an art to capture the core information of a risk while avoiding data redundancy. As processing power and computing technology improve, big-data algorithms and models improve as well. Nonetheless, if we cannot feed accurate and comprehensive data into a model, the reliability of its output is compromised. As decision-makers become more dependent on the results of big data and intelligent models, the quality of the raw data must be strengthened accordingly. Data quality now plays a critical role in determining business outcomes, so data collection and data filtering require adequate investment.
As risk awareness has increased, the dimensions of descriptive data on catastrophe risk should also be increased. It is clearly inappropriate to describe a current risk using outdated data structures and systems. It is also insufficient to gauge the magnitude of catastrophes with a decade-old model. If we cannot guarantee the quality of the raw information being fed into the model, there will be no way to assess the effectiveness of the model itself. Ultimately, we will be unable to provide strong support and assistance to clients.
The data mentioned here does not refer merely to data collected during underwriting. Loss-related data is just as important as the underwriting information.
In 2013, Typhoon Fitow resulted in the highest insurance losses of the past 10 years. Although it made landfall in Fujian province, to the south of Zhejiang, the typhoon brought prolonged precipitation and massive flooding to Ningbo and Yuyao in Zhejiang province. This distinct spatial pattern posed a huge challenge for claims settlement. From a reinsurer’s point of view, is there a way to separate Typhoon Fitow’s losses into wind damage and flood damage? The answer is an obvious “no”: the details may have been lost during the risk transfer process, or never collected at all. Yet this split is the core of loss estimation and the most challenging part of catastrophe model calibration. Without accurate first-hand information to calibrate against, the model will never improve. This is another typical case of how a single piece of information can determine the level of catastrophe risk management.
 Swiss Re, Sigma Research, 2015 Edition
 Source: Thailand Flood Monitoring System, http://flood.gistda.or.th/
 This rating is from the Saffir–Simpson Hurricane Wind Scale. Category 5 is stronger than the “Super Typhoon” rating from the Central Meteorological Observatory
 Source: NOAA