11 October 2024
Top climate challenges for the (Re)Insurance Industry
1. Tipping Points
In climate science, a tipping point is a critical threshold that, when crossed, leads to large, accelerating and often irreversible changes in the climate system. If tipping points are crossed, they are likely to have severe impacts on human society and may further accelerate global warming.
Tipping behaviour is found across the climate system, in ecosystems, ice sheets, and the circulation of the ocean and atmosphere.
Armstrong McKay et al. (2022) identified 16 distinct tipping points, noting that 5 of them may have already passed critical warming thresholds. That is to say, at least 5 may already be in the process of occurring, with no recourse for prevention.
At present, tipping points are not included in any market-deployed catastrophe risk models, yet they define the truly catastrophic and systemic risks of climate change. While the probability of any single tipping point occurring may seem remote, it is yet to be tested whether the probabilities, time horizons and impacts of occurrence imply that they now fall within typical insurance-focused return periods and risk transfer products, or whether they will begin to in the near future. Further, it remains to be studied how the tipping points may be correlated, and whether the joint probabilities of occurrence warrant explicit treatment of multiple tipping point occurrence in catastrophe risk models available to the insurance industry.
2. Link between transition and physical modelling of climate change – are NGFS scenarios fit for purpose?
NGFS scenarios are widely used for financial impact modelling of climate change, although with a focus on the economic parameters for transition modelling. With (re)insurers focused on insured losses from extreme weather events, are NGFS scenarios adequate for modelling those losses to a level comparable to what is required for a Solvency II internal model?
3. High resolution information and downscaling
The need for downscaling arises because global climate models do not simulate the extreme events of interest. They fail to simulate these events not through error but largely because it is not numerically possible at their coarse resolution. Proposed solutions include increasing the resolution of global climate models or using regional climate models (IPCC_AR6_WGI_Chapter08, p. 85). Statistical downscaling is a form of bias correction which critically assumes that the same scaling can be applied to both historical and future data sets.
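The stationarity assumption at the heart of statistical downscaling can be made concrete with a minimal sketch (illustrative only; the synthetic data and function below are not drawn from any IPCC dataset or named tool). An empirical quantile mapping is fitted between a model's historical output and observations, and the same mapping is then applied unchanged to the future period:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: map each future model value through
    the historical model CDF onto the observed distribution.
    Critically assumes the hist-to-obs mapping also holds in the future."""
    # Percentile of each future value within the historical model distribution
    ranks = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Read off those percentiles from the observed historical distribution
    return np.quantile(obs_hist, ranks)

# Synthetic illustration: the model runs 2 units too cold
rng = np.random.default_rng(0)
obs_hist = rng.normal(15.0, 3.0, 5000)
model_hist = obs_hist - 2.0          # biased historical simulation
model_fut = model_hist + 1.5         # future run carrying a +1.5 warming signal
corrected = quantile_map(model_hist, obs_hist, model_fut)
# The warming signal (~ +1.5) is preserved while the cold bias is removed
print(round(float(corrected.mean() - obs_hist.mean()), 2))
```

If the model's error structure changes in the future climate, this mapping silently breaks, which is exactly the caveat noted above.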
The other main challenges in downscaling from climate models include the weak coordination of production of downscaled datasets, which leads to significant errors in high-resolution downscaled reconstructions of the current climate, and the integration of these downscaled datasets with historical change and process-based understanding (IPCC_AR6_WGI_Chapter10, p. 15). The ability to project all aspects of shifting climate impact driver profiles and their effects at fine, local scales often relies on dynamical downscaling and additional impact modelling steps, making the quantification of uncertainties more challenging (IPCC_AR6_WGI_Chapter12, p. 103). The evaluation of the added value of downscaling is itself complex (IPCC_AR6_WGII_Chapter11, p. 10). Ref: https://www.nature.com/articles/nclimate3418
4. How much change is there in the system already, what is the existing climate attribution?
Whilst much of the focus on climate change tends to be through a future lens, very little attention is paid to how much risk may have changed already. Validation of catastrophe models often takes the form of contrasting their output with historical data, but little is done in validation – both of and by the vendors – to show how risk may have changed up to the present day and thereby confirm that the view of risk presented by the models is genuinely up to date.
Part of the issue is that we are hamstrung by reliance on historical data: tail risk trends simply cannot be spotted in short 50- or 100-year historical datasets. If we are able to use – and potentially be involved in the design of – more ensemble climate modelling in reanalysis of the past, we can better understand how risk is shifting up to the present day and gain confidence that the numbers presented by the catastrophe modelling companies genuinely reflect present-day risk. Having multiple counterfactual analyses of prior years may also help disentangle natural variability from climate deltas. There is an additional possible role for including population and land use change in historical risk-shift considerations, to help identify where increasing historical losses are influenced more by demographics than by climate change, a confusion that can otherwise distort the estimated size of the climate delta.
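The demographic-versus-climate confusion described above can be illustrated with a toy exposure normalisation (all numbers hypothetical, chosen only to show the mechanics): dividing nominal losses by an index of insured exposure growth leaves a residual trend that is the candidate climate delta.

```python
# Illustrative only: hypothetical losses and exposure index, not real data.
years = [1980, 1995, 2010, 2023]
nominal_loss = [10.0, 18.0, 30.0, 55.0]   # reported annual losses, $bn
exposure_index = [1.0, 1.5, 2.2, 3.4]     # insured exposure growth relative to 1980

# Normalise losses to 1980 exposure levels; the remaining upward trend
# is the part that demographics alone cannot explain
normalised = [loss / exp for loss, exp in zip(nominal_loss, exposure_index)]
for year, value in zip(years, normalised):
    print(year, round(value, 1))
```

In this synthetic example nominal losses rise 5.5x but normalised losses only ~1.6x: most of the apparent trend is exposure growth, not climate.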
5. Improved Ensemble projections
The weaknesses of current ensemble approaches include:
1. Model democracy assumption: Assessments of climate model ensembles have commonly assumed that each individual model is of equal value, which both diminishes the influence of models that exhibit a better agreement with observations (Haughton et al., 2015; Tapiador et al., 2020) and makes the erroneous assumption that the models can be treated as statistically independent. (IPCC_AR6_WGI_Chapter01, p. 82)
2. Selection bias: In some cases, ensembles are sub-selected by removing poorly performing model simulations or simulations that are perceived to add little additional information. This can introduce biases in the ensemble, such as favoring certain models over others (McSweeney et al., 2015; IPCC_AR6_WGIII_Annex-III, p. 36).
3. Lack of formal methods to reduce bias: While sampling bias is recognized in assessments, formal methods to reduce bias are not employed due to conceptual limitations (IPCC_AR6_WGIII_Annex-III, p. 36).
4. Uncertainty in ensemble size: The appropriate ensemble size for robust use depends on various factors, such as the model, physical variable, spatial and time aggregation, acceptable error magnitude, and the type of questions being addressed. Determining the optimal ensemble size can be challenging (Deser et al., 2012, 2017b; Kang et al., 2013; IPCC_AR6_WGI_Chapter10, p. 47). Overall, these weaknesses highlight the need for ongoing research and development to improve the effectiveness and reliability of ensemble approaches in climate modeling.
5. Ensemble boosting: Ensembles still suffer from limitations in simulating extremes of rain, wind and temperature. There is interest in exploring the technique of ensemble boosting, where outliers of the original ensembles are re-run to examine a case in more detail. This has been applied to extreme temperatures (e.g. Fischer et al., 2023).
6. Lack of exploration of other approaches, such as storylines: The typical “IPCC approach” is to derive probabilities for a change in variable X at location Y by looking at the mean and the spread in that variable across the climate model ensemble, often with an implicit assumption that the spread of results is a proxy for the spread of possible weather conditions under the given level of climate change (which conflates aleatoric and epistemic uncertainty). An alternative approach is to use “physical storylines”, where distinct and plausible manifestations of climate change are identified across the ensemble (e.g. by clustering). Physical storylines usually describe different plausible changes in atmospheric dynamics (forced changes in large-scale features such as jet streams), which is one of the biggest epistemic uncertainties in climate science, particularly for the scales the industry is interested in. Storylines can be thought of as “scenarios within a scenario”, with appropriate “high impact” ones identified and investigated for stress testing or precautionary planning. Finally, while individual storylines are usually treated without addressing their likelihood, there are evolving frameworks for exploring this.
6. A comprehensive assessment of all affected perils impacted by climate change
Much work has focused on the major perils of tropical cyclones and floods. Climate change does, however, have a wide variety of other impacts that need assessment globally and via a consistent method, with, as always, an appreciation of the uncertainty.
Climate change has a wide array of impacts on various perils:
1. Extreme Weather Events: More frequent and intense storms, heatwaves, and droughts due to altered climate patterns.
2. Rising Sea Levels: Melting ice caps and glaciers lead to rising sea levels, posing threats to coastal areas, causing erosion, and increasing the risk of flooding.
3. Wildfires: Increased temperatures and drier conditions escalate the frequency and severity of wildfires, posing risks to ecosystems and human settlements, through direct damage and poor air quality.
4. Water Scarcity: Altered precipitation patterns and increased evaporation contribute to water shortages, impacting agriculture, ecosystems, and communities.
5. Ecosystem Disruption: Climate change affects ecosystems, disrupting biodiversity, endangering species, and altering habitats.
6. Health Risks: Changes in climate patterns influence the spread of diseases, heat-related illnesses, and respiratory issues due to air pollution.
7. Food Insecurity: Altered growing conditions affect crop yields, leading to food shortages and price increases, impacting global food security.
These perils intertwine and exacerbate each other, posing significant challenges to communities, economies, and the environment worldwide.
7. Decision making under uncertainty.
Catastrophe models are complex tools that enable (re)insurers to analyse hazard and vulnerability components on their exposures in order to test financial loss impacts under different conditions. Adjusting for climate change, especially far into the future, can amplify the uncertainty that is inherent within model outputs. This uncertainty can be split between aleatory (irreducible statistical randomness, akin to dice throwing) and epistemic (incomplete or inaccurate knowledge). Aleatory uncertainty cannot be reduced by capturing additional data, whereas a lack of knowledge can, to an extent, be remedied.
Uncertainty can be further separated into primary and secondary uncertainty. The former concerns whether the model captures the full range of outcomes for the hazard: for example, is the event catalogue complete, does it include the absolute worst-case scenario, is the exposure data complete and accurate, and what is the uncertainty in the event rates? Secondary uncertainty concerns the uncertainty in the actual damage that results from an event, given that the event itself has occurred. Translating wind speed, ground motion or flood depth into damage requires parameterisation and assumptions in the form of vulnerability functions. These outcomes then need to be calibrated against actual insured loss experience. Each new loss event provides model vendors with vital new data points on which to recalibrate damage functions, but there is always uncertainty in translating these into individual building performance at the time of a loss. For example, a lack of standardised claims data inhibits model developers from capturing all the insights gained with each new loss. There is then likely to be a lag between a rapidly evolving climate state, the new extremes of losses it produces, and the incorporation of the resulting insights into models.
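To make secondary uncertainty concrete, a vulnerability function of a commonly used shape (damage rising with the cube of wind speed above a threshold, saturating towards total loss) can be sketched. The threshold and half-damage parameters below are purely illustrative, and small changes to them move the modelled loss substantially, which is the calibration problem described above:

```python
def mean_damage_ratio(v, v_thresh=25.0, v_half=60.0):
    """Illustrative vulnerability function (parameters hypothetical):
    zero damage below a threshold wind speed (m/s), then damage rising
    with the cube of the excess wind and saturating towards total loss."""
    if v <= v_thresh:
        return 0.0
    x = (v - v_thresh) / (v_half - v_thresh)
    return x**3 / (1.0 + x**3)

for v in (20, 35, 50, 70):  # wind speeds in m/s
    print(v, round(mean_damage_ratio(v), 3))
```

Shifting `v_thresh` by only a few m/s changes the damage ratio at moderate wind speeds by a large factor, illustrating why each new loss event is such a valuable recalibration point.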
At the same time, we must not confuse our models for the reality that they are simulating. While important, models are only part of the toolkit for the decision maker(s), who will need to draw from several forms of information to make effective decisions under uncertainty.
Defining risk horizons: P&C re/insurance is somewhat fixated on annual event occurrence rates, with little consideration of probabilities over multi-year spans of time. Probabilistic seismic hazard assessment is the opposite, interested more in the 10% or 1% chance of an event of a given size in a 50-year period. Climate projections also lean more to the latter, making projections over future multi-year blocks of time. Should we be formalising multi-year risk horizons of interest for cat risk projections? For example, there is a circa 10% chance of seeing a 1-in-200 (annual rate) event over a 20-year period, which would pin our minimum risk horizon of interest to the mid-2040s. This would also be compatible with the reported capability of climate models to differentiate between RCPs on a minimum of ‘several decades’ (Fiedler et al., 2021).
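The "circa 10% over 20 years" figure follows from the complement rule, assuming independent years:

```python
def multiyear_prob(annual_prob, years):
    """Probability of at least one occurrence of an event with the given
    annual probability over a multi-year horizon (independence assumed)."""
    return 1.0 - (1.0 - annual_prob) ** years

# A 1-in-200 (annual rate) event over a 20-year horizon
p = multiyear_prob(1 / 200, 20)
print(round(p * 100, 1))  # ≈ 9.5%
```

The same function shows how quickly "remote" annual probabilities become material over business-relevant horizons: a 1-in-200 event has roughly even odds of occurring at least once over 140 years.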
8. Decadal change analysis
There is a need for a business risk view on the 5-10-year time scale. However, this sub-decadal time scale, lying between seasonal forecasting and long-term climate change, is somewhat neglected by research. Yet the path of climate change can be expected to include decadal fluctuations which will have a business impact.
There is a research gap around decadal predictions and variations of the important modes (NAO, AMO, PDO, ENSO, etc.) on decadal timescales. It is not clearly understood how these modes change as the mean state shifts. This is a hard problem, as the sub-decadal timescale sits in a “grey zone” that is less constrained by ocean surface temperature (important for seasonal forecasting) or by atmospheric boundary conditions (climate and greenhouse gases).
Fischer, E.M., Beyerle, U., Bloin-Wibe, L. et al. Storylines for unprecedented heatwaves based on ensemble boosting. Nat Commun 14, 4643 (2023). https://doi.org/10.1038/s41467-023-40112-4
9. Can we adequately model Solvency Capital Intensity for future climate scenarios?
The Solvency Capital Requirement (SCR) is a key regulated metric for (re)insurance companies. How future climate change might impact the SCR is of key interest for the long-term viability of the industry. Using exposure as the normalisation quantity for the calculation of capital intensity enables modelling across companies with different underwriting strategies and risk appetites.
10. Business impact from the increased risk to insurability from climate change – will this lead to reduced volume in certain LOBs / regions?
Changes in insurability have the potential to lead to changes in underwriting risk, which is a component of overall Enterprise Risk Management under Solvency II. Climate change has the potential to lead to heterogeneous changes in premium volume across Lines of Business and regions.
About Lighthill Risk Network
The Lighthill Risk Network is an all-encompassing and inclusive organisation with the specific aim of facilitating and enhancing knowledge transfer into business from academic, government and commercial experts at the forefront of risk-related research.
For more information, please visit: https://lighthillrisknetwork.org/
PR Contacts
Helen Wright, Lysander PR
helen@lysanderpr.com
07842 729 579
Roddy Langley, Lysander PR
roddy@lysanderpr.com
07547 901 618
Steve Colton, Lysander PR
steve@lysanderpr.com
07554 889 056
Summary
The wind speed increased by up to +13% and the return period reduced by up to -60% across the gates and cases examined. The annual probability of a major hurricane making US landfall is now about 30% larger compared to the 1980 baseline. The probability of a Category 5 landfall has nearly tripled, from a 30-year to a 10-year event.
1. IRIS (v1.0)
IRIS is a new model built on several key innovations. It recognises that the key step for estimating landfall wind speed is the location and value of the life-time maximum intensity (LMI), and it redefines the problem as one of decay only (1). The initial value is physically constrained by the thermodynamic state as defined by the potential intensity (PI). Tracks are based on observations. The model (v0.1) description is undergoing minor revisions under peer review (2).
In July 2023 Maximum Information Ltd. performed independent validation of IRIS (v0.1). They identified a low bias in landfall counts (all storms), particularly near Florida. We confirmed their findings. To improve the model, we added several physically informed features to a new version, v1.0, which is used for this report.
We compare the mean climatology of IRIS(v1.0) and IBTRACS for 1980-2021. Figure 1 shows the eleven US gates which are used for the validation. The gate size was chosen to capture an adequate sample size while being approximately similar in scale.
Figure 1. Landfall locations and gate number. Coloured dots are IRIS simulations and circles are observations.
Figures 2-4 show the frequency of storms making landfall by gate and minimum intensity.
The storm rates are captured well at the continental scale: 2.52 yr-1 modelled compared to 2.40 yr-1 observed (Fig. 2). The rates at the different gates are also well captured. One noted difference is near the Carolinas (Gate 7), where observations are limited and the model count is higher, though within 2 standard deviations (Fig. 3). Figure 4 shows the analysis for major hurricanes (Category 3 and above). The continental rate of 0.54 yr-1 is also in good agreement with the observed 0.57 yr-1. As expected, model-versus-observed variability asserts itself at the local scale, and there is some difference between the gates. This is largely due to hurricanes entering a gate slightly more or less often than the adjacent one. Gates 7, 9 and 10 have no observations. Overall, there is no clear regional bias: the observations (one realisation) of 40 years are within the model variability.
Figure 2. Total storm (vmax > 17.5 m/s) count rate (yr-1) for each gate. Intervals and bars are the 5th-95th percentile range based on 42-year samples of the 10,000-year simulation.
Figure 3. As Figure 2 but for hurricanes (vmax> 33 m/s).
Figure 4. As Figure 2 but for major hurricanes (vmax>50 m/s).
2. Attribution Methodology
The IRIS model can be used to infer the additional strengthening of hurricanes that can be attributed to recent warming, or more specifically to changes in the potential intensity alone. Seventeen hurricanes making 18 landfalls with major impacts are considered (Figure 5).
Figure 5. The tracks of the 17 most damaging hurricanes since 1980. Red dots show landfall locations which are allocated to the appropriate gate (see Figure1).
We first need to consider the change in the thermodynamic environment. ERA5 reanalysis is used to calculate monthly mean PI fields during the satellite era since 1980. We consider global warming to manifest itself differently with latitude. We have low confidence in attributing regional or longitudinal trends to global or anthropogenic warming. The regional changes are more likely to be caused by decadal variations and less likely to be sustained or representative of global warming. It is, for example, very unlikely that the North Atlantic will continue to warm relative to the globe at the current rate. We are assuming that the underlying anthropogenic trend in PI is best represented by the global zonal mean. We know of no evidence, e.g. from climate models, that the anthropogenic portion of the increase in PI in the North Atlantic is enhanced compared to the zonal mean; if it is, then our attribution estimate would be conservative.

To calculate the PI field in any given year we apply the corresponding monthly global zonal mean trend to the 1980-2020 monthly mean PI field. In this way we can estimate the regressed anomalous PI field in any month and year. This regressed value is not the actual PI but the portion due to a linear change since "1980"; we use quotation marks around the year to signify that it refers to the regressed value rather than the actual PI for that year. Figure 6 shows the difference in potential intensity between "2021" and "1980" in the peak hurricane month of September based on the trend. There are large changes in the tropics which reduce in magnitude towards the subtropics and then increase again at higher latitudes. This meridional structure is interesting and differs from the SST trends, which tend to increase gradually from the tropics to higher latitudes.
Figure 6. Change in global zonal mean ERA-5 September regressed potential intensity between “2021” and “1980”.
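The regressed-PI construction described above amounts to adding the accumulated linear trend to the monthly climatology. A minimal sketch with synthetic numbers follows; the latitude grid, climatology and trend values are hypothetical stand-ins, not the actual ERA5 fields:

```python
import numpy as np

# Hypothetical September monthly-mean PI climatology (1980-2020), m/s,
# on a coarse latitude grid
lats = np.array([0, 10, 20, 30, 40])
pi_clim_sep = np.array([85.0, 80.0, 70.0, 55.0, 40.0])

# Hypothetical global zonal-mean linear trend in September PI, m/s per year
trend_sep = np.array([0.06, 0.05, 0.02, 0.03, 0.05])

def regressed_pi(year, base_year=1980):
    """Regressed PI for a given "year": climatology plus the portion of the
    linear trend accumulated since the base year (not the actual PI)."""
    return pi_clim_sep + trend_sep * (year - base_year)

# Difference between "2021" and "1980", analogous to Figure 6
delta = regressed_pi(2021) - regressed_pi(1980)
print(dict(zip(lats.tolist(), np.round(delta, 2).tolist())))
```

By construction the "year" fields differ only through the linear trend, so decadal variability is excluded, which is the point of the quotation-mark notation in the text.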
The frequency of landfall is the next consideration. This version of the IRIS model does not change the number of events in the Atlantic; only the initial life-time maximum intensity is modified by the PI. However, the landfall rate will change, because on average storms last longer if they have a larger initial value. Since we account for neither the relative warming of the North Atlantic nor the increase in the number of hurricanes (which are likely related), our attribution is not complete but may be conservative. It is important to note that we are not examining the counterfactual of an individual hurricane, but the counterfactual ("1980") of that gate climatology in the year of the landfall. For reference we take the observed intensity at landfall.
3. Results
3.1 US landfalls
Table 1 shows the results for the 18 landfall cases. When interpreting the results it is important to note that we are comparing events of the type of the named hurricane. We are not re-running the individual case with higher potential intensity (see the Sandy case study below as an example of that). Rather, the cases are representative of events of that intensity in the respective gate under different potential intensity states.
Table 1. Hurricane wind speed and return period for the gate for the most damaging hurricanes since 1980. Change is the difference relative to the "1980" baseline. Georges made two damaging landfalls, in Gate 11 ("Georges 1") and Gate 3 ("Georges 2").
Relative to a baseline PI climate of "1980", the relative wind speed increase ranges up to 13% (Sandy, 2012). The biggest absolute change, of 6 m/s, is found for Ida (2021). The largest absolute decreases in return period are found for the largest return periods, such as for Andrew (1992), whereas the largest relative change of -60% is found for Michael (2018). The absolute change in wind speed over the years follows the PI trend, whereas the reduction in return period is much more variable over time. The return-period curves, shown up to 200 years (except Andrew, which is shown up to 1000 years), for the appropriate gates for all 18 landfall cases are attached in the supplementary figures.
As well as considering the individual cases, it is also informative to understand how the overall US landfall risk has changed. The general increase in wind speed for major hurricanes is about 4-6 m/s. Figure 7 shows the return period of the intensity for all US landfalls for the present ("2022") and past ("1980") climate. The annual probability of a major hurricane (vmax > 50 m/s) at landfall has increased by 31%, the return period falling from 2.1 years to 1.6 years. The change for Category 5 is even more dramatic: the probability has approximately tripled, from a 30-year event to a 10-year event. The observed climatology lies largely in between the two "years", supporting the IRIS model assumptions.
Figure 7. Mean maximum wind speed (m/s) of US landfall storms vs return period (years) for observations (black), "2022" (orange) and "1980" (blue). 200-year samples (grey) out of the 10,000-year "2022" simulation.
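The quoted probability changes follow directly from converting return periods to annual exceedance probabilities (probability = 1 / return period):

```python
def prob_change(rp_past, rp_now):
    """Relative change in annual exceedance probability implied by a
    change in return period (annual probability = 1 / return period)."""
    return (1.0 / rp_now) / (1.0 / rp_past) - 1.0

# Major hurricane landfall: return period 2.1 years -> 1.6 years
print(round(prob_change(2.1, 1.6) * 100))   # ≈ +31%

# Category 5 landfall: return period 30 years -> 10 years
print(round(prob_change(30, 10) * 100))     # +200%, i.e. a tripling
```
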
3.2 Case studies: Sandy and Andrew
Sandy (2012) caused an estimated economic damage of $60B. It had been assumed that wind speed attribution was not possible; instead, one study attributed only the anthropogenic sea-level-rise contribution to the surge damage, estimated at approximately 13% (3). Our study here reveals that, by the time of Sandy, a cyclone of Sandy's wind speed had become about 37% more likely. Conversely, the expected wind speed at a 30-year return period was about 13% higher than in the "1980" baseline. We now have a wind speed attribution. This is the largest percentage increase in wind speed of all the hurricanes studied. If we remove the Sandy track from IRIS, the landfall rate for Gate 10 decreases by only about 12%; the model climatology at that gate is not very sensitive to one event.
Does IRIS suggest climate change has made Sandy 13% more intense? We could use IRIS differently (not as originally intended) and simulate the possible intensities at landfall of Sandy's track post-LMI as an ensemble of counterfactuals. Figure 8 shows the distribution of counterfactual landfall wind speeds for a "1980" and a "2012" climate. The probability of reaching the observed Category 1 landfall wind speed, 38.5 m/s, is about 20% (Figure 8). At that probability there is a shift to larger intensity in "2012" compared to the "1980" baseline of +1.7 m/s, or only 4%, compared to 13% for the gate. The probability of reaching the same intensity in "2012" increases by +0.03 (+18%) compared to "1980". This change is driven only by the potential intensity change at Sandy's fixed LMI location, and the small magnitude of the wind speed change would not be detectable. It is important to state that the landfall climatology at Gate 10 is produced by many LMI locations and different PI trends. The counterfactual Sandy case (barely detectable) and the IRIS climatology are not the same: Sandy counterfactuals contribute to the gate climatology, but in essence IRIS is a climate model. Removing Sandy from the observations or the model has little effect (Figure 9).
Figure 8. CDF against maximum wind speed (m/s) for 5000 member ensemble of Hurricane Sandy (2012) simulations in 1980 and 2012 climate.
Figure 9. The effect of removing Sandy on the gate return period: model (blue) and observations (black dashed) compared to the base model (orange) and all observations (black).
Andrew (1992) was an exceptional Category 5 hurricane at landfall in Florida. The IRIS model places this type of event at a very low probability, with a return period in excess of about 500 years. There is no meaningful attributable change in the wind speed. The LMI of Andrew was near the Bahamas (25°N), which lies in the belt of minimal PI trend (Figure 6), so any change of PI and LMIs by 1992 was modest. The observed rates in Andrew's Gate 6 are very sensitive to individual events because the sample size is small, and Andrew itself had a substantial effect on the observed major hurricane rate. The model rate is about 50% lower than that observed (Figure 4). However, if we remove just Andrew from the observational analysis, the observed rate of major hurricanes more than halves and agrees much better with the model (Figure 10). The overall model landfall rate at this gate is only reduced a little when Andrew is removed from the model.
Figure 10. The effect of removing Andrew on the gate return period: model (blue) and observations (black dashed) compared to the base model (orange) and all observations (black). Major hurricane lines are shown to illustrate the impact of Andrew's removal on the gate rate shown in Figure 4.
4.0 Conclusions
We find a wide range of increases in wind speeds (up to +13%) and reductions in return periods (up to -60%) for the 18 landfalls. Overall, the probability of major hurricane (vmax > 50 m/s) landfalls in the US has increased by about 31% since 1980, and the probability of a Category 5 landfall has nearly tripled. This model result is at least consistent with the observed doubling of major tropical cyclones making landfall globally since 1980 (4). For the whole North Atlantic basin, a doubling of the major hurricane fraction since 1980 has been reported (5). The model major hurricane landfall rate trend is somewhat smaller than the trend of the basin fraction, but the near tripling of model landfall Category 5s is plausible given these observations. It is noteworthy that the model absolute wind speed increase is only about +5 m/s. This is within the estimated IBTRACS uncertainty of the maximum wind speed reported in the 1980s. The economic damage per year is extremely variable. It is thus to be expected that the detection of historic landfall wind speed trends (4) and damage trends has been both challenging and controversial. We note from the IRIS model simulations that the intensity changes are indeed found to be small, but that there has likely been a substantial change in the probability of damaging hurricanes.
Here we assumed a constant total hurricane rate and a global zonal mean increase in PI. The actual annual probability can be expected to be larger, because the Atlantic potential intensity has increased more than the global mean and the number of North Atlantic hurricanes has also increased; however, it is not clear whether these two changes are due to anthropogenic forcing or to variability (more likely the latter). It therefore appears that the expected annual loss, driven by major hurricanes, has increased by at least 30% over the last four decades due to global warming.
References
1. Wang, S., Toumi, R. Recent migration of tropical cyclones toward coasts. Science 371, 514-517 (2021). https://doi.org/10.1126/science.abb9038
2. Sparks, N., Toumi, R. IRIS: Imperial College Storm Model, Scientific Data, under review.
3. Strauss, B.H., Orton, P.M., Bittermann, K. et al. Economic damages from Hurricane Sandy attributable to sea level rise caused by anthropogenic climate change. Nat Commun 12, 2720 (2021). https://doi.org/10.1038/s41467-021-22838-1
4. Wang, S., Toumi, R. More tropical cyclones are striking coasts with major intensities at landfall. Sci Rep 12, 5236 (2022). https://doi.org/10.1038/s41598-022-09287-6
5. Kossin JP, Knapp KR, Olander TL, Velden CS. Global increase in major tropical cyclone exceedance probability over the past four decades. Proc Natl Acad Sci U S A. 2020 Jun 2;117(22):11975-11980. doi: 10.1073/pnas.1920849117. Epub 2020 May 18. Erratum in: Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29990. PMID: 32424081; PMCID: PMC727571
Supplementary Figures
Mean maximum wind speed (m/s) at the gate of the named hurricane vs return period (in years) for “year” of the hurricane (orange line) and “1980” (blue line). 200-year samples (grey lines) out of 10,000 years of the “year” of the hurricane simulation.
The uncertainties of this study depend on some of the key assumptions. The potential intensity calculation and its changes are sensitive to ERA5 reanalysis errors. The assumptions of a constant uniform probability distribution of the relative intensity (LMI/PI) and of constant tracks are important sources of uncertainty, which could either decrease or increase the scale of the attribution presented here.
The final methodology and results are provided for general information purposes only, and do not represent the opinion of any individual within the Lighthill Risk Network or its members. The information is not intended to be taken as advice with respect to any individual situation and cannot be relied upon as such.
London, 27 February 2023: Research collaboration organisation, Lighthill Risk Network Ltd (Lighthill), today announces the publication of a significant and far-reaching research report, Best practices for modelling the physical risks of climate change, that addresses the challenges facing regulators and the (re)insurance industry as they look to model the impacts of climate change.
Regulators take different approaches to guiding (re)insurers on climate change, resulting in a lack of consistency and inadvertently hindering (re)insurers' efforts to model climate change risks as accurately and completely as possible. Lighthill has identified this problem and responded with a report that details best-practice methodologies based on industry modelling and reporting standards, for the benefit of its members, who include Aon, Hiscox, MS Amlin, Guy Carpenter, Liberty Syndicates and Lloyd's.
Amid strong policy signals from regulators and governments that climate change impact reporting must be factored into the whole financial services industry, to date a myriad of disparate measures have been deployed globally to try to quantify how longer time scale climate change shocks might impact the balance sheets of banks, (re)insurers and investment funds. This report examines what a warmer world implies for (re)insurers, and best practice guidance for how catastrophe models can be used to estimate climate change impacts across different time horizons.
Lighthill Chief Executive Dickie Whitaker said:
“The world of risk is evolving ever faster and, to ensure that appropriate risk modelling is used moving forwards, a collaborative and harmonious approach must be found to contain the cost of compliance for (re)insurers which, if not addressed now, will ultimately spiral and impact the effectiveness of regulated entities. We need more models to be able to reflect changes to the climate already taking effect, and we need greater appreciation for which approaches suit which types of questions.
This report could therefore not have come at a better time. With a focus on the learnings from the latest IPCC Assessment Reports, plus Best Practices in Modelling Climate Change, we hope that industry stakeholders will find this open and collaborative paper invaluable in helping them better understand and update their models to factor in the risks of climate change, and how best to model weather hazards over time.”
Jeremy Hindle, Lead Report Author and Director & Consultant – Risk, ESG, Climate, Data Standards & Modelling, said:
“The impacts of climate change, especially for perils such as severe convective storm, flood and wildfire, already present challenges in ensuring that models reflect current climate conditions.
Predicting how the future climate will influence changes in frequency and severity under different pathways, time horizons and temperature changes has led to a lack of consensus on how best to derive decision-useful outputs that give all stakeholders what they need. As the insurance sector plays such a key role in bringing financial relief to the victims of such disasters, fully understanding and modelling the future impacts of climate change requires a holistic view of all approaches to climate change risk modelling.
The time horizon of the questions being asked will determine, to a large degree, what analytical approaches are used: catastrophe models are useful for near-term decision making, whilst deterministic approaches are better for the long term. Conversely, the extended time horizons chosen by regulators for climate change scenario analyses have little value to the non-life insurance industry for pricing, solvency or risk assumption, so one can see that different approaches are required to measure both shorter and longer time horizon impacts of climate change.
(Re)insurers, working closely with catastrophe model vendors, need to ensure that risk modelling accurately reflects the risk, including current climate change. This report – which is free and open to anyone to access – provides a basis for collaboration across industry stakeholders to make the best use of the available resources, and to help them simplify and align best practices for modelling climate change.
I would like to thank the expert and insightful contributions of all Lighthill Risk Network members, as well as AXA XL, Hiscox, Impact Forecasting, JBA Risk, Lambda Climate Research, Maximum Information, The Institutes, and Zurich Insurance, who have been instrumental in the creation of this report.”
Kirsten Mitchell-Wallace, PhD, Director of Portfolio Risk Management Markets, Lloyd’s, said:
“Understanding and quantifying the physical risks of climate change is necessary for the (re)insurance industry to fulfil its role in assisting the transition to a carbon neutral economy. Lloyd’s welcomes this report’s drive towards dialogue and cooperation, which should increase the efficacy of regulation in this area.”
Dr Jessica Turner, Head of International Cat Advisory at Guy Carpenter, said:
“This landscape is evolving rapidly – and the need to disclose and report on climate change impacts is valuable and necessary. As well as highlighting how the use of catastrophe models has evolved over the years, the report establishes best practices in the use of data, methodologies and tools for the modelling of climate change risk assessment. It also addresses several key themes, including time horizons and climate scenarios, climate-related financial disclosures, and the impact of future temperature changes.”
Access the report Best practices for modelling the physical risks of climate change here: https://lighthillrisknetwork.org/reports/