Top Climate Challenges
The key research challenges for insurers and academics to address
Tipping Points
In climate science, a tipping point is a critical threshold that, when crossed, leads to large, accelerating and often irreversible changes in the climate system.[1] If tipping points are crossed, they are likely to have severe impacts on human society and may further accelerate global warming.[2][3] Tipping behaviour is found across the climate system, in ecosystems, ice sheets, and the circulation of the ocean and atmosphere.[3]
Armstrong McKay et al. (2022) identified 16 distinct tipping points, noting that 5 of them may already have passed their critical warming thresholds. In other words, at least 5 may already be in the process of occurring, with no recourse for prevention.
At present, tipping points are not included in any market-deployed catastrophe risk models, yet they define the truly catastrophic and systemic risks of climate change. While the probability of any single tipping point occurring may seem remote, it has yet to be tested whether the probabilities, time horizons and impacts of occurrence already fall within typical insurance-focused return periods and risk transfer products, or whether they will begin to in the near future. Further, it remains to be studied how tipping points may be correlated, and whether the joint probabilities of occurrence warrant explicit treatment of multiple tipping point occurrence in the catastrophe risk models available to the insurance industry. One simple way to frame the correlation question is sketched below.
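As an illustration only, the minimal sketch below couples hypothetical marginal tipping probabilities through a Gaussian copula and estimates the chance of multiple elements tipping together. Every number in it (the marginal probabilities, the uniform correlation) is a placeholder assumption, not an estimate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical marginal probabilities that each tipping element occurs
# within the horizon of interest (placeholders, not estimates)
p_marginal = np.array([0.05, 0.03, 0.10, 0.02, 0.08])
n = len(p_marginal)

# Assumed uniform pairwise correlation between elements
rho = 0.4
corr = np.full((n, n), rho) + (1 - rho) * np.eye(n)

# Gaussian copula: correlated standard normals thresholded at each
# marginal quantile; an element "tips" when its normal falls below it
n_sims = 500_000
z = rng.multivariate_normal(np.zeros(n), corr, size=n_sims)
occurred = z < norm.ppf(p_marginal)   # boolean array, shape (n_sims, n)

counts = occurred.sum(axis=1)
print(f"P(>=1 element tips): {(counts >= 1).mean():.4f}")
print(f"P(>=2 elements tip): {(counts >= 2).mean():.4f}")
```

In this toy setup, positive dependence makes the probability of two or more elements tipping materially higher than under independence, which is precisely the joint-occurrence question posed above.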
Link between transition and physical modelling of climate change – are NGFS scenarios fit for purpose?
NGFS scenarios are widely used for financial impact modelling of climate change, although with a focus on the economic parameters for transition modelling. With (re)insurers focused on insured losses from extreme weather events, are NGFS scenarios adequate for modelling to a level comparable to what is required for a Solvency II (SII) internal model?
High-resolution information and downscaling
The need for downscaling arises because global climate models do not simulate the extreme events of interest. They fail to simulate these events not through error, but largely because it is not numerically possible at their coarse resolution. Proposed solutions include increasing the resolution of global climate models or using regional climate models (IPCC_AR6_WGI_Chapter08, p.85). Statistical downscaling is a form of bias correction which critically assumes that the same scaling relationship applies to both historical and future datasets. A minimal example of this assumption in action is sketched below.
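The sketch below applies empirical quantile mapping, the simplest form of statistical downscaling / bias correction, to synthetic data. The quantile relationship fitted between model and observations on the historical period is carried over unchanged to the future period, which is exactly the transfer assumption flagged above; all arrays are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_hist = rng.gamma(2.0, 3.0, size=5000)   # observed historical values
mod_hist = rng.gamma(2.0, 2.4, size=5000)   # biased model, historical run
mod_fut = rng.gamma(2.2, 2.4, size=5000)    # biased model, future run

def quantile_map(x, model_ref, obs_ref):
    """Map values x onto the observed distribution via empirical quantiles."""
    # Find each value's quantile within the model's historical distribution...
    ranks = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    ranks = np.clip(ranks, 0.0, 1.0)
    # ...then look up the same quantile in the observed distribution
    return np.quantile(obs_ref, ranks)

mod_fut_corrected = quantile_map(mod_fut, mod_hist, obs_hist)
print("raw future mean:      ", mod_fut.mean().round(2))
print("corrected future mean:", mod_fut_corrected.mean().round(2))
```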
The other main challenges in downscaling from climate models include the weak coordination of production of downscaled datasets, which leads to significant errors in high-resolution downscaled reconstructions of the current climate (IPCC_AR6_WGI_Chapter10, p.15). Additionally, integrating these downscaled datasets with historical change and process-based understanding remains an important challenge (IPCC_AR6_WGI_Chapter10, p.15).
The ability to project all aspects of shifting climatic impact-driver profiles and their effects at fine, local scales often relies on dynamical downscaling and additional impact modelling steps, making the quantification of uncertainties more challenging (IPCC_AR6_WGI_Chapter12, p.103). Evaluating the added value of downscaling is also complex (IPCC_AR6_WGII_Chapter11, p.10). Ref: https://www.nature.com/articles/nclimate3418
How much change is there in the system already, and what is the existing climate attribution?
Whilst much of the focus on climate change tends to be through a future lens, very little attention is paid to how much risk may have changed already. Validation of catastrophe models often takes the form of contrasting their output with historical data, but little is done in validation – both of and by the vendors – to show how risk may have changed up to the present day, and thus to confirm that the view of risk presented by the models is genuinely up to date.
Some of the issue comes from the fact that we are hamstrung by our reliance on historical data: tail risk trends simply cannot be spotted in short 50- or 100-year historical datasets (see the sketch below). If we are able to use – and potentially be involved in the design of – more ensemble climate modelling in reanalysis of the past, we can better understand how risk is shifting up to the present day, and gain comfort that the numbers presented by the catastrophe modelling companies genuinely reflect present-day risk. Having multiple counterfactual analyses of prior years may also help disentangle natural variability from climate deltas. There is an additional role for including population and land use change in historical risk-shift considerations, to help identify where increasing historical losses may be driven more by demographics than by climate change, a confusion that can distort the apparent size of the climate delta. More broadly, disentangling physical climate change from other drivers of loss – land use and land cover change, economic growth and exposure accumulation – remains an open methodological challenge.
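The detection problem is easy to demonstrate in simulation. The sketch below, using purely illustrative parameters, generates 100-year records of annual extreme-event counts whose underlying rate rises by 50% over the century, then checks how often a simple rank-based trend test actually flags the trend.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
years = np.arange(100)
# Underlying event rate drifts from 0.5 to 0.75 events/year (+50%)
rates = 0.5 * np.linspace(1.0, 1.5, len(years))

n_trials = 2000
detected = 0
for _ in range(n_trials):
    counts = rng.poisson(rates)                 # one simulated century
    tau, p_value = kendalltau(years, counts)    # Mann-Kendall-style test
    if tau > 0 and p_value < 0.05:
        detected += 1
print(f"Trend detected in {detected / n_trials:.0%} of simulated centuries")
```

In this toy setup, even a 50% rise in the underlying rate is detected only a minority of the time, illustrating why short historical records alone cannot validate a present-day view of tail risk.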
Improved Ensemble projections
The weaknesses of current ensemble approaches include:
1. Model democracy assumption: Assessments of climate model ensembles have commonly assumed that each individual model is of equal value, which both diminishes the influence of models that exhibit a better agreement with observations (Haughton et al., 2015; Tapiador et al., 2020) and makes the erroneous assumption that the models can be treated as statistically independent. (IPCC_AR6_WGI_Chapter01, p.82)
2. Selection bias: In some cases, ensembles are sub-selected by removing poorly performing model simulations or simulations that are perceived to add little additional information. This can introduce biases in the ensemble, such as favoring certain models over others (McSweeney et al., 2015; IPCC_AR6_WGIII_Annex-III, p.36).
3. Lack of formal methods to reduce bias: While sampling bias is recognized in assessments, formal methods to reduce bias are not employed due to conceptual limitations (IPCC_AR6_WGIII_Annex-III, p.36).
4. Uncertainty in ensemble size: The appropriate ensemble size for robust use depends on various factors, such as the model, physical variable, spatial and time aggregation, acceptable error magnitude, and the type of questions being addressed. Determining the optimal ensemble size can be challenging (Deser et al., 2012, 2017b; Kang et al., 2013; IPCC_AR6_WGI_Chapter10, p.47).
5. Ensemble boosting: Ensembles still suffer from limitations in simulating extremes of rain, wind and temperature. There is interest in exploring the technique of ensemble boosting, where outliers of the original ensembles are re-run to examine a case in more detail. This has been applied to extreme temperatures (e.g. Fischer et al., 2023).
6. Lack of exploration of other approaches, such as storylines: The typical “IPCC approach” is to derive probabilities for a change in variable X at location Y by looking at the mean and spread of that variable across the climate model ensemble, often with an implicit assumption that the spread of results is a proxy for the spread of possible weather conditions under the given level of climate change (which conflates aleatoric and epistemic uncertainty). An alternative is to use “physical storylines”, where distinct and plausible manifestations of climate change are identified across the ensemble, for example by clustering (see the sketch after this list). Physical storylines usually describe different plausible changes in atmospheric dynamics (forced changes in large-scale features such as jet streams), which is one of the biggest epistemic uncertainties in climate science, particularly at the scales the industry is interested in. Storylines can be thought of as “scenarios within a scenario”, with appropriate “high impact” ones identified and investigated for stress testing or precautionary planning. Finally, while individual storylines are usually treated without addressing their likelihood, frameworks for exploring this are evolving.
Overall, these weaknesses highlight the need for ongoing research and development to improve the effectiveness and reliability of ensemble approaches in climate modelling.
Ref: Fischer, E.M., Beyerle, U., Bloin-Wibe, L. et al. Storylines for unprecedented heatwaves based on ensemble boosting. Nat Commun 14, 4643 (2023). https://doi.org/10.1038/s41467-023-40112-4
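As one hedged illustration of the storyline idea in item 6, the sketch below clusters a synthetic 40-member ensemble by two hypothetical dynamical diagnostics (a jet-shift index and a regional precipitation change) and reads each cluster centre as a candidate storyline. The diagnostics, member count and cluster count are all assumptions for illustration, not a prescribed method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_members = 40

# Synthetic stand-ins for per-member ensemble diagnostics
jet_shift = rng.normal(0.0, 1.0, n_members)                       # deg latitude
precip_change = 0.5 * jet_shift + rng.normal(0.0, 0.8, n_members) # percent

X = np.column_stack([jet_shift, precip_change])

# Cluster members into three candidate "storylines" of circulation change
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for k, centre in enumerate(kmeans.cluster_centers_):
    members = int((kmeans.labels_ == k).sum())
    print(f"Storyline {k}: jet shift {centre[0]:+.2f} deg, "
          f"precip change {centre[1]:+.2f}%, members: {members}")
```

Each cluster centre is then a distinct, physically coherent manifestation of change that can be taken forward into stress testing, rather than averaging across the ensemble.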
A comprehensive assessment of all perils affected by climate change
Much work has focused on the major perils of tropical cyclone and flood. Climate change does, however, have a wide variety of other impacts that need to be assessed globally and with a consistent method, with, as always, an appreciation of the uncertainty.
Climate change has a wide array of impacts on various perils:
1. Extreme Weather Events: More frequent and intense storms, heatwaves, and droughts due to altered climate patterns.
2. Rising Sea Levels: Melting ice caps and glaciers lead to rising sea levels, posing threats to coastal areas, causing erosion, and increasing the risk of flooding.
3. Wildfires: Increased temperatures and drier conditions escalate the frequency and severity of wildfires, posing risks to ecosystems and human settlements, through direct damage and poor air quality.
4. Water Scarcity: Altered precipitation patterns and increased evaporation contribute to water shortages, impacting agriculture, ecosystems, and communities.
5. Ecosystem Disruption: Climate change affects ecosystems, disrupting biodiversity, endangering species, and altering habitats.
6. Health Risks: Changes in climate patterns influence the spread of diseases, heat-related illnesses, and respiratory issues due to air pollution.
7. Food Insecurity: Altered growing conditions affect crop yields, leading to food shortages and price increases, impacting global food security.
These perils intertwine and exacerbate each other, posing significant challenges to communities, economies, and the environment worldwide.
Decision making under uncertainty
Catastrophe models are complex tools that enable (re)insurers to analyse hazard and vulnerability components on their exposures in order to test financial loss impacts under different conditions. Adjusting for climate change, especially far into the future, can amplify the uncertainty that is inherent within model outputs.
This uncertainty can be split between aleatory (inherent randomness, as in dice throwing) and epistemic (incomplete or inaccurate knowledge). Aleatory uncertainty cannot be reduced by capturing additional data, whereas epistemic uncertainty can, to an extent, be reduced.
Uncertainty can be further separated into primary and secondary uncertainty. The former concerns whether the model captures the full range of outcomes for the hazard: for example, is the event catalogue complete, does it include the absolute worst-case scenario, is the exposure data complete and accurate, and what is the uncertainty in the event rates? Secondary uncertainty concerns the uncertainty in the actual damage that results from an event, given that the event has occurred. Translating windspeed, ground motion or flood depth into damage requires parameterisation and assumptions in the form of vulnerability functions, whose outputs then need to be calibrated against actual insured loss experience. Each new loss event provides model vendors with vital new data points on which to recalibrate damage functions, but there is always uncertainty in translating these into individual building performance at the time of a loss. For example, a lack of standardised claims data inhibits model developers from capturing all the insights available from each new loss.
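To make the secondary-uncertainty step concrete, the sketch below uses a hypothetical S-shaped vulnerability curve to map wind speed to a mean damage ratio, with a beta distribution representing the spread of damage given that the event occurred. The curve parameters and dispersion are illustrative assumptions, not calibrated values from any vendor model.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_damage_ratio(wind_ms, v_half=60.0, k=4.0):
    """Hypothetical S-shaped vulnerability curve: damage ratio in (0, 1)
    as a function of wind speed (m/s). v_half is the speed at 50% damage."""
    return 1.0 / (1.0 + (v_half / np.maximum(wind_ms, 1e-6)) ** k)

def sample_damage(wind_ms, n, dispersion=10.0):
    """Secondary uncertainty: beta-distributed damage around the curve mean.
    Higher dispersion implies tighter confidence in the mean."""
    mu = mean_damage_ratio(wind_ms)
    a, b = mu * dispersion, (1.0 - mu) * dispersion
    return rng.beta(a, b, size=n)

wind = 55.0                                  # m/s at a site, given the event
samples = sample_damage(wind, n=100_000)
print(f"mean damage ratio:   {samples.mean():.3f}")
print(f"5th-95th percentile: {np.quantile(samples, [0.05, 0.95]).round(3)}")
```

The wide percentile band for a single wind speed is the secondary uncertainty that claims data are used to narrow over time.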
There is then likely to be a lag between a rapidly evolving climate state, the new extremes of loss it produces, and the incorporation of those insights into models. At the same time, we must not confuse our models for the reality they simulate. While important, models are only part of the toolkit for decision makers, who will need to draw on several forms of information to make effective decisions under uncertainty. Several frameworks for decision making under uncertainty already exist – in data science and statistics, operations research and psychology, among other fields – and some have been operationalised in national policy documents.
Defining risk horizons: P&C re/insurance is somewhat fixated on annual event occurrence rates, with little consideration of probabilities over multi-year spans of time. Probabilistic seismic hazard assessment takes the opposite view, being more interested in the 10% or 1% chance of an event of a given size in a 50-year period. Climate projections also lean towards the latter, making projections over future multi-year blocks of time. Should we be formalising multi-year risk horizons of interest for cat risk projections? For example, there is a circa 10% chance of seeing a 1-in-200 (annual rate) event over a 20-year period (see the worked check below), which would pin our minimum risk horizon of interest to the mid-2040s. This would also be compatible with the reported capability of climate models to differentiate between RCPs on a minimum of ‘several decades’ (Fiedler et al., 2021).
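The "circa 10%" figure is straightforward to verify. Assuming independent years, the chance of at least one exceedance of an annual 1-in-N event over an H-year horizon is 1 - (1 - 1/N)^H, as in the short check below.

```python
def multi_year_prob(return_period: float, horizon_years: int) -> float:
    """Probability of at least one exceedance of an annual 1-in-N event
    over a multi-year horizon, assuming independent years."""
    return 1.0 - (1.0 - 1.0 / return_period) ** horizon_years

for rp in (100, 200, 500):
    p = multi_year_prob(rp, horizon_years=20)
    print(f"1-in-{rp} annual event, 20-year horizon: {p:.1%}")
# The 1-in-200 case gives ~9.5%, the "circa 10%" quoted above
```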
Decadal change analysis
There is a need for a business risk view on the 5–10 year timescale. However, this sub-decadal timescale is somewhat neglected by research, falling as it does between seasonal forecasting and long-term climate change. Yet the path to climate change can be expected to include decadal fluctuations which will have a business impact.
There is a research gap around decadal predictions and the behaviour of the important modes of variability – the North Atlantic Oscillation (NAO), Atlantic Multidecadal Oscillation (AMO), Pacific Decadal Oscillation (PDO), El Niño–Southern Oscillation (ENSO) and others – on decadal timescales. It is not clearly understood how these modes change as the mean state shifts. This is a hard problem, as the sub-decadal timescale sits in a “grey zone” that is less constrained by ocean surface temperatures (important for seasonal forecasting) or by atmospheric boundary conditions (climate and greenhouse gases).
Can we adequately model Solvency Capital Intensity for future climate scenarios?
The Solvency Capital Requirement (SCR) is a key regulated metric for (re)insurance companies. How future climate change might impact the SCR is of key interest for the long-term viability of the industry. Using exposure as the normalisation quantity for the calculation of capital intensity enables modelling across companies with different underwriting strategies and risk appetites, as sketched below.
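A minimal sketch of the normalisation, with entirely hypothetical figures, is shown below: a modelled capital requirement divided by total insured exposure yields a capital intensity that can be compared across climate scenarios and across companies of different sizes.

```python
# All figures are hypothetical placeholders, not market estimates.
scenarios = {
    # scenario: (modelled capital requirement, total insured value), GBP m
    "baseline":      (450.0, 120_000.0),
    "2 deg by 2050": (520.0, 121_000.0),
    "4 deg by 2100": (610.0, 118_000.0),
}

for name, (capital_req, exposure) in scenarios.items():
    # Capital intensity: capital requirement per unit of insured exposure
    intensity = capital_req / exposure
    print(f"{name:>14}: capital intensity = {intensity:.4%} of exposure")
```

Because the metric is exposure-normalised, a rising intensity across scenarios signals deteriorating capital efficiency independently of portfolio growth or shrinkage.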
Business impact from the increased risk to insurability from climate change – will this lead to reduced volume in certain LOBs / regions?
Changes in insurability have the potential to lead to changes in underwriting risk, which is a component of overall Enterprise Risk Management under Solvency II. Climate change has the potential to lead to heterogeneous changes in premium volume across Lines of Business (LOBs) and regions.