Our latest paper has just appeared in Nature Climate Change: “Evaluation of CMIP5 palaeo-simulations to improve climate projections”. It was mainly the brainchild of Sandy Harrison, intended as a response/update to the “preview” paper “Evaluation of climate models using palaeoclimatic data” which promised that the PMIP component of CMIP5 would
provide assessments of model performance, including whether a model is sufficiently sensitive to changes in atmospheric composition, as well as providing estimates of the strength of biosphere and other feedbacks that could amplify the model response to these changes and modify the characteristics of climate variability.
So, how did PMIP do? Well, when we started writing the paper, we looked to see what work had been published that addressed these questions. There was not perhaps quite as much as we might have hoped, but enough to say that
Palaeo-evaluation has shown that the large-scale changes seen in twenty-first-century projections, including enhanced land–sea temperature contrast, latitudinal amplification, changes in temperature seasonality and scaling of precipitation with temperature, are likely to be realistic. Although models generally simulate changes in large-scale circulation sufficiently well to shift regional climates in the right direction, they often do not predict the correct magnitude of these changes. Differences in performance are only weakly related to modern-day biases or climate sensitivity, and more sophisticated models are not better at simulating climate changes. Although models correctly capture the broad patterns of climate change, improvements are required to produce reliable regional projections.
We started work on the paper around the time we visited Reading last summer, and mostly finished it during our trip to France. Most of the time since then it’s been sitting in limbo waiting for space to be published. It’s pleasing that this travel actually generated a tangible result, which doesn’t always turn out to be the case. And also pleasing that jules and I can continue to make contributions to the literature despite the lack of grey cubicles to spend our days in 🙂
Four figures summarise the main results. The first shows that the large-scale behaviour of the models is consistent between past and future climates (top plots) and that the past climates are consistent with data (bottom plots).
However, when we look at some smaller (but still quite large) regions there are substantial problems, with the modelled mid-Holocene monsoon too weak to support the vegetation that was present in what is now the Sahara desert. In some places, the models barely register any change at that time, or even move in the opposite direction to the data.
We summarised the climatologies with a Taylor diagram: the Last Glacial Maximum results show at least a positive correlation between models and data (the temperature results are fairly good), whereas for the mid-Holocene, the model results are clustered close to the zero correlation line (that’s the vertical axis in the diagram below) and have far too little spatial variability. Pale colours are the older PMIP2/CMIP3 results where available, showing that things have not changed significantly with the new generation of models.
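For anyone unfamiliar with how the points on a Taylor diagram are placed: each model sits at a radius given by the ratio of its spatial standard deviation to that of the data, and at an angle given by the pattern correlation (zero correlation puts a point on the vertical axis). A minimal sketch of those two statistics, using toy numbers rather than anything from the paper:

```python
import numpy as np

def taylor_stats(model, obs):
    """Spatial statistics behind a single Taylor diagram point.

    Returns the pattern correlation between model and observed
    fields, and the ratio of the model's spatial standard deviation
    to that of the observations (the radius on the diagram).
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(model, obs)[0, 1]          # pattern correlation
    sd_ratio = model.std() / obs.std()          # normalised variability
    return r, sd_ratio

# Toy illustration (not real climatology): a "model" field that gets
# the spatial pattern right but has far too little variability, which
# is the mid-Holocene failure mode described above.
obs = np.array([1.0, -2.0, 3.0, -1.5, 0.5])
mod = 0.3 * obs + 0.1
r, sd = taylor_stats(mod, obs)
# r is 1 (perfect pattern) but sd is 0.3 (muted variability)
```

A point like this would sit well inside the unit arc of the diagram, regardless of its angle.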
We also re-examined the question of equilibrium climate sensitivity. This had been raised in the context of CMIP3 by Hargreaves et al. (2012), who asked “Can the Last Glacial Maximum constrain climate sensitivity?” That paper pointed to the importance of checking its somewhat tentative result against the new CMIP5 models when they became available, and the new results are not so encouraging. In short, there is no detectable correlation between the equilibrium sensitivity of the models and their simulated LGM cooling. Not that the results are particularly incompatible with the correlation we had previously found either; they just form a rough ball in the right place without a slope either way. There is, perhaps, room to explore this in more detail, and a new paper by Hopcroft and Valdes (which seems to have been written while our paper was in the publication queue) does exactly this. There are question marks over one or two of the models, but it’s perhaps a mistake to go too far down this route, as the risks of cherry-picking and post-hoc justification are strong when the ensemble is so small to begin with.
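The check itself is just a correlation across a handful of ensemble members, which is partly why a null result is so easy to obtain. A minimal sketch, using made-up illustrative numbers rather than the actual CMIP5 values:

```python
import numpy as np

def ensemble_correlation(sensitivity, lgm_cooling):
    """Pearson correlation between equilibrium climate sensitivity
    and simulated LGM cooling across an ensemble of models."""
    return np.corrcoef(sensitivity, lgm_cooling)[0, 1]

# Hypothetical values only, NOT the real model results: a loose
# "ball" of points like this yields a correlation near zero, and
# with so few models even a moderate r would be hard to detect.
ecs = np.array([2.1, 3.0, 3.4, 4.1, 2.8, 3.6, 4.4])       # K
lgm_cooling = np.array([-4.5, -5.0, -4.2, -4.8, -5.2, -4.4, -4.9])  # K
r = ensemble_correlation(ecs, lgm_cooling)
```

With an ensemble this small, the sampling uncertainty on r is large either way, which is one reason to be wary of reading too much into the presence or absence of a slope.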
Not much of Harrison et al. will come as a huge surprise to those working in the field, as this paper is basically a review of recent literature rather than hot-off-the-press results. We hope it will serve as a useful summary and perhaps provoke further research in this area.