Bayesian deconstruction of climate sensitivity estimates using simple models: implicit priors and the confusion of the inverse

Has just been published!

It wasn’t really my intention, but somehow we never came up with a proper title so now we’re stuck with it!

This paper was born out of our long visit to Hamburg a few years ago, from some discussions relating to estimates of climate sensitivity. We observed that there were two distinct ways of analysing the temperature trend over the 20th century: you could either (a) take an estimate of the forced temperature change and an estimate of the net forcing (accounting for ocean heat uptake) and divide one by the other, like Gregory et al, or (b) use an explicitly Bayesian method in which you start with a prior over sensitivity (and an estimate of the forcing change), perform an energy balance calculation, and update according to how well the calculation agrees with the observed warming, like this paper (though that one uses a slightly more complex model and obs; in principle the same model and obs could have been used).

These give slightly different answers, raising two questions: (a) why? and (b) is there a way of doing the first calculation that makes it look like a Bayesian one?
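To make the contrast concrete, here is a minimal Monte Carlo sketch of the two pipelines. All the numbers (forcing per doubling, warming and forcing estimates, the uniform prior) are illustrative assumptions of mine, not the values used in the paper or by Gregory et al:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
F2X = 3.7                      # forcing per CO2 doubling, W m^-2 (assumed)

# Illustrative observational estimates (NOT the paper's numbers):
dT_obs, dT_sig = 0.8, 0.1      # forced warming, K
dF_obs, dF_sig = 2.0, 0.3      # net forcing minus ocean heat uptake, W m^-2

# (a) Ratio method: interpret both estimates directly as pdfs and divide.
S_ratio = F2X * rng.normal(dT_obs, dT_sig, N) / rng.normal(dF_obs, dF_sig, N)

# (b) Explicitly Bayesian: choose a prior over S, run the energy balance
# forward, and weight each draw by the likelihood of the observed warming.
S_prior = rng.uniform(0.0, 10.0, N)          # the prior is now explicit
dF = rng.normal(dF_obs, dF_sig, N)
dT_pred = S_prior * dF / F2X                 # simple energy balance
w = np.exp(-0.5 * ((dT_obs - dT_pred) / dT_sig) ** 2)

print(np.median(S_ratio))                    # (a) ratio estimate of S
print(np.average(S_prior, weights=w))        # (b) posterior mean of S
```

The two answers come out close but not identical, and which choice of prior (if any) makes them coincide is precisely the question at issue.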

This is closely related to an issue that Nic Lewis once discussed many years ago with reference to the IPCC AR4, but that never got written up AFAIK and is a bit lost in the weeds. If you look carefully, you can see a clue in the caption to Figure 1, Box 10.1 in the AR4 where it says of the studies:

some are shown for different prior distributions than in the original studies

Anyway, there is a broader story to tell, because this issue also pops up in other areas including our own paleoclimate research (slaps wrist). The basic point we try to argue in the manuscript is that when a temperature (change) is observed, it can usually be assumed to be the result of a measurement equation like:

TO = TT + e        (1)

where TO is the numerical value observed, TT is the actual true value, and e is an observational error which we assume to be drawn from a known distribution, probably Gaussian N(0,σ²) though it doesn’t have to be. The critical point is that this equation automatically describes a likelihood P(TO|TT) and not a probability distribution P(TT|TO). We claim that when researchers interpret a temperature estimate directly as a probability distribution in that second way, they are probably committing a simple error known as “confusion of the inverse”, which is incredibly common and often not hugely important, but which can and should be avoided when trying to do proper probabilistic calculations.
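A small simulation makes the distinction tangible. With an invented, non-flat prior on TT (everything here is illustrative), the correctly conditioned P(TT|TO) is not centred on the observed value, whereas confusion of the inverse would assert that it is:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1                      # assumed observational error s.d., K
t_obs = 0.8                      # illustrative observed value, K

# Suppose the true value is known a priori to be non-negative and
# skewed towards small values (a half-normal prior; purely illustrative).
tt = np.abs(rng.normal(0.0, 0.5, 2_000_000))

# Measurement equation (1): TO = TT + e, with e ~ N(0, sigma^2).
to = tt + rng.normal(0.0, sigma, tt.size)

# Condition on the observation (a crude rejection/ABC-style update):
# keep the draws whose simulated observation lands close to the real one.
post = tt[np.abs(to - t_obs) < 0.01]

# Confusion of the inverse would claim P(TT|TO) = N(t_obs, sigma^2),
# but the correctly conditioned mean is pulled towards the prior:
print(post.mean())               # noticeably below t_obs here
```

Only when the prior on TT happens to be flat do the two interpretations coincide.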

Going back to equation (1), you may think it can be rewritten as

TT = TO + e        (2)

(since -e and e have the same distribution) but actually this is not the same thing at all, because all these terms are random variables and e is actually independent of TT, not TO.
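This is easy to check by simulation: constructing TO via equation (1) leaves e uncorrelated with TT but necessarily correlated with TO (the distributions below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
tt = rng.normal(1.0, 0.5, 1_000_000)   # true values (any non-degenerate dist)
e = rng.normal(0.0, 0.2, tt.size)      # obs error, drawn independently of tt
to = tt + e                            # measurement equation (1)

# e is independent of TT by construction...
print(np.corrcoef(tt, e)[0, 1])        # ~ 0
# ...but it is correlated with TO, so rewriting (1) as if e were
# independent of TO is not a valid move.
print(np.corrcoef(to, e)[0, 1])        # clearly nonzero
```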

Further, we show that in committing the confusion of the inverse fallacy, researchers can be viewed as implicitly assuming a particular prior for the sensitivity, which probably isn’t the prior they would have chosen had they thought about it more explicitly.
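The mechanism behind such implicit priors is just the change-of-variables formula: a pdf asserted directly on one variable pins down a (possibly surprising) prior on any quantity derived from it. As a self-contained illustration (mine, not a result quoted from the paper), a prior that is flat in the feedback parameter λ = F2X/S corresponds to a prior proportional to 1/S² in S:

```python
import numpy as np

rng = np.random.default_rng(3)
F2X = 3.7                                   # W m^-2 per doubling (illustrative)

# Sample a prior that is flat in feedback lambda = F2X / S,
# restricted so that S lies in [1, 10] K.
lam = rng.uniform(F2X / 10.0, F2X / 1.0, 2_000_000)
S = F2X / lam

# Change of variables: p(S) = p(lam) * |d lam / d S|, which is ∝ 1/S².
hist, edges = np.histogram(S, bins=np.linspace(1.0, 10.0, 46), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
analytic = (1.0 / mid**2) / 0.9             # ∫_1^10 S^-2 dS = 0.9 normalises it
print(np.max(np.abs(hist - analytic)))      # small: histogram matches 1/S²
```

A researcher who never wrote down a prior on S has nonetheless committed to one by asserting a distribution on a related variable.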

The manuscript had a surprisingly (to me) challenging time in review, with one reviewer in particular taking exception to it. I encourage you to read their review(s) if you are interested. We struggled to understand their comments initially, but think their main point was that when a researcher writes down a pdf for TT such as N(TO, σ²), it was a bit presumptuous of us to claim they had made an elementary error in logical reasoning, when they might in fact have been making a carefully considered Bayesian estimate taking account of all their uncertainties.

While I think in theory they could be right in some cases, I am confident that in practice they are wrong in the vast majority of cases, including all the situations under consideration in our manuscript. For starters, if their scenario were indeed the case, the question would not have arisen in the first place, as all the researchers working on these problems would already have understood fully what they did and why. And one of the other cases in the manuscript was based on our own previous work, where I’m pretty confident in remembering correctly that we did this wrong 🙂 But readers can make up their own minds as to how generally applicable the idea is. It’s an idea, not a law.

Our overall recommendation is that people should always take the explicitly Bayesian approach, as this makes all their assumptions explicit. It would have been a bit embarrassing if the paper had been rejected, because a large and wildly exciting manuscript which makes extensive use of this idea has just been (re-)submitted somewhere else today. Watch this space!

I think this is also notable as the first time in the past few years that we’ve actually paid paper charges – on previous occasions we have sometimes pleaded poverty, but now that we’ve had a couple of contracts that excuse no longer really applies. We should really get it free as a reward for all the editing work – especially by jules!
