Steffen nonsense

Been pondering whether it was worth the bother of blogging this, but I haven’t written for a while and in the end I decided the title was too good a pun to pass up (I never claimed to have high standards).

The paper “Trajectories of the Earth System in the Anthropocene” had entirely passed me by when it came out, though it did seem to attract a bit of press coverage, e.g. with the BBC saying

Researchers believe we could soon cross a threshold leading to boiling hot temperatures and towering seas in the centuries to come.

Even if countries succeed in meeting their CO2 targets, we could still lurch on to this “irreversible pathway”.

Their study shows it could happen if global temperatures rise by 2C.

An international team of climate researchers, writing in the journal, Proceedings of the National Academy of Sciences, says the warming expected in the next few decades could turn some of the Earth’s natural forces – that currently protect us – into our enemies.

and continues in a similar vein quoting an author

“What we are saying is that when we reach 2 degrees of warming, we may be at a point where we hand over the control mechanism to Planet Earth herself,” co-author Prof Johan Rockström, from the Stockholm Resilience Centre, told BBC News.

“We are the ones in control right now, but once we go past 2 degrees, we see that the Earth system tips over from being a friend to a foe. We totally hand over our fate to an Earth system that starts rolling out of equilibrium.”

Like I said, I had missed this, and it was only an odd set of circumstances that led me to read it, about which more below. But first, the paper itself. The illustrious set of authors postulate that once global temperature reaches about 2C above pre-industrial, a set of positive feedbacks will kick in such that the temperature will continue to rise to about 5C above pre-industrial, even without any further emissions or direct human-induced warming. I.e., once we go past +2C, we won’t be able to stabilise at any intermediate temperature below +5C.

The paper itself is open access at PNAS. The abstract is slightly more circumspect, claiming only that they “explore the risk”:

We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene.

(the paper fleshes out these words in numerical terms).

The paper lists a number of possible positive carbon cycle feedbacks and quantifies them as summing to a little under half a degree of additional warming (Table 1 in the paper). The authors then wave their hands, say it could all get much worse, and with one bound Jack was free. End of paper. I went through it again to see what I’d missed, and I really hadn’t. It is just make-believe, they don’t “explore the risk” at all, they just assert it is significant. There’s a couple of nice schematic graphics about tipping points too.

The mildly interesting part is what led me to read the paper at all, 6 months after missing its original publication. An editor contacted me a little while ago to ask if I’d write half of a debate (to form a book chapter) over whether exceeding 2C of warming would lock us onto a trajectory for a much warmer hothouse earth. I was charged with arguing the sceptical side of that claim. I was initially a bit baffled by the proposal as I had not (at that point) thought anyone had claimed anything to the contrary, but it soon became clear what it was all about. I said I’d be happy to oblige, but it turns out that my intended opponents, being two of the co-authors on the paper itself, were not prepared to defend it in those terms.

4 thoughts on “Steffen nonsense”

  1. Hmmm. Both this post and the previous one suggest you have an approach to risk management that would be unrecognisable to anyone in the banking or insurance sector. But I’m still struggling to really understand how you evaluate climate change risk.

    Let’s go through how a standard risk approach would tackle this.

    First, let’s take the IPCC’s probability distributions for climate change outcomes. These are presented at the 50% or 66% probability level. Let’s start by introducing some risk management reality. What are your estimates for warming at the 95% and 99% end of the tail for TCR and ECS? If a risk manager didn’t put those probability outcomes at the centre of any risk management presentation, they would have a security guard collecting their desk possessions in a black bin liner by the end of the same day.
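As a sketch of what such tail numbers look like, suppose (purely for illustration; the distributional choice is my assumption, not anything from the IPCC or the Steffen paper) that ECS follows a lognormal distribution calibrated to a 2.5–4 C “likely” (66%) range:

```python
# Illustrative only: fit a lognormal to an assumed 66% ("likely") ECS range
# of 2.5-4 C and read off the tail quantiles a risk manager would ask for.
from math import log, exp, sqrt
from statistics import NormalDist

lo, hi = 2.5, 4.0                  # assumed 66% ("likely") range for ECS, in C
median = sqrt(lo * hi)             # geometric midpoint, ~3.16 C
z66 = NormalDist().inv_cdf(0.83)   # central 66% interval spans +/- z66 sigma
sigma = (log(hi) - log(lo)) / (2 * z66)

def ecs_quantile(p):
    """p-th quantile of the assumed lognormal ECS distribution."""
    return exp(log(median) + NormalDist().inv_cdf(p) * sigma)

q95 = ecs_quantile(0.95)
q99 = ecs_quantile(0.99)
print(f"95th percentile: {q95:.1f} C, 99th percentile: {q99:.1f} C")
```

Under these assumptions the 95th and 99th percentiles land somewhere around 4.5–6 C, which is exactly the kind of number a risk presentation would foreground.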

    Then, using the Frank Knight (1921) distinction, how would you control for Knightian uncertainty:

    “High uncertainty can mean one of two things: either high stochastic volatility around known (or well estimated) average future outcomes, or at least partial ignorance about relevant mechanisms and potential outcomes. The first implies that uncertainty can be probabilistically measured (what Frank Knight called ‘risk’), whereas the second implies that it cannot (what Knight called ‘true uncertainty’ and is now known as Knightian uncertainty).”

    From here:

    That paper gives an example for a bank portfolio of loans. A more up-to-date example would be the failure to deal with Knightian uncertainty going into the Global Financial Crisis (GFC) in 2008. At the heart of this crisis was a failure to estimate the correlation between regional subprime loan portfolios. Banks had probability distributions based on empirical data of how these correlations moved through time in the face of varying economic circumstances. They also knew that at times of maximum economic stress throughout economic history asset return correlations tend to rise. But these hadn’t been quantified with respect to the new instruments and new financial economy that appeared in the 2000s. For adverse actor incentive reasons (a different story), these Knightian uncertainties were left out of risk models.
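The correlation point can be made concrete with a toy two-portfolio calculation (all numbers invented for illustration): the 99% loss quantile of a combined book depends heavily on the assumed correlation, so quietly carrying over “normal times” correlations understates the tail.

```python
# Toy example (invented numbers): two regional loan portfolios with normally
# distributed losses. The 99% loss quantile of the combined book rises sharply
# if the correlation between them jumps in a stress scenario.
from math import sqrt
from statistics import NormalDist

mu1, sd1 = 10.0, 4.0   # expected loss and volatility, portfolio 1
mu2, sd2 = 10.0, 4.0   # portfolio 2

def var99(rho):
    """99% loss quantile of the combined portfolio at correlation rho."""
    sd = sqrt(sd1**2 + sd2**2 + 2 * rho * sd1 * sd2)
    return (mu1 + mu2) + NormalDist().inv_cdf(0.99) * sd

calm, stressed = var99(0.2), var99(0.9)
print(f"99% loss: {calm:.1f} (rho=0.2) vs {stressed:.1f} (rho=0.9)")
```

The stressed figure is materially larger, and this is with Gaussian losses; with fat-tailed losses the gap widens further.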

    The Steffen paper to me looks like an attempt to discuss some Knightian uncertainties. Adverse nonlinear feedback mechanisms are Knightian uncertainties. And what should a prudent risk manager do in the face of such uncertainties? In the GFC, financial firm risk managers knew that the derivatives they were holding had a value based on past distributions of risk correlations. They also had some theoretical understanding of risk correlation. But they also knew that the data and theory would only get them so far in controlling risk. They then chose not to control that Knightian uncertainty because they were making so much money; and the regulators were so far behind with the new financial instruments that they didn’t force control of such risk either. And through such poor risk control these players almost blew up the global economy.

    Post-GFC, regulators have forced financial institutions to put such Knightian uncertainty back into risk control. This is done through a variety of methods such as stress testing. The point here is that the tail-end outcomes are qualitative as well as quantitative. They contain a role for expert judgement and prudence.

    So Steffen et al. have come up with a back-of-the-envelope expert judgement risk number that says we could tip from 2 degrees to 4 degrees. You may disagree with that insight, but how would you build the Knightian uncertainty into a robust risk management regime? Do you believe that no known unknowns exist? What numbers do you put on them? And not coming up with a number of your own is not an answer, since it is not a valid risk management strategy. What percentage rise would you add in that can’t be captured by a climate model in a quantitative manner for risk management purposes?

    My final point relates to your observation that you don’t want to get involved in the arguments of duelling economists (past post). Well, by default, you already have, and it’s a bad choice of sides. Nordhaus and Tol have nothing to say about tail-end outcomes – or at least nothing sensible to say about tail-end outcomes. For that, you need to read Pindyck (papers referenced in my previous comments to you) and Marty Weitzman (sadly just passed away). And their key takeaway is that we have absolutely no idea what the damage function looks like when we get out to higher degrees of warming. Please read this:

    Click to access w19244.pdf

    Quoting Weitzman:

    “The bottom line here is that the damage functions used in most IAMs are completely made up, with no theoretical or empirical foundation…

    …The problem is that these damage functions tell us nothing about what to expect if temperature increases are larger, e.g., 5°C or more. Putting T = 5 or T = 7 into eqn. (3) or (4) is a completely meaningless exercise. And yet that is exactly what is being done when IAMs are used to analyze climate policy.”

    And in his book “Climate Shock: The Economic Consequences of a Hotter Planet” (2015), written with the economist Gernot Wagner, he goes further and suggests that even at 2 degrees of warming we are in a risk area marked “there be dragons” in terms of a correct representation of the damage function and risk (Table 3.2, Page 67).

    Against this background, the aims of Greta and Extinction Rebellion would appear to be fully commensurate with a prudent risk management strategy. Sitting on our hands because we ignore 95%-plus tail outcomes, exclude Knightian uncertainty, and cling to the deranged belief that we know what the economic damage function looks like at 2 degrees and beyond appears complete madness to me.

  2. Just gone back and read your original papers and the Skeptical Science article talking about your work on constraining the fat tails.

    I understand that you are using a Bayesian approach to incorporate those Knightian uncertainties. I need to go away now and read the Bayesian risk management literature. Curious whether it would have done better over a genuine non-linear shift that happened during the Global Financial Crisis (the global economy being of similar complexity to the climate system).

    But putting the uncertainty argument to one side, I’m still curious as to where you see climate sensitivity out at 95% and 99%, even with your constrained tail?

    And the arguments over the damage functions still stand.

  3. Hi Justin, thanks for the comments. Starting from the bottom… yes, the Bayesian approach tries to wrap up all types of uncertainty into one number. There are those who argue it’s a bit limited in handling deep uncertainty and ambiguity, but on the other hand, if we want to make rational (coherent) decisions it’s probably the best bet… at least in my opinion.

    As for tails and sensitivity, actually I had some rather unsatisfactory conversations with Marty Weitzman about the time he was developing his “dismal theorem”. I don’t think he ever accepted my argument that the long tail in the stuff he cited is really an artefact of poor methodology, which I developed in my 2009/11 paper (took a long time to appear!) on priors.

    I think the major problem with the Steffen stuff is that their “2C leads to 4C” argument is basically hand-waving and make-believe. There is no observational evidence for it, and no models support it. They could pretty much change the numbers to 1.2 and 40 respectively without any other changes to the text. And then whatever probability you assign to this hypothesis, you seem to have the conclusion that we must stop all fossil fuel extraction today, and those of us who do not immediately start working on geoengineering must instead devote ourselves to planting trees on all available land. It’s just silly hyperbole, and notable that two of the authors declined to defend it when asked to write an essay on the topic.

    As for the actual answer, 95% may be about 4C or thereabouts. I’m a little reluctant to state a precise value right now as I’m part of a large group which has submitted a big manuscript on this; it may take a while in the review process and is embargoed for now. Yes, it’s worth considering, but the mainstream value of “about 3” really is good enough for our purposes. Note also that the high values are associated with a longer time scale, i.e. doubling the value doesn’t double the rate of warming; rather, it means the warming continues for twice as long (a first-order argument, not precisely right). So discounting comes into play here.
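That first-order timescale argument can be sketched with a one-box model (my own illustration, with the response timescale assumed to scale with sensitivity; none of the numbers come from any particular model):

```python
# One-box illustration: if the response timescale tau scales with
# sensitivity S, then doubling S barely changes the early-decades warming --
# the extra warming arrives on a longer timescale instead.
# Numbers are illustrative only.
from math import exp

def warming(t, S, tau):
    """Warming at time t (years) toward equilibrium S with timescale tau."""
    return S * (1 - exp(-t / tau))

lo_S, lo_tau = 3.0, 50.0    # "about 3" sensitivity, 50-year timescale
hi_S, hi_tau = 6.0, 100.0   # doubled sensitivity, doubled timescale

for t in (10, 50, 500):
    print(t, round(warming(t, lo_S, lo_tau), 2), round(warming(t, hi_S, hi_tau), 2))
```

At t = 10 years the two cases are nearly indistinguishable (the initial warming rate S/tau is identical by construction), while by t = 500 years the high-sensitivity case has warmed roughly twice as much, which is why discounting matters for how much weight the tail deserves.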

  4. Pingback: Is the concept of ‘tipping point’ helpful for describing and communicating possible climate futures? |
