The IEA’s central projections for renewables continue to look way too low

The IEA’s projections for wind and solar capacity look much too low, continuing a history of vastly underestimating renewables growth.  They are not a reliable basis for assessing the world’s future power generation mix.

I previously looked at the IEA’s track record of underestimating the growth of renewables by a huge margin.  Since then the 2013 and 2014 World Energy Outlooks have been published, and it seems timely to ask whether the credibility of their projections has improved.  The answer, regrettably, appears to be “not much”.

The chart below shows the IEA’s long term projections for global capacity additions of wind and solar PV, taken from the current version of its central New Policies Scenario, and compares these with historical growth and short term projections.  The short term projections are likely to be quite accurate, especially for wind, as projects due to come online this year or next are usually already in progress.

Annual net global installations of wind and solar:  comparison of IEA long term projections (New Policies Scenario) with historical data (to 2014) and short term projections (2015-16)

[Chart: installation rates]

Notes:  Historical data is from BP, the Global Wind Energy Council and Bloomberg.  Short term projections are from Bloomberg, as of February 2015.  Long term projections are from the IEA World Energy Outlook, 2014, New Policies Scenario.  IEA projections are for 2012-2020 and for each 5 years thereafter, and are shown at the mid-point of each interval. 

The historical data and short term projections show a clear and strong upward trend in the rate of capacity installation for both technologies, although for wind the trend has moderated somewhat in recent years, with considerable year to year policy-driven volatility.

The IEA’s projections show a sharp reversal of this trend, with net installation rates falling to well below current levels, and staying there or falling further for the next two and a half decades.  The average annual installation rate projected by the IEA over the period 2020-2040 is nearly 30% below last year’s outturn for wind, and nearly 40% below what’s likely to be installed this year.  For solar PV the decrease is even greater, with projected installation rates 40% below last year’s outturn, and nearly 50% below what’s likely this year.  This implies a substantial contraction in the wind and solar PV industries from their present size, rather than continuing growth or stabilisation.  The IEA projects correspondingly small proportions of the world’s electricity generation coming from wind and solar PV.  Even a quarter century from now their projections show wind accounting for only 8.3% of generation (in TWh) and solar PV a mere 3.2%.

It may well be that renewables installation rates begin to grow more slowly and even eventually plateau as markets mature.  But a sudden fall by around a third or a half of current levels sustained into the long term seems to run against the main prevailing drivers.

The imperative to reduce carbon emissions from power generation is ever greater.  This looks likely to continue to be a strong driver for renewables growth through direct mandates for renewables and (especially in the long term) through incentives from carbon pricing.  Renewables are also highly compatible with other policy objectives such as security of energy supply.

Renewables are much more cost competitive than they were, both with other low carbon generation and with conventional fossil fuels, especially if fossil generation includes the cost of its emissions.  Costs for wind and especially solar are expected to continue to fall.

Some argue that the total subsidy needed by solar and wind will limit their growth.  However as costs fall any remaining subsidies required will continue to fall even faster in percentage terms (so for example a 20% decrease in costs may lead to a 50% decrease in required subsidy).  This is likely to limit the total additional costs of renewables even as volumes grow, and especially in the 2020s and 2030s as the proportion of projects requiring no subsidy grows ever greater.
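To see the leverage at work, here is a minimal sketch of the arithmetic in Python, with invented round numbers (the $60/MWh wholesale price and $100/MWh starting cost are assumptions for illustration, not figures from any study):

```python
# Illustrative only: a modest fall in generation costs produces a much larger
# percentage fall in the subsidy needed to bridge the gap to the market price.

wholesale_price = 60.0                  # $/MWh earned from the market (assumed)
cost_before = 100.0                     # $/MWh levelised cost (assumed)
cost_after = cost_before * (1 - 0.20)   # a 20% fall in costs

subsidy_before = cost_before - wholesale_price          # 40 $/MWh
subsidy_after = max(cost_after - wholesale_price, 0.0)  # 20 $/MWh

reduction = 1 - subsidy_after / subsidy_before
print(f"costs fall 20%, required subsidy falls {reduction:.0%}")  # -> 50%
```

The leverage arises because the subsidy is the difference between two large numbers, so it shrinks much faster than the cost itself.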

There is also scope to increase the installed base of renewables globally to well above the levels projected by the IEA without causing significant problems for grid integration.  In any case such obstacles are likely to reduce over time with improved grid management, greater interconnection, and falling costs of batteries.

Given these drivers the IEA’s projections appear to be close to or below the bottom end of the credible range for rates of deployment, especially for solar, rather than the central case they are intended to represent.  They do not form a reliable basis for assessing the future of the world’s power generation mix.

Adam Whitmore – 27th February 2015

Randomised trials of energy efficiency policy

Greater use of randomised trials could help the uptake of energy efficiency by identifying which policy interventions work best.

More efficient use of energy is high on almost everyone’s list of good ways to reduce CO2 emissions.  It can lead to large scale emissions reductions, is often cost-effective, and tends to be highly compatible with other policy goals such as energy security.

Efficiency standards for buildings, vehicles and appliances have played a critical role in improving energy efficiency, and will continue to do so.  But standards are not the whole story.  Rates of uptake of more efficient technology and processes and other changes in consumers’ behaviour can matter greatly.

However it is often impossible to know in advance how innovative policy interventions will affect behaviour.  Consumers’ responses to novelty are unpredictable, and judging the likely response is further complicated because consumers’ circumstances are often complex and varied.  Even after the event it may be difficult to judge whether an intervention has been effective, because it is impossible to say what would have happened otherwise.

Fortunately there are models from elsewhere that can help address these issues.  A well proven means of judging the effectiveness of interventions is the randomised trial, in which one group is subject to an intervention and a similar control group is not.  Such trials are designed to avoid biases such as self-selection, where, for example, those most interested in something participate disproportionately.
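As a concrete illustration of the mechanics (not a description of any of the studies cited below), here is a minimal sketch of a randomised trial analysis with simulated data – the household numbers, energy use figures and the -100 kWh treatment effect are all invented:

```python
# A sketch of a randomised trial: assign households at random to treatment
# and control, then compare mean outcomes.  All data below is simulated.
import random
from statistics import mean

random.seed(1)
households = list(range(1000))
random.shuffle(households)          # random assignment avoids self-selection
treated, control = households[:500], households[500:]

def annual_use(effect_kwh):
    """Simulated annual electricity use in kWh."""
    return random.gauss(3500 + effect_kwh, 400)

y_treated = [annual_use(-100) for _ in treated]   # intervention group
y_control = [annual_use(0) for _ in control]      # control group

print(f"estimated effect: {mean(y_treated) - mean(y_control):.0f} kWh per year")
```

Because assignment is random, the difference in means is an unbiased estimate of the intervention’s effect – which is exactly what self-selected comparisons cannot offer.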

Double blind randomised control trials for new drugs form the benchmark for such tests.  Two groups are chosen differing only in whether they receive the new drug or a placebo, with neither the patient nor the physician knowing who is getting which.  Randomisation creates two comparable groups, so – provided that all such studies of each new drug are published, a controversial area – valid statistical inferences can be drawn about whether the drug has been effective.

The double blind element of medical trials is not always easy to reproduce in other fields, but controlled trials are common elsewhere.  Technology companies often roll out two different versions of software online to subsets of users to see which gets the better response, as measured for example by click-through rates.  This approach allows decisions to be data driven rather than based on judgement or experience.  Tests on users may be ethically controversial, as Facebook found with experiments on its news feed.  And outcomes are not always desirable from the consumer’s point of view, for example when the option to turn down an offer that few people want is made less visible on screen.  But for major websites at least, the effectiveness of the chosen design will have been demonstrated.
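The sort of A/B comparison described above typically comes down to a two-proportion test on click-through counts.  This is a generic sketch with invented counts, not any particular company’s method:

```python
# Two-proportion z-test on click-through rates for page versions A and B.
# The counts are invented, purely to show the shape of the calculation.
from math import erf, sqrt

clicks_a, views_a = 1180, 20000   # version A (assumed counts)
clicks_b, views_b = 1320, 20000   # version B (assumed counts)

p_a, p_b = clicks_a / views_a, clicks_b / views_b
p_pool = (clicks_a + clicks_b) / (views_a + views_b)   # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approx.

print(f"CTR A = {p_a:.2%}, B = {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
```

With tens of thousands of page views, differences of well under a percentage point can be detected reliably, which is part of why the approach works so well online.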

Development organisations have used similar approaches in looking at uptake, for example testing different ways of increasing uptake of immunisation programmes [1].

Data driven decision making of this sort is often contrasted with traditional decision making based on the judgement of someone senior, which is sometimes referred to as HIPPO based decision making (Highest Paid Person’s Opinion).  It also goes beyond a vague requirement for evidence based policy making, in that it requires a certain type of evidence to be gathered.  This reduces the often-noted risk that evidence based policy-making turns into policy based evidence-making.

Controlled trials are now beginning to be used to test interventions designed to increase energy efficiency.  In a trial in Norway [2] the labelling of appliances was changed to make it more meaningful to consumers.  Some stores displayed labels showing lifetime electricity running costs and gave staff additional training, while control stores had labels showing only annual kWh and no extra training.  For fridge-freezers no significant effect was found.  For tumble dryers the combined label and training reduced the average energy use of the machines sold by 4.9%, while training alone led to a 3.4% reduction.  The effect was strongest initially, but declined over time.

A similar change of labelling was undertaken in the UK in a joint study by John Lewis department stores working in collaboration with the Department of Energy and Climate Change (DECC) [3].  A statistically significant but small effect (0.7% increase in efficiency of appliances sold) was observed.  Another, earlier, study on interventions for households with difficulties affording enough energy found no reduction in bills, but an increase in comfort [4].

It is encouraging to find such approaches beginning to be adopted.  However they remain very much the exception rather than the norm, and there are many other areas where trials could make a large contribution.  Smart metering in particular could benefit:  there are many options for both the design and use of smart meters, and it is far from clear which will work best.  Trials are needed to find out.  Although there have been a few such studies [5], many more are needed.

Such trials are not as cheap or easy as making a judgement about what will work and hoping for the best.  And they represent a high hurdle for interventions to clear.  But they are more robust as a result, and should lead to more effective (and cost-effective) outcomes.  Controlled trials need to become more widespread if energy efficiency is to make a full contribution to reducing emissions.

Adam Whitmore – 10th February 2015

Notes:

[1]  The use of controlled trials to look at poverty alleviation and development is described (among other topics) by Abhijit Vinayak Banerjee and Esther Duflo in their book Poor Economics and more concisely and relevantly by Duflo in the accompanying TED talk.

[2] Kallbekken et al. “Bridging the Energy Efficiency Gap: A Field Experiment on Lifetime Energy Costs and Household Appliances” Journal of Consumer Policy, 2013  http://link.springer.com/article/10.1007%2Fs10603-012-9211-z#page-1

[3] https://www.gov.uk/government/publications/evaluation-of-the-decc-and-john-lewis-energy-labelling-trial

[4] http://eprints.hud.ac.uk/9489/1/Microsoft_Word_-_Pragmatic_randomised_controlled_trial_of_a_fuel_poverty_intervention__Heyman_et_al_.pdf

[5] See for example:

http://www.ieadsm.org/Files/Content/Goette.pdf

https://www.ofgem.gov.uk/ofgem-publications/59105/energy-demand-research-project-final-analysis.pdf

https://ideas.repec.org/a/eee/eneeco/v45y2014icp234-243.html

Clearing the air on wind power output

Loss of output from wind turbines as they age is roughly in line with that from other technologies.  Looking at an earlier claim that deterioration is much more rapid than this provides useful lessons for spotting erroneous results that could distort policy debates. 

Large wind turbines are a relatively new power generation technology, and not much has been published on how their output changes over time.  This means that new studies can attract a good deal of attention.  For example, a report by Professor Gordon Hughes for the Renewable Energy Foundation (REF) – an organisation highly sceptical of renewables, despite what its name might imply – made the extraordinary claim that load factors for wind turbines fall by more than half over their first 15 years of operation [1], and further suggested that wind capacity would rapidly become uneconomic, so that “few wind farms will operate for more than 12-15 years”.  Such a severe decline in load factor, and the correspondingly short life, would make wind power much more expensive than is commonly assumed.

Two researchers, Dr Iain Staffell and Professor Richard Green, at Imperial College London have since taken a closer look at this claim, and found that it simply does not hold up to scrutiny [2].  Looking at what they found provides wider lessons about how to assess extraordinary claims that might affect policy.

Staffell and Green’s first step was to carry out some comparative sense checks.  They looked at gas turbines, which are similar to wind turbines in that they are also large chunks of rotating metal subject to considerable wear and tear.  Gas turbines typically lose output at a rate of around 0.3-0.6% p.a. with careful maintenance and component replacement, or 0.75% to 2.25% without.  This is very much less than the rate suggested for wind turbines in the REF study, even if wind turbines are not well maintained, for example due to their remote locations.

The next step was to look at the lifetime of existing wind turbines.  The UK has 45 wind farms over 15 years old.  35 of these (nearly 80% of the total) are still operating, and of the remaining 10 only one (2% of the total) has been closed completely.  The other nine have been repowered with larger and more modern turbines to increase output.  Five of these wind farms had been repowered when 17-20 years old, past the operating life predicted by the REF study. These statistics simply disprove the prediction in the REF study that most wind farms would be retired after 15 years.

With these results already falsifying the contention of short lifetimes, Staffell and Green looked directly at falls in output, tracking the performance of each installation over time.  Modelling changes in output as turbines age requires correction for variations in the weather.  Fortunately very detailed data on wind speed and direction is available – the study used 500 million data points from NASA.  Taking this into account showed that turbine load factors do indeed fall with age, but by 1.6% p.a. in relative terms (0.4 percentage points of load factor per year), within the range for conventional technologies and much less than the 5-13% p.a. found by the REF study.
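A quick compounding calculation shows how far apart the two findings are over a 15-year life.  The 25% starting load factor is my assumption, chosen so that 0.4 percentage points corresponds to 1.6% in relative terms:

```python
# Compare what the reported annual decline rates imply after 15 years:
# 1.6% p.a. (Staffell & Green) versus 5-13% p.a. (the REF study).
initial_load_factor = 0.25   # assumed starting load factor

for annual_decline in (0.016, 0.05, 0.13):
    lf_15 = initial_load_factor * (1 - annual_decline) ** 15
    retained = lf_15 / initial_load_factor
    print(f"{annual_decline:.1%} p.a. -> {retained:.0%} of initial output after 15 years")
```

At 1.6% p.a. a turbine retains nearly 80% of its initial output after 15 years; at 5-13% p.a. it would retain less than half, which is the basis of the REF claim that wind farms would quickly become uneconomic.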

The results are illustrated in the chart below, which shows the annual change in weather corrected load factor (the absolute rate of decline) for each onshore wind farm in the UK against the year the farm was built.  There is an apparent tendency for newer turbines to lose output somewhat less rapidly, but it is unclear whether this is because newer technology is more durable, or because turbines are more carefully maintained during their early years.  The few that had increased output substantially (at the top right) were likely moving out of an early commissioning phase.

[Chart: annual change in weather-corrected load factor for UK onshore wind farms, by year of construction]

Source: I. Staffell, R. Green / Renewable Energy 66 (2014) 775‒786

Together these findings show that the results reported by the Renewable Energy Foundation are simply incorrect, and that a material but manageable rate of output decline is to be expected from wind farms as they age, as for other technologies.  In a sense this is a boring result, in that the conclusion is well supported by evidence and makes good sense, with few real surprises.  But it is nevertheless an important result, because it implies that wind power can continue to make a growing contribution to decarbonising the power sector.

What lessons can be drawn from the comparison of these studies?

The first is that often a few simple sense checks – such as what has happened with comparable technologies and whether 15 year old wind farms were actually being retired – can help identify claims that are unlikely to be true.  Such simple checks were not reported in the REF study, raising immediate suspicions about the result.

Second, a study that relies purely on statistical techniques, as the REF study did, rather than using physical data – wind speeds in the case of Staffell and Green’s work – is doubly suspect, because it will tend to say little about what is driving results.

Third, it’s important to take account of implied information from the private sector.  Experienced investors continue to put their money into wind farms.  While private investors can and do make mistakes it is unlikely that the numerous investors in wind farms the world over have all either ignored or missed something as simple as rapid output degradation over time.  The fact that very large investments in wind turbines continue to be made suggests there is a good deal of unpublished analysis that contradicts the REF contention.

Fourth, the results of a single study should always be regarded with caution.

Fifth, it is appropriate to be sceptical if results are likely to be congenial to those publishing them, as they were in the case of the REF study.  There may be deliberate misrepresentation, but this is not necessarily so.  Studies have shown that people are more prone to misinterpret data when doing so leads to conclusions that support their world view [3].  Independent academics such as Staffell and Green – who work at Imperial College Business School and are not any kind of lobbyists for wind power (or any other kind of power) – can act as a useful counterbalance to this tendency.

Rigorous, genuinely independent public domain work can play a valuable role in keeping climate change policy debates well-founded.

Adam Whitmore – 12th January 2015

Notes:

[1]  The REF study is Hughes G. The performance of wind farms in the United Kingdom and Denmark. London: Renewable Energy Foundation; 2012. URL: http://tinyurl.com/cn5qnqg.

[2] I. Staffell, R. Green / Renewable Energy 66 (2014) 775‒786.  http://dx.doi.org/10.1016/j.renene.2013.10.041

[3] See Kahan and Peters http://www.cogsci.bme.hu/~ktkuser/KURZUSOK/BMETE47MC15/2013_2014_1/kahanEtAl2013.pdf

Before Father Christmas becomes a climate refugee …

As this is my last post before Christmas I thought I would look forward to some good cheer and also perhaps some seasonal gifts.  Here’s my request to Father Christmas at the North Pole (or according to your preferred tradition Papa Noel, St Nicolas, Santa Claus, or another bringer of good cheer at this time of year).

“Dear Father Christmas,

can we please have for Christmas something that makes global carbon dioxide emissions rise by no more than 0.5% p.a. until they peak in 2025, and then fall at 3.5% p.a. forever, so that the increase in global temperatures this century due to extra carbon dioxide in the atmosphere is no more than 2.0 degrees centigrade.

Thank you”

You can fill in your own numbers for your own particular wish in this spreadsheet:

[Spreadsheet: Adam Whitmore’s summary of the Allen and Stocker model]

Sometimes it seems like it will need a miracle from Father Christmas to get to the sorts of numbers you have probably entered, certainly if they are like mine.  But if the world can at least make good progress towards these numbers next year, it will make the best Christmas present the planet could have.

Here’s hoping it works out that way, and in the meantime enjoy the holiday season.

Adam Whitmore – 18th December 2014

Note:

If you want to know more about the basis of this calculation see my earlier post here.  The parameters define cumulative CO2 emissions given current emissions (the area under the curve), and this converts linearly to temperature.  I’ve assumed a transient climate response to emissions (TCRE) of 2 degrees, a parameter which is subject to considerable uncertainty.  The calculation is for CO2 warming only, and there may be perhaps another 0.4 degrees due to other greenhouse gases, so you might want to be more ambitious about what you wish from CO2 than I have been here, even though the numbers already look rather ambitious.
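For anyone who would rather script the arithmetic than use the spreadsheet, here is a rough sketch.  The 36 GtCO2/yr starting point and the TCRE units (2 degrees per 1000 GtC, i.e. per 3,664 GtCO2) are my assumptions for illustration:

```python
# Sketch of the wish-list arithmetic: emissions rise 0.5% p.a. to a 2025 peak,
# then fall 3.5% p.a. forever; cumulative CO2 converts linearly to warming.

E0 = 36.0               # global CO2 emissions in 2014, GtCO2/yr (assumed)
TCRE = 2.0 / 3664.0     # deg C per GtCO2, assuming 2 deg C per 1000 GtC

cumulative, e = 0.0, E0
for year in range(2015, 2026):   # rise at 0.5% p.a. until the 2025 peak
    e *= 1.005
    cumulative += e
cumulative += e * 0.965 / 0.035  # geometric sum of the 3.5% p.a. decline forever

print(f"future cumulative emissions: {cumulative:.0f} GtCO2")
print(f"warming from these emissions: {TCRE * cumulative:.1f} deg C")
```

On these assumptions the path adds roughly 1,500 GtCO2 and about 0.8 degrees of further warming, broadly consistent with the 2 degree wish once warming to date is added.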

Grains of rice, Japanese swords and solar panels

Even Greenpeace has underestimated the growth of renewables.  In particular, solar has been growing exponentially, and may continue to do so for a while, though likely at a slower percentage rate.

Greenpeace did much better than many at projecting the growth of renewable energy sources in the 2000s.  Their projections were very close to outturn for wind – the 1999 projections a little below outturn, the 2002 projections a little above.  However even Greenpeace underestimated the growth of solar.  The projections were nevertheless startlingly better than those of the IEA, which has, as I’ve previously noted, consistently underestimated the growth of renewables by a huge margin.  Growth of solar has been exponential, as has that of wind (at least until recently), and Greenpeace appears to have done well by following the logic of exponential growth.

Greenpeace’s projections for wind growth in the 2000s were close to outturn, but they underestimated the growth of solar …

[Chart: Greenpeace projections for wind and solar in the 2000s compared with outturn]

Exponential growth is so powerful that it can confound intuition about how large numbers can become.  This counterintuitive power is illustrated by the process of making a traditional Japanese steel sword.  The supreme combination of strength and flexibility of such a weapon is said to derive from the way an exponential process layers the metal.  As the metal is beaten out and folded repeatedly to forge the sword, the number of layers doubles each time.  Following this simple process 15 times creates 2^15 layers – well over 30,000.  This would be impossible to achieve in any other way with traditional methods, and the number of layers would be hard to comprehend without doing the formal calculation.  This property of producing very large numbers from simple repeated doublings may have made previous projections for renewables seem implausible, because they were so much greater than the then installed base, and may in turn have contributed to even Greenpeace being a little cautious in its projections for solar.

Nevertheless exponential growth inevitably runs into limits at some stage.  This is captured by the classic fable of grains of rice on a chessboard, where one grain is put on the first square, two on the second, four on the third, eight on the fourth and so on, doubling with each square.  Halfway through the chessboard the pile of grains, though very large, is manageable – around 50 tonnes for the 32nd square.  However amounts then quickly go beyond all reasonable physical constraints.  The pile on the final square would contain 2^63 grains of rice, which is about 230 billion tonnes.  This is about 300 times annual global production, and enough to cover not just a square of the chessboard but the entire land surface of the earth (to a depth of a millimetre or two).
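The chessboard numbers are easy to verify with a few lines of code, using the rough conversion from the notes at the end of this post (40,000 grains of rice to the kilogram):

```python
# Rice on a chessboard: square n holds 2**(n-1) grains.
def tonnes(grains):
    return grains / 40_000 / 1000   # 40,000 grains ~ 1 kg (see notes)

print(f"32nd square: {tonnes(2 ** 31):,.0f} tonnes")                 # ~54 tonnes
print(f"64th square: {tonnes(2 ** 63) / 1e9:,.0f} billion tonnes")   # ~231 billion tonnes
```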

Extrapolating growth rates for solar PV from the period 2000 to 2013, when cumulative installed capacity doubled every two years, runs into similar limits.  At this growth rate the entire surface of the earth would be covered with solar panels before 2050.  This would provide far more energy than human civilisation would need, if there were room for any people, which there would not be because of all the solar panels.  So are there constraints that imply that renewables are now in the second half of the chessboard – or, if you prefer a more conventional model, the linear part of an s-curve for technology adoption?
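The same style of calculation, using the rough figures from the notes at the end of this post (140 GW installed in 2013, 10 m2 of panels per kW, and 1.5 × 10^8 km2 of land), shows when two-year doubling would run out of planet:

```python
# Double solar capacity every two years from the 2013 base until the
# panels would cover the earth's land surface (figures from the notes).
LAND_KM2 = 1.5e8

def area_km2(capacity_gw):
    return capacity_gw * 1e6 * 10 / 1e6   # GW -> kW, x10 m2/kW, m2 -> km2

capacity_gw, year = 140.0, 2013
while area_km2(capacity_gw) < LAND_KM2:
    capacity_gw *= 2
    year += 2

print(f"panels would exceed the earth's land surface around {year}")   # ~2047
```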

Looking at solar in particular, as I’ve previously commented, it needs a lot of land, but this is unlikely to be a fundamental constraint.  Some have previously suggested a limit as technologies reach scale, defined as about 1% of world energy supply, after which growth becomes more linear.  However solar manufacture and installation are highly scalable, so there are fewer obstacles to rapid growth than with traditional energy technologies.

Costs are falling rapidly, so that solar is becoming competitive without subsidy, both with other low carbon technologies and, increasingly, with high carbon technologies, especially if the cost of emissions is taken into account.  No obvious floor to the cost of solar cells is likely to bind in the foreseeable future, although the costs of ancillaries may fall more slowly.  The costs of lithium ion batteries are also falling rapidly, having approximately halved in the last five years, and they continue to fall at a similar rate.  As a result daily storage is becoming much more economic, reducing the problem of the peakiness of solar output and easing its integration into the grid, although seasonal storage remains a daunting challenge.

Solar still accounts for only around 1% of world electricity generation, so there are plenty of opportunities globally in new electricity demand and in the scheduled retirement of existing generating plant.  The vexed issues around grid charges, electricity market structures and the role of incumbents may slow growth for a while, at least in some jurisdictions, but seem unlikely to form a fundamental barrier globally as long as costs continue to fall.

In short there seem to be few barriers to solar continuing to grow exponentially for a while, although likely at a slower percentage rate than in the past – each doubling is likely to take longer than two years given the current scale of the industry.  Solar can still move quite a long way up the chessboard before it hits its limits.  How large the industry will eventually become must await a future post, but provisionally there seems no reason why solar PV should not become a 300-600 GW p.a. or larger industry.

Policy has played an important role in the development of solar to date mainly by providing financial incentives.  It will continue to play an important role, but this will be increasingly around removing barriers rather than providing a financial stimulus.

Of course I cannot know if this fairly optimistic view is right.  But it does at least avoid some issues that might bias projections downwards.  First, it recognises the potential validity of counter-intuitive results.  In a sector such as energy, which usually changes quite slowly, the numbers resulting from exponential growth can seem implausible.  This can lead to the rejection of perfectly sound forecasts, as the intuition of experienced professionals, based on long experience of incremental change, works against them.  Second, it avoids assuming that all energy technologies have similar characteristics.  Finally, it takes into account a wide range of possibilities and views and considers the drivers towards them, helping to avoid the cognitive glitch of overconfidence in narrow limits to future outcomes.

The rate of growth of renewables is intrinsically uncertain.  But the biases in forecasts are often more towards underestimation than overestimation.  If you’ve been in the energy industries a while it’s quite likely that your intuition is working against you in some ways.  Don’t be afraid to make a projection that doesn’t feel quite right if that’s where the logic takes you.

Adam Whitmore – 25th November 2014

Notes

In the calculations of the results from exponential growth I have, for simplicity, assumed very rough and ready rounded values of 40,000 grains of rice = 1 litre = 1 kg.  I’ve assumed 10 m2/kW (including ancillaries) for the area of solar panels.  The land surface of the earth is 1.5 × 10^8 km2.  Solar capacity doubled around every 2 years from 2000 to 2013, growing from 1.25 GW in 2000 to 140 GW in 2013 (source:  BP Statistical Review), reaching a land area of around 1,400 km2.  Growing to 2^17 times its current area takes it past the land surface of the earth, so with installed capacity doubling every 2 years this point would be reached in 2047 (34 years from 2013).  The story about sword-making comes from the 1970s TV documentary The Ascent of Man and its accompanying book.

For data on Greenpeace’s historical projections see:

http://www.greenpeace.org/international/Global/international/publications/climate/2012/Energy%20Revolution%202012/ER2012.pdf (see pages 69 and 71)


Making climate change policies fit their own domain

A new framework acts as a sound guide for policy formation.

There is a widely held narrative for climate policy that runs something like this.  The costs of damage due to greenhouse gas emissions are not reflected in economic decisions.  This needs to be corrected by imposing a price on carbon, using the power of markets to incentivise efficient emissions reduction across diverse sources.  Carbon pricing needs to be complemented by measures to address other market failures, such as under-provision of R&D and lack of information.  Correcting such market failures can help carbon markets function more efficiently over time.  However further interventions, especially attempts by governments to pick winners or impose regulations mandating specific solutions, are likely to waste money.  This narrative, even if I have caricatured it a little, grants markets the central role, with other policies in support.  Its application is evident, for example, amongst those in Europe who stress an exclusive or central role for the EUETS.

While this narrative rightly recognises the important role that markets can play in efficient abatement, it is incomplete to the point of being misleading as a guide to policy.  A better approach has recently been set out in a new book by Professor Michael Grubb and co-authors.  Grubb divides policy into three pillars, which correspond to three different domains of economic behaviour.  Action across all three domains is essential if emissions are to be reduced to the extent necessary to avoid dangerous climate change.  The domains and the corresponding policy pillars are illustrated in the chart below.

Three domains of economic behaviour correspond to three policy pillars …

[Diagram: three domains of economic behaviour and the corresponding policy pillars]

In the first domain people seek to satisfy their needs, but once this is done they don’t necessarily go further to achieve an optimum.  Such behaviour is often characterised by economists as potentially optimal subject to implicit transaction costs, but this is not a very useful framework.  Much better is to design policy drawing on disciplines such as psychology, the study of social interactions, and behavioural economics.  This domain of behaviour relates particularly to individuals’ energy use, and the corresponding policy pillar includes instruments such as energy efficiency standards and information campaigns.

The second domain concerns optimising behaviour, where companies and individuals devote significant effort to seeking the best financial outcome.  This is the domain where market instruments such as emissions trading have the most power, and policy making here can draw strongly on neoclassical economics.

The third domain is system transformation, and requires a more active role from governments and other agencies to drive non-incremental change.  The policy pillar addressing this domain of behaviour includes instruments for technology development, the provision of networks, energy market design, and design and enforcement of rules to monitor and govern land use changes such as deforestation.  Markets may have a part to play but the role of governments and other bodies is central here.  The diversity of policies addressing this domain means that it draws on a wide range of disciplines, including the study of governance, technology and industrial policy, institutional economics and evolutionary economics.

As one moves from the first to the third domain the typical scale of action increases, from individuals through companies to whole societies, and time horizons typically lengthen.

This framework has a number of strengths.  It is both simple in outline and immensely rich in its potential detail.  Each domain has sound theoretical underpinnings from relevant academic disciplines.  It acknowledges the power of markets without giving them an exclusive or predominant role – they become one of three policy pillars.  It implies that the vocabulary of market failures becomes unhelpful, as I’ve previously argued.  Instead policy is framed as a wide ranging endeavour in which the use of markets fits together with a range of other approaches.  While this may seem obvious to many, the advocacy of markets as a solution to policy problems has become so pervasive, especially in Anglo-Saxon economies, that this broader approach stands as a very useful corrective to an excessively market-centric view.

The framework is high level, and specific policy guidance needs to draw on more detailed analysis – the authors have managed to write 500 pages of not the largest print without exhausting the subject.  However, the essential framework is admirable in its simplicity, compelling in its logic, and helpful even at a high level.  For example it suggests that EU policy is right to include energy efficiency, emissions trading and renewables – broadly first, second and third domain policies respectively – and to be active in third domain measures such as improving interconnection, rather than relying exclusively on emissions trading (although since the EUETS covers larger emitters, first domain effects are less relevant for the covered sector).

The framework in itself does not tell you what needs to be done.  In particular the challenges of the third domain are formidable.  But it provides a perspective which deserves to become a standard structure for high level guidance on policy development.

Adam Whitmore – 31st October 2014

Costing damages from climate change offers only a partial guide to choice of policy

Estimates of the cost of damages from greenhouse gas emissions are more use for ruling in policy measures than ruling them out.

Estimates of the cost of the damages caused by greenhouse gas emissions (often referred to as the social cost of carbon) are widely used to assess the cost effectiveness of policies to reduce emissions.  Broadly speaking, emissions reductions that are cheaper than the cost of damages are judged cost-effective, while emissions reductions more expensive than the cost of damages risk being deemed not cost effective.  For example, the US EPA uses an estimate for the social cost of carbon of $39/tonne of CO2 (in 2015 at a 3% discount rate) as its benchmark, with policy measures leading to emissions reductions at a cost lower than this being considered cost effective.  Such estimates also act as a benchmark for carbon prices, on the grounds that an economically efficient carbon price should equal the expected cost of damages [1].

Detailed modelling is used to estimate the additional costs of damage per tonne of additional emissions (see notes at the end of this post for a short summary of this process).  The modelling is often thorough and elaborate, and attempts to be comprehensive.  However there are several factors which tend to lead to estimates of the cost of damages being below what it is really worth paying to avoid emissions.

Omitted costs

Many of the costs of climate change are omitted from models, essentially assuming that they are zero.  For example, knock-on effects, such as conflict from migration, are often not modelled, but may be among the largest costs of climate change.  Other costs are dealt with only partially, because they are difficult to estimate reliably [3], or difficult to measure as a financial loss.  For example, it is difficult, and in many ways impossible, to develop adequate costings for the loss of major ecosystems.

Difficulties in estimating the effects of large temperature changes

Models designed to estimate the cost of damages for a temperature change of one or two degrees may become highly misleading if used to estimate the costs of larger temperature changes.  Damages may increase only quite slowly with small temperature changes, but are likely to increase rapidly thereafter, and perhaps catastrophically when certain thresholds are reached [4].  This is often not represented adequately in models.  For example, the widely used DICE model shows GDP only approximately halving with a temperature rise of 19 degrees centigrade.  This is unlikely to be realistic, and indeed the model’s author has cautioned against its use for temperature changes above around 3 degrees.  But temperature changes above 3 degrees would be very likely under a business as usual emissions scenario, and the effects of such large changes are a major cause for concern.

Treating GDP growth as exogenous

Most models assume that the drivers of GDP growth are largely unaffected by even very severe climate change.  Over a century even slow growth (anything above 0.7% p.a.) more than doubles GDP, and so more than offsets the costs of warming even if GDP is assumed to halve relative to the level it would otherwise reach.  On this logic, even with a temperature rise of 19 degrees over a century people appear, on average, better off than today, because the benefits of growth (GDP more than doubling) outweigh the costs of climate change (GDP halving).  Calling this result counterintuitive is something of an understatement.
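The compounding arithmetic behind this claim is easily checked:

```python
# A century of 0.7% p.a. growth roughly doubles GDP, so even halving GDP
# through climate damages leaves average incomes about where they are today.
growth_multiple = 1.007 ** 100
print(f"GDP multiple after 100 years at 0.7% p.a.: {growth_multiple:.2f}")  # ~2.01
print(f"after halving for climate damages: {growth_multiple * 0.5:.2f}")    # ~1.00
```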

Role of risks

Analysis often excludes some risks which are difficult to model, for example some types of climate feedback.  This effectively assumes that they won’t happen and so won’t cause any damage, ignoring the risks.  Indeed, even attempting to set a single average cost of damages fails to address the question of willingness to tolerate the chance of a cost much larger than the estimated average (due to low probability, high impact events).  The EPA does estimate the cost in the upper tail of the damage distribution, and some other modelling explicitly includes a range of sensitivities.  However these approaches at best go only part way towards addressing the problem of the risk of catastrophic outcomes, especially in view of the other limitations I’ve outlined.

Finally, the process of assessing policy measures needs to take account of all costs and benefits.  Measures to reduce emissions often have valuable co-benefits for health, which need to be factored into decision making.  And analysis needs to take account of the future benefits of emissions reduction, for example in promoting early stage technologies.

Estimates of the cost of damage from greenhouse gas emissions remain useful inputs into decision making.  They can be useful in ruling policy measures in – if a policy measure has a cost per tonne below even a cautious estimate of the cost of damages then it is very likely cost-effective.  But they are much less useful for ruling measures out.  It is probably worth paying a good deal more to reduce the risks of large changes to the climate than the conventional estimates of damage costs suggest.  And in any case judging which risks are acceptable will always be a matter of political and ethical debate, rather than a simple matter of costings.

Adam Whitmore – 13th October 2014

Notes

[1] This principle that pricing of pollutants should reflect the cost of damages is commonly discussed in terms of Pigovian taxes or the Polluter Pays Principle.  

[2] The cost of damages, commonly referred to as the social cost of carbon (SCC), is usually estimated by modelling the cost of damages from additional emissions.  A base case emissions track is specified.  The changes to the climate and the resulting impacts associated with this base case emissions track are modelled.  The financial costs of the damages resulting from the impacts, for example due to rising sea levels, are estimated.  This process is repeated, adding an additional (say) billion tonnes of extra emissions, and calculating the costs of the additional damages that result.  The (discounted) additional cost of damages per tonne of additional emissions is derived from this.  These calculations are usually done using elaborate models known as Integrated Assessment Models (IAMs).  Estimates of the Social Cost of Carbon such as those used by the US EPA can refer to estimates from several different IAMs.  The uncertainties involved in the modelling lead to a wide range of estimates for the SCC. 
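As a highly stylised illustration of this pulse calculation – not any actual IAM, and with an invented quadratic damage function, an assumed flat baseline emissions path and a 3% discount rate – the shape of the computation looks like this:

```python
# Stylised SCC calculation: discounted extra damages from a pulse of
# emissions, divided by the size of the pulse.  All inputs are invented.
DISCOUNT = 0.03
YEARS = range(2015, 2115)

def damages(cumulative_gtco2):
    return 0.001 * cumulative_gtco2 ** 2    # invented damage function, $bn/yr

def discounted_damages(pulse_gtco2):
    total, cumulative = 0.0, 800.0          # assumed historical cumulative GtCO2
    for i, year in enumerate(YEARS):
        cumulative += 36.0                  # assumed flat baseline, GtCO2/yr
        if year == 2015:
            cumulative += pulse_gtco2       # the extra pulse, persisting thereafter
        total += damages(cumulative) / (1 + DISCOUNT) ** i
    return total                            # $bn, discounted to 2015

pulse = 1.0   # GtCO2 = 1e9 tonnes
extra_bn = discounted_damages(pulse) - discounted_damages(0.0)
print(f"illustrative SCC: ${extra_bn * 1e9 / (pulse * 1e9):.0f} per tonne CO2")
```

Real IAMs replace each invented piece with detailed climate and damage modelling, which is part of why their results span such a wide range.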

[3]  A good survey of omissions from calculations of the SCC is given by a recent report co-sponsored by the US NGOs the Environmental Defense Fund and the Natural Resources Defense Council:  http://costofcarbon.org/blog/entry/missing-pieces

[4] A good review of the limits of modelling can be found in Nicholas Stern, The Structure of Economic Modelling of the Potential Impacts of Climate Change, Journal of Economic Literature 2013.  This includes the reference to damages at very large temperature changes, quoting work by Ackerman, Stanton and Bueno: Fat tail, Exponents, Extreme Uncertainty: Simulating Catastrophe in DICE, Ecological Economics 69, 2010