The last of the three IPCC 5th Assessment working group reports has now been published, with the final Synthesis Report to come towards the end of the year. The “Mitigation of Climate Change” report details the various emission pathways that are open to us, the technologies required to move along them and, most importantly, some feeling for the relative costs of doing so.

As had been the case with the Science and Impacts reports, a flurry of media reporting followed the release, but with little sustained discussion. Hyperbole and histrionics also filled the airwaves. For example, the Guardian newspaper reported:

The cheapest and least risky route to dealing with global warming is to abandon all dirty fossil fuels in coming decades, the report found. Gas – including that from the global fracking boom – could be important during the transition, but only if it replaced coal burning.

This is representative of the general tone of the reporting, with numerous outlets taking a similar line. The BBC stated under the heading “World must end ‘dirty’ fuel use – UN”:

A long-awaited UN report on how to curb climate change says the world must rapidly move away from carbon-intensive fuels. There must be a “massive shift” to renewable energy, says the study released in Berlin.

While it is a given that emissions must fall (for resolution of the climate issue at some level, anthropogenic emissions should return to the near net-zero state that prevailed for all of human history barring the last 300 or so years), nowhere in the Summary Report do words such as “abandon” and “dirty” actually appear. Rather, a carefully constructed economic and risk-based argument is presented, and it isn’t until page 18 of 33 that the trade-off between various technologies is actually explored. Up to that point there is quite a deep discussion of pathways, emission levels, scenarios and temperature ranges.

Then comes the economic crux of the report, on page 18 in Table SPM.2. For scenarios ranging from 450 ppm CO2eq up to 650 ppm CO2eq, consumption losses and mitigation costs are given through to 2100, with variations in the availability of technologies and the timing (i.e. delay) of mitigation actions. The centre section of this table is given below:

 IPCC WGIII Table SPM2

Particularly for the lower concentration scenario (430-480 ppm) the table highlights the importance of carbon capture and storage. For the “No CCS” mitigation pathway, i.e. a pathway in which CCS isn’t available as a mitigation option, the costs are significantly higher than the base case which has a full range of technologies available. This is still true for higher end concentrations, but not to the same extent. This underpins the argument that the energy system will take decades to see significant change and that therefore, in the interim at least, CCS becomes a key technology for delivering something that approaches the 2°C goal. For the higher concentration outcomes, immediate mitigation action is not so pressing and therefore the energy system has more time to evolve to much lower emissions without CCS – but of course with the consequence of elevated global temperatures. A similar story is seen in the Shell New Lens Scenarios.
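As a sketch of how the table’s comparisons work: the relative cost penalty of a restricted-technology pathway is simply the ratio of the two consumption losses. The figures below are placeholders, not the IPCC’s numbers; consult Table SPM.2 itself for those.

```python
# Relative mitigation cost increase when a technology (e.g. CCS) is
# unavailable, as compared in Table SPM.2. Placeholder values, NOT the
# IPCC figures.
base_case_cost = 1.7   # % consumption loss, full technology portfolio
no_ccs_cost = 4.1      # % consumption loss, CCS unavailable

increase = (no_ccs_cost - base_case_cost) / base_case_cost
print(f"cost increase without CCS: {increase:.0%}")
```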

Subtleties such as this were lost in the short media frenzy following the publication of the report and only appear later as people actually sit down and read the document. By then it is difficult for these stories to surface and the initial sound bites make their way into the long list of urban myths we must then deal with on the issue of climate change.

Today we see a huge focus on renewable energy and energy efficiency as solutions for reducing CO2 emissions and therefore addressing the climate issue. Yet, as I have discussed in other posts, such a strategy may not deliver the outcome people expect and might even add to the problem, particularly in the case of efficiency. I am not the only one who has said this and clearly the aforementioned strategy has been operating for some 20 years now with emissions only going one way, up.

Kaya Yoichi

A question that perhaps should be asked is: “Why have so many arrived at this solution set?” Focusing on efficiency and renewable energy as a solution to climate change possibly stems from the wide dissemination of the Kaya Identity, developed in 1993 by Japanese energy economist Yoichi Kaya (pictured above). He noted that:

 Kaya formula

 Or in other words:

Kaya formula (words)

Therefore, by extension over many years (where k = climate sensitivity): 

Climate Kaya formula (words)

In most analyses using the Kaya approach, the first two terms are bypassed. Population management is not a useful way to open a climate discussion, nor is any proposal to limit individual wealth or development (GDP per person). The discussion therefore rests on the argument that because rising emissions are directly linked to the carbon intensity of energy (CO2/Energy) and the energy use per unit of GDP (Energy/GDP, or efficiency) within the global economy, lowering these by improving energy efficiency and deploying renewable energy must be the solutions to opt for.
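The identity can be sketched numerically. The factor values below are round, illustrative global-scale figures of my own, not taken from the report:

```python
# Kaya identity: CO2 = Population x (GDP/Population) x (Energy/GDP) x (CO2/Energy)
# Illustrative round numbers, roughly global scale.
population = 7.0e9           # people
gdp_per_capita = 10_000.0    # $ per person per year
energy_intensity = 8.0e6     # J of energy per $ of GDP
carbon_intensity = 6.0e-11   # tonnes CO2 per J of energy

emissions = population * gdp_per_capita * energy_intensity * carbon_intensity
print(f"Annual CO2 emissions: {emissions / 1e9:.1f} Gt")
```

Holding the first two factors off the table, as described above, leaves only the last two as levers, which is exactly where the efficiency-and-renewables strategy comes from.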

But the Kaya Identity merely describes the distribution of emissions throughout the economy; it says nothing about the real economics of fossil fuel extraction and its consequent emissions. Start with a simple mineral such as coal: it can be picked up off the ground and exchanged for money based on its energy content. The coal miner will continue to do this until the accessible resource is depleted or the amount of money offered for the coal is less than it costs to pick it up and deliver it for payment. In the latter case, the miner could simply wait until the price rises again and then continue deliveries. Alternatively, the miner could aim to become more efficient, lowering the cost of pickup and delivery and therefore continuing to operate. The fossil fuel industry has been doing this very successfully since its beginnings.

The impact on the climate is a function (f) of the total amount delivered from the resource, not how efficiently it is used, when it is used, how many wind turbines are also in use or how many people use it. This implies the following:

Climate formula (words)

This may also mean that the energy price has to get very low for the miner to stop producing the coal. Of course that is where renewable energy can play an important role, but the trend to date has been for energy system costs to rise as renewable energy is installed. A further complication arises in that once the mine is operating and all the equipment for extraction is in place, the energy price has to fall below the marginal operating cost to stop the operation. The miner may go bankrupt in the process as capital debt is not being serviced, but that still doesn’t necessarily stop the mine operating. It may just get sold off to someone who can run it and the lost capital written off.
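The miner’s decision logic described here can be sketched in a few lines; all the figures are invented for illustration.

```python
def keep_operating(energy_price, marginal_cost, debt_service):
    """A mine keeps running while the price covers marginal operating cost,
    even if capital debt is not being serviced (ownership may simply change)."""
    covers_cash_cost = energy_price >= marginal_cost
    covers_debt = energy_price >= marginal_cost + debt_service
    return covers_cash_cost, covers_debt

# Illustrative figures ($ per tonne of coal)
cash, debt = keep_operating(energy_price=55.0, marginal_cost=40.0, debt_service=25.0)
print(cash, debt)  # the mine keeps running even though debt is not covered
```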

This doesn’t have to be the end of the story though. A price on the resultant carbon emissions can tilt the balance by changing the equation:

Climate formula with carbon price (words)

When the carbon price is high enough to offset the profit from the resource extraction, then the process will stop, but not before. The miner would then need to invest in carbon capture and storage to negate the carbon costs and restart the extraction operation.
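Adding a carbon cost to that sketch shows the tilt. The prices are again invented; the only rough fact used is that burning a tonne of coal releases roughly 2.4 tonnes of CO2.

```python
def extraction_margin(energy_price, production_cost, carbon_price, emissions_per_unit):
    """Per-unit margin once a price is applied to the resulting emissions.
    Extraction stops when this goes negative -- but not before."""
    return energy_price - production_cost - carbon_price * emissions_per_unit

# Illustrative: coal at roughly 2.4 t CO2 per tonne when burned
margin_no_price = extraction_margin(55.0, 40.0, 0.0, 2.4)
margin_priced = extraction_margin(55.0, 40.0, 10.0, 2.4)
print(margin_no_price, margin_priced)
```

In this sketch a $10/t carbon price turns a positive margin negative, which is the point at which the miner must either stop or invest in CCS to negate the carbon cost.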

What this shows is that the carbon price is critical to the problem. A climate strategy built solely on efficiency and renewable energy use may never deliver a reduction in emissions. Efficiency in particular may have the unexpected effect of making resource extraction cheaper, which in turn makes it all the more competitive.

 

With much anticipation but little more than 24 hours of media coverage, the Intergovernmental Panel on Climate Change (IPCC) released the next part of the 5th Assessment Report, with Working Group II reporting on Impacts, Adaptation and Vulnerability. The report started with the very definitive statement:

Human interference with the climate system is occurring . . . 

But this was immediately followed by a statement that set the scene for the entire assessment:

. . . . and climate change poses risks for human and natural systems.

The key word here is “risk”. This report attempted to explain the risks associated with rising levels of CO2 in the atmosphere and demonstrate how the impact risk profile shifts depending on the eventual change in surface temperature and the response to this through adaptation measures. Unfortunately, the subtlety of this was largely lost in the media reporting.

For example, although the Guardian did use the “risk” word, it chose to open one of its many stories on the new report with the statement:

Australia is set to suffer a loss of native species, significant damage to coastal infrastructure and a profoundly altered Great Barrier Reef due to climate change . . . .

This was under the headline “Climate change will damage Australia’s coastal infrastructure“.

The IPCC report didn’t actually say this; rather, it presented a risk assessment for coral reef change around the coast of Australia under different emission and temperature scenarios. This was summarised, along with a wide variety of other impact risks, in a useful chart, with the Australian coral extract shown below.

Coral reef risk

Of course it is the job of the media to translate a rather arcane and technical report into something that a much larger number of people can understand, but it is nevertheless important to retain the key elements of the original work. In this case, that is the risk aspect. With very few exceptions, there is no “will” in this subject, only “could”. Some of those “could” events have a very high level of probability (the IPCC uses the term “virtually certain” for 99-100% probability), but even this doesn’t make them certain.
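The IPCC’s calibrated language attaches explicit probability ranges to these terms. The sketch below encodes a subset of the scale as I understand it from the AR5 uncertainty guidance, and looks up the first applicable term for a given probability.

```python
# A subset of the IPCC AR5 calibrated likelihood language,
# as (lower, upper) probability bounds for the assessed outcome.
likelihood = {
    "virtually certain": (0.99, 1.00),
    "very likely": (0.90, 1.00),
    "likely": (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely": (0.00, 0.33),
    "very unlikely": (0.00, 0.10),
    "exceptionally unlikely": (0.00, 0.01),
}

def describe(p):
    """Return the first term (in the order above) whose range contains p."""
    for term, (lo, hi) in likelihood.items():
        if lo <= p <= hi:
            return term
    return "undefined"

print(describe(0.995))  # virtually certain -- and still not 100%
```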

There has been an increasing tendency to talk about climate change in absolutes, such as “stronger hurricanes”, “more violent storms” and “a profoundly altered Great Barrier Reef”, when in fact this isn’t how the science describes the issue. Rather, it is how the media and others have chosen to describe it. This isn’t to say that these risks should be dismissed or ignored, they are real and very troubling, but the outcomes are not a given. Hopefully as others have time to digest the latest IPCC work, this aspect of the story becomes more prominent.

Taking this a step further though, it would appear that even the IPCC have chosen to present the risks with a slight skew. Although they are completely transparent about all the material they have used to build their case, the final presentation in the risk charts doesn’t tell the full story. They have chosen to present only two scenarios in the summary document, the 2 ºC case and the 4 ºC case. There is much to say between these and arguably, the space between 2 and 4 is where the real risk management story lies.

This was analysed in 2009 by the MIT Joint Program, in their report Analysis of Climate Policy Targets under Uncertainty. The authors demonstrated that even a modest attempt to mitigate emissions could profoundly affect the risk profile for equilibrium surface temperature. In the chart below five mitigation scenarios are shown, from a “do nothing” approach to a very stringent climate regime (Level 1, akin to the IPCC 2 ºC case). They note in the report that:

An important feature of the results is that the reduction in the tails of the temperature change distributions is greater than in the median. For example, the Level 4 stabilization scenario reduces the median temperature change by the last decade of this century by 1.7 ºC (from 5.1 to 3.4 ºC), but reduces the upper 95% bound by 3.2 ºC (from 8.2 to 5.0 ºC). In addition to being a larger magnitude reduction, there are reasons to believe that the relationship between temperature increase and damages is non-linear, creating increasing marginal damages with increasing temperature (e.g., Schneider et al., 2007). While many estimates of the benefits of greenhouse gas control focus on reductions in temperature for a reference case that is similar to our median, these results illustrate that even relatively loose constraints on emissions reduce greatly the chance of an extreme temperature increase, which is associated with the greatest damage.
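The arithmetic in the quoted passage can be checked directly, and makes the point: the cut in the 95% tail is nearly twice the cut in the median.

```python
# Numbers quoted from the MIT report: Level 4 stabilization vs reference
median_ref, median_l4 = 5.1, 3.4   # deg C, median temperature change
upper_ref, upper_l4 = 8.2, 5.0     # deg C, upper 95% bound

median_cut = median_ref - median_l4
tail_cut = upper_ref - upper_l4
print(f"median reduced by {median_cut:.1f} C, 95% bound by {tail_cut:.1f} C")
print(f"tail cut is {tail_cut / median_cut:.1f}x the median cut")
```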

 Temperature uncertainty

There is a certain orthodoxy in only looking at 2 and 4 ºC scenarios. It plays into the unhelpful discussion that seems to prevail in climate politics that “it must be 2 ºC or it’s a catastrophe.” I posted a story on this late last year. As it becomes increasingly clear that the extreme mitigation scenario required for 2 ºC simply isn’t going to happen, society will need to explore the area between these two outcomes with a view to establishing what can actually be achieved in terms of mitigation and to what extent that effort will shift the impact risk. Maybe this is something for the 6th Assessment Report.

In my previous post I responded to an article by environmentalist Paul Gilding where he argued that the rate of solar PV deployment meant it was now time to call “Game over” for the coal, oil and gas industries. There is no doubt that solar PV uptake is faster than most commentators imagined (but not Shell in our Oceans scenario) and it is clear that this is starting to change the landscape for the utility sector, but talk of “death spirals” may, in the words of Mark Twain, be an exaggeration.

In that same article, Gilding also talks about local battery storage via electric cars and the drive to distributed systems rather than centralized ones. He clearly envisages a world of micro-grids, rooftop solar PV, domestic electricity storage and the disappearance of the current utility business model. But there is much more to the energy world than what we see in central London or Paris today, or for that matter in rural Tasmania where Paul Gilding lives. It all starts with unappealing, somewhat messy but nevertheless essential processes such as the manufacture of sulphuric acid, ammonia, caustic soda and chlorine (to name but a few). Added together, about half a billion tonnes of these four products are produced annually. These are energy intensive production processes operating on an industrial scale, but largely hidden from daily life. They are in, or play a role in the manufacture of, almost everything we use, buy, wear, eat and do. These core base chemicals also rely on various feedstocks. Sulphuric acid, for example, is made from the sulphur found in oil and gas, removed during the various refining and treatment processes. Although there are other viable sources of sulphur, they have long been abandoned for economic reasons.

dow-chemical-plant-promo

The ubiquitous mobile phone (which everything now seems to get compared to when we talk about deployment) and the much talked about solar PV cell are just the tip of a vast energy consuming industrial system, built on base chemicals such as chlorine, but also making products with steel, aluminium, nickel, chromium, glass and plastics (to name but a few). The production of these materials alone exceeds 2 billion tonnes annually. All of this is of course made in facilities with concrete foundations, using some of the 3.4 billion tonnes of cement produced annually. The global industry for plastics is rooted in the oil and gas industry as well, with the big six plastics (see below) all starting their lives in refineries that do things like converting naphtha from crude oil to ethylene.

The big six plastics:

  • polyethylene – including low density (PE-LD), linear low density (PE-LLD) and high density (PE-HD)
  • polypropylene (PP)
  • polyvinyl chloride (PVC)
  • polystyrene solid (PS), expandable (PS-E)
  • polyethylene terephthalate (PET)
  • polyurethane (PUR)

All of these processes are also energy intensive, requiring utility scale generation, high temperature furnaces, large quantities of high pressure steam and so on. The raw materials for much of this come from remote mines, another facet of modern life we no longer see. These in turn are powered by utility scale facilities, huge draglines for digging and vast trains for moving the extracted ores. An iron ore train in Australia might be made up of 336 cars moving 44,500 tonnes of iron ore, be over 3 km long and utilize six to eight locomotives, including intermediate remote units. These locomotives often run on diesel fuel, although many in the world run on high voltage electric systems, e.g. the 25 kV AC iron ore train from Russia to Finland.

The above is just the beginning of the industrial world we live in, built on a utility scale and powered by utilities burning gas and coal. These bring economies of scale to everything we do and use, whether we like it or not. Not even mentioned above is the agricultural world which feeds 7 billion people. The industrial heartland will doubtless change over the coming century, although the trend since the beginning of the industrial revolution has been for bigger, more concentrated pockets of production, with little sign of a more distributed model. The advent of technologies such as 3D printing may change the end use production step, but even the material that gets poured into the tanks feeding that 3D printer probably relied on sulphuric acid somewhere in its production chain.

One of the best books I have read in recent years is the Steve Jobs biography by Walter Isaacson. It’s also a great management book, although I don’t think it was really intended for that purpose. In discussing Jobs’ approach to life and business management, Isaacson goes to some length to describe the concept of a Reality Distortion Field (RDF), a tool used on many occasions by Jobs to inspire progress and even bet the company on a given outcome. The RDF was said to be Steve Jobs’ ability to convince himself and others to believe almost anything with a mix of charm, charisma, bravado, hyperbole, marketing, appeasement and persistence. It distorted an audience’s sense of proportion and scale of difficulty and made them believe that the task at hand was possible. This also seems to be the case with a number of renewable energy advocates, most notably those promoting solar PV.

The Talosians from Star Trek were the first aficionados of the RDF

It is always with interest that I open the periodic e-mail from fellow Australian Paul Gilding and read the latest post from him in The Cockatoo Chronicles. But this time, the full force of the Renewables Distortion Field hit me. Gilding claims that:

 I think it’s time to call it. Renewables and associated storage, transport and digital technologies are so rapidly disrupting whole industries’ business models they are pushing the fossil fuel industry towards inevitable collapse. Some of you will struggle with that statement. Most people accept the idea that fossil fuels are all powerful – that the industry controls governments and it will take many decades to force them out of our economy. Fortunately, the fossil fuel industry suffers the same delusion. In fact, probably the main benefit of the US shale gas and oil “revolution” is that it’s keeping the fossil fuel industry and it’s cheer squad distracted while renewables, electric cars and associated technologies build the momentum needed to make their takeover unstoppable – even by the most powerful industry in the world.

My immediate approach to dealing with a statement like this plays into the next paragraph by Gilding, where he says:

How could they miss something so profound? One thing I’ve learnt from decades inside boardrooms, is that, by and large, oil, coal and gas companies live in an analytical bubble, deluded about their immortality and firm in their beliefs that “renewables are decades away from competing” and “we are so cheap and dominant the economy depends on us” and “change will come, but not on my watch”. Dream on boys.

But the energy system is about numbers and analysis, like it or not. Perhaps Gilding needs to at least look in his own back yard before reaching for global distortion. In a number of posts over the last year or two he has waxed lyrical about the disruption in Australia and the consequent shift in its energy mix. Yet the latest International Energy Agency data on Australia shows that fossil fuel use is continuing to rise even as residential solar PV becomes a domestic “must have”. There is no escaping these numbers!

Australia primary energy to 2012

It is true that solar PV is starting to have an impact on the global energy mix and that, at least in some countries, the electricity utilities are playing catch-up. But the global shift will likely take decades, even at extraordinary rates of deployment by historical standards. The Shell Oceans scenario portrays such a shift: solar deployment over the next 20 years brings it to the scale of the global coal industry in 1990, and then in the 30 years from 2030 to 2060 the rate of expansion far exceeds the coal growth seen from 1990 to 2020 (see chart).
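A rough sense of the timescale: even compounding at rates far above anything the energy system has seen historically, moving from a marginal share of supply to a dominant one takes decades. The shares and growth rate below are hypothetical, not Shell scenario figures.

```python
import math

def years_to_reach(start_share, target_share, growth_rate):
    """Years for an energy source growing at a fixed compound rate to move
    from one share of supply to another (total supply held constant here,
    a deliberate simplification that flatters the newcomer)."""
    return math.log(target_share / start_share) / math.log(1 + growth_rate)

# Illustrative: from 1% to 25% of supply at 15% per year compound growth
print(round(years_to_reach(0.01, 0.25, 0.15)), "years")
```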

Solar growth in Oceans

I would argue that this is a disruptive change, but it still takes all of this century to profoundly impact the energy mix. Even then, there remains a sizable oil, gas and coal industry, although not on the scale of today. Of course this is but one scenario for the course of the global energy system, but it at least aligns in concept with the aspirations of Paul Gilding. I don’t imagine he would be particularly impressed by our Mountains scenario!

 Solar in Oceans

Many will of course argue that the proof of the RDF is in the Apple share price and its phenomenal success. But this didn’t come immediately. Apple and Jobs had more ups and downs than even the most ardent follower would wish for, with the company teetering on the brink more than once (read the Isaacson account). But it persisted and nearly forty years on it is a global behemoth. However, forty years isn’t exactly overnight and IT change seems to take place at about twice the rate of energy system change. Does that mean new energy companies won’t become global super-majors until much later this century?

 

This week (March 10th-14th) in Bonn, parties to the UNFCCC are meeting under the direction of the Fourth Part of the Second Session of the Ad Hoc Working Group on the Durban Platform for Enhanced Action (ADP 2.4). In short, this is the process that is trying to deliver a global deal on climate change over the next 20 months when the world comes together at COP 21 in Paris. The last attempt at such a monumental feat ended in tears in Copenhagen in December 2009.

One might imagine that a process with only a few months to reach a solution on a major global commons issue would be deeply embedded in the economics of Pigouvian pricing, or at least attempting to see how the global economy could be adjusted to account for this particular externality. However, as we know from the Warsaw COP and previous such meetings, this isn’t the case; rather, it is an effort just to get nation states to recognize that a common approach is actually needed.

The pathway being plied in Warsaw resulted in the text on “contributions”, which at least attempts to create a common definition and set of validation rules for whatever it is that nation states offer as climate action from within their own economies. More recently the USA set out its views on the nature of “contributions”. This process is at least trying to get everyone in a common club of some description, rather than having several clubs as has been the case since 1992 when the UNFCCC was created. The diplomatic challenge for Paris will be to find the most constraining club which everyone is still willing to be a member of and then close the doors. Once inside, the club rules can be continually renegotiated until some sort of outcome is realized which actually deals with emissions. This ongoing renegotiation will be for the years after Paris, it won’t happen beforehand or even during COP 21.

But ADP 2.4 in Bonn seems to have gone off-piste. Looking through the Overview Schedule, one sees a series of meetings on renewable energy and energy efficiency. While this may be an attempt to highlight particular national actions as a template for others to follow, it is nevertheless symptomatic of a process that isn’t really dealing with the problem it is mandated to solve: limiting the rise in the level of CO2 in the atmosphere.

At best, the ADP has become a derivative process, or perhaps even a second derivative process. Rather than confronting the issue, it is instead dealing with tangents. Holding sessions on renewable energy is a good example of this behaviour. The climate issue is about the release to atmosphere of fossil carbon and bio-fixed carbon on a cumulative basis over time, with the total amount released being the determining factor in terms of peak warming (i.e. the 2°C goal). The first derivative of this is the rate of release, which is determined by total global energy demand and the carbon intensity of the energy mix. The second derivative is probably best described as the rate of change of the carbon intensity of the global energy mix, although this can be something of a red herring in that the global energy mix can appear to decarbonize even as emissions continue to rise, simply because demand change outpaces intensity change.
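The point about apparent decarbonization can be made with one line of arithmetic; the growth rates below are illustrative only.

```python
# If energy demand grows faster than carbon intensity falls, emissions
# rise even as the mix "decarbonizes". Illustrative annual rates.
demand_growth = 0.03       # energy demand: +3% per year
intensity_change = -0.01   # carbon intensity of the mix: -1% per year

emissions_growth = (1 + demand_growth) * (1 + intensity_change) - 1
print(f"emissions change: {emissions_growth:+.2%} per year")
```

So a mix that is genuinely getting cleaner can still deliver rising cumulative emissions, which is what actually sets peak warming.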

Energy efficiency is perhaps yet another derivative away from the problem. It deals with the rate of change of energy use, but this has further underlying components, one being the rate of change of energy use in things such as appliances and the other the rate of change of the appliances themselves. Efficiency isn’t good at dealing with the immediate rate of energy use in that this tends to be dictated by the existing stock of devices and infrastructure, whereas efficiency tackles the change over time for new stock. That new stock then has to both permeate the market and also displace the older stock.
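Stock turnover can be sketched as a simple fleet-average calculation; the device lifetime and efficiency figures are invented for illustration.

```python
def fleet_intensity(old_intensity, new_intensity, stock_life, years):
    """Average energy intensity of a device fleet as new, more efficient
    stock displaces old stock at a rate of 1/stock_life per year."""
    new_share = min(years / stock_life, 1.0)
    return old_intensity * (1 - new_share) + new_intensity * new_share

# Illustrative: new devices 30% more efficient, 15-year stock life.
# After 5 years the fleet average has moved only a third of the way.
print(fleet_intensity(100.0, 70.0, 15, 5))
```

The new standard takes a full stock life to permeate the market, which is why efficiency acts on the rate of change of energy use rather than on today’s consumption.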

Focussing on renewable energy deployment and efficiency is a useful and cost-effective energy strategy for many countries, but as a global strategy for tackling cumulative carbon emissions it falls far short of what is necessary. Yet this is where the UNFCCC ADP 2.4 has landed. It also seems to be difficult to challenge this, as illustrated by one Tweet that emanated from a Bonn meeting room!

 Twitter: 10/03/2014 16:47

shameful: US sells concept of “clean energy” (including gas, CCS) at renewable workshop. what hypocrisy / hijacking of process. #ADP2014

 

Emissions Trading via Direct Action in Australia


The Australian Government recently released a Green Paper describing in more detail its proposal for an Emission Reduction Fund (ERF), the principal component of its Direct Action climate policy. The ERF will sit alongside renewable energy and reforestation policies, but is designed to do the bulk of the heavy lifting as the Government looks for some 430 million tonnes of cumulative reductions (see below) over the period 2014 to 2020. The ERF will have initial funding of about AU$ 1.55 billion over the forward period, with the money being used to buy project reductions (as Australian Carbon Credit Units or ACCUs) from the agriculture and industrial sectors of the economy by reverse auction. These reductions will be similar to those that are created through the Clean Development Mechanism (CDM) available under the Kyoto Protocol.
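A reverse auction of the kind described can be sketched as follows. The bid prices and volumes are invented, and the actual ERF auction rules will certainly differ; this only shows the basic mechanism of buying the cheapest abatement first.

```python
def reverse_auction(bids, budget):
    """Sketch of an ERF-style reverse auction: buy the cheapest abatement
    bids (price per tonne, tonnes offered) until the budget is exhausted.
    Purely illustrative -- not the actual ERF auction design."""
    bought = []
    for price, tonnes in sorted(bids):   # cheapest abatement first
        cost = price * tonnes
        if cost <= budget:
            budget -= cost
            bought.append((price, tonnes))
    return bought

# Hypothetical bids: ($/t CO2, tonnes of claimed reduction)
bids = [(12.0, 50_000), (8.0, 100_000), (20.0, 30_000)]
print(reverse_auction(bids, budget=1_500_000))
```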

 Australia Reduction Task to 2020

Although the fund and reverse auction process are discussed in some detail and appear central to the policy framework, this may not remain the case as the system is rolled out and the full framework developed. The issue with such an approach to emissions reduction is that, despite the purchase of project reductions from within the economy, the overall emissions pathway for the economy as a whole may still not follow the expected trajectory. The ERF may also encounter a number of issues seen with the CDM, all of which are some form of additionality:

  1. Determining if there would have been higher emissions had the project not happened. Perhaps the reduction is something that would have happened anyway or the counterfactual position of higher emissions would never have actually happened. For example, an energy efficiency gain is claimed in terms of a CO2 reduction but the efficiency gain is subject to some amount of rebound due to increased use of the more efficient service, therefore negating a real reduction in emissions. Further, the counterfactual of higher emissions might never have existed as the original less efficient process would not have operated at the higher level.
  2. Double counting – the project presumes a reduction that is already being counted by somebody else within the economy as a whole. For example, an energy efficiency gain in a certain part of the supply chain is claimed as an emissions reduction, but this is already intrinsic to the overall emissions outcome for another process.
  3. Rent seeking – project proponents seek government money for actions already underway or even construct an apparent reduction.
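The rebound problem in point 1 can be made concrete; the percentages below are hypothetical.

```python
def claimed_vs_real_reduction(baseline, efficiency_gain, rebound):
    """Rebound: part of an efficiency gain is taken back as increased use
    of the now-cheaper service, so the real emissions reduction is smaller
    than the claimed (credited) one."""
    claimed = baseline * efficiency_gain
    real = baseline * efficiency_gain * (1 - rebound)
    return claimed, real

# Illustrative: 20% efficiency gain with 30% rebound, on 1000 t CO2 baseline
claimed, real = claimed_vs_real_reduction(1000.0, 0.20, 0.30)
print(claimed, real)
```

If ACCUs are issued for the claimed figure while the atmosphere only sees the real one, the gap becomes a phantom reduction in the national accounts.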

The Australian emissions inventory will be measured bottom up based on fuel consumption, changes in forest cover and land use and established estimates / protocols for agriculture, coal mine fugitive emissions, landfill etc. It will not be possible to simply subtract the ERF driven reductions from such a total unless they are separate sequestration based reductions, e.g. soil carbon. This is because the ERF reductions are themselves part of the overall emissions of the economy.

The Green Paper clearly recognizes these issues and proposes that the overall emissions pathway through to 2020 must be safeguarded. In Section 4 it discusses the need for “An effectively designed framework to discourage emissions growth above historical levels . . . “, with associated terminology including phrases such as “covered entities”, “baseline emission levels”, “action required from businesses” and “compliance”. The safeguarding mechanism, rather than being a supplementary element of Direct Action, could end up becoming the main policy measure for decarbonisation if significant CO2 reductions are not achieved under the ERF. While this may not be the objective that the Government seeks, it does mean that the implementation of the safeguard mechanism needs to incorporate the design thinking that would otherwise be applied to the development of intended emissions trading systems, such as the Alberta Specified Gas Emitters Regulation.

As currently described, the safeguarding mechanism looks like a baseline-and-credit system, with the baseline established at facility level either on an intensity or absolute emissions basis (both are referred to in the Green Paper). Should a facility exceed the baseline it could still achieve compliance by purchasing ACCUs from the market, either from project developers or other facilities that have over performed against their own baselines. Although the Government have made it very clear that they will not be establishing a system such as cap-and-trade that collects revenue from the market, facilities will nevertheless face compliance obligations and may have to purchase reduction units at the prevailing market price.
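A facility’s compliance position under such a baseline-and-credit scheme reduces to a simple calculation; the facility figures below are invented.

```python
def accu_obligation(emissions, baseline):
    """Net position under a baseline-and-credit scheme: a positive result
    means ACCUs must be purchased; a negative result is a surplus that an
    over-performing facility could sell into the market."""
    return emissions - baseline

# Hypothetical facilities (t CO2 per year against an absolute baseline)
for emissions, baseline in [(120_000, 100_000), (90_000, 100_000)]:
    print(accu_obligation(emissions, baseline))
```

Under an intensity baseline the same logic applies, but the baseline is first scaled by output, which is one reason the choice between the two matters.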

The level of trade and the need for facilities to purchase ACCUs will of course depend on the stringency of the baselines and this remains to be seen, however in setting these the Government will need to be mindful of the overall national goal and its need to comply with that. The development of a full baseline and credit trading system also raises the prospect of the market out-bidding the Government for ACCUs, particularly if the Government sets its own benchmark price for purchase, as is indicated in the Green Paper.

As Australia moves from a cap-and-trade system under the Carbon Pricing Mechanism (CPM) to the ERF and its associated safeguarding mechanism, the main change for the economy will be distributional in nature, given that a 5% reduction must still be achieved and the same types of projects should eventually appear. However, the biggest challenge facing any system in Australia could be speedy design and implementation, given that the time remaining before 2020 is now very limited and the emission reduction projects being encouraged will themselves take time to deliver.

The US Submission on Elements of the 2015 Agreement has recently appeared on the UNFCCC website and outlines, in some detail, the approach the US is now seeking with regard to “contributions”. Adaptation and Finance are also covered, although not to the depth of the section on Mitigation.

The submission makes it very clear that the US expects robust contributions from Parties, with schedules, transparency, reporting and review. There is also a useful discussion on the legal nature of a contribution. None of this is surprising as the US delegation to the recent COPs and various inter-sessional meetings has made it very clear that real action must be seen from all parties, not just those in developed countries.

But the submission makes no reference to the role of carbon markets or carbon pricing. In only two places does it even refer to market mechanisms, and then only in the context of avoiding double counting. This comes from the Party that gave the world the carbon market underpinning of the Kyoto Protocol, which in turn gave rise to the CDM, the EU ETS, the CPM (in Australia) and the NZ ETS, to name but a few, so perhaps it reflects the current difficulty Parties are having keeping carbon price thinking on the negotiating agenda.

I would argue that without a price on carbon emissions, the CO2 emissions issue will be much more difficult to fully resolve. Further, while individual countries may pursue such an agenda locally, emissions leakage from such systems could remain high until the carbon price permeates much of the global energy system. This argues for an international agreement that encourages the implementation of carbon pricing at a national level. The Kyoto Protocol did this through the Assigned Amount Unit, which gave value to carbon emissions as a property right. While there is no such “Kyoto like” design under consideration for the post-2020 period, the agreement we are looking for should at least lay the foundations for such markets in the future. The question is, how?

In the post-2020 world, carbon pricing is going to have to start at the national level, rather than be cascaded from the top down. Many nations are pursuing such an agenda, including a number of emerging economies such as China, South Korea, South Africa and Kazakhstan. Linkage of these carbon price regimes is seen as the key to expansion, which in turn encourages others to follow similar policy pathways and join the linked club. The reason for linking is not simply carbon price homogeneity, but to allow the transfer of emission reduction obligations to other parties so that they can be delivered more cost effectively. This allows one of two things: the same reductions at lower cost, or greater reductions for the expected cost. The latter should ideally be the goal and is apparently the aspiration of the USA, given it states that the agreement should be “designed to promote ambitious efforts by a broad range of Parties.” The carbon price is simply a proxy for this process, allowing terms of trade to be agreed as a reduction obligation is transferred.
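The cost-effectiveness argument behind transferring obligations can be made concrete with a two-country sketch. The marginal abatement costs below are purely hypothetical numbers chosen for illustration.

```python
# Why transferring reduction obligations helps: a two-country sketch.
# Marginal abatement costs are hypothetical illustrative figures.

cost_home = 50.0     # $/tonne to abate domestically (hypothetical)
cost_abroad = 20.0   # $/tonne for the same abatement elsewhere (hypothetical)
obligation = 100.0   # tonnes the home country must reduce

# Option 1: same reductions, lower cost.
domestic_cost = obligation * cost_home     # abate everything at home: $5,000
traded_cost = obligation * cost_abroad     # pay the other party instead: $2,000

# Option 2: same budget, greater reductions (the more ambitious goal).
extra_reductions = domestic_cost / cost_abroad  # 250 tonnes for the same $5,000
```

The carbon price is what lets the two parties agree terms of trade for such a transfer; without one, there is no common unit in which to settle the exchange.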

All of this implies that the post-2020 agreement at least needs a placeholder of some description: one that allows the transfer of reductions between parties while still having them counted against the national contribution. As it stands today, it is looking unlikely that explicit reference to carbon pricing or carbon markets will make its way into the agreement, but perhaps it doesn’t need to at this stage. On the back of a transfer mechanism, ambition could increase and a pricing regime for transfers could potentially evolve. If that happens to look like a global carbon market in the end, then so be it.

As politicians from all parties don Wellington boots and wade through flooded fields and streets in southern England, the subject of climate change is rising up the agenda. While all but a very few have stayed away from direct attribution of the England floods to climate change, it is also clear that nobody has a good set of words to describe the current situation and its link, or otherwise, to the climate issue. Rather, politicians seem to be stumbling through the discussion, grappling for language that acknowledges a linkage without drifting into attribution.

The starting point is somewhat loaded, in that the issue itself is now referred to more widely as climate change rather than global warming. Although both have been part of the lexicon since the 1970s, the climate reference has certainly won the day (Google Ngram Viewer, see below). It brings with it an expectation that change is underway, whether or not it actually is for a particular location. Global warming more accurately describes the core issue, i.e. that the atmosphere / ocean system is warming in response to increasing levels of carbon dioxide, but it is arguably less emotive and exciting. Nor does it capture the full scope of the CO2 issue.

Ngram - Climate vs Warming

Exactly how and by how much the climate will change as a result of ocean and atmosphere warming remains to be seen, although a wealth of science, analysis, and computer modeling certainly doesn’t point to a static situation. Some locations may see very distinct and obvious changes over time, others may see very little. The problem is that we are all very impatient, whereas warming and its consequences will be a long, drawn-out affair with mixed impacts on specific locations. As a result, the temptation to jump to conclusions is almost overwhelming.

The MIT Joint Program on the Science and Policy of Global Change has been actively exploring how best to talk about uncertainty. Perhaps their most successful tool is the Greenhouse Gas Gamble “wheel of fortune”, which tries to put the uncertainty around warming into terms that people can understand. The Joint Program website offers a nice tool where the reader can “spin” the wheel and see the distribution of results through to 2100, both under a business as usual scenario and with a robust set of policy measures applied throughout the world.

Greenhouse Gas Gamble

As the wheel is spun more and more times, a distribution emerges, which then gives some sense of the changes that may be in store for us. Most importantly, one feature of the “With Policy” case that is highlighted is the complete disappearance of the tail of the distribution, which in the “No Policy” case shows that very high temperature rises are possible. By contrast, even in the world where we take no action on climate at all, there is still some chance, albeit vanishingly small, that the temperature rise will be modest.
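The way repeated spins build up a distribution, and the way policy removes the high-end tail, can be sketched with a few lines of Monte Carlo sampling. The distributions below are illustrative stand-ins, not the MIT Joint Program's actual odds.

```python
# Monte Carlo sketch of the "wheel of fortune" idea: repeated spins
# build up a distribution of warming outcomes by 2100. The two
# distributions are illustrative, NOT the Joint Program's actual wheels.
import random

random.seed(42)

def spin_no_policy():
    # Long right tail: occasional very high temperature rises (illustrative).
    return random.lognormvariate(1.4, 0.35)

def spin_with_policy():
    # Tighter, capped distribution: the high-end tail is gone (illustrative).
    return min(random.lognormvariate(0.8, 0.25), 3.0)

spins = 10_000
no_policy = sorted(spin_no_policy() for _ in range(spins))
with_policy = sorted(spin_with_policy() for _ in range(spins))

# Compare the 95th-percentile outcome of each case: the "No Policy" wheel
# still lands on extreme warming fairly often; the "With Policy" one cannot.
p95 = int(0.95 * spins)
print("no policy,   95th percentile:", round(no_policy[p95], 1), "C")
print("with policy, 95th percentile:", round(with_policy[p95], 1), "C")
```

The point the wheel makes so well is visible in the tails: policy does not just shift the central estimate, it eliminates the catastrophic outcomes altogether.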

Greenhouse Gas Gamble with spins

Tools similar to this could be developed for regional issues, such as precipitation in the UK, drought in Texas and heatwaves in Australia. They won’t be perfect, but may help in better understanding and, more importantly, explaining what is going on. For example, even a historical distribution of precipitation for the winter period in southern England, with the current winter shown, would do a far better job of contextualizing the deluge we are having than vaguely saying it was a record and arguing that it might be climate change. A single point sitting several standard deviations away from an otherwise normal distribution would put it in the same category as the 2003 European heatwave, whereas a marginal outlier in a tight distribution might be viewed with less concern.
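The "several standard deviations" test is straightforward to compute. The rainfall figures below are invented for illustration; they are not actual Met Office records.

```python
# Sketch: placing one winter's rainfall in the context of a historical
# distribution via a z-score. All rainfall figures are made up for
# illustration, not actual southern England records.
from statistics import mean, stdev

historical_mm = [210, 245, 190, 230, 260, 225, 200, 250, 215, 240]  # past winters
this_winter_mm = 435                                                 # the outlier

mu = mean(historical_mm)
sigma = stdev(historical_mm)          # sample standard deviation
z = (this_winter_mm - mu) / sigma

print(f"this winter sits {z:.1f} standard deviations above the mean")
```

A z-score well beyond 3 puts the event in 2003-heatwave territory; a score of 1 to 2 would mark it as unusual but unremarkable, which is exactly the distinction the vague language of "records" fails to make.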

It may well be the case that the flooding in England is a result of enhanced warming from higher levels of CO2 in the atmosphere. Equally, it may just be part of the chaotic system we call “the weather”. Either way, a more informed discussion needs to take place, supported by data and helpful tools to unravel the messages contained within it.

One of the comments I quite often get at external events is that “The oil and gas industry has only got 20 years”. This doesn’t just come from enthusiastic climate campaigners, but from thoughtful, very well educated people in a number of disciplines related to the climate issue. A report by WWF a few years back took a similar but slightly less aggressive line, through the publication of an energy model forecast which showed that the world could be effectively fossil energy free as early as 2050.

It’s hard for anyone who has worked in this industry to imagine scenarios which see it vanish in a couple of decades, not because of the vested interest that we certainly have, but because of the vast scale, complexity and financial base of the industry itself. It has been built up over 120+ years at a cost in the trillions (in today’s dollars) and provides over 80% of primary energy globally, with that demand nearly doubling since 1980 and market share hardly budging. Demand may well double again by the second half of the century.

So why do people think that all this can be replaced in a relatively short space of time? A recent media story provides some insight.

As is often the case at the turn of the year, media outlets like to publish predictions. One such set, which appeared on CNN, was by futurist Ray Kurzweil. He is described by CNN as:

. . . . one of the world’s leading inventors, thinkers, and futurists, with a 30-year track record of accurate predictions. Called “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes magazine, Kurzweil was selected as one of the top entrepreneurs by Inc. magazine, which described him as the “rightful heir to Thomas Edison.” Ray has written five national best-selling books. He is Director of Engineering at Google.

Kurzweil claims that:

By 2030 solar energy will have the capacity to meet all of our energy needs. If we could capture one part in ten thousand of the sunlight that falls on the Earth we could meet 100% of our energy needs, using this renewable and environmentally friendly source. As we apply new molecular scale technologies to solar panels, the cost per watt is coming down rapidly. Already Deutsche Bank, in a recent report, wrote “The cost of unsubsidized solar power is about the same as the cost of electricity from the grid in India and Italy. By 2014 even more countries will achieve solar ‘grid parity.’” The total number of watts of electricity produced by solar energy is growing exponentially, doubling every two years. It is now less than seven doublings from 100%.

That gives us just 14 years! But maybe not.
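The "14 years" follows directly from the arithmetic in the quote: a share doubling every two years needs seven doublings to go from roughly 0.8% to 100%.

```python
# The arithmetic behind "less than seven doublings from 100%":
# at a constant doubling time, how long until an exponentially
# growing share covers the whole?
import math

share = 1 / 2**7          # seven doublings below 100% is ~0.78% of demand
doubling_years = 2        # the doubling time assumed in the quote

doublings_needed = math.log2(1.0 / share)   # exactly 7.0
years = doublings_needed * doubling_years
print(years)  # 14.0 -- but only if exponential growth never slows
```

The calculation is correct as far as it goes; the question, taken up below, is whether the exponential assumption survives contact with a real energy system.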

Kurzweil has compared the growth of the energy system to the way in which biological systems can grow. With huge amounts of food available, a biological system can continue to double in size on a regular time interval, but the end result is that it will either exhaust the food supply or completely consume its host (also exhausting the food supply), with both outcomes leading to collapse. Economic systems sometimes behave this way as well; when they do, collapse is almost certain, and there have been some spectacular examples over the last few centuries.

The more controlled pathway is one that may well see a burst of growth to establish a presence, followed by a much more regulated expansion limited by resources, finance, intervention, competition and a variety of other real world pressures. This is how energy systems tend to behave – they don’t continue to grow exponentially. Historically there are many examples of rapid early expansion, at least to the point of materiality (typically ~1% of global primary energy), followed by a long period of growth to some level which represents the economic potential of the energy source. Even the first rapid phase takes a generation, with the longer growth phase stretching out over decades.
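This "controlled pathway" is essentially an S-curve: early growth looks exponential, then flattens toward a ceiling set by the energy source's economic potential. The sketch below uses a logistic function with parameters chosen purely for illustration (the ~40% ceiling echoes the solar share discussed below, but the rate and midpoint are assumptions, not Shell's model).

```python
# Sketch of S-curve (logistic) energy deployment: exponential-looking
# early growth that saturates at a ceiling rather than reaching 100%.
# The ceiling, rate and midpoint are illustrative assumptions only.
import math

def logistic_share(year, ceiling=0.4, rate=0.07, midpoint=2060):
    """Share of primary energy in a given year, saturating at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (year - midpoint)))

for year in (2020, 2040, 2060, 2080, 2100):
    print(year, round(logistic_share(year), 3))
```

However long the curve runs, it approaches the ceiling asymptotically and never reaches 100%, which is the key difference from the "seven doublings" extrapolation.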

Energy Deployment Laws

The chart above was developed by energy modelers in the Shell Scenario team, with their findings published in Nature back in 2009. The application of this type of rule gives a more realistic picture of how solar energy might grow, still very quickly, but not to meet 100% of global energy demand in just 14 years. The “Oceans” scenario, published last year as part of the Shell New Lens Scenarios, shows solar potentially dominating the global energy system by 2100, but at ~40% of primary energy (see below), not 100%. A second reality is that a single homogeneous system with everybody using the same technology for everything is unlikely – at least within this century. The existing legacy is just too big, with many parts of it having a good 50+ years of life ahead and more being built every day.

Solar in Oceans-2