Archive for the ‘Natural gas’ Category

From sunlight to Jet A1

In a world of near zero anthropogenic emissions of carbon dioxide, there remains the problem of finding a fuel or energy carrier with an energy density high enough that it remains practical to fly a modern jet aeroplane. Commercial aviation is heading towards some 1 billion tonnes of carbon dioxide per annum, so doing nothing may not be an option.

Although planes will certainly evolve over the course of the century, the rate of change is likely to be slow and particularly so if a step change in technology is involved. In 100 years of civil aviation there have been two such step changes; the first commercial flights in the 1910s and the shift of the jet engine from the military to the commercial world with the development of the Comet and Boeing 707. The 787 Dreamliner is in many respects a world away from the 707, but in terms of the fuel used it is the same plane; that’s 60 years and there is no sign of the next change.

Unlike domestic vehicles, where electricity and batteries offer an alternative, planes will probably still need hydrocarbon fuel for all of this century, perhaps longer. Hydrogen is a possibility, but its much lower energy density by volume could mean a radical redesign of the whole shape of the plane (below), which might also entail redesign of other infrastructure such as airport terminals, air bridges and so on. Even the development and first deployment of the double decker A380, something of a step change in terms of shape and size, has taken twenty years and cost Airbus many billions.

h2airplane

For aviation, the simplest approach will probably be the development of a process to produce a look-alike hydrocarbon fuel. The most practical way to approach this problem is via an advanced biofuel route and a few processes are available to fill the need, although scale up of these technologies has yet to take place. But what if the biofuel route also proves problematic – say for reasons related to land use change or perhaps public acceptance in a future period of rising food prices? A few research programmes are looking at synthesising the fuel directly from water and carbon dioxide. This is entirely possible from a chemistry perspective, but it requires lots of energy; at least as much energy as the finished fuel gives when it is used and its molecules are returned to water and carbon dioxide.

Audi has been working on such a project and recently announced the production of the first fuel from their pilot plant (160 litres per day). According to their media release;

The Sunfire [Audi’s technology partner] plant requires carbon dioxide, water, and electricity as raw materials. The carbon dioxide is extracted from the ambient air using direct air capture. In a separate process, an electrolysis unit splits water into hydrogen and oxygen. The hydrogen is then reacted with the carbon dioxide in two chemical processes conducted at 220 degrees Celsius and a pressure of 25 bar to produce an energetic liquid, made up of hydrocarbon compounds, which is called Blue Crude. This conversion process is up to 70 percent efficient. The whole process runs on solar power.

Apart from the front end of the facility where carbon dioxide is reacted with hydrogen to produce synthesis gas (carbon monoxide and hydrogen), the rest of the plant should be very similar to the full scale Pearl Gas to Liquids (GTL) facility that Shell operates in Qatar. In that process, natural gas is converted to synthesis gas which is in turn converted to a mix of longer chain hydrocarbons, including jet fuel (contained within the Audi Blue Crude). The Pearl facility produces about 150,000 bbls/day of hydrocarbon product, so perhaps one hundred such facilities would be required to produce enough jet fuel for the world (this would depend on the yield of suitable jet fuel from the process which produces a range of hydrocarbon products that can be put to many uses). Today there are just a handful of gas-to-liquids plants in operation; Pearl and Oryx in Qatar, Bintulu in Malaysia and Mossel Bay in South Africa (and another in South Africa that uses coal as the starting feedstock). The final conversion uses the Fischer Tropsch process, originally developed about a century ago.

Each of these future “blue crude” facilities would also need a formidable solar array to power it. The calorific content of the fuels is about 45 TJ/kt, so that is the absolute minimum amount of energy required for the conversion facility. However, accounting for the efficiency of the process and adding in the energy required for air extraction of carbon dioxide and all the other energy needs of a modern industrial facility, a future process might need up to 100 TJ/kt of energy input. The Pearl GTL produces 19 kt of product per day, so the energy demand to make this from water and carbon dioxide would be 1900 TJ per day, or 700,000 TJ per annum. This requires a nameplate capacity for a solar PV farm of about 60 GW – roughly equal to half the entire installed global solar generating capacity in 2013. A Middle East location such as Qatar receives about 2200 kWh/m² per annum, or 0.00792 TJ/m², and assuming a future solar PV facility that might operate at 35% efficiency (considerably better than commercial facilities today), the solar PV alone would occupy an area of some 250 km², so perhaps 500 km² or more in total plot area (i.e. 22 km by 22 km in size) for the facility.
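As a sanity check, the arithmetic above can be reproduced in a few lines. This is a sketch using the figures quoted in the text; the 100 TJ/kt overall input requirement and the 35% module efficiency are the post's own optimistic assumptions, not measured values.

```python
# Energy and land-area estimate for a Pearl-scale synthetic fuel plant,
# using the figures quoted in the text.
product_kt_per_day = 19          # Pearl GTL hydrocarbon output, kt/day
energy_TJ_per_kt = 100           # assumed total input incl. direct air capture

daily_TJ = product_kt_per_day * energy_TJ_per_kt       # ~1,900 TJ/day
annual_TJ = daily_TJ * 365                             # ~700,000 TJ/yr

# Average electrical power this represents, in GW
seconds_per_year = 365 * 24 * 3600
avg_power_GW = annual_TJ * 1e12 / seconds_per_year / 1e9

# PV land area, assuming Qatar-level insolation and 35% conversion efficiency
insolation_TJ_per_m2 = 0.00792   # ~2,200 kWh/m2 per annum
module_efficiency = 0.35
pv_area_km2 = annual_TJ / (insolation_TJ_per_m2 * module_efficiency) / 1e6
```

The continuous demand comes out at about 22 GW, which corresponds to a nameplate capacity of roughly 60 GW at a (generous) capacity factor in the high thirties, and the PV area at about 250 km², matching the figures above.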

This is certainly not inconceivable, but it is far larger than any solar PV facility in operation today; the Topaz solar array in California is on a site 25 km² in size with a nameplate capacity of 550 MW. It is currently the largest solar farm in the world and produces about 1.1 million MWh per annum (4000 TJ), a capacity factor of about 23% – well short of the optimistic future assumptions above. At this production rate, 175 Topaz farms would be required to power a refinery with the hydrocarbon output of Pearl GTL. My assumptions represent a packing density of solar PV some four times better than Topaz (i.e. 100 MW/km² vs 22 MW/km²).
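The Topaz comparison works out like this (a quick check using the figures quoted above):

```python
# Topaz solar farm figures from the text
topaz_capacity_MW = 550
topaz_area_km2 = 25
topaz_annual_MWh = 1.1e6

topaz_annual_TJ = topaz_annual_MWh * 3600 / 1e6   # 1 MWh = 3,600 MJ -> ~4,000 TJ
capacity_factor = topaz_annual_MWh / (topaz_capacity_MW * 8760)   # ~23%

farms_for_pearl = 700_000 / topaz_annual_TJ       # ~175 Topaz-sized farms
packing_MW_per_km2 = topaz_capacity_MW / topaz_area_km2           # 22 MW/km2
```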

All this means that our net zero emissions world needs to see the construction of some 100 large scale hydrocarbon synthesis plants, together with air extraction facilities, hydrogen and carbon monoxide storage for night time operation of the reactors and huge solar arrays. This could meet all the future aviation needs and would also produce lighter and heavier hydrocarbons for various other applications where electricity is not an option (e.g. chemical feedstock, heavy marine fuels). In 2015 money, the investment would certainly run into the trillions of dollars.

Recent news from the International Energy Agency (IEA) has shown that the rise in global CO2 emissions from the energy system stalled in 2014. This was unusual on two counts – first that it happened at all and second that it happened in a year not linked with recession or low economic growth as in 1992 and 2009. In fact the global economy expanded by about 3%.

Information is scant at this point, but the IEA have apparently determined this using their Sectoral Approach (below, through to 2014), which has been flattening for a few years relative to their Reference Approach (following chart, ends at 2012). The Reference Approach and the Sectoral Approach often have different results because the Reference Approach is top-down using a country’s energy supply data and has no detailed information on how the individual fuels are used in each sector. One could argue that the Reference Approach is more representative of what the atmosphere sees, in that apart from sequestered carbon dioxide and products such as bitumen, the whole fossil energy supply eventually ends up as atmospheric carbon dioxide. The Reference Approach therefore indicates an upper bound to the Sectoral Approach, because some of the carbon in the fuel is not combusted but will be emitted as fugitive emissions (as leakage or evaporation in the production and/or transformation stage). No information has been provided by the IEA at this point as to the Reference Approach data for 2013 and 2014.

Global Energy System Emissions

Reference vs. Sectoral IEA

Putting to one side this technical difference, the flattening trend does represent a possible shift in global emissions development and it has certainly got many observers excited that this may well be so. If this is the case, what is driving this change and what might the outlook be?

It is clear that many governments are now intervening in domestic energy system development. There are incentives and mandates for renewable energy, enhanced efficiency programmes and some level of carbon pricing in perhaps a quarter of the global energy system, albeit at a fairly low level. More recently in China there has been a strong government reaction to air quality issues, which has given rise to some reduction in coal demand, particularly around major cities. But there is another factor as well and that is price – it is perhaps the overwhelming factor in determining fossil fuel usage and therefore setting the level of emissions. Price drives conservation, efficiency, the use of alternatives and therefore demand. Many of the aforementioned energy policy initiatives have been implemented during the recent decade or so of sharply rising energy prices.

A chart of the oil price (2013 $, as a proxy for energy prices) and global CO2 emissions going back to 1965 illustrates that big price fluctuations do seem to have an impact on emissions. Although emissions have risen throughout the period, sharp energy price excursions have led to emissions dips and plateaus as energy demand is impacted and similarly, price falls have led to resurgence in emissions. This isn’t universally true – certainly from 2004 to 2008 the very strong demand from China in particular was seemingly unaffected by the rising cost of energy, although the end of that period saw a global recession and a very visible dip in demand.

Oil price vs. Emissions

The latter part of 2014 brought with it a sharp reduction in energy prices (2015 is illustrative in the chart at $55 per barrel). With a much lower fossil energy price, demand may rise and the incentive for efficiency and the deployment of alternatives could well be impacted, although there may be some lag before this becomes apparent. The combination of these factors could therefore see emissions take yet another jump, but it is too early to see this in the data. 2015 emissions data might show the first signs of this.

There is of course continued upward pressure on emissions as well, such as the growth in coal use that is now underway in India. Over the three year period to the end of 2014, coal capacity increased from 112 GW to nearly 160 GW. This is the equivalent of some 300 million tonnes of CO2 per annum. By contrast, a five year period from 2002 to 2007 saw only 10 GW of new coal capacity installed in that country. Although India is installing considerable solar capacity, coal fired generation is likely to continue to grow rapidly. One area of emissions growth that is not being immediately challenged by a zero emissions alternative is transport. The automobile, bus, truck and aviation fleets are all expanding rapidly in that country.
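The coal arithmetic above can be sketched as follows; the capacity factor and emissions intensity here are my own illustrative assumptions, not figures from the text:

```python
# Rough CO2 estimate for ~48 GW of new Indian coal capacity
new_capacity_GW = 160 - 112          # ~48 GW added over three years
capacity_factor = 0.75               # assumed for new baseload coal plant
intensity_tCO2_per_MWh = 0.95        # assumed fleet-average emissions intensity

annual_generation_MWh = new_capacity_GW * 1000 * 8760 * capacity_factor
annual_emissions_Mt = annual_generation_MWh * intensity_tCO2_per_MWh / 1e6
```

With those assumptions the new capacity works out at roughly 300 million tonnes of CO2 per annum, consistent with the figure quoted above.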

The other big uncertainty is China, where local air quality concerns are catalysing some restructuring in their energy system. Certain factories and power plants that are contributing most to the local problems around cities such as Beijing and Shanghai are being shut, but there is still huge development underway across vast swathes of the country. Some of this is a replacement for the capacity being closed around the cities, with electricity being transported through ultra high voltage grids that now run across the country. Gas is becoming a preferred fuel in metropolitan areas, but some of that gas is being synthetically produced from coal in other regions – a very CO2 intensive process. The scale of this is limited at the moment, but if all the current plans are actually developed this could become a large industry and therefore a further significant source of emissions.

As observers look towards Paris and the expectation of a global deal on climate, the current pause in emissions growth, while comforting, may be a false signal in the morass of energy system data being published. Ongoing diligence will be required.

Comparing apples with oranges

The Climate Group has posted an interesting story on its website and has been tweeting a key graph from the piece of work (below) with the attached text saying “From 2000 to 2012, wind and solar energy increased respectively 16-fold and 49-fold”.

Climate Group Image

The story is headed “Wind and Solar Power is Catching up with Nuclear” and argues, correctly, that the global installed capacity of these two new sources of electricity is catching up with nuclear. Although the article concludes with the sobering reality that actual generation from wind and solar is still just a fraction of that from nuclear, the headline and certainly the tweets are somewhat misleading.

Both wind and solar have very low on-stream factors, something like 30% and 20% respectively in the USA, whereas nuclear is close to 90%. This means that although 1 GW of solar can deliver up to 1 GW of output, this is highly intermittent, needs considerable backup and results in an average output of only 200 MW (with a low of zero half the time). By contrast a 1 GW nuclear power station is on stream most of the time and delivers about 1 GW 24/7 throughout the year. Therefore, comparing solar or wind capacity with nuclear capacity gives little insight into the actual energy being generated, which is really the point of any comparison in the first instance. The global generating picture actually looks like this (Source: BP Statistical Review of World Energy 2014);
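Put as a simple calculation, the difference between installed capacity and average delivery looks like this:

```python
# Average output implied by typical on-stream (capacity) factors
def average_output_MW(nameplate_GW: float, on_stream_factor: float) -> float:
    """Mean power delivered over a year for a given nameplate capacity."""
    return nameplate_GW * 1000 * on_stream_factor

solar_avg = average_output_MW(1.0, 0.20)    # 1 GW of solar -> ~200 MW average
wind_avg = average_output_MW(1.0, 0.30)     # 1 GW of wind -> ~300 MW average
nuclear_avg = average_output_MW(1.0, 0.90)  # 1 GW of nuclear -> ~900 MW average
```

On a generation basis, 1 GW of nuclear is worth roughly 4.5 GW of installed solar, which is why capacity comparisons flatter the renewables numbers.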

Generation by source

Wind generation, and particularly solar generation, is still only a fraction of nuclear generation, even with the global nuclear turndown following Fukushima. Interestingly, both wind and solar are rising at only about the same rate that nuclear did in the 1960s and 1970s, so we might expect another 30+ years before they reach the level that nuclear is at today, at least in terms of actual generation.

The comparison of capacity rather than generation has become a staple of the renewable energy industry. Both coal and nuclear provide base load electricity and have very high on-stream factors. Depending on the national circumstances, natural gas may be base load and therefore also have a high on-stream factor, but in the USA it has been closer to 50% as it is quite often used intermittently to match the variability of renewables and the peaks in demand from customers (e.g. early evenings when people come home from work and cook dinner). This is because of the ease with which natural gas generation can be dispatched into or removed from the grid. However, natural gas is also becoming baseload in some parts of the USA given the price of gas and the closure of older coal plants.

Capacity comparisons look great in that they can make it appear that vast amounts of renewable energy are entering the energy mix when in fact that is not the case, at least not to the extent implied. Renewable energy will undoubtedly have its day, but like nuclear and even fossil fuels before it, a generation or two will likely have to pass before we can note its significant impact and possibly even its eventual dominance in the power sector.

As governments struggle to find practical routes forward with positive outcomes for CO2 mitigation, attention is turning to dealing with other greenhouse gases, particularly methane. A number of methane emission initiatives are now underway or being planned, for example those within the Climate and Clean Air Coalition.

Methane seems like an obvious place to start. Anthropogenic emissions are about 250 million tonnes per annum, and a tonne of methane emitted now has a short term (20 year) impact on atmospheric warming some 80 times greater than a tonne of CO2. But methane breaks down in the atmosphere quite quickly, with a ‘half life’ of about seven years, so on a 100 year basis (with the methane effectively gone) the impact of a tonne of methane emitted now is much smaller relative to a tonne of CO2: the factor falls to about 28. Even with the lower multiplier, reducing methane still seems a worthwhile endeavour. While agricultural methane may require real lifestyle changes to bring down, e.g. less meat consumption, industrial methane emission management looks like something that can be done. Often mitigation may be a case of good housekeeping, such as monitoring and maintaining pipelines to minimize fugitive emissions.
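The decay behaviour can be sketched with a simple exponential model. This is a toy illustration using the roughly seven-year half-life quoted above, not the full atmospheric chemistry:

```python
import math

HALF_LIFE_YEARS = 7.0
DECAY_RATE = math.log(2) / HALF_LIFE_YEARS

def methane_fraction_remaining(years: float) -> float:
    """Fraction of an emitted methane pulse still in the atmosphere."""
    return math.exp(-DECAY_RATE * years)

after_20y = methane_fraction_remaining(20)    # ~14% of the pulse left
after_100y = methane_fraction_remaining(100)  # effectively gone
```

This is why the 20-year multiplier (~80) is so much larger than the 100-year figure (~28): most of the methane's warming is delivered early, before the pulse decays away.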

While most articles about methane simply use the GWP (Global Warming Potential over 100 years) of 28 and present data and economics on that basis, a few dig deeper. Of note is the work of the Oxford Martin School, which presents a number of policy papers on methane. In the more popular press, Burning Question author Duncan Clark has written about methane.

Both follow a similar line of reasoning. They note that methane and CO2, while both greenhouse gases, behave very differently with regard to their impact on the actual goal of the UNFCCC: limiting eventual peak warming to 2°C or less. As noted, methane is a relatively short-lived gas in the atmosphere, whereas CO2 is a long-lived gas that accumulates there. This adds another dimension to the issue: time. When each gas is emitted relative to the other, and the shape of any reduction pathway for one relative to the other, both matter. Duncan Clark describes this in the following way:

The difference between carbon dioxide and methane is a bit like the difference between burning coal and paper on a fire. Both generate plenty of heat but whereas the coal burns steadily for a long time and accumulates if you keep adding more, the paper gives an intense burst of warmth but one that quickly disappears once you stop adding it.

Their conclusions are similar. Peak warming is largely dictated by the cumulative amount of CO2 emitted over time. If a certain amount of methane is also emitted, the timing of that emission is what matters. Methane that is emitted today will immediately impact the rate of warming, but long before we reach peak warming (assuming CO2 emissions are eventually brought under control and warming actually peaks) the methane will have left the atmosphere and been converted to carbon dioxide, in which case its impact on peak warming is based only on the CO2 that remains from the methane. We may have accelerated warming in the short term, but peak warming will remain largely unchanged. In this case, the warming potential of methane expressed in terms of its impact on peak temperature falls sharply and comes close to the stoichiometric conversion of methane to carbon dioxide, which is about 3, i.e. a tonne of methane when combusted or oxidised in the atmosphere gives rise to about three tonnes of carbon dioxide. Conversely, methane that is emitted much later, say when we are close to peak warming, will directly add to whatever level of temperature we happen to reach.

Does this mean that we shouldn’t bother about methane today? Unfortunately the answer is an ambiguous one. If we are confident that the world will quickly and decisively reduce CO2 emissions then of course we must also be reducing methane and other greenhouse gases as well. If we don’t, then we will still have a methane problem at the time peak CO2 induced warming occurs, in which case we will almost certainly overshoot our peak warming goal, i.e. 2°C, with the additional warming from the other greenhouse gases. But if we don’t address the CO2 issue, then addressing the methane issue now doesn’t offer a lot of benefit for later on. Instead, the benefit that we do get is less short term warming as we will have removed the intense burst of warming that the methane is providing.

Of course, since we don’t know how well or otherwise the task of CO2 mitigation will proceed (despite the fact that the track record is pretty poor), we feel obliged to act on methane now in case the CO2 mitigation picks up.  At least we know that we will slow down the near term rate of warming by doing so.

Not surprisingly, it turns out that dealing with methane and atmospheric warming is just as complex as dealing with CO2. In the case of CO2, many are convinced that steps such as efficiency measures can curtail warming, when all they are probably doing is geographically or temporally shifting the same CO2 emissions such that the eventual accumulation in the atmosphere is unchanged. In the case of methane, treating it as if it were interchangeable with CO2 but with a convenient and high multiplier may make us feel that modest effort is delivering great benefit, when it may be the case that little benefit is being delivered at all.

In both cases it is the science that we have to look at to decide on the appropriate strategy, not expediency and certainly not sentiment.

With the USA (at a Federal level) going down the regulatory route instead, the Australian Prime Minister touring the world arguing against it and the UNFCCC struggling to talk about it, perhaps it is time to revisit the case for carbon pricing. Economists have argued the case for carbon pricing for over two decades and in a recent post I put forward my own reasons why the climate issue doesn’t get solved without one. Remember this;

Climate formula with carbon price (words)

Yet the policy world seems to be struggling to implement carbon pricing and more importantly, getting it to stick and remain effective. Part of the reason for this is a concern by business that it will somehow penalize them, prejudice them competitively or distort their markets. Of course there will be an impact, that’s the whole point, but nevertheless the business community should still embrace this approach to dealing with emissions. Here are the top ten reasons why;

Top Ten

  1. Action on climate in some form or other is an inconvenient but unavoidable inevitability. Business and industry doesn't really want direct, standards-based regulation. Such regulations can be difficult to deal with, offer limited flexibility for compliance and may be very costly to implement for some legacy facilities.
  2. Carbon pricing, either through taxation or cap and trade offers broad compliance flexibility and provides the option for particular facilities to avoid the need for immediate capital investment (but still comply with the requirement).
  3. Carbon pricing offers technology neutrality. Business and industry is free to choose its path forward rather than being forced down a particular route or having market share removed by decree.
  4. Pricing systems offer the government flexibility to address issues such as cross border competition and carbon leakage (e.g. tax rebates or free allocation of allowances). There is a good history around this issue in the EU, with trade exposed industries receiving a large proportion of their allocation for free.
  5. Carbon pricing is transparent and can be passed through the supply chain, either up to the resource holder or down to the end user.
  6. A well implemented carbon pricing system ensures even (economic) distribution of the mitigation burden across the economy. This is important and often forgotten. Regulatory approaches are typically opaque when it comes to the cost of implementation, such that the burden on a particular sector may be far greater than initially recognized. A carbon trading system avoids such distortions by allowing a particular sector to buy allowances instead of taking expensive (for them) mitigation actions.
  7. Carbon pricing offers the lowest cost pathway for compliance across the economy, which also minimizes the burden on industry.
  8. Carbon pricing allows the fossil fuel industry to develop carbon capture and storage, a societal “must have” over the longer term if the climate issue is going to be fully resolved. Further, as the carbon pricing system is bringing in new revenue to government (e.g. through the sale of allowances), the opportunity exists to utilize this to support the early stage development of technologies such as CCS.
  9. Carbon pricing encourages fuel switching in the power sector in particular, initially from coal to natural gas, but then to zero carbon alternatives such as wind, solar and nuclear.
  10. And the most important reason;

It’s the smart business based approach to a really tough problem and actually delivers on the environmental objective.

Scaling up for global impact


A visit to Australia offers a quick reminder of the scale to which Liquefied Natural Gas (LNG) production has grown over recent years. This was a technology that first appeared in the 1960s and scaled up over the 1970s and 1980s to some 60 million tonnes per annum globally. As energy demand soared in the 1990s and 2000s, LNG production quickly rose again to around 300 million tonnes per annum today and could reach 500 million tonnes per annum by 2030 (see Ernst & Young projection below).

2012OGJcolors

Flying into Australia we crossed the coast near Dampier in Western Australia, which is currently “Resource Central” for Australia. The waters were dotted with tankers (I counted 14 on the side of the plane I was sitting on) waiting for loading, many of which had the distinctive LNG cryogenic tanks on their decks. Two days later the first shipment of LNG from the new Papua New Guinea project took place and this received considerable coverage in the Australian media. Clearly LNG is booming in this region, with even more to come. Most major oil and gas companies have projects in development and there are several LNG “startups” considering projects.

This is a great example of technology scale up, which is going to be key to resolving the climate issue by progressively shifting energy production and use to near zero emissions over the course of this century. Carbon capture and storage (CCS) is one of the technologies that needs to be part of that scale up if we are serious about net zero emissions in the latter part of the century.

There are many parallels between LNG production and CCS which may offer some insight into the potential for CCS. Both require drilling, site preparation, pipelines, gas processing facilities, compression and gas transport, although LNG also includes a major cryogenic step which isn’t part of the CCS process.

LNG production and CCS are both gas processing technologies so the comparison between them needs to be on a volume basis, not on a tonnes basis. CO2 has a higher molecular weight than CH4 (methane), so the processing of a million tonnes of natural gas is the same as nearly 3 million tonnes of CO2. As such, the production scale up to 500 million tonnes of LNG by 2030 could be equated to nearly 1.5 billion tonnes of CO2 per annum in CCS terms, which is a number that starts to be significant in terms of real mitigation. The actual scale up from today to 2030 is projected to be 200-250 million tonnes of LNG, which in CCS terms is about 700 million tonnes of CO2.
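The tonnage conversion above is just the molar mass ratio of the two gases, since equal volumes of gas contain equal numbers of molecules:

```python
# Molar masses (g/mol)
M_CH4 = 12.0 + 4 * 1.0     # 16
M_CO2 = 12.0 + 2 * 16.0    # 44

volume_basis_ratio = M_CO2 / M_CH4      # 2.75 t of CO2 per t of CH4 handled

lng_2030_Mt = 500                       # projected LNG production by 2030
ccs_equivalent_Mt = lng_2030_Mt * volume_basis_ratio   # ~1,375 Mt/yr

lng_scaleup_Mt = 250                    # projected LNG growth to 2030
ccs_scaleup_equiv_Mt = lng_scaleup_Mt * volume_basis_ratio  # ~690 Mt/yr
```

The 500 million tonnes of LNG equates to nearly 1.5 billion tonnes of CO2 in CCS terms, and the projected scale-up alone to roughly 700 million tonnes, as quoted above.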

This is both a good news and bad news story. The scale up of LNG shows that industrial expansion of a complex process involving multiple disciplines from across the oil and gas industry is entirely possible. LNG took two to three decades to reach 100 million tonnes, but less than ten years to repeat this. In the following ten years (2010-2020) production should nearly double again with an additional 200 million tonnes of capacity added. These latter rates of scale up are what we need now for technologies such as CCS, but we are clearly languishing in the early stages of deployment, with just a few million tonnes of production (if that) being added each year.

What is missing for CCS is the strong commercial impetus that LNG has seen over the last fifteen years as global energy demand shot up. With most, if not all, of the technologies needed for CCS already widely available in the oil and gas industry, it may be possible to shorten the initial early deployment stage which can last 20 years (as it did for LNG). If this could be achieved, CCS deployment at rates of a billion tonnes per decade, for starters, may be possible. This is the minimum scale needed for mitigation that will make a tangible difference to the task ahead.

The commercial case for CCS rests with government through mechanisms such as carbon pricing underpinned by a robust global deal on mitigation. That of course is another story.

Two sides to every coin

As we near the middle of the year and therefore have, at least in the Northern Hemisphere (i.e. Germany), long days with lots of sunshine, renewable energy statistics start to appear in the media and the renewables distortion field enveloping much of Europe expands just that little bit more. The first of these I have come across was posted by a number of on-line media platforms and highlighted the fact that on Sunday May 11th Germany generated nearly three quarters of its electricity from renewable sources. Given the extraordinary level of solar and wind deployment in recent years, it shouldn’t be a surprise that this can happen. But it’s rather a one sided view of the story.

The flip side is of course December and January, when the solar picture looks very different. The Fraunhofer Institute for Solar Energy Systems ISE uses data from the EEX Platform to produce an excellent set of charts showing the variability of renewable energy, particularly solar and wind. The monthly data for solar shows what one might expect at northern latitudes, with very high solar output in summer and a significant tailing off in winter. The ratio between July and January is a factor of 15 on a monthly average basis.

Annual solar production in Germany 2013j

But wind comes to the rescue to some extent, firstly with less overall monthly variability and secondly with higher levels of generation in the winter which offsets quite a bit of the loss from solar.

Annual wind production in Germany 2013

The combination of the two provides a more stable renewable electricity supply on a monthly basis, with the overall high to low production ratio falling to about 2. One could argue from this that in order to get some gauge of the real cost of renewable energy in Germany, monthly production of 6 TWh of electricity requires about 70 GW of solar and wind (average installed capacity in 2013, roughly 50% each). By comparison, 70 GW of natural gas CCGT online for a whole month at its rated capacity would deliver 51 TWh of electricity, nearly a factor of 9 more than the same amount of installed solar plus wind. To be fair, some of that 70 GW of natural gas capacity would have downtime for maintenance and the like, but even with a 20% capacity loss, bringing output down to 40 TWh, the delivery factor is still about 7. For solar on its own it would be closer to 10 in Germany.
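The delivery-factor arithmetic runs as follows, using the round figures above:

```python
installed_renewables_GW = 70     # average installed solar + wind, Germany 2013
monthly_renewable_TWh = 6        # typical monthly solar + wind production

HOURS_PER_MONTH = 8760 / 12      # 730

# A 70 GW CCGT fleet running at rated capacity for a whole month
ccgt_monthly_TWh = installed_renewables_GW * HOURS_PER_MONTH / 1000   # ~51 TWh

delivery_factor = ccgt_monthly_TWh / monthly_renewable_TWh            # ~8.5
delivery_factor_with_downtime = 0.8 * delivery_factor                 # ~6.8
```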

Annual solar + wind production in Germany 2013

But this isn’t the end of the story. Weekly and daily data shows much greater intermittency. On a weekly basis the high to low production ratio rises to about 4, but on a daily basis it shoots up to 26.

Annual solar + wind production in Germany 2013 by week

 

Annual solar + wind production in Germany 2013 by day

Fortunately, Germany has an existing and fully functioning fossil fuel + nuclear baseload generation system, which can easily take up the slack when intermittency brings renewable generation to a standstill. But the cost of this backup is almost never included in an assessment of the cost of renewable power generation. In Germany's case this is a legacy system and therefore taken for granted, but for countries now building new capacity and extending the grid to regions that previously had nothing, it is a real cost that must be considered.

This is perhaps an anti-leapfrog argument (countering the idea that regions with no grid or existing capacity can leapfrog straight to renewables). The German experience shows that the shift to renewables is much easier when you already have a fully depreciated fossil and nuclear fleet and your demand is flat. Otherwise, this looks like a potentially costly story that relies on storage technologies not yet in mainstream commercial use.

____________________________

As a complete aside, I came across this chart, which highlights the flip side of rising CO2 levels in the ocean and atmosphere due to the combustion of fossil fuels: falling levels of oxygen. This is a very small effect (given the amount of oxygen in the atmosphere) and certainly no cause for concern, but it is entirely measurable, which is the interesting bit. The chart is produced by Ralph Keeling, son of the originator of the CO2 Keeling Curve.

Falling oxygen levels

 

In my previous post I responded to an article by environmentalist Paul Gilding where he argued that the rate of solar PV deployment meant it was now time to call “Game over” for the coal, oil and gas industries. There is no doubt that solar PV uptake is faster than most commentators imagined (but not Shell in our Oceans scenario) and it is clear that this is starting to change the landscape for the utility sector, but talk of “death spirals” may, in the words of Mark Twain, be an exaggeration.

In that same article, Gilding also talks about local battery storage via electric cars and the drive to distributed systems rather than centralized ones. He clearly envisages a world of micro-grids, rooftop solar PV, domestic electricity storage and the disappearance of the current utility business model. But there is much more to the energy world than what we see in central London or Paris today, or for that matter in rural Tasmania where Paul Gilding lives. It all starts with unappealing, somewhat messy but nevertheless essential processes such as sulphuric acid, ammonia, caustic soda and chlorine manufacture (to name but a few). Added together, about half a billion tonnes of these four products are produced annually. These are energy intensive production processes operating on an industrial scale, but largely hidden away from daily life. They are present in, or play a role in the manufacture of, almost everything we use, buy, wear, eat and do. These core base chemicals also rely on various feedstocks. Sulphuric acid, for example, is made from the sulphur found in oil and gas, removed during the various refining and treatment processes. Although there are other viable sources of sulphur, they have long been abandoned for economic reasons.

dow-chemical-plant-promo

The ubiquitous mobile phone (which everything now seems to get compared to when we talk about deployment) and the much talked about solar PV cell are just the tip of a vast energy consuming industrial system, built on base chemicals such as chlorine, but also making products with steel, aluminium, nickel, chromium, glass and plastics (to name but a few). The production of these materials alone exceeds 2 billion tonnes annually. All of this is of course made in facilities with concrete foundations, using some of the 3.4 billion tonnes of cement produced annually. The global industry for plastics is rooted in the oil and gas industry as well, with the big six plastics (see below) all starting their lives in refineries that do things like converting naphtha from crude oil to ethylene.

The big six plastics:

  • polyethylene – including low density (PE-LD), linear low density (PE-LLD) and high density (PE-HD)
  • polypropylene (PP)
  • polyvinyl chloride (PVC)
  • polystyrene solid (PS), expandable (PS-E)
  • polyethylene terephthalate (PET)
  • polyurethane (PUR)

All of these processes are also energy intensive, requiring utility scale generation, high temperature furnaces, large quantities of high pressure steam and so on. The raw materials for much of this come from remote mines, another facet of modern life we no longer see. These in turn are powered by utility scale facilities, huge draglines for digging and vast trains for moving the extracted ores. An iron ore train in Australia might be made up of 336 cars, move 44,500 tonnes of iron ore, stretch over 3 km and utilize six to eight locomotives, including intermediate remote units. These locomotives often run on diesel fuel, although many around the world run on high voltage electric systems, e.g. the 25 kV AC iron ore line from Russia to Finland.

The above is just the beginning of the industrial world we live in, built on a utility scale and powered by utilities burning gas and coal. These bring economies of scale to everything we do and use, whether we like it or not. Not even mentioned above is the agricultural world which feeds 7 billion people. The industrial heartland will doubtless change over the coming century, although the trend since the beginning of the industrial revolution has been towards bigger, more concentrated pockets of production, with little sign of a more distributed model. The advent of technologies such as 3D printing may change the end use production step, but even the material that gets poured into the tanks feeding that 3D machine probably relied on sulphuric acid somewhere in its production chain.

Late last week saw the public release of the new Shell energy scenarios, under the heading “New Lens Scenarios”. This is always a much anticipated moment in Shell, a bit like the Olympics as it only happens every few years – the last ones were released in 2008. In the interim many people across the company get involved in the scenario process through workshops and meetings, but the core team manages to keep the final product under wraps until the big day. While we might get an early sniff of the story, the final product always contains new themes and ideas, designed not to recast the status quo paradigm, but to challenge and surprise where possible.

NLS

So it is with Mountains and Oceans, the two new scenarios that look out to the very end of this century, a first in terms of “viewing distance”. I won’t attempt to tell the whole scenario story here; better to direct you to the website, here. But the climate stories buried within them are of real interest and should act as a wake-up call for governments around the world.

In my post last week I discussed the idea that the CO2 issue is best thought of as a stock problem, in other words fossil CO2 released from the “geosphere” is accumulating in the ocean/atmosphere system and adding to the background greenhouse warming that makes this planet habitable. Roughly, each additional trillion tonnes of carbon that is released makes the planet another 2°C hotter.

Towards the trillionth tonne

This has been shown by Allen et al., “Warming caused by cumulative carbon emissions towards the trillionth tonne”, Nature Vol 458, 30 April 2009. The key chart is shown below.

Peak warming vs cumulative carbon

This means that the focus of policymakers should be on the cumulative emissions of carbon over the long term, rather than on actual emissions at any given date. As such, climate policy needs to focus on limiting the accumulation, rather than simply slowing the rate of emissions. For example, using energy more efficiently for the same level of production or GDP, or supplementing the energy mix with renewable resources, could well reduce annual emissions, but may do nothing to limit the accumulation over time. More renewable energy also gives policymakers a sense that they are addressing the problem of how to meet the surging demand for energy and also manage emissions, but over the long run it will just take a little longer to reach the same accumulation of carbon. Using up current proven reserves of oil, gas and coal (about 900 billion tonnes of carbon), whether over 50 years, 60 years or 90 years, still delivers the same climate result.
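The stock-versus-rate point can be made numerically. A minimal sketch, assuming the roughly linear 2°C per trillion tonnes of carbon relationship cited above:

```python
# Illustrative stock-versus-rate arithmetic: burning the same proven reserves
# more slowly changes the dates, not the cumulative total or the warming.
reserves_gtc = 900              # proven reserves, billion tonnes of carbon
warming_per_trillion_tc = 2.0   # ~2 °C per 1,000 GtC (Allen et al., 2009)

totals = []
for years in (50, 60, 90):
    annual = reserves_gtc / years     # the emission *rate* differs...
    cumulative = annual * years       # ...but the released *stock* does not
    warming = warming_per_trillion_tc * cumulative / 1000
    totals.append(cumulative)
    print(f"{years} yr: {annual:.1f} GtC/yr, "
          f"cumulative {cumulative:.0f} GtC, ~{warming:.1f} °C")
```

Whatever the timescale, the accumulation is the same 900 GtC and the implied warming contribution the same ~1.8°C.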

Towards two trillion tonnes 

By contrast, deploying carbon capture and storage (CCS) and eventually linking it with any use of fossil resources resolves the accumulation issue. The New Lens Scenarios demonstrate this point very well.

In the Mountains scenario, which sees natural gas use grow to become the backbone of the world energy supply, the politics of the day allows CCS to start serious deployment in the 2030s and rapidly increase to peak deployment in the 2060s. As the energy mix shifts later in the century, CCS use declines somewhat. By 2100, emissions are effectively zero, with the prospect of some drawdown of atmospheric CO2 in the 22nd Century as CCS is combined with the use of biomass for energy. Importantly, cumulative emissions are capped and the amount of warming is limited, albeit not at 2°C.

Mountains CCS

The Oceans scenario tells a different story. The underlying politics and social trends see more focus on renewable energy early on, with CCS not seriously deployed until 20-30 years later than Mountains and never growing to the same level. Although solar PV becomes very substantial in the energy mix, the time it takes to win the day allows cumulative carbon emissions to grow well past the Mountains scenario, adding to the potential warming by the end of the century. Oceans also caps the accumulation by 2100.

Oceans CCS 

Both scenarios make extensive use of CCS, but delaying deployment while lured by the attractiveness of a high renewable energy future has a real downside: more warming.

We can see the evidence of government focus on renewable energy in the recent NER 300 funding in Europe. Despite the goal of establishing a CCS demonstration programme, no funds were delivered to CCS projects in Europe; the money was granted instead to renewable energy projects. Green politics is fast becoming a distraction from the real climate priority of managing cumulative emissions, which requires CCS.

The scenarios are designed to tell stories and get us to think about the implications of the energy choices that we make. They are not forecasts or predictions, but they do represent viable alternative pathways which are economically, socially and technologically feasible. Enjoy the challenges posed.

A CCS project for Canada

I don’t normally use this blog to write about Shell, but last week saw an announcement that is very relevant and worthy of some further elaboration. Shell Canada, as operator of the Athabasca Oil Sands Project joint venture (with Chevron and Marathon), announced plans to proceed with a carbon capture and storage project (Quest) within the current oil sands project. This is a project that has been under discussion in one form or another since almost day one of production from the facilities, but the lack of a workable economic justification has been the major impediment to progress.

In recent years the story has changed though. The Government of Alberta has developed a carbon pricing system which provides a level of underlying support for the project. The World Bank “State and Trends of the Carbon Market 2012” report describes the Alberta system (on page 89) as follows:

Alberta is Canada’s largest greenhouse gas (GHG) emitting province, accounting for 34% of the country’s total GHG emissions in 2010. This represents 235 MtCO2e, a 41% increase from 1990 levels, driven primarily by increased production activity in its oil and gas sector. On July 1, 2007, Alberta launched a mandatory GHG emission intensity-based mechanism, enacting the first GHG emissions legislation in Canada. Approximately 100 entities with annual emissions exceeding 100,000 tCO2e (100 ktCO2e) are required by the legislation to reduce their emission intensity by 12% from average 2003–2005 levels. Entities that do not meet reduction requirements in a given year may choose to meet these obligations by:

  • Trading “Emissions Performance Credits” (EPC) that are awarded to covered entities that reduce emissions below their set target;
  • Paying C$15 (US$15.2) per tCO2e into a technology fund; and/or
  • Purchasing Alberta-based offsets issued by the Alberta Offsets Registry under an approved protocol.
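As a rough illustration of how such an intensity-based obligation works, the sketch below prices a hypothetical facility’s shortfall at the technology fund rate. Only the 12% target and the C$15 fund price come from the description above; the facility’s output and intensities are invented for illustration:

```python
# Sketch of the Alberta intensity-based obligation quoted above. The 12%
# reduction and C$15/tCO2e fund price come from the World Bank description;
# the facility's output and intensities are hypothetical.
baseline_intensity = 0.80    # tCO2e per unit of output, 2003-2005 average
target_intensity = baseline_intensity * (1 - 0.12)   # 12% below baseline

output = 1_000_000           # units produced this year (hypothetical)
actual_intensity = 0.78      # tCO2e per unit actually achieved (hypothetical)

# Tonnes still owed after the achieved intensity falls short of the target.
shortfall_t = max(0.0, (actual_intensity - target_intensity) * output)
print(round(shortfall_t))    # → 76000 tCO2e

fund_price = 15.0            # C$ per tCO2e paid into the technology fund
print(round(shortfall_t * fund_price))   # compliance cost ceiling, C$
```

Because the fund payment is always available, C$15/tCO2e acts as an effective price cap, which is why traded credit prices have stayed close to it.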

N.B. The World Bank chart below shows the number of offsets retired annually through the system with an estimate for 2011 (not announced at the time the report was published). The price has remained very close to the technology fund alternative.

As such, this system provides an underlying base level of support of some CAN$15 per tonne of CO2 for the CCS project. In addition, in 2011 the Alberta Government announced a further support mechanism for CCS through the system, which now grants a second bonus credit for CCS projects meeting certain criteria. The Canadian based Pembina Institute published the diagram below, challenging the environmental integrity of the approach, but it also gives a simple explanation of how the mechanism works. In a completely closed system the environmental integrity argument would be correct, but in the open ended Alberta system, with payment into a technology fund as a compliance option, the argument is hardly valid.

A further, but much less quantifiable, price signal is that coming from the California Low Carbon Fuel Standard (LCFS) and to a much lesser extent the EU Fuel Quality Directive (FQD). These mechanisms place a carbon footprint target on the fuel in the transport sector with a starting baseline about equal to the carbon footprint of oil products processed through a conventional production and refining route and then declining by about 1% per annum. When oil sands products arrive in these markets, their higher carbon footprint generates a penalty on the use of the component in the fuel pool which manifests itself as a price on carbon emissions associated with the production and use of the product. Of course the product may be targeted at other markets, but even a small location constraint on a product can lead to a trading discount in some market circumstances. This is also a carbon price of sorts. In any case, the prevalence of LCFS type approaches could well increase over the years ahead, which could penalize oil sands relative to some other production routes.
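The way a fuel standard converts a carbon-intensity gap into an implied cost per barrel can be sketched as follows. Every number here is an illustrative assumption, not an actual LCFS or FQD baseline value or credit price:

```python
# Hedged sketch: how a declining fuel carbon-intensity standard turns a
# carbon-footprint gap into an implied cost per barrel. All numbers below
# are illustrative assumptions, not actual LCFS or FQD values.
ci_standard = 95.0     # gCO2e/MJ, the standard's baseline (illustrative)
ci_oil_sands = 105.0   # gCO2e/MJ, higher-footprint crude (illustrative)
energy_per_bbl = 6100  # MJ of fuel energy per barrel (approximate)
credit_price = 40.0    # $ per tCO2e of deficit (illustrative)

# Deficit generated per barrel sold into the regulated market.
deficit_t = (ci_oil_sands - ci_standard) * energy_per_bbl / 1e6   # tCO2e/bbl
print(round(deficit_t * credit_price, 2))   # → 2.44 ($ per barrel)
```

A few dollars per barrel is small against the oil price, but as the lines above note, even a modest location constraint can translate into a trading discount.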

The combination of Provincial and Federal grants, a Province based carbon pricing system and its bonus credits and consideration of the role played by fuel standards in export markets in the future has allowed the project to get the green light. This should be seen as good news. CCS is the critical technology for real long term reductions in emissions – I have argued in the past that it may well be the only technology, so supporting it now and getting at least some early projects up and running should be an essential policy goal. Support remains a dilemma for policy makers, particularly in challenging economic times. However, there is a valid role to play here in that almost every carbon roadmap to 2030 and beyond shows CCS being required, yet there is currently no carbon price signal strong enough in any jurisdiction to actually build one now and therefore begin the process of demonstration and commercialization.

The project itself is medium in scale, storing about one million tonnes per annum of CO2 coming from the Hydrogen Manufacturing Unit (HMU) linked to the oil sands bitumen upgrader. The HMU produces hydrogen by steam reforming of natural gas, with a nearly pure CO2 stream as a byproduct. At high temperatures (700–1100 °C), steam (H2O) reacts with methane (CH4) to yield syngas.

 CH4 + H2O → CO + 3 H2

 In a second stage, additional hydrogen is generated through the lower-temperature water gas shift reaction, typically performed at around 200–450 °C:

 CO + H2O → CO2 + H2

 Heat required to drive the process is supplied by burning some portion of the natural gas. A very simple overview of the process is shown below.
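Adding the two reactions gives the overall chemistry CH4 + 2 H2O → CO2 + 4 H2, which fixes the ratio of byproduct CO2 to hydrogen. A simple mass balance (process CO2 only, excluding the CO2 from fuel gas burned to heat the reformer) gives a sense of scale for the one million tonnes per annum capture figure above:

```python
# Combining the reforming and shift steps above:
#   CH4 + 2 H2O -> CO2 + 4 H2
# Mass balance for the byproduct CO2 stream only; the CO2 from natural gas
# burned to heat the reformer is extra and not counted here.
M_CO2 = 44.01   # g/mol
M_H2 = 2.016    # g/mol

co2_per_h2 = M_CO2 / (4 * M_H2)    # tonnes of CO2 per tonne of H2
print(round(co2_per_h2, 2))        # → 5.46

# Hydrogen output implied by capturing ~1 Mt of process CO2 per year:
kt_h2_per_year = 1_000_000 / co2_per_h2 / 1000
print(round(kt_h2_per_year))       # → 183 (thousand tonnes H2 per year)
```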

 

The capture plant is located in Fort Saskatchewan, approximately 50 km north-east of Edmonton, Alberta. The CO2 will be transported by a 12 inch pipeline to the storage site, approximately 65 km north of the upgrader. The CO2 will be stored in a saline aquifer formation called the Basal Cambrian Sands (BCS). At 2,300 metres below the surface it is some of the deepest sandstone in the region, with multiple caprock and salt seal layers and no significant faulting visible in well or seismic data. The BCS is well below the hydrocarbon bearing formations and potable water zones in the region. Relatively few wells have been drilled into the BCS and none within 10 km of the proposed storage site.

It’s been a long road from initial discussion, to early concept and finally the investment decision last week. But the end result is a real CCS project!