The Conversation


Renewables will be cheaper than coal in the future. Here are the numbers

Wed, 2017-09-27 11:07

In a recent Conversation FactCheck I examined the question: “Is coal still cheaper than renewables as an energy source?” In that article, we assessed how things stand today. Now let’s look to the future.

In Australia, 87% of our electricity generation comes from fossil fuels. That’s one of the highest levels of fossil fuel generation in the world.

So we have important decisions to make about how we'll generate energy as Australia's fleet of coal-fired power stations reaches the end of its operating life, and as we move to decarbonise the economy to meet our climate goals under the Paris Agreement.

What will the cost of coal-fired and renewable energy be in the coming decades? Let’s look at the numbers.

Improvements in technology will make renewables cheaper

As technology improves and economies of scale grow, the initial capital cost of building an energy generator falls; the rate of this decline is known as the “learning rate”. Improvements in technology are expected to reduce the price of renewables more than that of coal in coming years.

The chart below, produced by consulting firm Jacobs Group and published in the recent Finkel review of the National Electricity Market, shows the projected levelised cost of electricity (LCOE) for a range of technologies in 2020, 2030 and 2050.

The chart shows a significant reduction in the cost of solar and wind, and a relatively static cost for mature technologies such as coal and gas. It also shows that large-scale solar photovoltaic (PV) generation, with a faster learning rate, is projected to be cheaper than wind generation from around 2020.

Notes: Numbers in Figure A.1 refer to the average. For each generation technology shown in the chart, the range shows the lowest-cost to the highest-cost project available in Jacobs' model, based on the input assumptions in the relevant year. The average is the average cost across the range of projects; it may not be the midpoint between the highest- and lowest-cost project. Large-scale solar photovoltaic includes fixed plate, and single- and double-axis tracking. Large-scale solar photovoltaic with storage includes three hours' storage at 100 per cent capacity. Solar thermal with storage includes 12 hours' storage at 100 per cent capacity. Cost of capital assumptions are consistent with those used in the policy cases, that is, without the risk premium applied. The assumptions for the electricity modelling were finalised in February 2017 and do not take into account recent reductions in technology costs (e.g. recent wind farm announcements). Source: Independent Review into the Future Security of the National Electricity Market

Wind prices are already falling rapidly. For example: the graph above shows the 2020 price for wind at A$92 per megawatt-hour (MWh). But when the assumptions for the electricity modelling were finalised in February 2017, that price was already out of date.

In its 2016 Next Generation Renewables Auction, the Australian Capital Territory government secured a fixed price for wind of A$73 per MWh over 20 years (or A$56 per MWh in constant dollars at 3% inflation).

In May 2017, the Victorian renewable energy auction set a record low fixed price for wind of A$50-60 per MWh over 12 years (or A$43-51 per MWh in constant dollars at 3% inflation). This is below the AGL price for electricity from the Silverton wind farm of $65 per MWh fixed over five years.
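As a rough check on those constant-dollar figures, here is a minimal Python sketch. It assumes the fixed nominal price is deflated at 3% a year from the start of each contract year and then averaged; the function name and the deflation convention are illustrative assumptions, not the auctions' published methodology, though they roughly reproduce the numbers above.

```python
def constant_dollar_price(nominal, years, inflation=0.03):
    """Average of a fixed nominal contract price deflated to today's dollars,
    deflating from the start of each contract year (an assumed convention)."""
    return sum(nominal / (1 + inflation) ** t for t in range(years)) / years

print(round(constant_dollar_price(73, 20)))  # ~56: ACT wind, A$73/MWh over 20 years
print(round(constant_dollar_price(50, 12)),  # ~43 and ~51: Victorian wind,
      round(constant_dollar_price(60, 12)))  # A$50-60/MWh over 12 years
```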

These long-term renewable contracts are similar to an LCOE, because they extend over a large fraction of the wind farm's lifetime.

The tables and graph below show a selection of renewable energy long-term contract prices across Australia in recent years, and illustrate a gradual decline in wind energy auction results (in constant 2016 dollars), consistent with improvements in technology and economies of scale.

But this analysis is still based on LCOE comparisons – or what it would cost to use these technologies for a simple “plug and play” replacement of an old generator.

Now let's price in the cost of the changes needed across the entire electricity network to support renewables, as well as other factors such as climate change.

Carbon pricing will increase the cost of coal-fired power

The economic, environmental and social costs of greenhouse gas emissions are not included in simple electricity cost calculations, such as the LCOE analysis above. Neither are the costs of other factors, such as the health effects of air particle pollution, or deaths arising from coal mining.

The risk that carbon emissions mitigation policies will be introduced can be indirectly factored into the LCOE of coal-fired power through higher rates for the weighted average cost of capital (in other words, higher interest rates on loans).

The Jacobs report to the Finkel Review estimates that the weighted average cost of capital for coal will be 15%, compared with 7% for renewables.

The cost of greenhouse gas emissions can be incorporated more directly into energy prices by putting a price on carbon. Many economists maintain that carbon pricing is the most cost-effective way to reduce global carbon emissions.

One megawatt-hour of coal-fired electricity creates approximately one tonne of carbon dioxide. So even a conservative carbon price of around A$20 per tonne would increase the levelised cost of coal generation by around A$20 per MWh, putting it at almost A$100 per MWh in 2020.
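The adjustment is simple to sketch. The snippet below assumes an emissions intensity of one tonne of CO2 per MWh (as above) and a baseline coal LCOE of roughly A$80 per MWh read off the Jacobs chart; both are illustrative assumptions rather than published inputs.

```python
def lcoe_with_carbon(lcoe, carbon_price, intensity=1.0):
    """LCOE (A$/MWh) plus a carbon cost; intensity is in t CO2 per MWh."""
    return lcoe + intensity * carbon_price

coal_lcoe_2020 = 80  # assumed baseline, roughly per the Jacobs chart
print(lcoe_with_carbon(coal_lcoe_2020, 20))  # ~100, i.e. "almost A$100 per MWh"
```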

According to the Jacobs analysis, this would make both wind and large-scale photovoltaics – at A$92 and A$91 per MWh, respectively – cheaper than any fossil fuel source from the year 2020.

It's worth noting here the ultimate inevitability of a price signal on carbon, even if Australia continues to resist implementing a simple carbon price. Other policies currently under consideration, including some form of clean energy target, would put similar upward price pressure on coal relative to renewables. And the global move towards carbon pricing will eventually see Australia follow suit, or risk imposts on its carbon-exposed exports.

Australia’s grid needs an upgrade

Renewable energy (excluding hydro power) accounted for around 6% of Australia's energy supply in the 2015-16 financial year. Once renewable energy exceeds, say, 50% of Australia's total energy supply, the LCOE for renewables should be used with caution.

This is because most renewable energy – like that generated by wind and solar – is intermittent, and needs to be “balanced” (or backed up) in order to be reliable. This requires investment in energy storage. We also need more transmission lines within the electricity grid to ensure ready access to renewable energy and storage in different regions, which increases transmission costs.

And there are additional engineering requirements, like building “inertia” into the electricity system to maintain voltage and frequency stability. Each additional requirement increases the cost of electricity beyond the levelised cost. But by how much?

Australian National University researchers calculated that the addition of pumped-hydro storage and extra network construction would add a levelised cost of balancing of A$25-30 per MWh to the levelised cost of renewable electricity.

The researchers predicted that a future 100% renewable energy system would eventually have a levelised cost of generation of around A$50 per MWh in current dollars; adding the levelised cost of balancing yields a network-adjusted LCOE of around A$75-80 per MWh.

The Australian National University result is similar to the Jacobs 2050 LCOE prediction for large-scale solar photovoltaic plus pumped hydro of around A$69 per MWh, which doesn’t include extra network costs.

The AEMO 100% Renewables Study indicated that this would add another A$6-10 per MWh, yielding a comparable total in the range A$75-79 per MWh.
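Setting those estimates side by side (a sketch only, since the components come from different studies):

```python
# Levelised cost components for a 100% renewable system, A$/MWh, constant dollars
anu_generation = 50                # ANU: projected cost of generation
anu_balancing = (25, 30)           # ANU: pumped hydro storage plus extra network
print([anu_generation + b for b in anu_balancing])       # [75, 80]

jacobs_pv_hydro = 69               # Jacobs 2050: solar PV plus pumped hydro
aemo_network = (6, 10)             # AEMO 100% Renewables Study: extra network
print([jacobs_pv_hydro + n for n in aemo_network])       # [75, 79]
```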

This would make a 100% renewables system competitive with new-build supercritical and ultra-supercritical coal, which, according to the Jacobs calculations in the chart above, would come in at around A$75 and A$80 per MWh respectively between 2020 and 2050.

This projection for supercritical coal is consistent with other estimates: A$80 per MWh from the CO2CRC in 2015, and the A$65-80 per MWh used by the CSIRO in 2017.

So, what’s the bottom line?

By the time renewables dominate electricity supply in Australia, it’s highly likely that a price on carbon will have been introduced. A conservative carbon price of at least A$20 per tonne would put coal in the A$100-plus bracket for a megawatt-hour of electricity. A completely renewable electricity system, at A$75-80 per MWh, would then be more affordable than coal economically, and more desirable environmentally.

The Conversation

Ken Baldwin receives funding from the Australian Research Council.

Categories: Around The Web

To avoid crisis, the gas market needs a steady steer, not an emergency swerve

Wed, 2017-09-27 06:07

Rising gas costs are “the single biggest factor in the current rise in electricity prices”.

What is most noteworthy about this statement is not the fact that it is true, but that it was made by Prime Minister Malcolm Turnbull, many of whose party colleagues remain convinced that renewable energy is the real bogeyman.

Read more: Big gas shortage looming, but government stays hand on export controls

Turnbull’s comments were made in response to a report released this week by the Australian Energy Market Operator (AEMO), which yet again warns of impending gas shortages.

I argue below that renewables are a solution to the problem, rather than its cause. But first, is there actually a gas crisis?

A gas crisis?

Although AEMO has predicted a potential gas shortfall for the east coast, there is no shortage of gas. Unprecedented amounts are being produced and exported as liquefied natural gas (LNG) from terminals in Queensland, while at the same time the domestic market is being starved, driving prices sky-high.

Read more: Memo to COAG: Australia is already awash with gas

Without government action there could indeed be a domestic shortfall next year, but the government has already set in place a system of export restrictions to ensure domestic supply. These restrictions have not yet been invoked, but the crisis for the government is that they may have to be, and the decision must be made before November 30.

Emergency export restrictions are an intervention of last resort for a governing party built on free-market principles. They are necessary because the government has failed to champion a longer-term and less interventionist strategy, such as the reservation of a certain percentage of gas produced from new gas fields for domestic use. Western Australia has had a policy of 15% reservation for many years and other states are following suit.

Read more: Our power grid is crying out for capacity, but should we open the gas valves?

Not only is there plenty of gas being produced, but it would be relatively painless to divert some of it to the domestic market. AEMO notes several times in its report that producers have some flexibility in where they send their gas. In particular, a significant proportion of the exported gas is not under long-term contract but is destined for the overseas spot market, where surplus energy is traded for immediate delivery. This gas could easily be diverted to the east coast market.

On current projections, 63.4 petajoules of gas is destined for the spot market in 2018. To put this in context, the projected shortfall is 54PJ in 2018 and 48PJ in 2019. In other words, the uncontracted gas destined for the spot market is more than enough to make up the expected shortfall.

Turnbull is also arguing that the potential shortage is due to state bans on gas exploration and production. However, the production costs associated with as-yet-untapped reserves and resources in those states are much higher than for Queensland. Thus, even in the absence of bans it would still make sense to target untapped Queensland resources first.

Moving the gas south

The extra gas released in Queensland for domestic use would need to be transported to the southern states by pipelines that are already close to capacity. This is a potential problem. However, it could be resolved by means of “gas swaps”.

Gas produced in the southern states that has been contracted for sale through the Queensland terminals could be swapped for gas released by Queensland producers for distribution to the southern states. This would avoid bottlenecks and gas transportation costs.

In the longer term, the problem could be solved by AGL's proposal to establish a liquefied natural gas (LNG) import terminal (a regasification plant) at Western Port in Victoria.

This facility could process LNG either from Queensland or from further afield. The terminal would have the potential to provide all of Victoria’s household and business customer gas needs. If all goes to plan, AGL will begin construction in 2019 and bring the terminal into operation by 2020–21.

Our free-market government is now firmly in interventionist mode, with gas export restrictions and plans to fund a Snowy pumped hydro scheme. There is even a proposal to subsidise the continued operation of AGL's Liddell coal-fired power station beyond its scheduled closure in 2022.

Read more: Baffled by baseload? Dumbfounded by dispatchables? Here’s a glossary of the energy debate

But rather than continuing to badger AGL about keeping Liddell open, the government would be wiser to press the firm to bring its regasification plant online as soon as possible. Not only does it make economic sense, but it is greatly preferable from an environmental point of view.

The renewables solution

Another way to deal with the predicted gas shortfall is to reduce demand. According to AEMO figures, gas-powered electricity generation in 2018 is expected to require 176PJ of gas, dropping to 135PJ in 2019. The lower demand in 2019 is due to increased renewable energy generation, as well as increased consumer energy efficiency.

Recalling that the projected shortfall for 2018 is 54PJ, it is apparent that this shortfall would be wiped out by a roughly 30% reduction in the gas used for gas-fired power generation. Based on 2016 figures, that would require an increase of roughly 30% in power generation from renewables.
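As a quick check on that arithmetic, a minimal sketch using the AEMO projections quoted above:

```python
gas_for_power_2018 = 176  # PJ: projected gas demand for power generation in 2018
shortfall_2018 = 54       # PJ: projected east coast shortfall in 2018

cut_needed = shortfall_2018 / gas_for_power_2018
print(f"{cut_needed:.0%}")  # ~31%: the cut in gas-fired generation that closes the gap
```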

Given the relatively short time it now takes to build new renewable generators, this is a very promising path. Coupled with battery storage or pumped hydro, these new generators would provide dispatchable power exactly as gas does. All that is required is for the government to implement the right policy settings.

Finally, state government policies may already be taking us in this direction. The Queensland government recently announced a major program of incentives for solar power. This will significantly increase renewable power generation and dampen the demand for gas-fired power. AEMO notes this development but states explicitly that this has not been taken into account in its projections.

For whatever reason, AEMO’s final conclusion is not as gloomy as its analysis might suggest. It states that the gas situation in eastern and south eastern Australia “is expected to remain tight”. Rather than calling for action, it considers that the situation “warrants continued close attention and monitoring”. Amid all the talk of impending crisis, what we need is steady pressure on the steering wheel, rather than a sharp swerve.

The Conversation

Andrew Hopkins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

How TV weather presenters can improve public understanding of climate change

Tue, 2017-09-26 11:38

A recent Monash University study of TV weather presenters has found a strong interest from free-to-air presenters in including climate change information in their bulletins.

The strongest trends in the survey, which had a 46% response rate, included:

  • 97% of respondents thought climate change is happening;

  • 97% of respondents believed viewers had either “strong trust” or “moderate trust” in them as a reliable source of weather information;

  • 91% of respondents were comfortable with presenting local historical climate statistics, and just under 70% were comfortable with future local climate projections; and

  • 97% of respondents thought their audiences would be interested in learning about the impacts of climate change.

According to several analyses of where Australians get their news, in the age of ubiquitous social media TV is still the single largest news source.

And when one considers that social media and now apps are increasingly used as the interface for sharing professional content from news organisations – which includes TV news – the reach of TV content is not about to be challenged anytime soon.

The combined audience for primetime free-to-air TV in the five capital city markets alone is a weekly average of nearly 3 million viewers. This does not include those using catch-up on portable devices, and those watching the same news within the pay TV audience. And there are those who are getting many of the same news highlights and clips through their Facebook feeds and app-based push media.

Yet the ever-more oligopolistic TV industry in Australia is very small. And professional weather presenters are a rather exclusive group: there are only 75 such presenters in Australia.

It is because of this, rather than in spite of it, that weather presenters are able to command quite a large following. And they are highly promoted by the networks themselves – on freeway billboards and station advertising. This promotion makes weather presenters among the most trusted media personalities, while simultaneously presenting information that is regarded as apolitical.

At the same time, Australians have a keen interest in talking about weather. It tends to unite us.

These three factors – trust, the impartial nature of weather, and Australians' enthusiasm for the weather – put TV presenters in an ideal position to present climate information. Such has been the experience in the US, where the Centre for Climate Change Communication, together with Climate Matters, has partnered with more than 350 TV weathercasters to present simple, easy-to-process factual climate information.

In the US, it is about mainstreaming climate information as factual content delivered by trusted sources. The Climate Matters program found that TV audiences value climate information more when it is locally based.

Monash's Climate Change Communication Research Hub is conducting research as a precondition to establishing such a program in Australia. The next step is to survey free-to-air TV audiences in the capital city markets to gauge Australians' appetite for a short climate segment alongside the weather on at least a weekly basis.

As in the US, TV audiences are noticing more and more extreme weather and want to understand what is causing it, and what to expect in the future.

The Climate Change Communication Research Hub is also involved in creating “climate communications packages” that can be tested with audiences. These are largely based on calendar and anniversary dates, and show long-term trends using these dates as datapoints.

The calendar dates could be sporting dates, or how climate can be understood in relation to a collection of years based on a specific date, or the start of a season for fire or cyclones. There has been so much extreme weather in recent years that there are plenty of anniversaries.

Let’s take November 21, 2016 – the most severe thunderstorm asthma event ever to impact Melbourne. It saw 8,500 presentations to hospital emergency departments and nine tragic deaths.

There is no reason why this event can't be covered this year in the context of climate as a community service message. As explained in the US program, even a small increase in average spring temperatures leads to higher counts of more potent pollen. And as more energy is fed into the destructive power of storm systems, the likelihood of pollen being broken up and distributed efficiently throughout population centres is heightened.

The need to be better prepared for thunderstorms in spring is thus greater, even for those who have never had asthma before.

For its data, the Climate Change Communication Research Hub will be relying on the information from the Bureau of Meteorology and the CSIRO, but will call on the assistance of a wide range of organisations such as the SES, state fire services, and health authorities in conducting its research.

In February 2018, the hub will hold a workshop with TV weather presenters as part of the Australian Meteorological and Oceanographic Society conference. The planning for the project will be introduced there, with a pilot in one media market to be rolled out to multiple markets in the second year.

The program is not intended to raise the level of concern about climate change, but to improve public understanding of it. As survey after survey shows, Australians are already concerned about climate change. But more information is needed about the local and regional impacts that will help people make informed choices about mitigation, adaptation and how to plan their lives – beyond tomorrow's weather.

The Conversation

David Holmes received funding from Monash University to conduct research for the project described in this article.

Categories: Around The Web

Baffled by baseload? Dumbfounded by dispatchables? Here's a glossary of the energy debate

Tue, 2017-09-26 06:07
High-voltage power lines stand near an electricity substation on the outskirts of Sydney. Reuters

Australia’s energy market is a prominent fixture in our daily news cycle. Amid the endless ideology and politics swirling around the sector, technical terms such as “baseload power” and “dispatchable generation” are thrown around so often that there is a danger the meaning of these terms can get lost in the public debate.

The term “energy crisis” is bandied around quite loosely, with some confusion over whether the crisis is one of prices or of security of supply. The politics of this are infernal, and would have been largely avoidable had all sides of politics paid consistent and principled attention to energy policy over the 20 years since the formation of the National Energy Market.

It’s worth setting the record straight on the meaning of some of these terms and how they relate to climate policies, new technologies and the progression of market reform and regulation in Australia.

This glossary, which is by no means exhaustive, is a first step.

Baseload power

Baseload power refers to generation resources that generally run continuously throughout the year and operate at stable output levels. The continuous operation of baseload resources makes economic sense because they have low running costs relative to other sources of power. The value of baseload plants is mostly economic, and not related to their ability to follow the constantly varying system demand.

Baseload plants include coal-fired and gas-fired combined-cycle power plants. However, Australia’s international commitment to reduce carbon emissions is curtailing the economic viability of traditional baseload sources.

Coal-fired power stations like this one at Loy Yang are being gradually retired.

Wholesale market (the “National Energy Market”)

The term National Energy Market is confusing because it refers to a competitive market for wholesale energy mostly on the east coast of Australia: it doesn't include Western Australia or the Northern Territory, yet it does take in the gas system. The National Energy Market allows all kinds of utility-scale power resources to connect to the transmission system to meet large-scale power requirements.

However, industry talk about the “energy market” or even the “NEM” can also refer to the entire supply chain: the networks for high-voltage transmission and medium- and low-voltage distribution, as well as retailing to the end consumer. The prices consumers see include all these aspects of the supply chain, which can add significantly to the confusion.

The wholesale market is referred to as a “market” because there is competition between generators. Each generator places daily price “bids” to sell power and adjusts quantities in up to 10 price bands every five minutes. In this way, the sale of power is matched to the available energy and performance of the generating unit.

The market works to efficiently dispatch all variable and “dispatchable” resources to minimise the cost of electricity. The Australian Energy Market Operator (AEMO) co-ordinates the National Energy Market.

Wholesale price

The wholesale “spot” price at which power is traded in the NEM is based on the highest accepted generator offers to balance supply and demand in each region. This is intended to encourage efficient behaviour by generators, as well as to co-ordinate efficient directing of resources.
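To make the mechanics of “highest accepted offers” concrete, here is a toy merit-order dispatch sketch in Python. It is deliberately simplified: real NEM dispatch runs every five minutes and co-optimises network constraints and ancillary services, and the offer stack below is invented for illustration.

```python
def clear_market(offers, demand_mw):
    """Toy merit-order dispatch. offers: list of (price $/MWh, quantity MW).
    Cheapest offers are dispatched first; the highest accepted offer sets
    the spot price paid to every dispatched generator."""
    dispatched, remaining = [], demand_mw
    for price, qty in sorted(offers):
        if remaining <= 0:
            break
        take = min(qty, remaining)
        dispatched.append((price, take))
        remaining -= take
    return dispatched[-1][0], dispatched  # (spot price, dispatch schedule)

# Invented offer stack: wind, coal, gas, peaking plant
offers = [(0, 300), (45, 200), (80, 150), (300, 100)]
print(clear_market(offers, 600))  # spot price 80: the gas offer is marginal
```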

Storage

Storage refers to energy captured for later use, typically in a battery. Electricity has been expensive to store in the past, but the cost of storage is expected to continue to fall with the improvement of battery technologies. For example, lithium-ion batteries were developed for mobile communications and laptops but now are being upscaled for electric vehicles and utility-scale energy storage.

Lithium-ion batteries were developed for mobile phones, but are now being used as part of electric vehicles such as Tesla Inc’s Model S and Model X. Reuters

Due to traditionally low storage levels in the system, electricity has to be mostly generated within seconds of when it is needed, otherwise the stability of the system can be put at risk. Storage technology will become more valuable as the market penetration of wind and solar power increases. With declining costs of various battery technologies, this will become easier to deliver.

Demand (and peak demand)

Demand refers to the amount of electricity required to meet consumption at any given moment. Power refers to the rate of energy consumption, in megawatts (millions of watts, or MW), whereas energy, in megawatt-hours (MWh), refers to total consumption over a period such as a day, month or year.

Peak demand is the highest rate of energy consumption required in a particular season, such as heating in winter or cooling in summer. It is a vital measure because it determines how much generation equipment is needed to cover for unexpected outages and maintain reliable supply.

Dispatchable generation

Dispatchable generation refers to generation, based on fossil fuels or hydro power, whose output can be controlled to balance electricity supply and demand. More flexible plants, such as open-cycle gas turbines and hydro power stations, can operate at partial loading and respond to short-term changes in supply and demand.

Flexibility is the key here. Storage can provide flexibility as well, either from batteries or pumped-hydro storage. The need for such resources is becoming more urgent due to retirement of the older baseload plants and the growing amount of less emissions-intensive energy sources.

Frequency control

Synchronous generators in power stations spin at around 50 cycles per second. This speed is referred to as “frequency” (measured in hertz, symbol Hz). Holding this frequency constant is essential for maintaining voltage stability and thus reliability.

If there is loss of generation somewhere, extra power is drawn through the electricity network from other plants. This causes these generators’ rotors to slow down and the system frequency to fall. A key parameter is the so-called “maximum rate of change of frequency”. The faster the frequency changes, the less time is available to take corrective action.
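This relationship is commonly approximated with the swing equation, under which the initial rate of change of frequency is proportional to the size of the imbalance and inversely proportional to the system's stored rotational energy. A minimal sketch follows; the figures are invented for illustration, not NEM data.

```python
def rocof(imbalance_mw, stored_energy_mws, f0=50.0):
    """Initial rate of change of frequency (Hz/s) after a sudden imbalance,
    from the swing equation: df/dt = f0 * dP / (2 * H * S),
    where H * S is the system's stored rotational energy in MW.s."""
    return f0 * imbalance_mw / (2 * stored_energy_mws)

# Losing a 500 MW unit in a system holding 40,000 MW.s of inertia:
print(rocof(500, 40_000))  # ~0.31 Hz/s; halve the inertia and this doubles
```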

Inertia

Inertia refers to the ability of a system to maintain a steady frequency after a significant imbalance between generation and load. The higher the inertia, the slower the rate of change of frequency after a disturbance.

One critical concern is that inertia must almost always be sufficient to enable stable power. Given many coal-fired power plants are being retired, the amount of inertia is falling markedly.

Eventually power systems will need to provide inertia explicitly by adding synchronous rotors (operating independently of power generation) or by providing other power system controls that are able to respond very quickly to deviations in power system frequency. These can be based on a combination of storage and advanced power electronics already available today.

Regional markets within the National Energy Market

The National Energy Market operates as five interconnected regional markets in the eastern states: Queensland, New South Wales, Victoria, South Australia and Tasmania. This reflects the way the power systems were originally set up under state authorities.

The National Energy Market cannot operate as a single market with a single price due to two important factors. It is not cost-effective to completely remove power transmission constraints between the state regions, and electrical losses in power transmission mean that each location requires a different price to efficiently reflect the impact of these losses.

When there are large power flows between regions, the prices can vary by up to 30% between regions due to losses. High prices occur when there is a power shortage relative to demand. Negative prices occur when load is less than the minimum stable generation committed. During periods of high prices (usually due to high demand or, less frequently, due to lower capacity) greater price differences can occur when the interconnectors reach their limits, causing very high-priced generation in the importing region to be dispatched.

The National Energy Market operates across Australia's east coast.

Interconnectors

In view of the long distances in the National Energy Market (4,000km from end to end, the longest synchronous power system in the world), there are significant constraints on transmission capacity between the state-based regions. The transmission links between regions are therefore given special treatment and are known as “interconnectors”.

The marginal power losses across these interconnectors are calculated every five minutes to support efficient dispatch of resources and to ensure that the spot prices in each region are efficient and consistent with prevailing supply and demand. These interconnectors have limited capacity (due to overheating and other factors), however, and AEMO carefully manages their use to ensure balancing and inertia can be provided across regions.

Ancillary services and spinning reserve

Ancillary services refer to a variety of methods the market requires for consistent frequency and voltage control. They maintain the quality of supply and support the stability of the power system against disturbances. This frequency control is required during normal operation to maintain the continuous balance of energy supply and demand. For this purpose some generation capacity is held in reserve in order to vary its output up and down to adjust the total system generation level.

This difference between the maximum power output and the lower operating level is called “spinning reserve”. Spinning reserve is also required in the downward direction, to cover a sudden disconnection of load or a sudden increase in solar or wind power.

Transmission upgrades

The upgrading of the transmission system, including the interconnectors, is a complex regulatory process. Transmission has a significant value across the whole electricity supply chain from producers to consumers.

This value is easy to measure given electricity market conditions at any given moment. But it’s difficult to predict when these interconnectors need to be built or replaced because some transmission assets can operate for up to 80 years. Significant co-ordination is required in planning new investments as the location and deployment timing of new renewable generation capacity is uncertain and variable.

30-minute price settlement windows (and five-minute ones)

Generators are paid the spot price for all their output, and consumers (via retailers) are charged at the spot price for their consumption by AEMO. This “trading” price is calculated every 30 minutes for the purpose of transacting the cash flows (as an average of the five-minute dispatch price). This process is called “settlement”.

There is a plan in place to move to five-minute settlement over the next three years. This would help reward more flexible resources (including batteries) as they respond more efficiently to the impact of sudden changes in output.
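A small sketch shows why settlement granularity matters: under 30-minute settlement a single five-minute price spike is averaged across the half-hour, diluting the reward for a battery that responded within seconds. The prices below are invented for illustration.

```python
def trading_price(dispatch_prices):
    """30-minute trading price: the average of six 5-minute dispatch prices."""
    assert len(dispatch_prices) == 6
    return sum(dispatch_prices) / 6

spike = [90, 95, 100, 14000, 110, 105]  # one 5-minute scarcity event
print(trading_price(spike))             # ~2416.67 under 30-minute settlement
# Under 5-minute settlement, the fast responder is paid 14000 for that interval.
```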

The Conversation

Ariel Liebman receives funding from the Australian Federal Departments of Education and Foreign Affairs and Trade, through the Australia Indonesia Centre.

Ross Gawler is affiliated with Monash University, Jacobs Consulting and McDonald Gawler Pty Ltd, a small private company. He occasionally consults to participants in the National Electricity Market in affiliation with Monash University or Jacobs Consulting, or through McDonald Gawler Pty Ltd. He contributes a small monthly donation to GetUp!

Categories: Around The Web

New law finally gives voice to the Yarra River's traditional owners

Mon, 2017-09-25 17:13

On September 21, the Victorian Parliament delivered a major step forward for Victoria’s traditional owners, by passing the Yarra River Protection (Wilip-gin Birrarung murron) Act 2017. Until now, the Wurundjeri people have had little recognition of their important role in river management and protection, but the new legislation, set to become law by December 1, will give them a voice.

The Act is remarkable because it combines traditional owner knowledge with modern river management expertise, and treats the Yarra as one integrated living natural entity to be protected.

The new law recognises the various connections between the river and its traditional owners. In a first for Victorian state laws, it includes Woi-wurrung language (the language of the Wurundjeri) in both the Act’s title and in its preamble. The phrase Wilip-gin Birrarung murron means “keep the Yarra alive”. Six Wurundjeri elders gave speeches in Parliament in both English and Woi-wurrung to explain the significance of the river and this Act to their people.

The Act also gives an independent voice to the river by way of the Birrarung Council, a statutory advisory body which must have at least two traditional owner representatives on it.

Read more: Three rivers are now legally people, but that’s just the start of looking after them.

Giving legal powers to rivers has become fashionable recently. In March, Aotearoa New Zealand passed legislation giving legal personhood to the Whanganui River, with the river's voice to be provided by an independent guardian that includes Māori representation.

Within a week of that decision, the Uttarakhand High Court in India ruled that the Ganga and Yamuna Rivers are living entities with legal status, and ordered government officers to assume legal guardianship of the two rivers (although that decision has since been stayed by the Indian Supreme Court).

All of these developments recognise that rivers are indivisible living entities that need protection. But the Victorian legislation differs in that it doesn’t give the Yarra River legal personhood or assign it a legal guardian. The Birrarung Council, although the “independent voice” of the Yarra, will have only advisory status.

Speaking for the silent

The practice of giving legal voice to entities that cannot speak for themselves is not a new one. Children have legal guardians, as do adults who are not in a position to make decisions for themselves. We also give legal status to many non-human entities, such as corporations.

The idea of doing the same for rivers and other natural objects was first suggested back in 1972. In general terms, giving something legal personhood means it can sue or be sued. So a river’s legal guardian can go to court and sue anyone who pollutes or otherwise damages the river. (Theoretically, a river could also be sued, although this has yet to be tested.)

So how will the Yarra River be protected, if it doesn’t have legal personhood or a guardian?

Like the Whanganui River Settlement legislation, the Yarra River Protection Act provides for the development of a strategic plan for the river's management and protection. This includes a long-term community vision, developed through a process of active community participation, that will identify areas for protection. The strategic plan will also be informed by environmental, social, cultural, recreational and management principles.

These Yarra protection principles further enhance the recognition of traditional owner connection to the Yarra River. They highlight Aboriginal cultural values, heritage and knowledge, and the importance of involving traditional owners in policy planning and decision-making.

And the Birrarung Council will have an important role to play. It will provide advice and can advocate for the Yarra River, even if it can’t actually make decisions about its protection, or take people who damage the Yarra River to court.

Importantly, the Council does not have any government representatives sitting on it. Its members are selected by the environment minister for four-year terms and once appointed they can’t be removed unless they’re found to be unfit to hold office (for example, for misconduct or neglect of duty). This makes sure that the Council’s advice to the minister is truly independent.

So, although the new law will not give the Yarra River full legal personhood, it does enshrine a voice for traditional owners in the river’s management and protection – a voice that has been unheard for too long.

The Conversation

Katie O'Bryan is a member of the National Environmental Law Association, Environmental Justice Australia and the Australian Conservation Foundation.

Categories: Around The Web

I've always wondered: can animals be left- and right-pawed?

Mon, 2017-09-25 06:03
Southpaws seem to be more common among cats and dogs than humans. Eric Isselee/Shutterstock.com

This is an article from I’ve Always Wondered, a series where readers send in questions they’d like an expert to answer. Send your question to alwayswondered@theconversation.edu.au

While watching my cat engaging in yet another battle with my shoelace, I noticed that he seemed mainly to use his left front paw. Do animals have a more dextrous side that they favour for particular tasks, just like humans? – Mike, Perth.

The short answer is: yes they do! Like humans, many animals tend to use one side of the body more than the other. This innate handedness (or footedness) is called behavioural or motor laterality.

The term laterality also refers to the primary use of the left or right hemispheres of the brain. The two halves of the animal brain are not exactly alike, and each hemisphere differs in function and anatomy. In general terms, the left hemisphere controls the right side of the body and the right hemisphere controls the left side.

Laterality is an ancient inherited characteristic and is widespread in the animal kingdom, in both vertebrates and invertebrates. Many competing theories (neurological, biological, genetic, ecological, social and environmental) have been proposed to explain how the phenomenon developed, but it remains largely a mystery.

Animal ‘handedness’

Humans tend to be right-handed. Lefties or “southpaws” make up only about 10% of the human population, and more males than females are left-handed.

Great apes show similar handedness patterns to humans. Most chimps, for instance, seem to be right-handed. But not many studies have looked at laterality in non-primate animals.

Read more: Why are most people right-handed? The answer may be in the mouths of our ancestors.

There is some evidence to suggest that dogs and cats can be right- or left-pawed, although the ratio seems to be more evenly split than in humans, and it is unclear whether there are sex differences.

If you’re a pet owner you can do an experiment for yourself. Which paw does your cat or dog lead with when reaching out for something, or to tap open a pet door?

To test your pet dog, you can place a treat-filled Kong toy directly in front of your dog and see which paw he or she uses to hold it to get the food out. A dog may use either paw or both paws.

To test your pet cat, you can set a “food puzzle” by putting a treat inside a glass jar and watching to see which paw your cat uses. Don’t forget to repeat it lots of times and take notes to see whether the effect is real or just random chance!


Horses also seem to prefer to circle in one direction rather than the other. Meanwhile, one study suggests that kangaroos are almost exclusively lefties, although the neural basis for this is unknown.

Lateralisation and brain function

In humans, the left hemisphere is mainly associated with analytical processes and language and the right hemisphere with orientation, awareness and musical abilities, although this dichotomy is simplistic at best.

Is there evidence of lateralised brain function in non-human animals too? A team of Italian researchers think so. They found that dogs wag their tails to the right when they see something they want to approach, and to the left when confronted with something they would rather avoid. This suggests that, just as for people, the right and left halves of the brain do different jobs in controlling emotions.

Laterality is also connected to the direction in which hair grows (so-called structural laterality), or even to the senses (sensory laterality). Many animals use their left eye and left ear (indicating right-brain activation) more often than the right ones when investigating objects that are potentially frightening. However, asymmetries in olfactory processing (nostril use) are less well understood.

Research suggests most kangaroos are southpaws. Ester Inbar/Wikimedia Commons, CC BY

The left or right bias in sensory laterality is separate from that of motor laterality (or handedness). However, some researchers think that side preference is linked to the direction of hair whorls (“cow licks”), which can grow in a clockwise or anticlockwise direction. More right-handed people have a clockwise hair pattern, although it is unclear if this is true of other animals.

The direction of hair growth and handedness are also related to temperament. Left-handed people might be more vulnerable to stress, as are left-pawed dogs and many other animals. In general, many animals, including humans, that have a clockwise hair whorl are less stress-prone than those with anticlockwise hair growth. The position of the hair whorl also matters; cattle and horses with hair whorls directly above the eyes are more typically difficult to handle than those with whorls lower down on the face.

Elsewhere in the animal kingdom, snails also have a form of laterality, despite having a very different nervous system to vertebrates like us. Their shells spiral in either a “right-handed” or “left-handed” direction – a form of physical asymmetry called “chirality”. This chirality is inherited – snails can only mate with snails of matching chirality.

Chirality is even seen in plants, depending on the asymmetry of their leaves, and the direction in which they grow.

As an aside, left-handedness has been discriminated against in many cultures for centuries. The Latin word sinistra originally meant “left” but its English descendant “sinister” has taken on meanings of evil or malevolence. The word “right”, meanwhile, connotes correctness, suitability and propriety. Many everyday objects, from scissors to notebooks to can-openers, are designed for right-handed people, and the Latin word for right, dexter, has given us the modern word “dextrous”.

Why is the brain lateralised?

One adaptive advantage of lateralisation is that individuals can perform two tasks at the same time if those tasks are governed by opposite brain hemispheres. Another advantage might be resistance to disease – hand preference in animals is associated with differences in immune function, with right-handed animals mounting a better immune response.

Does it matter if your cat, dog, horse or cow favours one paw (or hoof) over another? Determining laterality – or which side of the brain dominates the other – could change the way domestic animals are bred, raised, trained and used, including predicting which puppies will make the best service dogs, and which racehorses will race better on left- or right-curving tracks.

And even if your dog or cat never clutches a pen, or uses one limb more than the other, just be grateful that they haven’t yet developed opposable thumbs!

This article is dedicated to the memory of Bollo the cat, who inspired this question but has since passed away.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Can two clean energy targets break the deadlock of energy and climate policy?

Fri, 2017-09-22 17:08
Climate policy has become bogged down in the debate over a clean energy target. Shutterstock

Malcolm Turnbull’s government has been wrestling with the prospect of a clean energy target ever since Chief Scientist Alan Finkel recommended it in his review of Australia’s energy system. But economist Ross Garnaut has proposed a path out of the political quagmire: two clean energy targets instead of one.

Garnaut’s proposal is essentially a flexible emissions target that can be adapted to conditions in the electricity market. If electricity prices fail to fall as expected, a more lenient emissions trajectory would likely be pursued.

This proposal is an exercise in political pragmatism. If it can reassure both those who fear that rapid decarbonisation will increase energy prices, and those who argue we must reduce emissions at all costs, it represents a substantial improvement over the current state of deadlock.

Ross Garnaut/Yann Robiou DuPont, Author provided

Will two targets increase investor certainty?

At a recent Melbourne Economic Forum, Finkel pointed out that investors do not require absolute certainty to invest. After all, it is for accepting risks that they earn returns. If there was no risk to accept there would be no legitimate right to a return.

But Finkel also pointed out that investors value policy certainty and predictability. Without it, they require more handsome returns to compensate for the higher policy risks they have to absorb.

Read more: Turnbull is pursuing ‘energy certainty’ but what does that actually mean?

At first sight, having two possible emissions targets introduces yet another uncertainty (the emissions trajectory). But is that really the case? The industry is keenly aware of the political pressures that affect emissions reduction policy. If heavy reductions cause prices to rise further, there will be pressure to soften the trajectory.

Garnaut’s suggested approach anticipates this political reality and codifies it in a mechanism to determine how emissions trajectories will adjust to future prices. Contrary to first impressions, it increases policy certainty by providing clarity on how emissions policy should respond to conditions in the electricity market. This will promote the sort of policy certainty that the Finkel Review has sought to engender.

Could policymakers accept it?

Speaking of political realities, could this double target possibly accrue bipartisan support in a hopelessly divided parliament? Given Tony Abbott’s recent threat to cross the floor to vote against a clean energy target (bringing an unknown number of friends with him), the Coalition government has a strong incentive to find a compromise that both major parties can live with.

Read more: Abbott’s disruption is raising the question: where will it end?

Turnbull and his energy minister, Josh Frydenberg, who we understand are keen to see Finkel’s proposals taken up, could do worse than put this new idea on the table. They have to negotiate with parliamentary colleagues whose primary concern is the impact of household electricity bills on voters, as well as those who won’t accept winding back our emissions targets.

Reassuringly, the government can point to some precedent. Garnaut’s proposal is novel in Australia’s climate policy debate, but is reasonably similar to excise taxes on fuel, which in some countries vary as a function of fuel prices. If fuel prices decline, excise taxes rise, and vice versa. In this way, governments can achieve policy objectives while protecting consumers from the price impacts of those objectives.

The devil’s in the detail

Of course, even without the various ideologies and vested interests in this debate, many details would remain to be worked out. How should baseline prices be established? What is the hurdle to justify a more rapid carbon-reduction trajectory? What if prices tick up again, after a more rapid decarbonisation trajectory has been adopted? And what if prices don’t decline from current levels: are we locking ourselves into a low-carbon-reduction trajectory?

These issues will need to be worked through progressively, but there is no obvious flaw that should deter further consideration. The fundamental idea is attractive, and it looks capable of ameliorating concerns that rapid cuts in emissions will lock in higher electricity prices.

For mine, I would not be at all surprised if prices decline sharply as we begin to decarbonise, such is the staggering rate of technology development and cost reductions in renewable energy. But I may of course be wrong. Garnaut’s proposal provides a mechanism to protect consumers if this turns out to be the case.

The Conversation

Bruce Mountain does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Is BHP really about to split from the Minerals Council's hive mind?

Fri, 2017-09-22 13:04

Shareholder action has struck again (perhaps). The Australasian Centre for Corporate Responsibility, on behalf of more than 120 shareholders of BHP, has convinced the Big Fella to reconsider its membership of the Minerals Council of Australia.

Business associations and umbrella groups exist to advance the interests of their members. The ones we know most about are those that are in the public eye, lobbying, producing position papers that put forward controversial and unpopular positions (while giving their members plausible deniability), running television adverts, and attacking their opponents as naive idealists at best, or luddites and watermelons (green on the outside, red on the inside) at worst.

Read more: Risky business: how companies are getting smart about climate change.

This has been going on for a century, as readers of the late Alex Carey’s Taking the Risk out of Democracy: Corporate Propaganda versus Freedom and Liberty will know. (Another Australian, Sharon Beder ably continued his work, and more recently yet another Australian, Kerryn Higgs, wrote excellently on this.)

Alongside the gaudy outfits sit lower-profile and occasionally very powerful coordinating groups, such as the Australian Industry Greenhouse Network (see Guy Pearse’s book High and Dry for details).

Ultimately, however, membership of these groups can have costs to companies – beyond the financial ones. If an industry body strays too far from the public mood, individual companies can feel the heat. This happened in the United States in 2002, with the Global Climate Coalition, a front group for automakers and the oil industry that succeeded in defeating the Kyoto Protocol but then outlived its usefulness. It happened again in 2009 when a group of companies (including Nike, Microsoft and Johnson & Johnson) decided their reputations were being damaged by continued membership of the US Chamber of Commerce, which was taking a particularly intransigent line on President Barack Obama’s climate efforts.

Doings Down Under

What’s interesting in this latest spat is that it involves two very powerful players. Let’s look at them in turn.

The Minerals Council of Australia (MCA) began life in 1967, as the Australian Mining Industry Council, when Australia’s export boom for coal, iron ore and other commodities was taking off.

From its earliest days it found itself embroiled in both Aboriginal land rights and environmental disputes, having established an environment subcommittee in 1972. Over time, the Council took a robust line on both topics, to put it mildly.

In 1990, at the height of green concerns, the then federal environment minister Ros Kelly offered a scathing assessment of the council, saying that its idea of a sustainable industry was:

…one in which miners can mine where they like, for however long they want. It is about, for them, sustaining profits and increasing access to all parts of Australia they feel could be minerally profitable, even if it is of environmental or cultural significance.

Meanwhile, the council’s intransigent position on Aboriginal land rights, especially after the 1992 Mabo decision, caused it to lose both credibility and – crucially – access to land rights negotiations.

Geoff Allen, a business guru who had created the Business Council of Australia, was called in to write a report, which led the Minerals Council to adopt its present name, and a more emollient tone.

The MCA’s peak of influence (so far?) was its role in the Keep Mining Strong campaign of 2010, which sank Kevin Rudd’s planned super-profits tax. The following year, it combined with other business associations to form the Australian Trade and Industry Alliance, launching an advertising broadside against Julia Gillard’s carbon pricing scheme (which was not, as former Liberal staffer Peta Credlin has now admitted, a “carbon tax”).

Bashing the carbon tax.

The MCA has since kept up a steady drumbeat of attacks on renewable energy, and most infamously supplied the (lacquered) lump of coal brandished by Treasurer Scott Morrison in parliament.

Read more: Hashtags v bashtags: a brief history of mining advertisements (and their backlashes).

The most important thing, for present purposes, to understand about the MCA is that it may well have been the subject of a reverse takeover by the now defunct Australian Coal Association. In a fascinating article in 2015, Mike Seccombe pointed out that:

Big as the coalmining industry is in Australia, it accounts for only a bit more than 20% of the value of our mineral exports. Yet now the Minerals Council has come to be dominated by just that one sector, coal… Representatives of the biggest polluters on the planet now run the show.

This brings us to BHP. As a global resources player, with fingers in many more pies than just coal (indeed, it has spun its coal interests off into a company called South32), it has remained phlegmatic about carbon pricing, even as the MCA and others have got into a flap.

Read more: Say what you like about BHP, it didn’t squander the boom.

To BHP, the advent of carbon pricing in Australia was if anything a welcome development. The move offered two main benefits: valuable experience of doing business under carbon pricing, and a chance to influence policy more easily than in bigger, more complex economies.

In 2000, the company’s American chief executive, Paul Anderson, tried to get the Business Council of Australia to discuss ratification of the Kyoto Protocol (which would build pressure for local carbon pricing). He couldn’t get traction. Interviewed in 2007, he recalled:

I held a party and nobody came… They sent some low-level people that almost read from things that had been given to them by their lawyers. Things like, ‘Our company does not acknowledge that carbon dioxide is an issue and, if it is, we’re not the cause of it and we wouldn’t admit to it anyway.’

The schism

As the physicist Niels Bohr said, “prediction is very difficult, especially about the future”. I wouldn’t want to bet on whether BHP will actually go ahead and leave the MCA, or whether the Minerals Council will revise its hostile position on environmental sustainability.

BHP has promised to “make public, by 31 December 2017, a list of the material differences between the positions we hold on climate and energy policy, and the advocacy positions on climate and energy policy taken by industry associations to which we belong”.

In reaching for a metaphor to try and explain the situation, I find myself coming back to an episode of Star Trek: The Next Generation. The heroic crew has captured an individual from the “Borg”, a collective hive-mind entity. They plan to implant an impossible image in its brain, knowing that upon release it will reconnect, shunt the image upwards for the hive mind to try to understand, and thus drive the entire Borg stark raving mad as it tries in vain to compute the information it is receiving.

The analogy is crude, I’ll grant you. But it is, I submit, a pretty accurate picture of what might happen when an MCA member grows a climate conscience.

The Conversation
Categories: Around The Web

Developing countries can prosper without increasing emissions

Fri, 2017-09-22 05:40

One of the ironies of fighting climate change is that developed countries – which have benefited from decades or centuries of industrialisation – are now asking developing countries to abandon highly polluting technology.

But as developing countries work hard to grow their economies, there are real opportunities to leapfrog the significant investment in fossil fuel technology typically associated with economic development.

This week, researchers, practitioners and policymakers from around the world are gathered in New York City for the International Conference on Sustainable Development as part of Climate Week. We at ClimateWorks will be putting the spotlight on how developing countries can use low- or zero-emissions alternatives to traditional infrastructure and technology.

Read more: How trade policies can support global efforts to curb climate change

Developing nations are part of climate change

According to recent analysis, six of the top 10 emitters of greenhouse gases are now developing countries (this includes China). Developing countries as a bloc already account for about 60% of global annual emissions.

If we are to achieve the global climate targets of the Paris Agreement, these countries need an alternative path to prosperity. We must decouple economic growth from carbon emissions. In doing so, these nations may avoid many of the environmental, social and economic costs that are the hallmarks of dependence on fossil fuels.

This goal is not as far-fetched as it might seem. ClimateWorks has been working as part of the Deep Decarbonization Pathways Project, a global collaboration of researchers looking for practical ways countries can radically reduce their carbon emissions – while sustaining economic growth.

For example, in conjunction with the Australian National University, we have modelled a deep decarbonisation pathway that shows how Australia could achieve net zero emissions by 2050, while the economy grows by 150%.

Similarly, data compiled by the World Resources Institute shows that 21 countries have reduced annual greenhouse gas emissions while simultaneously growing their economies since 2000. This includes several eastern European countries that have experienced rapid economic growth in the past two decades.

PricewaterhouseCoopers’ Low Carbon Index also found that several G20 countries have reduced the carbon intensity of their economies while maintaining real GDP growth, including nations classified as “developing”, such as China, India, South Africa and Mexico.

‘Clean’ economic growth for sustainable development

If humankind is to live sustainably, future economic growth must minimise environmental impact and maximise social development and inclusion. That’s why in 2015, the UN adopted the Sustainable Development Goals: a set of common aims designed to balance human prosperity with protection of our planet by 2030.

These goals include a specific directive to “take urgent action to combat climate change and its impacts”. Likewise, language in the Paris Climate Agreement recognises the needs of developing countries in balancing economic growth and climate change.

The Sustainable Development Goals are interconnected, and drawing these links can provide a compelling rationale for strong climate action. For example, a focus on achieving Goal 7 (Affordable and Clean Energy) that also considers Goal 13 (Climate Action) will prioritise low or zero-emissions energy technologies. This in turn delivers health benefits and saves lives (Goal 3) through improved air quality, which also boosts economic productivity (Goal 8).

Read more: Climate change set to increase air pollution deaths by hundreds of thousands by 2100

Therefore efforts to limit global temperature rise to below 2℃ must be considered within the context of the Sustainable Development Goals. These global goals are intrinsically linked to solving climate change.

But significant barriers prevent developing countries from adopting low-emissions plans and ambitious climate action. Decarbonisation is often not a priority for less developed countries, compared to key issues such as economic growth and poverty alleviation. Many countries struggle with gaps in technical and financial expertise, a lack of resources and inconsistent energy data. More fundamentally, poor governance and highly complex or fragmented decision-making also halt progress.

It’s in the best interest of the entire world to help developing countries navigate these problems. Creating long-term, lowest-emissions strategies, shaped to each country’s unique circumstances, is crucial to maintaining growth while reducing emissions. Addressing these problems is the key to unlocking the financial flows required to move to a just, equitable and environmentally responsible future.

The Conversation

Meg Argyriou does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Politics podcast: AGL chief economist Tim Nelson on what to do with Liddell

Thu, 2017-09-21 13:53

In the eye of the storm over energy policy is Liddell, an ageing coal-fired power station owned by energy giant AGL.

Prime Minister Malcolm Turnbull has twisted the arm of AGL chief executive Andy Vesey to take to the company’s board the proposition that it should extend the plant’s life beyond its scheduled 2022 closure, or alternatively sell it to an operator that would carry it on.

AGL chief economist Tim Nelson says the company is running the rule over both options but he argues preserving the power station may not be the best solution. “The decision is not just economic, it is also a commitment on carbon risk.”

Nelson says the emissions profile of extending the life of coal-fired power stations is inconsistent with current commitments in AGL’s greenhouse gas policy and the government’s undertakings under the Paris climate accord. Add to that the hefty rehabilitation costs for 50-year-old Liddell and it seems “the numbers don’t add up”.

While AGL is reviewing the government’s options, it is so far sticking to its alternatives for the site – repurposing it, or repowering it with zero-emissions technology.

But without a coherent policy framework it is hard to see an orderly transition in the energy market. Nelson says a clean energy target could fix the uncertainty, encouraging the replacement of old technology with a combination of renewables and “complementary capacity from flexible sources”.

The Conversation

Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Review of historic stock routes may put rare stretches of native plants and animals at risk

Thu, 2017-09-21 10:54
The travelling stock routes are a precious national resource. Author provided

Since the 19th century, Australian drovers have moved their livestock along networks of stock routes. Often following traditional Indigenous pathways, these corridors and stepping-stones of remnant vegetation cross the heavily cleared wheat and sheep belt in central New South Wales.

The publicly owned Travelling Stock Reserve network of New South Wales is now under government review, which could see the ownership of much of this crown land move into private hands.

But in a study published today in the Australian Journal of Botany, we suggest that privatising stock routes may endanger vital woodlands and put vulnerable species at risk.

Read more: How ancient Aboriginal star maps have shaped Australia’s highway network

The review will establish how individual reserves are currently being used. Although originally established for graziers, the patches of bush in the network are now more likely to be used for recreation, cultural tourism, biodiversity conservation, apiary and drought-relief grazing.

This shift away from simply moving livestock has put pressure on the government to seek “value” in the network. The review will consider proposals from individuals and organisations to buy or acquire long-term leases for particular reserves.

It is likely that most proposals to purchase travelling stock reserves would come from existing agricultural operations.

A precious national resource

Travelling stock reserves across New South Wales represent some of the most intact examples of now-endangered temperate grassy woodland ecosystems.

Our research found that changing the status or use of these reserves could seriously harm these endangered woodlands. The reserves criss-cross highly developed agricultural landscapes that contain very little remnant vegetation (areas where the bush is relatively untouched). Travelling stock reserves are therefore crucially important patches of habitat and resources for native plants and animals.

This isn’t the first time a change in ownership of travelling stock reserves has been flagged. Over the last century, as modern transport meant the reserves were used less and less for traditional droving, pressure to release these areas for conventional agriculture has increased.

Historic stock routes are still used for grazing cattle. Daniel Florance, Author provided

To understand what a change in land tenure might mean to the conservation values of these woodlands, we spent five years monitoring vegetation in stock reserves in comparison to remnant woodlands on private farmland.

We found that travelling stock reserves contained a higher number of native plant species, more native shrubs, and fewer exotic plants than woodland remnants on private land.

The higher vegetation quality in travelling stock reserves was maintained over the five years, which included both the peak of Australia’s record-breaking Millennium Drought and the heavy rainfall that followed, referred to as the “Big Wet”.

The take-home message was that remnant woodland on public land was typically in better nick than woodland in private hands.

Importantly, other studies have found that this high-quality vegetation is critical for many threatened and vulnerable native animals. For example, eastern yellow robins and black-chinned honeyeaters occur more frequently in places with more shrubs growing below the canopy.

The vulnerable superb parrot also uses travelling stock reserves for habitat. Damian Michael, Author provided

The contrast we saw between woodlands in travelling stock reserves and private land reflects the different ways they’re typically managed. Travelling stock reserves have a history of periodic low-intensity grazing, mostly by cattle, with long rest periods. Woodland on active farms tends to be more intensively grazed, by sheep and cattle, often without any strategic rest periods.

The stock reserves’ future

The uncertain future of travelling stock reserves casts doubt on the state of biodiversity across New South Wales.

The current review of travelling stock reserves is considering each reserve in isolation. This flies in the face of the belief of many managers, practitioners and researchers that the true value of these reserves lies in the integrity of the entire network – that the whole is greater than the sum of its parts.

Travelling stock reserves protect threatened species, allow the movement of wildlife, are seed sources for habitat restoration efforts, and support the ecosystem of adjacent agricultural land. These benefits depend on the quality of the remnant vegetation, which is determined by the grazing regime imposed by whoever owns and manages the land.

Of course, not all travelling stock reserves are in good condition. Some are subject to high-intensity livestock grazing (for example, under longer-term grazing leases) coupled with a lack of funding to manage and enhance natural values.

Changing the land tenure status of travelling stock reserves risks increasing grazing pressure, which our study suggests would reduce ecosystem quality and decrease their conservation value.

The travelling stock routes are important parts of our ecosystem, our national heritage, and our landscape. They can best be preserved by remaining as public land, so the entire network can be managed sustainably.

This requires adequate funding for the Local Land Services, so they can appropriately manage pest animals, weeds, erosion, illegal firewood harvesting and rubbish dumping.

Travelling stock reserves are more than just The Long Paddock – they are important public land, whose ecological value has been maintained under public control. They should continue to be managed for the public good.

The Conversation

Luke S. O'Loughlin has received funding from the Hermon Slade Foundation and the Holsworth Wildlife Endowment Fund

Damian Michael receives funding from the Australian Government (National Environmental Science Program) and the Murray Local Land Services

David Lindenmayer receives funding from the Australian Research Council, the Australian Government (National Environmental Science Program), the Ian Potter Foundation, the Vincent Fairfax Family Foundation, the Murray Local Land Services and the Riverina Local Land Services

Thea O'Loughlin received funding from the Murray Local Land Services.

Categories: Around The Web

Want energy storage? Here are 22,000 sites for pumped hydro across Australia

Thu, 2017-09-21 06:36

The race is on for storage solutions that can help provide secure, reliable electricity supply as more renewables enter Australia’s electricity grid.

With the support of the Australian Renewable Energy Agency (ARENA), we have identified 22,000 potential pumped hydro energy storage (PHES) sites across all states and territories of Australia. PHES can readily be developed to balance the grid with any amount of solar and wind power, all the way up to 100%, as ageing coal-fired power stations close.

Solar photovoltaics (PV) and wind are now the leading two generation technologies in terms of new capacity installed worldwide each year, with coal in third spot (see below). PV and wind are likely to accelerate away from other generation technologies because of their lower cost, large economies of scale, low greenhouse emissions, and the vast availability of sunshine and wind.

New generation capacity installed worldwide in 2016. ANU/ARENA, Author provided

Although PV and wind are variable energy resources, the approaches to support them to achieve a reliable 100% renewable electricity grid are straightforward:

  • Energy storage in the form of pumped hydro energy storage (PHES) and batteries, coupled with demand management; and

  • Strong interconnection of the electricity grid between states using high-voltage power lines spanning long distances (in the case of the National Electricity Market, from North Queensland to South Australia). This allows wind and PV generation to access a wide range of weather, climate and demand patterns, greatly reducing the amount of storage needed.

PHES accounts for 97% of energy storage worldwide because it is the cheapest form of large-scale energy storage, with an operational lifetime of 50 years or more. Most existing PHES systems require dams located in river valleys. However, off-river PHES has vast potential.

Read more: How pushing water uphill can solve our renewable energy issues.

Off-river PHES requires pairs of modestly sized reservoirs at different altitudes, typically with an area of 10 to 100 hectares. The reservoirs are joined by a pipe with a pump and turbine. Water is pumped uphill when electricity generation is plentiful; then, when generation tails off, electricity can be dispatched on demand by releasing the stored water downhill through the turbine. Off-river PHES typically delivers maximum power for between five and 25 hours, depending on the size of the reservoirs.

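The energy arithmetic behind off-river PHES is straightforward: the recoverable energy is the gravitational potential energy of the stored water (density × gravity × head × volume), discounted by the turbine efficiency. Here is a minimal sketch in Python; the reservoir size, depth, head and efficiency are illustrative assumptions, not figures from the site survey.

    # Illustrative off-river PHES storage estimate. The reservoir size,
    # depth, head and efficiency are assumptions, not the survey's figures.
    RHO = 1000   # density of water, kg/m^3
    G = 9.81     # gravitational acceleration, m/s^2

    def phes_energy_gwh(head_m, area_ha, depth_m, efficiency=0.9):
        """Recoverable energy of one reservoir pair, in GWh."""
        volume_m3 = area_ha * 10_000 * depth_m        # usable water volume
        energy_j = RHO * G * head_m * volume_m3 * efficiency
        return energy_j / 3.6e12                      # joules -> GWh

    # A 50 ha upper reservoir, 20 m deep, with a 300 m head difference:
    print(round(phes_energy_gwh(300, 50, 20), 1))     # ~7.4 GWh

On these assumptions, a single modest reservoir pair stores several gigawatt-hours, which sits comfortably within the range of the sites described below.
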
Most of the potential PHES sites we have identified in Australia are off-river. All 22,000 of them are outside national parks and urban areas.

The locations of these sites are shown below. Each site has between 1 gigawatt-hour (GWh) and 300GWh of storage potential. To put this in perspective, our earlier research showed that Australia needs just 450GWh of storage capacity (and 20GW of generation power) spread across a few dozen sites to support a 100% renewable electricity system.

In other words, Australia has so many good sites for PHES that only the best 0.1% of them will be needed. Developers can afford to be choosy with this significant oversupply of sites.

Pumped hydro sites in Australia. ANU/ARENA, Author provided

Here is a state-by-state breakdown of sites (detailed maps of sites, images and information can be found here):

NSW/ACT: Thousands of sites scattered over the eastern third of the state

Victoria: Thousands of sites scattered over the eastern half of the state

Tasmania: Thousands of sites scattered throughout the state outside national parks

Queensland: Thousands of sites along the Great Dividing Range within 200km of the coast, including hundreds in the vicinity of the many wind and PV farms currently being constructed in the state

South Australia: Moderate number of sites, mostly in the hills east of Port Pirie and Port Augusta

Western Australia: Concentrations of sites in the east Kimberley (around Lake Argyle), the Pilbara and the Southwest; some are near mining sites including Kalgoorlie. Fewer large hills than other states, and so the minimum height difference has been set at 200m rather than 300m.

Northern Territory: Many sites about 300km south-southwest of Darwin; a few sites within 200km of Darwin; many good sites in the vicinity of Alice Springs. Minimum height difference also set at 200m.

The maps below show synthetic Google Earth images for potential upper reservoirs in two site-rich regions (more details on the site search are available here). There are many similarly site-rich regions across Australia. The larger reservoirs shown in each image are of such a scale that only about a dozen of similar size distributed across the populated regions of Australia would be required to stabilise a 100% renewable electricity system.

Araluen Valley near Canberra. At most, one of the sites shown would be developed. ANU/ARENA, Author provided

Townsville, Queensland. At most, one of the sites shown would be developed. ANU/ARENA, Author provided

The chart below shows the largest identified off-river PHES site in each state in terms of energy storage potential. Also shown for comparison are the Tesla battery and the solar thermal systems to be installed in South Australia, and the proposed Snowy 2.0 system.

Largest identified off-river PHES sites in each state, together with other storage systems for comparison. ANU/ARENA, Author provided

The map below shows the location of PHES sites in Queensland together with PV and wind farms currently in an advanced stage of development, as well as the location of the Galilee coal prospect. It is clear that developers of PV and wind farms will be able to find a PHES site close by if needed for grid balancing.

Solar PV (yellow) and wind (green) farms currently in an advanced stage of development in Queensland, together with the Galilee coal prospect (black) and potential PHES sites (blue). ANU/ARENA, Author provided

Annual water requirements of a PHES-supported 100% renewable electricity grid would be less than one third that of the current fossil fuel system, because wind and PV do not require cooling water. About 3,600ha of PHES reservoir is required to support a 100% renewable electricity grid for Australia, which is 0.0005% of Australia’s land area, and far smaller than the area of existing water storages.

PHES, batteries and demand management are all likely to have prominent roles as the grid transitions to 50-100% renewable energy. Currently, about 3GW per year of wind and PV are being installed. If this continued until 2030 it would be enough to supply half of Australia’s electricity consumption. If this rate is doubled then Australia will reach 100% renewable electricity in about 2033.

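As a rough cross-check of these build-rate claims, the sketch below assumes a capacity factor and an annual demand figure that are our own illustrative inputs, not numbers from the article.

    # Rough check of the build-rate arithmetic. The capacity factor and
    # demand are illustrative assumptions; the 3 GW/yr rate is the article's.
    BUILD_RATE_GW = 3        # current annual wind + PV installations
    CAPACITY_FACTOR = 0.3    # assumed average for a wind/PV mix
    DEMAND_TWH = 200         # assumed annual Australian electricity demand

    added_twh_per_year = BUILD_RATE_GW * CAPACITY_FACTOR * 8760 / 1000
    years_to_half = (DEMAND_TWH / 2) / added_twh_per_year
    print(round(added_twh_per_year, 1))   # each year's build adds ~7.9 TWh/yr
    print(round(years_to_half))           # ~13 years, i.e. around 2030

Doubling the build rate roughly halves the time needed for the second half of demand, which lands near the 2033 estimate above.
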
Fast-tracked development of a few excellent PHES sites could be completed by 2022, to balance the grid when Liddell and other coal-fired power stations close.

The Conversation

Andrew Blakers receives funding from the Australian Renewable Energy Agency

Matthew Stocks receives funding from the Australian Renewable Energy Agency for R&D projects on solar photovoltaics and integration of renewable energy. He owns shares in Origin Energy.

Bin Lu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

More than 1,200 scientists urge rethink on Australia's marine park plans

Thu, 2017-09-21 06:36

The following is a statement from the Ocean Science Council of Australia, an internationally recognised independent group of university-based Australian marine researchers, and signed by 1,286 researchers from 45 countries and jurisdictions, in response to the federal government’s draft marine parks plans.

We, the undersigned scientists, are deeply concerned about the future of the Australian Marine Parks Network and the apparent abandoning of science-based policy by the Australian government.

On July 21, 2017, the Australian government released draft management plans that recommend how the Marine Parks Network should be managed. These plans are deeply flawed from a science perspective.

Of particular concern to scientists is the government’s proposal to significantly reduce high-level or “no-take” protection (Marine National Park Zone IUCN II), replacing it with partial protection (Habitat Protection Zone IUCN IV), the benefits of which are at best modest but more generally have been shown to be inadequate.

Read more: Australia’s new marine parks plan is a case of the emperor’s new clothes.

The 2012 expansion of Australia’s Marine Parks Network was a major step forward in the conservation of marine biodiversity, providing protection to habitats and ecological processes critical to marine life. However, there were flaws in the location of the parks and their planned protection levels, with barely 3% of the continental shelf, the area subject to greatest human use, afforded high-level protection status, and most of that of residual importance to biodiversity.

The government’s 2013 Review of the Australian Marine Parks Network had the potential to address these flaws and strengthen protection. However, the draft management plans have proposed severe reductions in high-level protection of almost 400,000 square kilometres – that is, 46% of the high-level protection in the marine parks established in 2012.

Commercial fishing would be allowed in 80% of the waters within the marine parks, including activities assessed by the government’s own risk assessments as incompatible with conservation. Recreational fishing would occur in 97% of Commonwealth waters up to 100km from the coast, ignoring the evidence documenting the negative impacts of recreational fishing on biodiversity outcomes.

Under the draft plans:

  • The Coral Sea Marine Park, which links the iconic Great Barrier Reef Marine Park to the waters of New Caledonia’s Exclusive Economic Zone (also under consideration for protection), has had its Marine National Park Zones (IUCN II) reduced in area by approximately 53% (see map below)

  • Six of the largest marine parks have had the area of their Marine National Park Zones IUCN II reduced by between 42% and 73%

  • Two marine parks have been entirely stripped of any high-level protection, leaving 16 of the 44 marine parks created in 2012 without any form of Marine National Park IUCN II protection.

Proposed Coral Sea Marine Park zoning, as recommended by independent review (left) and in the new draft plan (right), showing the proposed expansion of partial protection (yellow) vs full protection (green). From http://www.environment.gov.au/marinereservesreview/reports and https://parksaustralia.gov.au/marine/management/draft-plans/

The replacement of high-level protection with partial protection is not supported by science. The government’s own economic analyses also indicate that such a reduction in protection offers little more than marginal economic benefits to a very small number of commercial fishery licence-holders.

Retrograde step

This retrograde step by Australia’s government is a matter of both national and international significance. Australia has been a world leader in marine conservation for decades, beginning with the establishment of the Great Barrier Reef Marine Park in the 1970s and its expanded protection in 2004.

At a time when oceans are under increasing pressure from overexploitation, climate change, industrialisation, and plastics and other forms of pollution, building resilience through highly protected Marine National Park IUCN II Zones is well supported by decades of science. This research documents how high-level protection conserves biodiversity, enhances fisheries and assists ecosystem recovery, serving as essential reference areas against which areas that are subject to human activity can be compared to assess impact.

The establishment of a strong backbone of high-level protection within Marine National Park Zones throughout Australia’s Exclusive Economic Zone would be a scientifically based contribution to the protection of intact marine ecosystems globally. Such protection is consistent with the move by many countries, including Chile, France, Kiribati, New Zealand, Russia, the UK and US to establish very large no-take marine reserves. In stark contrast, the implementation of the government’s draft management plans would see Australia become the first nation to retreat on ocean protection.

Australia’s oceans are a global asset, spanning tropical, temperate and Antarctic waters. They support six of the seven known species of marine turtles and more than half of the world’s whale and dolphin species. Australia’s oceans are home to more than 20% of the world’s fish species and are a hotspot of marine endemism. By properly protecting them, Australia will be supporting the maintenance of our global ocean heritage.

The finalisation of the Marine Parks Network remains a remarkable opportunity for the Australian government to strengthen the levels of Marine National Park Zone IUCN II protection and to do so on the back of strong evidence. In contrast, implementation of the government’s retrograde draft management plans undermines ocean resilience and would allow damaging activities to proceed in the absence of proof of impact, ignoring the fact that a lack of evidence does not mean a lack of impact. These draft plans deny the science-based evidence.

We encourage the Australian government to increase the number and area of Marine National Park IUCN II Zones, building on the large body of science that supports such decision-making. This means achieving a target of at least 30% of each marine habitat in these zones, which is supported by Australian and international marine scientists and affirmed by the 2014 World Parks Congress in Sydney and the IUCN Members Assembly at the 2016 World Conservation Congress in Hawaii.

You can read a fully referenced version of the science statement here, and see the list of signatories here.

The Conversation

Jessica Meeuwig does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

A cleanish energy target gets us nowhere

Wed, 2017-09-20 14:40
Shutterstock

It seems that the one certainty about any clean energy target set by the present government is that it will not drive sufficient progress towards a clean, affordable, reliable energy future. At best, it will provide a safety net to ensure that some cleanish energy supply capacity is built.

Future federal governments will have to expand or complement any target set by this government, which is compromised by its need to pander to its rump. So a cleanish energy target will not provide investment certainty for a carbon-emitting power station unless extraordinary guarantees are provided. These would inevitably be challenged in parliament and in the courts.

Read more: Turnbull is pursuing ‘energy certainty’ but what does that actually mean?

Even then, the unstoppable evolution of our energy system would leave an inflexible baseload power station without a market for much of the electricity it could generate. Instead, we must rely on a cluster of other strategies to do the heavy lifting of driving our energy market forward.

The path forward

It’s clear that consumers large and small are increasingly investing “behind the meter” in renewable energy technology, smart management systems, energy efficiency and energy storage. In so doing, they are buying insurance against future uncertainty, capturing financial benefits, and reducing their climate impacts. They are being helped by a wide range of emerging businesses and new business models, and existing energy businesses that want to survive as the energy revolution rolls on.

The Australian Energy Market Operator (AEMO) is providing critically important information on what’s needed to deliver energy objectives. The recently established Energy Security Board will work to make sure that what’s needed is done – in one way or another. Other recommendations from the Finkel Review are also helping to stabilise the electricity situation.

The recent AEMO/ARENA demand response project and various state-level energy efficiency retailer obligation schemes and renewable energy targets are examples of how important energy solutions can be driven outside the formal National Electricity Market. They can bypass the snail-paced progress of reforming the NEM.

States will play a key role

State governments are setting their own renewable energy targets, based on the successful ACT government “contracts for difference” approach, discussed below. Victoria has even employed the architect of the ACT scheme, Simon Corbell. Local governments, groups of businesses and communities are developing consortia to invest in clean energy solutions using similar models.

Some see state-level actions as undermining the national approach and increasing uncertainty. I see them as examples of our multi-layered democratic system at work. Failure at one level provokes action at another.

State-level actions also reflect increasing energy diversity, and the increasing focus on distributed energy solutions. States recognise that they carry responsibilities for energy: indeed, the federal government often tries to blame states for energy failures.

There is increasing action at the network, retail and behind-the-meter levels, driven by business and communities. While national coordination is often desirable, mechanisms other than national government leadership can work to complement national action, to the extent it occurs.

Broader application of the ACT financing model

A key tool will be a shift away from the current RET model to the broader use of variations of the ACT’s contract for difference approach. The present RET model means that project developers depend on both the wholesale electricity price and the price of Large-scale Generation Certificates (LGCs) for revenue. These are increasingly volatile and, over the long term, uncertain. In the past we have seen political interference and low RET targets drive “boom and bust” outcomes.

So, under the present RET model, any project developer faces significant risk, which makes financing more difficult and costly.

The ACT contract for difference approach applies a “market” approach by using a reverse auction, in which rival bidders compete to offer the desired service at lowest cost. It then locks in a stable price for the winners over an agreed period of time.

The approach reduces risk for the project developer, which cuts financing costs. It shifts cost risk (and opportunity) to whoever commits to buy the electricity or other service. The downside risk is fairly small when compared with the insurance of a long-term contract and the opportunity to capture savings if wholesale electricity prices increase.

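Mechanically, a contract for difference settles the gap between the agreed strike price and the wholesale price, so the generator’s effective revenue per megawatt-hour is locked at the strike. A minimal sketch, with invented prices and volumes:

    # Minimal contract-for-difference settlement (illustrative numbers only).
    def cfd_settlement(strike, wholesale, volume_mwh):
        """Top-up paid to the generator; negative means the generator pays back."""
        return (strike - wholesale) * volume_mwh

    # A hypothetical strike price of $70/MWh over 1,000 MWh:
    print(cfd_settlement(70, 55, 1000))   # wholesale below strike: +$15,000
    print(cfd_settlement(70, 95, 1000))   # wholesale above strike: -$25,000

Either way the generator banks the strike price; the counterparty carries both the downside risk and the upside opportunity of wholesale price movements.
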
The ACT government has benefited from this scheme as wholesale prices have risen. It also includes other requirements such as the creation of local jobs. This approach can be applied by agents other than governments, such as the consortium set up by the City of Melbourne.

For business and public sector consumers, the prospect of reasonably stable energy prices, with scope to benefit if wholesale prices rise and limited downside risk, is attractive in a time of uncertainty. For project developers, a stable long-term revenue stream improves project viability.

The approach can also potentially be applied to other aspects of energy service provision, such as demand response, grid stabilisation or energy efficiency. It can be combined with the traditional “power purchase agreement” model, where the buyer of the energy guarantees a fixed price but the project developer carries the risk and opportunity of market price variations. And it can apply to just part of a project’s output, to underpin it.

While sorting out wholesale markets is important, we need to remember that this is just part of the energy bill. Energy waste, network operations, retailing and pricing structures such as high fixed charges must also be addressed. Some useful steps are being taken, but much more work is needed.

The Conversation

Disclosure

Alan Pears has worked for government, business, industry associations, public interest groups and universities on energy efficiency, climate response and sustainability issues since the late 1970s. He is now an honorary Senior Industry Fellow at RMIT University and a consultant, as well as an adviser to a range of industry associations and public interest groups. His investments in managed funds include firms that benefit from growth in clean energy. He has shares in Hepburn Wind.

Categories: Around The Web

Vietnam's typhoon disaster highlights the plight of its poorest people

Wed, 2017-09-20 05:38

Six people lost their lives when Typhoon Doksuri, the most powerful storm to hit the country in a decade, smashed into central Vietnam on September 16.

Although widespread evacuations prevented a higher death toll, the impact on the region’s most vulnerable people will be extensive and lasting.

Read more: Typhoon Haiyan: a perfect storm of corruption and neglect.

Government sources report that more than 193,000 properties have been damaged, including 11,000 that were flooded. The storm also caused widespread damage to farmland, roads, and water and electricity infrastructure. Quang Binh and Ha Tinh provinces bore the brunt of the damage.

Central Vietnam is often in the path of tropical storms and depressions that form in the East Sea, which can intensify to form tropical cyclones known as typhoons (the Pacific equivalent of an Atlantic hurricane).

Typhoon Doksuri developed and tracked exactly as forecast, meaning that evacuations were relatively effective in saving lives. What’s more, the storm moved quickly over the affected area, delivering only 200-300 mm of rainfall and sparing the region the severe flooding now being experienced in Thailand.

Doksuri is just one of a spate of severe tropical cyclones that have formed in recent weeks, in both the Pacific and Atlantic regions. Hurricanes Harvey, Irma and, most recently, Maria have attracted global media coverage, much of it focused on rarely considered angles such as urban planning, poverty, poor development, politics, the media coverage of disasters – as well as the perennial question of climate change.

Disasters are finally being talked about as part of a discourse of systemic oppression – and this is a great step forward.

Vietnam’s vulnerability

In Vietnam, the root causes of disasters exist below the surface. The focus remains on the natural hazards that trigger disasters, rather than on the vulnerable conditions in which many people are forced to live.

Unfortunately, the limited national disaster data in Vietnam does not allow an extensive analysis of risk. Our research in central Vietnam is working to fill this gap and to develop more comprehensive flood mitigation measures.

Central Vietnam has a long and exposed coastline. It consists of 14 coastal provinces and five provinces in the Central Highlands. The Truong Son mountain range rises to the west and the plains that stretch to the coast are fragmented and narrow. River systems are dense, short and steep, with rapid flows.

These physical characteristics often combine with widespread human vulnerability, to deadly effect. We can see this in the impact of Typhoon Doksuri, but also to a lesser extent in the region’s annual floods.

Flood risk map by province using Multi-Criteria Decision-Making method and the national disaster database. Author provided

Rapid population growth, industrial development and agricultural expansion have all increased flood risk, especially in Vietnam’s riverine and coastal areas. Socially marginalised people often have to live in the most flood-prone places, sometimes as a result of forced displacement.

Floods and storms therefore have a disproportionately large effect on poorer communities. Most people in central Vietnam depend on their natural environment for their livelihood, and a disaster like Doksuri can bring lasting suffering to a region where 30-50% of people are already in poverty.

When disaster does strike, marginalised groups face even more difficulty because they typically lack access to public resources such as emergency relief and insurance.

The rural poor will be particularly vulnerable after this storm. Affected households have received limited financial support from the local government, and many will depend entirely on charity for their recovery.

Better research, less bureaucracy

This is not to say that Vietnam’s government did not mount a significant effort to prepare for and respond to Typhoon Doksuri. But typically for Vietnam, where only the highest levels of government are trusted with important decisions, the response was bureaucratic and centralised.

This approach can overlook the input of qualified experts, and lead to decisions being taken without enough data about disaster risk.

Our research has generated a more detailed picture of disaster risk (focused on flood hazard) in the region. We have looked beyond historical loss statistics and collected data on hazards, exposure and vulnerability in Quang Nam province.

Left: flooding hazard map for Quang Nam province. Right: risk of flooding impacts on residents, calculated on the basis of flood hazards from the left map, plus people’s exposure and vulnerability. Author provided

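One way to sketch the mechanics of such a risk map is as a weighted combination of normalised hazard, exposure and vulnerability layers, scored cell by cell. The weights and cell values below are invented for illustration; they are not the study’s data or its exact weighting method.

    # Illustrative multi-criteria flood risk score. The weights and cell
    # values are invented; not the study's data or weighting scheme.
    WEIGHTS = {"hazard": 0.5, "exposure": 0.25, "vulnerability": 0.25}

    def risk_score(cell):
        """Weighted sum of normalised (0-1) criterion values for one map cell."""
        return sum(WEIGHTS[name] * value for name, value in cell.items())

    # Two hypothetical map cells:
    riverside = {"hazard": 0.9, "exposure": 0.8, "vulnerability": 0.7}
    upland = {"hazard": 0.2, "exposure": 0.3, "vulnerability": 0.4}
    print(risk_score(riverside))   # ~0.82: ranks as high risk
    print(risk_score(upland))      # ~0.28: ranks as low risk

Combining the three layers is what separates a risk map from a plain hazard map: a severe flood zone with few, resilient residents can score lower than a milder zone full of vulnerable households.
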
Our findings show that much more accurate, sensitive and targeted flood protection is possible. The challenge is to provide it on a much wider scale, particularly in poor regions of the world.

Reduce risk, and avoid creating new risk

An effective risk management approach can help to reduce the impacts of flooding in central Vietnam. Before a disaster ever materialises, we can work to reduce risk – and avoid activities that exacerbate it, such as land grabbing for development, displacement of the poor, environmental degradation and discrimination against minorities.

Read more: Irma and Harvey: very different storms, but both affected by climate change.

It is critical that subject experts, particularly scientists, are involved in decisions about disaster risk – in Vietnam and around the world. There must be a shift to more proactive approaches, guided by deep knowledge both of the local context and of the latest scientific advances.

Our maps will help planners and politicians to recognise high-risk areas, prepare flood risk plans, and set priorities for both flood defences and responses to vulnerability. The maps are also valuable tools for communication.

But at the same time as emphasising data-driven decisions, we also need to advocate for a humanising approach in dealing with some of the most oppressed, marginalised, poor and disadvantaged members of the global community.

The Conversation

Jason von Meding receives funding from the Australian government and Save the Children for collaborative projects in Vietnam.

Chinh Luu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Keeping global warming to 1.5 degrees: really hard, but not impossible

Tue, 2017-09-19 09:35
The window for staving off the worst of climate change is wider than we thought, but still pretty narrow. Tatiana Grozetskaya/Shutterstock.com

The Paris climate agreement has two aims: “holding the increase in global average temperature to well below 2℃ above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5℃”. The more ambitious of these is not yet out of reach, according to our new research.

Despite previous suggestions that this goal may be a lost cause, our calculations suggest that staying below 1.5℃ looks scientifically feasible, if extremely challenging.

Read more: What is a pre-industrial climate and why does it matter?

Climate targets such as the 1.5℃ and 2℃ goals have been interpreted in various ways. In practice, however, these targets are probably best seen as focal points for negotiations, providing a common basis for action.

To develop policies capable of hitting these targets, we need to know the size of the “carbon budget” – the total amount of greenhouse emissions consistent with a particular temperature target. Armed with this knowledge, governments can set policies designed to reduce emissions by the corresponding amount.

In a study published in Nature Geoscience, we and our international colleagues present a new estimate of how much carbon budget is left if we want to remain below 1.5℃ of global warming relative to pre-industrial temperatures (bearing in mind that we are already at around 0.9℃ for the present decade).

We calculate that by limiting total CO₂ emissions from the beginning of 2015 to around 880 billion tonnes of CO₂ (240 billion tonnes of carbon), we would give ourselves a two-in-three chance of holding warming to less than 0.6℃ above the present decade – that is, below 1.5℃ relative to pre-industrial temperatures. This may sound like a lot, but to put it in context, if CO₂ emissions were to continue to increase along current trends, even this new budget would be exhausted in less than 20 years (see Climate Clock). This budget is consistent with the 1.5℃ goal, given the warming that humans have already caused, and is substantially greater than the budgets previously inferred from the 5th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), released in 2013-14.

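The headline numbers are easy to sanity-check, assuming current emissions of roughly 40 billion tonnes of CO₂ a year (an illustrative figure, not one from the paper):

    # Back-of-envelope check on the carbon budget. The 40 Gt/yr rate and
    # the 2% growth figure are illustrative assumptions, not from the paper.
    BUDGET_GT = 880      # remaining CO2 budget from the start of 2015, Gt
    EMISSIONS_GT = 40.0  # assumed current annual CO2 emissions, Gt/yr

    print(round(BUDGET_GT / (44 / 12)))   # ~240 Gt carbon, as quoted above
    print(BUDGET_GT / EMISSIONS_GT)       # 22.0 years at a constant rate

    # With emissions still growing at, say, 2% a year, it runs out sooner:
    total, rate, years = 0.0, EMISSIONS_GT, 0
    while total < BUDGET_GT:
        total += rate
        rate *= 1.02
        years += 1
    print(years)                          # 19 years: under 20, as stated

(The 44/12 factor converts a mass of CO₂ to the mass of carbon it contains.)
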
This does not mean that the IPCC got it wrong. Having predated the Paris Agreement, the IPCC report included very little analysis of the 1.5℃ target, which only became a political option during the Paris negotiations themselves. The IPCC did not develop a thorough estimate of carbon budgets consistent with 1.5℃, for the simple reason that nobody had asked them to.

The new study contains a far more comprehensive analysis of the factors that help to determine carbon budgets, such as model-data comparisons, the treatment of non-CO₂ gases, and the issue of the maximum rates at which emissions can feasibly be reduced.

Tough task

The emissions reductions required to stay within this budget remain extremely challenging. CO₂ emissions would need to decline by 4-6% per year for several decades. There are precedents for this, but not happy ones: these kinds of declines have historically been seen in events such as the Great Depression, the years following World War II, and during the collapse of the Soviet Union – and even these episodes were relatively brief.

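There is a tidy way to see why the required decline sits in this range: if emissions fall by a constant fraction r each year from a starting rate E₀, total future emissions form a geometric series summing to roughly E₀/r. Again taking an assumed 40 billion tonnes a year as the starting point:

    # Cumulative future emissions under a steady fractional decline.
    # The 40 Gt/yr starting rate is assumed; 4-6% is the article's range.
    E0 = 40.0  # assumed current annual emissions, Gt CO2/yr
    for decline in (0.04, 0.05, 0.06):
        total = E0 / decline            # sum of E0*(1-r)**t for t = 0, 1, 2, ...
        print(decline, round(total))    # 4% -> 1000 Gt, 5% -> 800, 6% -> 667

Declines much slower than 4-5% a year would overshoot the 880-billion-tonne budget, which is why cuts of this order are needed.
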
Yet it would be wrong to conclude that greenhouse emissions can only plummet during times of economic collapse and human misery. Really, there is no historical analogy to show how rapidly human societies can rise to this challenge, because there is also no analogy for the matrix of problems (and opportunities) posed by climate change.

There are several optimistic signs that peak emissions may be near. From 2000 to 2013 global emissions climbed sharply, largely because of China’s rapid development. But global emissions may now have plateaued, and given the problems that China encountered with pollution it is unlikely that other nations will attempt to follow the same path. Rapid reduction in the price of solar and wind energy has also led to substantial increases in renewable energy capacity, which also offers hope for future emissions trajectories.

In fact, we do not really know how fast we can decarbonise an economy while improving human lives, because so far we haven’t tried very hard to find out. Politically, climate change is an “aggregate efforts global public good”, which basically means everyone needs to pull together to be successful.

This is hard. The problem with climate diplomacy (and the reason it took so long to broker a global agreement) is that the incentives for nations to tackle climate change are collectively strong but individually weak.

Read more: Paris climate targets aren’t enough but we can close the gap.

This is, unfortunately, the nature of the problem. But our research suggests that a 1.5℃ world, dismissed in some quarters as a pipe dream, remains physically possible.

Whether it is politically possible depends on the interplay between technology, economics, and politics. For the world to achieve its most ambitious climate aspiration, countries need to set stronger climate pledges for 2030, and then keep making deep emissions cuts for decades.

No one is saying it will be easy. But our calculations suggest that it can be done.

The Conversation

Dave Frame receives funding from the Deep South National Science Challenge and Victoria University of Wellington.

H. Damon Matthews receives funding from the Natural Science and Engineering Research Council of Canada.

Categories: Around The Web

Curious Kids: What happens if a venomous snake bites another snake of the same species?

Mon, 2017-09-18 05:39
Scientists usually use the word "venomous" rather than "poisonous" when they're talking about snakes. Flickr/Sirenz Lorraine, CC BY

This is an article from Curious Kids, a series for children. The Conversation is asking kids to send in questions they’d like an expert to answer. All questions are welcome – serious, weird or wacky!

If a lethally poisonous snake bites another lethally poisonous snake of the same species does the bitten snake suffer healthwise or die? – Ella, age 10, Wagga Wagga.

Hi Ella,

That’s a great question.

If a venomous snake is bitten by another venomous snake of the same species, (for example during a fight or mating), then it will not be affected.

However, if a snake is bitten by a venomous snake of another species, it probably will be affected.

This is probably because snakes have evolved to be immune to venom from their own species, as bites from mates or rivals of the same species happen fairly often.

But a snake being regularly bitten by another snake from a different species? It’s unlikely that would happen very often, so snakes haven’t really had a chance to develop immunity to venom from other species.

Read more: Guam’s forests are being slowly killed off – by a snake

Scientists often collect venom from snakes to create anti-venoms. Kalyan Varma/Wikimedia

Snakes can break down venom in the stomach

Many people believe that snakes are immune to their own venom so that they don’t get harmed when eating an animal they have just injected full of venom.

But in fact, they don’t need to be immune. Scientists have found that special digestive chemicals in the stomachs of most vertebrates (animals with backbones) break down snake venom very quickly. So the snake’s stomach can quickly deal with the venom in the animal it just ate before it has a chance to harm the snake.

People that have snakes as pets often see this. If one venomous snake bites a mouse and injects venom into it, for example, you can then feed that same dead mouse to another snake. The second snake won’t die.

Read more: Curious Kids: How do snakes make an ‘sssssss’ sound with their tongue poking out?

The eastern brown snake, which is found in Australia, is one of the most venomous snakes in the world. Flickr/Justin Otto, CC BY

The difference between venom and poison

By the way, scientists usually use the word “venomous” rather than “poisonous” when they’re talking about snakes. Many people often mix those words up. Poisons need to be ingested or swallowed to be dangerous, while venoms need to be injected via a bite or a sting.

Some snakes can inject their toxins into their prey, which makes them venomous. However, there seem to be a couple of snake species that eat frogs and can store the frogs’ toxins in their bodies. This makes the snakes poisonous if their bodies are eaten. Over time, many other animals will have learned that it is not safe to eat those snakes, so this trick helps keep them safe.

Hello, curious kids! Have you got a question you’d like an expert to answer? Ask an adult to send your question to us. You can:

* Email your question to curiouskids@theconversation.edu.au
* Tell us on Twitter by tagging @ConversationEDU with the hashtag #curiouskids, or
* Tell us on Facebook

CC BY-ND

Please tell us your name, age and which city you live in. You can send an audio recording of your question too, if you want. Send as many questions as you like! We won’t be able to answer every question but we will do our best.

The Conversation

Jamie Seymour does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Bacterial baggage: how humans are spreading germs all over the globe

Fri, 2017-09-15 14:21
Bacteria cultured from a sample of air in a public building. Khamkhlai Thanet/Shutterstock.com

Humans are transporting trillions of bacteria around the world via tourism, food and shipping, without stopping to think about the potential damage being caused to bacterial ecosystems.

When we think about endangered species, we typically think of charismatic mammals such as whales, tigers or pandas. But the roughly 5,500 mammal species on Earth amount to a relatively paltry number – one that pales in comparison with bacteria, of which there are at least a million different species.

Despite their vast numbers, little research has been done to understand the impact that modern human practices have on these tiny organisms, which have an important influence on many facets of our lives.

Read more: Microbes: the tiny sentinels that can help us diagnose sick oceans.

In an article published today in Science, my colleagues and I explore how humans move bacteria around the globe, and what this might mean for our own welfare.

Human effects on our planet are so profound that we have entered a new geological age, called the Anthropocene. One of the key features of this new world is the way we affect other organisms. We have altered the distribution of animals and plants, creating problems with feral animals, weeds and other invasive species. We have caused many species to decline so much that they have gone extinct.

There are also grounds for concern over the way humans are affecting bacterial species, and in many cases we are causing the same type of problems that affect larger organisms.

Bacterial population structures are definitely changing, and bacterial species are being transported to new locations, becoming the microbial equivalent of weeds or feral animals. Perhaps some bacteria are even on their way to extinction, although we don’t really have enough information to be certain yet.

How do they get around?

Let’s start by talking about sewage and manure. Animal and human faeces release gut microorganisms back into the environment, and these organisms are vastly different from the organisms that would have been released 100 years ago. This is because humans and our domesticated animals – cows, sheep, goats, pigs and chickens – now comprise 35 times more biomass than all the wild mammals on land.

Human sewage and livestock manure contain very specific subsets of microbes, meaning those populations are enriched and replenished in the environment, at the expense of the native microbes. Sewage and manure also distribute enormous quantities of genes that confer resistance to antibiotics and disinfectants.

Waste water, sewage sludge and manure are used extensively in agriculture. So gut organisms from humans and agricultural animals go on to contaminate foodstuffs. These food products, along with their bacteria, are then shipped around the world.

Then there are the 1.2 billion international tourist movements per year, which also unintentionally transport gut microorganisms to exotic locations. For instance, tourism can rapidly spread antibiotic-resistant pathogens between continents.

It’s not just humans and their food that cause concern – there are also vast quantities of microbe-laden materials that move along with us. Each year, roughly 100 million tonnes of ballast water are discharged from ships in US ports alone. This movement of microorganisms via shipping is changing the distribution of bacteria in the oceans. It also transports pathogens such as the bacterium that causes cholera.

Humans also move vast quantities of sand, soil and rock. It may seem hard to believe, but it is estimated that human activities are now responsible for moving more soil than all natural processes combined. As every gram of soil contains roughly a billion bacteria, this amounts to huge numbers of microorganisms being moved around the planet.

The fallout

Why should we care if bacteria are being spread to new places? Besides the obvious potential for spreading diseases to humans, animals and crops, there are also hidden dangers.

Microorganisms are invisible to the naked eye, so we tend to ignore them and don’t necessarily appreciate their role in how the planet operates. Bacteria are crucial to biogeochemistry – the cycling of nutrients and other chemicals through ecosystems.

Read more: Your microbiome is shared with your family… including your pets.

For instance, before humans invented a way to make fertiliser industrially, every single nitrogen atom in our proteins and DNA had to be chemically captured by a bacterial cell before it could be taken up by plants and then enter the human food chain. The oxygen we breathe is largely made by photosynthetic microorganisms in the oceans (and not mainly by rainforests, as is commonly believed).

Our effects on bacteria have the potential to alter these fundamental bacterial functions. It is vital to gain a better understanding of how humans are affecting microbes’ distribution, their abundance, and their life-sustaining processes. Although bacteria are invisible, we overlook them at our peril.

The Conversation

Michael Gillings receives funding from the Australian Research Council

Categories: Around The Web

After 30 years of the Montreal Protocol, the ozone layer is gradually healing

Fri, 2017-09-15 05:35
Clouds over Australia's Davis Research Station, containing ice particles that activate ozone-depleting chemicals, triggering the annual ozone hole. Barry Becker/BOM/AAD, Author provided

This weekend marks the 30th birthday of the Montreal Protocol, often dubbed the world’s most successful environmental agreement. The treaty, signed on September 16, 1987, is slowly but surely reversing the damage caused to the ozone layer by industrial gases such as chlorofluorocarbons (CFCs).

Each year, during the southern spring, a hole appears in the ozone layer above Antarctica. This is due to the extremely cold temperatures in the winter stratosphere (above 10km altitude) that allow byproducts of CFCs and related gases to be converted into forms that destroy ozone when the sunlight returns in spring.
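
For readers who want a little more of the chemistry: one standard textbook scheme for springtime polar ozone destruction is the ClO-dimer catalytic cycle, sketched below. This is general atmospheric chemistry, not a result from this article. The photolysis step (hν) is why destruction only begins once sunlight returns, and because the chlorine atoms are regenerated on each pass, a single atom can destroy many thousands of ozone molecules.

    \begin{align*}
    2 \times (\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2}) \\
    \mathrm{ClO} + \mathrm{ClO} + M &\rightarrow \mathrm{Cl_2O_2} + M \\
    \mathrm{Cl_2O_2} + h\nu &\rightarrow \mathrm{Cl} + \mathrm{ClOO} \\
    \mathrm{ClOO} + M &\rightarrow \mathrm{Cl} + \mathrm{O_2} + M \\
    \text{Net:}\qquad 2\,\mathrm{O_3} &\rightarrow 3\,\mathrm{O_2}
    \end{align*}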

As ozone-destroying gases are phased out, the annual ozone hole is generally getting smaller – a rare success story for international environmentalism.

Back in 2012, our Saving the Ozone series marked the Montreal Protocol’s silver jubilee and reflected on its success. But how has the ozone hole fared in the five years since?

Read more: What is the Antarctic ozone hole and how is it made?

The Antarctic ozone hole has continued to appear each spring, as it has since the late 1970s. This is expected, as levels of the ozone-destroying halocarbon gases controlled by the Montreal Protocol are still relatively high. The figure below shows that concentrations of these human-made substances over Antarctica have fallen by 14% since their peak in about 2000.

Past and predicted levels of controlled gases in the Antarctic atmosphere, quoted as equivalent effective stratospheric chlorine (EESC) levels, a measure of their contribution to stratospheric ozone depletion. Paul Krummel/CSIRO, Author provided
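
For reference, EESC is conventionally calculated as a weighted sum over the chlorinated and brominated source gases. A simplified form of the standard definition is sketched below; this is from the general literature, not from this article, and published definitions differ in detail (for example, in how release factors are normalised).

    \mathrm{EESC}(t) = \sum_{i \in \mathrm{Cl\ gases}} n_i f_i \rho_i(t-\Gamma) \; + \; \alpha \sum_{j \in \mathrm{Br\ gases}} n_j f_j \rho_j(t-\Gamma)

Here n is the number of chlorine (or bromine) atoms per molecule, f is the fraction of the gas released in the stratosphere, ρ is the surface mixing ratio lagged by the mean transport time Γ to the polar stratosphere (around 5-6 years for Antarctic conditions), and α (often taken to be about 60) accounts for bromine's much greater per-atom ozone-destroying efficiency.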

It typically takes a few decades for these gases to cycle between the lower atmosphere and the stratosphere, and then ultimately to disappear. The most recent official assessment, released in 2014, predicted that it will take 30-40 years for the Antarctic ozone hole to shrink to the size it was in 1980.

Signs of recovery

Monitoring the ozone hole’s gradual recovery is made more complicated by variations in atmospheric temperatures and winds, and the amount of microscopic particles called aerosols in the stratosphere. In any given year these can make the ozone hole bigger or smaller than we might expect purely on the basis of halocarbon concentrations.

Launching an ozone-measuring balloon from Australia’s Davis Research Station in Antarctica. Barry Becker/BOM/AAD, Author provided

The 2014 assessment indicated that the size of the ozone hole varied more during the 2000s than during the 1990s. While this might suggest it has become harder to detect the healing effects of the Montreal Protocol, we can nevertheless tease out recent ozone trends with the help of sophisticated atmospheric chemistry models.

Reassuringly, a recent study showed that the size of the ozone hole each September has shrunk overall since the turn of the century, and that more than half of this shrinking trend is consistent with reductions in ozone-depleting substances. However, another study warns that careful analysis is needed to account for a variety of natural factors that could confound our detection of ozone recovery.

The 2015 volcano

One such factor is the presence of ozone-destroying volcanic dust in the stratosphere. Chile’s Calbuco volcano seems to have played a role in enhancing the size of the ozone hole in 2015.

At its maximum size, the 2015 hole was the fourth-largest ever observed. It was in the top 15% in terms of the total amount of ozone destroyed. Only 2006, 1998, 2001 and 1999 had more ozone destruction, whereas other recent years (2013, 2014 and 2016) ranked near the middle of the observed range.

Average ozone concentrations over the southern hemisphere during October 1-15, 2015, when the Antarctic ozone hole for that year was near its maximum extent. The red line shows the boundary of the ozone hole. Paul Krummel/CSIRO/EOS, Author provided

Another notable feature of the 2015 ozone hole was that it was at its biggest observed extent for much of the period from mid-October to mid-December. This coincided with a period during which the jet of westerly winds in the Antarctic stratosphere was particularly unaffected by the warmer, more ozone-rich air at lower latitudes. In a typical year, the influx of air from lower latitudes helps to limit the size of the ozone hole in spring and early summer.

The 2017 hole

As noted above, the ozone holes of 2013, 2014 and 2016 were relatively unremarkable compared with that of 2015, being close to the long-term average for overall ozone loss.

In general respects, these ozone holes were similar to those seen in the late 1980s and early 1990s, before the peak of ozone depletion. This is consistent with a gradual recovery of the ozone layer as levels of ozone-depleting substances gradually decline.

This year’s hole began to form in early August, with timing similar to the long-term average. Stratospheric temperatures during the Antarctic winter were slightly cooler than in 2016, which would favour the chemical changes that lead to ozone destruction in spring. However, temperatures climbed above average in mid-August during a disturbance to the polar winds, delaying the hole’s expansion. As of the second week of September, temperatures have remained warmer than average, but the ozone hole has nonetheless grown slightly larger than the long-term (1979 onwards) average.

Read more: Saving the ozone layer: why the Montreal Protocol worked.

While annual monitoring continues, which includes measurements under the Australian Antarctic Program, a more comprehensive assessment of the ozone layer’s prospects is set to arrive late next year. Scientists across the globe, coordinated by the UN Environment Program and the World Meteorological Organisation, are busy preparing the next report required under the Montreal Protocol, called the Scientific Assessment of Ozone Depletion: 2018.

This peer-reviewed report will examine the recent state of the ozone layer and the atmospheric concentration of ozone-depleting chemicals, how the ozone layer is projected to change, and links between ozone change and climate.

In the meantime we’ll watch the 2017 hole as it peaks and then shrinks over the remainder of the year, as well as the ozone holes of future years, which should become progressively smaller as the ozone layer heals.

The Conversation

Andrew Klekociuk is employed by the Australian Antarctic Division and is funded by the Department of the Environment and Energy of the Australian government.

Paul Krummel is employed by CSIRO and receives funding from MIT, NASA, Australian Bureau of Meteorology, Department of the Environment and Energy, and Refrigerant Reclaim Australia.

Categories: Around The Web

Predicting disaster: better hurricane forecasts buy vital time for residents

Thu, 2017-09-14 05:36

Hurricane Irma (now downgraded to a tropical storm) caused widespread devastation as it passed along the northern edge of the Caribbean island chain and then moved northwards through Florida. The storm’s long near-coastal track exposed a large number of people to its force.

At its peak, Hurricane Irma was one of the most intense hurricanes ever observed in the North Atlantic. It stayed close to that peak for an unusually long period, maintaining winds of almost 300km per hour for 37 hours.

Both of these factors were predicted a few days in advance by the forecasters of the US National Hurricane Center. These forecasts relied heavily on modern technology - a combination of computer models with satellite, aircraft and radar data.

Read more: Irma and Harvey: very different storms, but both affected by climate change

Forecasting is getting better

Although Irma was a very large and intense storm, and many communities were exposed to its force, our capacity to manage and respond to these extreme weather events has saved many lives.

There are many reasons for this, including significant construction improvements. But another important factor is much more accurate forecasts, with a longer lead time. When Tropical Cyclone Tracy devastated Darwin in 1974, the Bureau of Meteorology could only provide 12-hour forecasts of the storm’s track, giving residents little time to prepare.

These days, weather services provide three to five days’ advance warning of landfall, greatly improving our ability to prepare. What’s more, today’s longer-range forecasts are more accurate than the short-range forecasts of a few decades ago.

We have also become better at communicating the threat and the necessary actions, ensuring that an appropriate response is made.

The improvement in forecasting tropical cyclones (known as hurricanes in the North Atlantic region, and typhoons in the northwest Pacific) hasn’t just happened by good fortune. It represents the outcome of sustained investment over many years by many nations in weather satellites, faster computers, and the science needed to get the best out of these tools.

Tropical cyclone movement and intensity are affected by the surrounding weather systems, as well as by the ocean surface temperature. For instance, when winds vary significantly with height (called wind shear), the top of the storm attempts to move in a different direction from the bottom, and the storm can begin to tilt. This tilt makes the storm less symmetrical and usually weakens it. Irma experienced such conditions as it moved northwards from Cuba onto Florida. But earlier, as it passed through the Caribbean, a low-shear environment and a warm sea surface contributed to the high, sustained intensity.
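
Forecasters typically quantify this as "deep-layer shear": the magnitude of the vector difference between winds at an upper and a lower level (commonly around 200 hPa and 850 hPa). Here is a minimal illustrative sketch; the wind values are made up, and the threshold is a rough rule of thumb rather than an operational standard.

    import math

    def deep_layer_shear(u_upper, v_upper, u_lower, v_lower):
        """Magnitude of the vector wind difference between two levels (m/s)."""
        return math.hypot(u_upper - u_lower, v_upper - v_lower)

    # Hypothetical winds (m/s): u = eastward component, v = northward component
    shear = deep_layer_shear(u_upper=15.0, v_upper=5.0, u_lower=-3.0, v_lower=2.0)
    print(f"Deep-layer shear: {shear:.1f} m/s")   # ~18.2 m/s

    # Rough rule of thumb (an assumption; it varies by basin and study):
    # shear much above ~10 m/s tends to tilt and weaken a tropical cyclone.
    print("high shear: likely weakening" if shear > 10
          else "low shear: favourable for intensification")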

In Irma’s case, forecasters used satellite, radar and aircraft reconnaissance data to monitor its position, intensity and size. The future track and intensity forecast relies heavily on computer model predictions from weather services around the world. But the forecasters don’t just use this computer data blindly – it is checked against, and synthesised with, the other data sources.

In Australia, government and industry investment in supercomputing and research is enabling the development of new tropical cyclone forecast systems that are more accurate. They provide earlier warning of tropical cyclone track and intensity, and even advance warning of their formation.

Still hard to predict destruction

Better forecasting helps us prepare for the different hazards presented by tropical cyclones.

The deadliest aspects of tropical cyclones are storm surges (when the sea rises and flows inland under the force of the wind and waves) and flooding from extreme rainfall, both of which pose a risk of drowning. Worldwide, all of the deadliest tropical cyclones on record featured several metres’ depth of storm surge, widespread freshwater flooding, or both.

Wind can severely damage buildings, but experience shows that even if the roof is torn off, well-constructed buildings still provide enough shelter for their occupants to have an excellent chance of surviving without major injury.

By and large, it is the water that kills. A good rule of thumb is to shelter from the wind, but flee from the water.

Windy.com combines weather data from the Global Forecast System, North American Mesoscale and the European Centre for Medium-Range Weather Forecasts to create a live global weather map.

This means that predicting the damage and loss caused by a tropical cyclone is hard, because it depends on both the severity of the storm and the vulnerability of the area it hits.

Hurricane Katrina in 2005 provides a good illustration. Katrina was a Category 3 storm when it made landfall over New Orleans, about as intense at landfall as the Australian tropical cyclones Vance, Larry and Yasi. Yet Katrina caused at least 1,200 deaths and more than US$100 billion in damage, making it the third-deadliest and by far the most expensive storm in US history. One reason was Katrina’s relatively large area, which produced a very large storm surge. But the other factor was the extraordinary vulnerability of New Orleans: much of the city lies below normal sea level, protected by levees that were breached or destroyed by the storm surge, leading to extensive deep flooding.

We have already seen with Hurricane Irma that higher sea levels have exacerbated the storm surge. Whatever happens over the remainder of Irma’s path, it will be remembered as a spectacularly intense storm, and for its very significant impacts in the Caribbean and Florida. One can only imagine how much worse those impacts would have been had the populations not been forewarned.

But increased population and infrastructure in coastal areas, and the effects of climate change, mean that we in the weather forecasting business must continue to improve. Forewarned is forearmed.

The Conversation

Andrew Dowdy is working on a project funded through the National Environmental Science Programme (NESP) Earth Systems and Climate Change Hub.

Jeffrey David Kepert does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web
