The Conversation


Solar households to lose subsidies, but it's a bright future for the industry

Tue, 2016-08-16 14:56
Windbacks to solar subsidies may encourage larger systems. Solar panel image from www.shutterstock.com

Solar households in Victoria, South Australia and New South Wales will this year cease to be paid for power they export to the electricity grid. In South Australia, some households will lose 16 cents per kilowatt-hour (c/kWh) from September 30. Some Victorian households will lose 25 c/kWh, and all NSW households will stop receiving payments from December 31.

These “feed-in tariffs” were introduced to kick-start the Australian solar photovoltaic (PV) industry. They offered high payments for electricity fed back into the grid from roof-mounted PV systems, at rates that varied from state to state and over time.

For many householders, these special tariffs are ending. Their feed-in tariffs will fall precipitously to 4-8 c/kWh, which is the typical rate available to new PV systems. In some cases households may lose over A$1,000 in income over a year.

But while the windback may hurt some households, it may ultimately be a good sign for the industry.

What can households do?

At present, householders with high feed-in tariffs have an incentive to export as much electricity to the grid as possible. They will soon have an incentive to use this electricity themselves, displacing expensive grid electricity and minimising their loss of income.

Reverse-cycle air conditioning (for space heating and cooling) uses a lot of power, and can be programmed to operate during daylight hours, when solar panels are most likely to be generating electricity. The same applies to heating water, either directly or through a heat pump. For heating water, solar PV is now competitive with gas, solar thermal and grid electricity.

Batteries, both stationary (for house services) and mobile (for electric cars), will also help control electricity use in the future.

A boost for the industry?

The ending of generous feed-in tariffs is likely to modestly encourage the solar PV industry. This is because many existing systems have a rating of only 1.5 kilowatts (kW), which could not have been increased without loss of the generous feed-in tariff.

Many householders will now choose to increase the size of their PV system to 5-10kW – in effect installing a new system, given how much larger typical PV systems are now than when the early tariffs were set.

A new large-scale PV market is also opening on commercial rooftops. Many businesses have daytime electrical needs that are better matched to solar availability than are domestic dwellings.

This allows businesses to consume most of the power their panels produce and hence avoid high commercial electricity tariffs. The constraints in this market are often not technical or economic: many businesses rent their premises from landlords, and tend to have short investment horizons. Business models are being developed to circumvent these constraints.

Rooftop PV also now has large potential to compete with retail electricity prices. The total cost of a domestic 10kW PV system is about A$15,000. Over a 25-year lifetime this would yield an energy cost of 7 c/kWh.

This is about one-quarter of the typical Australian retail electricity tariff, about half of the off-peak electricity tariff, and similar to the typical retail gas tariff. Rooftop PV delivers energy services to the home more cheaply than anything else and has the capacity to drive natural gas out of domestic and commercial markets.
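The 7 c/kWh figure can be reproduced with a simple levelised-cost calculation. The sketch below hedges on two illustrative assumptions that are not in the article: an annual yield of 1,500 kWh per kilowatt of panels and a 5% real discount rate.

```python
# Sketch of the levelised-cost arithmetic behind the ~7 c/kWh figure.
# The annual yield (1,500 kWh per kW) and the 5% real discount rate are
# illustrative assumptions, not figures from the article.
def levelised_cost_c_per_kwh(capex_dollars, size_kw, years=25,
                             yield_kwh_per_kw_yr=1_500, discount_rate=0.05):
    """Levelised energy cost (cents/kWh) for an up-front capital cost."""
    # Annuity factor: present value of $1 per year for `years` at `discount_rate`.
    annuity = (1 - (1 + discount_rate) ** -years) / discount_rate
    annual_cost = capex_dollars / annuity            # equivalent annual cost, $/yr
    annual_energy = size_kw * yield_kwh_per_kw_yr    # kWh generated per year
    return 100 * annual_cost / annual_energy         # cents per kWh

print(round(levelised_cost_c_per_kwh(15_000, 10), 1))  # → 7.1
```

The same formula, fed a capital cost of about A$2,100 per kilowatt and the somewhat higher yields typical of utility-scale sites, lands near the 8 c/kWh quoted later for large ground-mounted systems.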

According to the Australian Bureau of Statistics, there are 9 million dwellings in Australia, and the floor area of new residential dwellings averaged 200 square metres over the past 20 years. Some of these dwellings are in multi-storey blocks, others have shaded roofs and, of course, south-facing roofs are less suitable than other orientations for PV.

However, if half the dwellings had one-third of their roofs covered in 20% efficient PV panels then 60 gigawatts (GW) could be accommodated. For perspective, this would cover 40% of Australian electricity demand. Commercial rooftops are a large additional market.
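The 60GW figure follows directly from the numbers above. A minimal sketch, treating the 200-square-metre average floor area as roof area and using the standard panel rating of 1kW of sunlight per square metre (both assumptions of the sketch, not statements in the article):

```python
# Back-of-envelope check of the 60GW rooftop figure, using the article's
# numbers: 9 million dwellings, half of them usable, one-third of each roof
# covered, 20%-efficient panels. Treating floor area as roof area and the
# standard test irradiance of 1 kW/m^2 are assumptions here.
dwellings = 9e6
roof_area_m2 = 200.0
usable_fraction = 0.5 * (1 / 3)   # half the dwellings, one-third of each roof
panel_watts_per_m2 = 1000 * 0.20  # 20% efficiency at 1 kW/m^2 irradiance

capacity_gw = dwellings * roof_area_m2 * usable_fraction * panel_watts_per_m2 / 1e9
print(round(capacity_gw))  # → 60
```

At a typical yield of around 1,500kWh per kilowatt per year, 60GW would generate roughly 90 terawatt-hours annually, consistent with the claim of about 40% of Australian electricity demand.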

Solar getting big

Virtually all PV systems in Australia are roof-mounted. However, this is about to change, because ground-mounted PV systems are becoming competitive with wind energy. The Queensland Solar 120 scheme, the Australian Capital Territory wind and PV reverse auctions and the Australian Renewable Energy Agency Large Scale Solar program all point to the declining cost of PV and wind.

Together, wind and PV constitute virtually all new generation capacity in Australia and half of the new generation capacity installed worldwide each year.

The total cost of a 10-50 megawatt PV system (1,000 times bigger than a 10kW system) is about A$2,100 per kilowatt (AC). A 25-year lifetime yields an energy cost of 8 c/kWh. This is only a little above the cost of wind energy and is fully competitive with new coal or gas generators.

Hundreds of 10-50MW PV systems can be distributed throughout sunny inland Australia close to towns and high-capacity powerlines. Australia’s 2020 renewable energy target is likely to be met with a large PV component, in addition to wind.

Wide distribution of PV and wind from north Queensland to Tasmania minimises the effect of local weather and takes full advantage of the complementary nature of the two leading renewable energy technologies.

The declining cost of PV and wind, coupled with the ready availability of pumped hydro storage, allows a high renewable electricity fraction (70-100%) to be achieved at modest cost by 2030.


Andrew Blakers receives research funding from the Australian Renewable Energy Agency, the Australian Indonesian Centre, the Australian Research Council, Excellerate Australia and private companies.

Categories: Around The Web

Our planet is heating - the empirical evidence

Tue, 2016-08-16 14:31

In an entertaining and somewhat chaotic episode of ABC’s Q&A (Monday 15 August), pitting science superstar Brian Cox against climate contrarian, global conspiracy theorist and now senator Malcolm Roberts, the questions of cause and effect and empirical data were raised repeatedly in regard to climate change.

Watching, I pondered a question: what would it take for me to change my mind? After all, I should dearly love to be convinced that the climate was not changing, or that if it were, it was not due to our unrelenting emissions of CO2 and other greenhouse gases. That would make things so much easier, all round.

So what would make me change my mind?

There are two elements to this question. The first is the observational basis, and the question of empirical data. The second relates to cause and effect, and the question of the greenhouse effect.

On the second, I will only add that the history of our planet is not easily reconciled without recourse to a strong greenhouse effect. If you have any doubt then you simply need to read my former colleague Ian Plimer.
As I have pointed out before, in his 2001 award-winning book “A Short History of Planet Earth”, Ian has numerous references to the greenhouse effect especially in relation to what all young geologists learn as the faint young sun paradox:

“The early sun had a luminosity of some 30 per cent less than now and, over time, luminosity has increased in a steady state.”

“The low luminosity of the early sun was such that the Earth’s average surface temperature would have been below 0C from 4500 to 2000 million years ago. But there is evidence of running water and oceans as far back as 3800 million years ago.”

The question is, what kept the early Earth from freezing over?

Plimer goes on to explain: “This paradox is solved if the Earth had an enhanced greenhouse with an atmosphere of a lot of carbon dioxide and methane.”

With Ian often touted as one of the grand priests of climate contrarians, I doubt that Malcolm would consider him part of the cabal of global climate change conspiracists, though that would be ironic.

As a geologist, I need to be able to reconcile the geological record of a watery planet from time immemorial with the faint young sun hypothesis. And, as Ian points out, with nothing else on the menu, the greenhouse effect is all we have.

If the menu changes, then I will reconsider.

How about the empirical data?

Along with Brian Cox, I find it implausible that an organisation like NASA, with a record of putting a man on the moon, could or would fabricate data to the extent Malcolm Roberts insinuates. It is such palpable nonsense that it is something you might expect from an anti-vaxxer.

However, a clear message from the Q&A episode is there is no way to convince Malcolm Roberts that the meteorological temperature data has not been manipulated to achieve a predetermined outcome. So he simply is not going to accept those data as being empirical.

However, the relevant data do not just include the records taken by meteorological authorities. They also include the record preserved beneath our feet, in the temperature logs from many thousands of boreholes across all inhabited continents. The importance of those logs is that they are reproducible. In fact, Malcolm can go out and re-measure them himself, if he needs convincing they are “empirical”.

The idea that the subsurface is an effective palaeo-thermometer is a simple one that we use (or used to, at least, prior to refrigeration) in everyday life: it provides the logic for the cellar.

When we perturb the temperature at the surface of the earth, for example as the air temperature rises during the day, it sends a heat pulse downwards into the earth. The distance the pulse travels is related to its duration. As day turns to night and the surface cools, a cooling pulse follows, lagging behind but eventually cancelling the daily heating. These diurnal surface temperature perturbations produce a wave-like train of heating and cooling that can be felt, with diminishing amplitude, down to a skin depth of less than a metre beneath the surface, beyond which all information is cancelled out and the extremes of both day and night are lost.

Surface temperatures also change on a seasonal basis from summer to winter and back again, and those temperatures propagate even further to depths of around 10 metres before completely cancelling [1].

On even longer cycles, temperature anomalies propagate much further, and may reach down a kilometre or more. For example, we know that over the last million years the Earth's temperature has cycled in and out of numerous ice ages, on a cycle of about 100,000 years. Cycles on that timescale can propagate more than a kilometre into the earth, as we see in deep boreholes such as the Blanche borehole near the giant Olympic Dam mine in South Australia. From our analysis of the Blanche temperature logs we infer a surface temperature amplitude of around 8°C over the glacial cycle.
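These depth scales follow from the standard heat-diffusion relationship: the penetration depth grows with the square root of the cycle's period. A rough sketch, assuming a thermal diffusivity of 1e-6 m²/s (a typical value for rock, not a figure from the article); signals remain detectable down to a few multiples of this e-folding depth, which is why the annual cycle persists to roughly 10 metres:

```python
import math

# Penetration depth of a periodic surface temperature signal into the
# ground: the amplitude decays by a factor of e over the "skin depth"
# sqrt(kappa * period / pi). The diffusivity kappa = 1e-6 m^2/s is a
# typical value for rock, assumed for illustration.
def skin_depth_m(period_s, kappa=1e-6):
    """e-folding depth (metres) for a cycle of the given period (seconds)."""
    return math.sqrt(kappa * period_s / math.pi)

DAY = 86_400.0
YEAR = 365.25 * DAY
for label, period in [("daily", DAY), ("annual", YEAR), ("glacial (100 kyr)", 1e5 * YEAR)]:
    print(f"{label}: ~{skin_depth_m(period):,.1f} m")
```

This reproduces the scales in the text: a fraction of a metre for the daily cycle, metres for the seasonal cycle, and around a kilometre for the 100,000-year glacial cycle.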

So what do we see in the depth range of 20-100 metres that is sensitive to the last 100 years, and most relevant to the question of changing climate?

The image below shows the temperature log from a borehole that we purpose-drilled in Gippsland as part of the AuScope AGOS program.

Temperature log in the upper 70 metres of the Tynong AGOS borehole drilled and cored to a depth of 500 metres. The temperature logs shown here were obtained by Kate Gordon, as a student at the University of Melbourne.

The temperature profile shows various stages. Above the water table, at about 15 metres depth, infiltration of groundwater through the vadose zone means the temperatures in the borehole rapidly equilibrate to seasonal surface temperature changes. In winter, when this temperature log was obtained, the temperatures in this shallow zone trend towards the ambient temperature of around 12°C; in summer, they rise to over 20°C. Beneath the vadose zone, the temperature in the borehole responds to the conduction of heat, influenced by two dominant factors: the changing surface temperature on timescales of decades to many hundreds of years, and the heat flow from the deeper, hot interior of the earth. During a rapid surface warming cycle lasting more than several decades, the normal temperature gradient, in which temperatures increase with depth, can be reversed, producing a characteristic rollover (with a minimum here seen at about 30 metres depth).
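The rollover can be illustrated with a toy conduction model: a steady geothermal gradient plus a surface warming step that has diffused downwards. All the numbers below (a 2°C step over 50 years, a 25°C-per-kilometre gradient, rock diffusivity of 1e-6 m²/s) are illustrative assumptions, not the Tynong inversion's parameters.

```python
import math

# Toy conduction model of the "rollover": a linear geothermal gradient plus
# a surface warming step that has diffused downwards (conductive half-space
# solution, using the complementary error function). All parameter values
# are illustrative assumptions.
KAPPA = 1e-6                        # thermal diffusivity, m^2/s
GRADIENT = 0.025                    # geothermal gradient, degC per metre
STEP_C = 2.0                        # recent surface warming step, degC
T_SECONDS = 50 * 365.25 * 86_400    # time since the step began (~50 years)
L = 2 * math.sqrt(KAPPA * T_SECONDS)  # diffusion length scale, metres

def temp_at_depth(z_m, surface_c=12.0):
    """Temperature (degC) at depth z: background gradient + diffused step."""
    return surface_c + GRADIENT * z_m + STEP_C * math.erfc(z_m / L)

depths = range(0, 101)
profile = [temp_at_depth(z) for z in depths]
z_min = min(depths, key=lambda z: profile[z])
# The profile cools with depth at first, reaches a minimum a few tens of
# metres down, then resumes the normal warming-with-depth trend.
print(f"temperature minimum at about {z_min} m")
```

With these assumed values the minimum sits a few tens of metres down, qualitatively matching the rollover seen in the Tynong log.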

Inversion of the Tynong temperature log for surface temperature change over the last 700 years, with uncertainties at the 95% confidence interval. The inversion, which is based on Fourier’s law of heat conduction, shows that we can be confident that the Tynong AGOS borehole temperature record is responding to a long-term heating cycle of 0.3-1.3°C over the last century at the 95% confidence level. The inversion shown here was performed by Kate Gordon.

In geophysics we use the techniques of inversion to identify causative signals, and their uncertainties, in records such as the Tynong borehole log, as well as in estimating the value of buried ore bodies and hydrocarbon resources. As shown in the second image, the inversion of the Tynong temperature log for surface temperature change over the last 700 years is compelling. Not surprisingly, the uncertainties become larger as we go back in time. Nevertheless, we can be confident at the 95% level that the borehole record is responding to long-term heating of 0.3-1.3°C over the last century.

If there were just one borehole that showed this record, it would not mean much. However the characteristic shallow rollover is present in all the boreholes we have explored, and has been reported in many thousands of boreholes from all around the world.

The only way we know to sensibly interpret such empirical evidence is that the ground beneath our feet, down to a depth of around 50 metres or so, is now heating from above. The physics that explains these observations dates back to Joseph Fourier, over 200 years ago, so it's not exactly new or even contentious. In effect, the solid earth below is now absorbing heat from the atmosphere above, counter to the normal process of losing heat to it. However, if Malcolm can bring to the table an alternative physics that explains these observations, while not falling foul of all the other empirical observations that Fourier's law of heat conduction admits, then I am happy to consider it and put it to the test. (I suspect Brian Cox would be too, since all good physicists would relish the discovery of a new law of such importance as Fourier's law.)

Perhaps the hyper-skeptical Malcolm thinks that somehow the global cabal of climate scientists has got into all these thousands of boreholes with an electrical heater to propagate the heat signal that artificially simulates surface heating. More fool me.

But I’ll only do it on the condition that Malcolm agrees that, when we do reproduce the signal, he will publicly acknowledge the empirical evidence of a warming world, entirely consistent with NASA’s surface temperature record.

And I’ll bet him that we can reproduce the signal from Tynong shown above.

But I’ll only do it on the condition that Malcolm agrees, that when we do (reproduce the signal), he will publicly acknowledge the empirical evidence of a warming world entirely consist with NASA’s surface temperature record.

Malcolm, are you on? Will you take my bet, and use the Earth’s crust as the arbiter? And perhaps Brian will stream it live to the BBC?

The Conversation Disclosure

Mike Sandiford receives funding from the Australian Research Council to investigate the thermal structure of the Australian crust.


Fishing, not oil, is at the heart of the South China Sea dispute

Tue, 2016-08-16 06:08

Contrary to the view that the South China Sea disputes are driven by a regional hunger for seabed energy resources, the real and immediate prizes at stake are the region’s fisheries and the marine environments that support them.

It is also through the fisheries dimension of the conflict that the repercussions of the recent ruling of the arbitration tribunal in the Philippines-China case are likely to be most acutely felt.

It seems that oil is sexier than fish, or at least the lure of seabed energy resources has a more powerful motivating effect on policymakers, commentators and the media alike. However, the resources really at stake are the fisheries of the South China Sea and the marine environment that sustains them.

The real resource at stake

For a relatively small (around 3 million square kilometres) patch of the oceans, the South China Sea delivers an astonishing abundance of fish. The area is home to at least 3,365 known species of marine fishes, and in 2012, an estimated 12% of the world’s total fishing catch, worth US$21.8 billion, came from this region.

These living resources are worth more than money; they are fundamental to the food security of coastal populations numbering in the hundreds of millions.

Indeed, a recent study showed that the countries fringing the South China Sea are among the most reliant in the world on fish as a source of nutrients. This makes their populations especially susceptible to malnutrition as fish catches decline.

These fisheries also employ at least 3.7 million people (almost certainly an underestimate given the level of unreported and illegal fishing in the region).

This is arguably one of the most important services the South China Sea fisheries provide to the global community: employment for nearly 4 million people who would otherwise have few options.

But these vital resources are under enormous pressure.

A disaster in the making

The South China Sea’s fisheries are seriously over-exploited.

Last year, two of us contributed to a report finding that 55% of global marine fishing vessels operate in the South China Sea. We also found that fish stocks have declined 70% to 95% since the 1950s.

Over the past 30 years, the number of fish caught each hour has declined by a third, meaning fishers are putting in more effort for less fish.

This has been accelerated by destructive fishing practices such as the use of dynamite and cyanide on reefs, coupled with artificial island-building. The coral reefs of the South China Sea have been declining at a rate of 16% per decade.

Even so, the total amount of fish caught has increased. But the proportion of large species has declined while the proportion of smaller species and juvenile fish has increased. This has disastrous implications for the future of fishing in the South China Sea.

We found that, by 2045, under business as usual, each of the species groups studied would suffer stock decreases of a further 9% to 59%.

The ‘maritime militia’

Access to these fisheries is an enduring concern for nations surrounding the South China Sea, and fishing incidents play an enduring role in the dispute.

Chinese and Taiwanese fishing fleets dominate the South China Sea by numbers. This is due to insatiable domestic demand for fish, coupled with heavy state subsidies that enable Chinese fishers to build larger vessels with longer range.

Competition between rival fishing fleets for a dwindling resource in a region of overlapping maritime claims inevitably leads to fisheries conflicts. Fishing boats have been apprehended for alleged illegal fishing leading to incidents between rival patrol boats on the water, such as the one in March 2016 between Chinese and Indonesian vessels.

Fishing boats are not just used to catch fish. Fishing vessels have long been used as proxies to assert maritime claims.

China’s fishing fleets have been characterised as a “maritime militia” in this context. Numerous incidents have involved Chinese fishing vessels operating (just) within China’s so-called nine-dashed line claim but in close proximity to other coastal states in areas they consider to be part of their exclusive economic zones (EEZs).

The disputed South China Sea area. Author/American Journal of International Law

The Chinese Coast Guard has increasingly played an important role, providing logistical support such as refuelling, as well as intervening to protect Chinese fishing vessels from arrest by the maritime enforcement agencies of other South China Sea coastal states.

Fisheries as flashpoint

The July 2016 ruling in the dispute between the Philippines and China demolishes any legal basis for China’s claim to extended maritime zones in the southern South China Sea, and any right to the resources there.

The consequence of this is that the Philippines and, by extension, Malaysia, Brunei and Indonesia are free to claim rights over the sea to 200 nautical miles from their coasts as part of their EEZs.

This also creates a pocket of high seas outside any national claim in the central part of the South China Sea.

There are signs that this has emboldened coastal states to take a stronger stance against what they will undoubtedly regard as illegal fishing on China’s part in “their” waters.

Indonesia already has a strong track record of doing so, blowing up and sinking 23 apprehended illegal fishing vessels in April and live-streaming the explosions to maximise publicity. It appears that Malaysia is following suit, threatening to sink illegal fishing vessels and turn them into artificial reefs.

The snag is that China has vociferously rejected the ruling. There is every indication that the Chinese will continue to operate within the nine-dashed line and Chinese maritime forces will seek to protect China’s claims there.

This gloomy view is underscored by the fact that China has recently opened a fishing port on the island of Hainan with space for 800 fishing vessels, a figure projected to rise to 2,000. The new port is predicted to play an important role in “safeguarding China’s fishing rights in the South China Sea”, according to a local official.

On August 2, the Chinese Supreme People’s Court signalled that China had the right to prosecute foreigners “illegally entering Chinese waters” – including areas claimed by China but which, in line with the tribunal’s ruling, are part of the surrounding states' EEZs – and jail them for up to a year.

Ominously, the following day Chinese Defence Minister Chang Wanquan warned that China should prepare for a “people’s war at sea” in order to “safeguard sovereignty”. This sets the scene for increased fisheries conflicts.

Ways forward

The South China Sea is crying out for the creation of a multilateral management regime, such as a marine protected area, or the revival of the decades-old idea of turning parts of the South China Sea (perhaps the central high-seas pocket) into an international marine peace park.

Such options would serve to protect the vulnerable coral reef ecosystems of the region and help to conserve its valuable marine living resources.

A co-operative solution that bypasses the current disputes over the South China Sea may seem far-fetched. Without such action, however, its fisheries face collapse, with dire consequences for the region. Ultimately, the fishers and fishes are going to be the losers if the dispute continues.


Clive Schofield served as an independent expert witness (provided by the Philippines) to the Arbitration Tribunal in the case between the Republic of the Philippines and the People’s Republic of China.

Rashid Sumaila receives funding from research councils in Canada, Belmont, Genome Canada/BC, ADM Capital Foundation, Hong Kong, Pew Charitable Trusts.

William Cheung received funding from ADM Capital Foundation to co-produce the report Boom or Bust - Future Fish in the South China Sea.


We have almost certainly blown the 1.5-degree global warming target

Mon, 2016-08-15 14:13

The United Nations climate change conference held last year in Paris had the aim of tackling future climate change. After the deadlocks and weak measures that arose at previous meetings, such as Copenhagen in 2009, the Paris summit was different. The resulting Paris Agreement committed to:

Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognising that this would significantly reduce the risks and impacts of climate change.

The agreement was widely met with cautious optimism. Certainly, some of the media were pleased with the outcome while acknowledging the deal’s limitations.

Many climate scientists were pleased to see a more ambitious target being pursued, but what many people fail to realise is that actually staying within a 1.5℃ global warming limit is nigh on impossible.

There seems to be a strong disconnect between what the public and climate scientists think is achievable. The problem is not helped by the media’s apparent reluctance to treat it as a true crisis.

The 1.5℃ limit is nearly impossible

In 2015, we saw global average temperatures a little over 1℃ above pre-industrial levels, and 2016 will very likely be even hotter. In February and March of this year, temperatures were 1.38℃ above pre-industrial averages.

Admittedly, these are individual months and years with a strong El Niño influence (which makes global temperatures more likely to be warmer), but the point is we’re already well on track to reach 1.5℃ pretty soon.

So when will we actually reach 1.5℃ of global warming?

On our current emissions trajectory we will likely reach 1.5℃ within the next couple of decades (2024 is our best estimate). The less ambitious 2℃ target would be surpassed not much later.

This means we probably have only about a decade before we break through the ambitious 1.5℃ global warming target agreed to by the world’s nations in Paris.

A University of Melbourne research group recently published spiral graphs showing just how close we are getting to 1.5℃ of warming. Realistically, we have very little time left to limit warming to 2℃, let alone 1.5℃.

This is especially true when you bear in mind that even if we stopped all greenhouse gas emissions right now, we would likely experience about another half-degree of warming as the oceans “catch up” with the atmosphere.

Parallels with climate change scepticism

The public seriously underestimates the level of consensus among climate scientists that human activities have caused the majority of global warming in recent history. Similarly, there appears to be a lack of public awareness about just how urgent the problem is.

Many people think we have plenty of time to act on climate change and that we can avoid the worst impacts by slowly and steadily reducing greenhouse gas emissions over the next few decades.

This is simply not the case. Rapid and drastic cuts to emissions are needed as soon as possible.

In conjunction, we must also urgently find ways to remove greenhouse gases already in the atmosphere. At present, this is not yet viable on a large scale.

Is 1.5℃ even enough to avoid “dangerous” climate change?

The 1.5℃ and 2℃ targets are designed to avoid the worst impacts of climate change. It’s certainly true that the more we warm the planet, the worse the impacts are likely to be. However, we are already experiencing dangerous consequences of climate change, with clear impacts on society and the environment.

For example, a recent study found that many of the excess deaths reported during the summer 2003 heatwave in Europe could be attributed to human-induced climate change.

Also, research has shown that the warm seas associated with the bleaching of the Great Barrier Reef in March 2016 would have been almost impossible without climate change.

Climate change is already increasing the frequency of extreme weather events, from heatwaves in Australia to heavy rainfall in Britain.

These events are just a taste of the effects of climate change. Worse is almost certainly set to come as we continue to warm the planet.

It’s highly unlikely we will achieve the targets set out in the Paris Agreement, but that doesn’t mean governments should give up. It is vital that we do as much as we can to limit global warming.

The more we do now, the less severe the impacts will be, regardless of targets. The simple take-home message is that immediate, drastic climate action will mean far fewer deaths and less environmental damage in the future.

This article is adapted from a blog post that originally appeared here.


Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.

Benjamin J. Henley receives funding from an ARC Linkage Project and is associated with the ARC Centre of Excellence for Climate System Science.


Survey: two-thirds of Great Barrier Reef tourists want to 'see it before it's gone'

Mon, 2016-08-15 06:16
Check it out while you can. Tourism Queensland/Wikimedia Commons, CC BY-SA

The health of the Great Barrier Reef (GBR) is declining – a fact that has not been lost on the world’s media.

The issue has made international headlines and attracted comment from public figures such as US President Barack Obama and British businessman Richard Branson.

Some media outlets and tourism operators have sought to downplay the effects, presumably to try to mitigate the impact on tourism. The industry provides roughly 65,000 jobs and contributes more than A$5 billion a year to the Australian economy.

But our research suggests that the ailing health of the GBR has in fact given tourists a new reason to visit, albeit one that doesn’t exactly promise a long-term future.

When we surveyed hundreds of GBR tourists last year, 69% of them said they had opted to visit the reef “before it is gone” – and that was before the latest bleaching generated fresh international headlines about its plight.

‘Last chance’ tourism

“Last chance tourism” (LCT) is a phenomenon whereby tourists choose to visit a destination that is perceived to be in danger, with the express intention of seeing it before it’s gone.

The media obviously play a large role in this phenomenon – the more threatened the public perceives a destination to be, the bigger the market for LCT.

There’s a vicious cycle at play here: tourists travel to see a destination before it disappears, but in so doing they contribute to its demise, either directly through on-site pressures or, in the case of climate-threatened sites such as the GBR, through greenhouse gas emissions. These added pressures increase the vulnerability of the destination and in turn push up the demand for LCT still further.

The GBR often features on lists of tourist destinations to see before they disappear, alongside places such as Glacier National Park, the Maldives and the Galapagos Islands.

While the media have proclaimed the reef to be an LCT destination, it has not previously been empirically confirmed that tourists are indeed motivated to visit specifically because of its vulnerable status.

Surveying reef tourists

We wanted to find out how many of the GBR’s holidaymakers are “last chance” tourists. To that end, we surveyed 235 tourists visiting three major tourism hotspots, Port Douglas, Cairns and Airlie Beach, to identify their leading motivations for visiting.

We gave them a suggested list of 15 reasons, including “to see the reef before it is gone”; “to rest and relax”; “to discover new places and things”, and others. We then asked them to rate the importance of each reason on a five-point scale, from “not at all” to “extremely”.

We found that 69% of tourists were either “very” or “extremely” motivated to see the reef before it was gone. This reason attracted the highest proportion of “extremely” responses (37.9%) of any of the 15 reasons.

This reason was also ranked the fourth-highest by average score on the five-point scale. The top three motivations by average score were: “to discover new places and things”; “to rest and relax”; and “to get away from the demands of everyday life”.
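The two rankings above can diverge, and a minimal sketch shows why (the ratings below are hypothetical placeholders, not the study's data): a reason can attract the largest share of “very”/“extremely” responses while still trailing others on mean score, because polarised ratings drag the average down.

```python
# Sketch of the survey's two ranking methods on a 1-5 scale
# ("not at all" = 1 ... "extremely" = 5). Hypothetical ratings only.
from statistics import mean

ratings = {
    "see the reef before it is gone": [5, 5, 2, 4, 1],  # polarised
    "rest and relax":                 [4, 4, 4, 3, 4],  # consistent
    "discover new places and things": [5, 4, 4, 4, 5],
}

def top_two_share(scores):
    """Proportion of respondents answering 'very' (4) or 'extremely' (5)."""
    return sum(s >= 4 for s in scores) / len(scores)

for reason, scores in ratings.items():
    print(f"{reason}: top-two {top_two_share(scores):.0%}, mean {mean(scores):.2f}")
```

With these made-up numbers, “see the reef before it is gone” ties for the top-two share yet has the lowest mean, mirroring how the real survey could rank it first on “extremely” responses but only fourth by average score.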

Our results also confirmed that the media have played a large role in shaping tourists' perceptions of the GBR. The internet was the most used information source (68.9% of respondents), followed by word of mouth (57%) and television (54.4%).

Airlie Beach, a great spot for some last-chance tourism. Damien Dempsey/Wikimedia Commons, CC BY

Our findings suggest that the GBR’s tribulations could offer a short-term tourism boost, as visitors flock to see this threatened natural wonder. But, in the long term, the increased tourism might exacerbate the pressure on this already vulnerable region – potentially even hastening the collapse of this ecosystem and the tourism industry that relies on its health.

This paradox is deepened further when we consider that many of the tourists in our survey who said they were visiting the reef to “see it before it is gone” nevertheless had low levels of concern about their own impacts on the region.

Where to from here?

We undertook our survey in 2015, before this year’s bleaching event, described as the most severe in the GBR’s history.

This raises another question: is there a threshold beyond which the GBR is seen as “too far gone” to visit? If so, might future more frequent or severe bleaching episodes take us past that threshold?

As the most important source of information for tourists visiting the GBR, the media in particular need to acknowledge their own important role in informing the public. Media outlets need to portray the reef’s current status as accurately as possible. The media’s power and influence also afford them a great opportunity to help advocate for the GBR’s protection.

Educating tourists about the threats facing the GBR is an important way forward, particularly as our research identified major gaps in tourists' understanding of the specific threats facing the GBR and the impacts of their own behaviour. Many survey respondents, for instance, expressed low levels of concern about agricultural runoff, despite this being one of the biggest threats facing the GBR.

Of course, tourism is just one element in a complex web of issues that affect the GBR and needs to be part of a wider consideration of the reef’s future.

The only thing that is certain is that more needs to be done to ensure this critical ecosystem can survive, so that tourists who think this is the last chance to see it can hopefully be proved wrong.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

How the entire nation of Nauru almost moved to Queensland

Mon, 2016-08-15 06:16
Nauru's parliament would have been rebuilt in Queensland, but with less power. CdaMVvWgS/Wikimedia Commons

Nauru is best known to most Australians as the remote Pacific island where asylum seekers who arrive by boat are sent. What is less well known is that in the 1960s, the Australian government planned to relocate the entire population of Nauru to an island off the Queensland coast.

The irony of this is striking, especially in light of continuing revelations that highlight the non-suitability of Nauru as a host country for refugees. It also provides a cautionary tale for those considering wholesale population relocation as a “solution” for Pacific island communities threatened by the impacts of climate change.

Extensive phosphate mining on Nauru by Australia, Britain and New Zealand during the 20th century devastated much of the country. The landscape was so damaged that scientists considered it would be uninhabitable by the mid-1990s. With the exorbitant cost of rehabilitating the island, relocation was considered the only option.

In 1962, Australia’s prime minister Robert Menzies acknowledged that the three nations had a “clear obligation … to provide a satisfactory future for the Nauruans”, given the large commercial and agricultural benefits they had derived from Nauru’s phosphate. This meant “either finding an island for the Nauruans or receiving them into one of the three countries, or all of the three countries”.

That same year, Australia appointed a Director of Nauruan Resettlement to comb the South Pacific looking for “spare islands offering a fair prospect”. Possible relocation sites in and around Fiji, Papua New Guinea, the Solomon Islands, and Australia’s Northern Territory were explored, but were ultimately deemed inappropriate. There weren’t enough job opportunities and there were tensions with the locals.

Fraser Island in Queensland was also considered, but the Australian government decided it didn’t offer sufficiently strong economic prospects to support the population. The Nauruans thought this was a convenient excuse (and archival materials show that the timber industry was fiercely opposed).

The Curtis solution

In 1963, Curtis Island near Gladstone was offered as an alternative. Land there was privately held, but the Australian government planned to acquire it and grant the Nauruans the freehold title. Pastoral, agricultural, fishing and commercial activities were to be established, and all the costs of resettlement, including housing and infrastructure, were to be met by the partner governments at an estimated cost of 10 million pounds – around A$274 million in today’s terms.

But the Nauruans refused to go. They did not want to be assimilated into White Australia and lose their distinctive identity as a people. Many also saw resettlement as a quick-fix solution by the governments that had devastated their homeland, and a cheap option compared with full rehabilitation of the island.

Australia also refused to relinquish sovereignty over Curtis Island. While the Nauruans could become Australian citizens, and would have the right to “manage their own local administration” through a council “with wide powers of local government”, the island would officially remain part of Australia.

Frustrated that what it perceived as a genuine and generous attempt to meet the wishes of the Nauruan people had been rebuffed, the Menzies government insisted it wouldn’t change its mind.

So the Nauruans stayed put.

Nauru’s phosphate industry has left the landscape scarred and useless for agriculture. CdaMVvWgS/Wikimedia Commons

The issue briefly resurfaced in 2003 when Australia’s foreign minister Alexander Downer once again suggested wholesale relocation as a possible strategy, given that Nauru was “bankrupt and widely regarded as having no viable future”. Nauru’s president dismissed the proposal, reiterating that relocating the population to Australia would undermine the country’s identity and culture.

Planned relocations in the Pacific

Today, “planned relocation” is touted as a possible solution for low-lying Pacific island countries, such as Kiribati and Tuvalu, which are threatened by sea-level rise and other long-term climate impacts.

But past experiences in the Pacific, such as the relocation of the Banabans in 1945 from present-day Kiribati to Fiji, show the potentially deep, intergenerational psychological consequences of planned relocation. This is why most Pacific islanders see it as an option of last resort. Unless relocation plans result from a respectful, considered and consultative process, in which different options and views are seriously considered, they will always be highly fraught.

Nauru today is at the highest level of vulnerability on the Environmental Vulnerability Index. The past destruction wrought by phosphate mining has rendered the island incapable of supporting any local agriculture or industry, with 90% of the land covered by limestone pinnacles.

It has a very high unemployment rate, scarce labour opportunities, and virtually no private sector – which is why the millions of dollars on offer to operate Australia’s offshore processing centres were so attractive. These factors also illustrate why the permanent resettlement of refugees on Nauru is unrealistic and unsustainable.

Nauru’s future seems sadly rooted in an unhealthy relationship of co-dependency with Australia, as its territory is once again exploited, at the expense of the vulnerable. And as the story of Curtis Island shows, there are no simple solutions, whether well-intentioned or not.

This is an overview of a longer article published in Australian Geographer.


Jane McAdam receives funding from the Australian Research Council and the Research Council of Norway. She is engaged in several international policy processes aimed at developing strategies to address human mobility in the context of climate change and disasters.


People power is the secret to reliable, clean energy

Fri, 2016-08-12 17:06
Australia will likely have to close more coal power stations to meet climate targets. Coal power image from www.shutterstock.com

Australia’s energy watchdog, the Australian Energy Market Operator (AEMO), has issued a stark warning: more wind and solar power will demand new approaches to avoid interruptions to electricity supply.

In its annual Electricity Statement of Opportunities, released this week, AEMO indicated that the overall outlook for reliability has improved. So far, so good.

However, South Australia, Victoria, and New South Wales are potentially at greater risk of interruptions within ten years if the current trend of shutting down old coal-fired power stations accelerates, as we can expect from Australia’s efforts to meet national and international climate targets.

AEMO projections of supply. 2016 AEMO Electricity Statement of Opportunities

The threat of power blackouts is reliable headline fodder as seen in yesterday’s Australian Financial Review. But the solution to this very real challenge is not to cling to ageing fossil fuel power stations.

Rather, as AEMO Chief Operating Officer Mike Cleary put it:

possible solutions could include an increased interconnection across [the electricity market], battery storage, and demand side management services.

While there is much excitement about battery technology, it is the oft-forgotten human dimension that offers the greatest potential. We consumers, the so-called “demand side” of the market, can play a crucial role in reducing the strain on the electricity network, which will in turn make for more reliable power.

The biggest variability that the electricity sector has to contend with is not intermittent solar or wind generation output, but the ups and downs of power demand.

People power

Helping business and household consumers manage their demand for power (or “demand management”) is a win-win scenario – lower costs for electricity and a stronger electricity system. Demand management and energy efficiency are key elements in lifting Australia’s energy productivity. Lifting energy productivity means we do not need to slow down the transition to a low-carbon economy.

Research from the University of Technology Sydney’s Institute for Sustainable Futures (ISF) that supported GetUp!’s Homegrown Power Plan highlights that we can not only retire coal power to achieve our climate targets, but also shift entirely to 100% renewable electricity generation by 2030.

However, to do this affordably we need to get smarter about saving energy and supporting the grid. GetUp!’s plan factored in a target to double Australia’s energy productivity by 2030, as advocated by the Australian Alliance to Save Energy and ClimateWorks.

Despite the potential, neither AEMO nor any other institution is tasked with assessing demand management opportunities that will strengthen the network and promote renewables. Work is needed to understand this demand-side opportunity, just as AEMO’s latest report provided for electricity supply.

It may also be time to revisit AEMO’s 2013 modelling on 100% renewables that did not factor in major energy productivity gains.

Switching up

The importance of demand management has been recognised since the dawn of the National Electricity Market in 1992. But this potential has never been properly tapped.

Happily, there are signs that this is finally changing. For example, the Australian Energy Regulator has announced a process to design a Demand Management Incentive Scheme. This will provide an incentive for electricity networks to help consumers reduce demand and cut energy costs.

The more progressive energy systems worldwide have already incorporated energy productivity into their energy policies and strategies. Germany is implementing its National Action Plan for Energy Efficiency as one of the twin pillars of its Energiewende (energy transition). And a 23% reduction in buildings' energy consumption by 2030 is one of the three key targets to achieve New York’s “Reforming the Energy Vision”.

The International Energy Agency (IEA) has also recommended energy efficiency improvements as its first measure to achieve peak energy emissions by 2020, in tandem with a US$130 billion increase in renewables investment. This “bridge” scenario was the IEA’s contribution to the 2015 Paris climate summit.

Policy measures recommended under the IEA Bridge Scenario. OECD/IEA 2015 World Energy Outlook Special Report 2015: Energy and Climate Change, IEA Publishing

Global energy-related GHG emissions reduction by policy measure in the Bridge Scenario relative to the INDC Scenario. OECD/IEA 2015 World Energy Outlook Special Report 2015: Energy and Climate Change, IEA Publishing

Time for Australia to get serious

It is now time for Australia to embrace the link between demand management, energy productivity and renewable energy. We need these to work together so that we can achieve our carbon reduction goals while protecting electricity security and economic growth.

We have taken a good first step, releasing a comprehensive National Energy Productivity Plan at the end of 2015. It is not quite as ambitious as the Homegrown Power Plan (it seeks only a 40% improvement between 2015 and 2030), but it is a step in the right direction.

What’s missing, as RMIT energy researcher Alan Pears points out, are the resources to make it happen: no additional funding has been allocated to deliver the 34 recommended measures.

We can unlock Australia’s energy productivity potential. And we can have a clean, affordable and reliable electricity system. But this will not happen by accident.

Let’s encourage our utilities to engage energy consumers in providing the solutions. In other words, power to the people.


The Institute for Sustainable Futures at the University of Technology Sydney undertakes paid sustainability research for a wide range of government, NGO and corporate clients, including energy businesses.


The $8.2 billion water bill to clean up the Barrier Reef by 2025 – and where to start

Fri, 2016-08-12 06:04

In 2015, the Australian and Queensland governments agreed on targets to greatly reduce the sediment and nutrient pollutants flowing onto the Great Barrier Reef.

What we do on land has a real impact out on the reef: sediments can smother the corals, while high nutrient levels help to trigger more regular and larger outbreaks of crown-of-thorns starfish. This damage leaves the Great Barrier Reef even more vulnerable to climate change, storms, cyclones and other impacts.

Dealing with water quality alone isn’t enough to protect the reef, as many others have pointed out before. But it is an essential ingredient in making it more resilient.

The water quality targets call for sediment runoff to be reduced by up to 50% below 2009 levels by 2025, and for nitrogen levels to be cut by up to 80% over the same period. But so far, detailed information about the costs of achieving these targets has not been available.

Both the Australian and Queensland governments have committed more funding to improve water quality on the reef. In addition, the Queensland government established the Great Barrier Reef Water Science Taskforce, a panel of 21 experts from science, industry, conservation and government, led by Queensland Chief Scientist Geoff Garrett and funded by Queensland’s Department of Environment and Heritage Protection.

New work commissioned by the taskforce now gives us an idea of the likely cost of meeting those reef water quality targets.

This groundbreaking study, which drew on the expertise of water quality researchers, economists and “paddock to reef” modellers, has found that investing A$8.2 billion would get us to those targets by the 2025 deadline, albeit with a little more to be done in the Wet Tropics.

That A$8.2 billion cost is half the size of the estimates of between A$16 billion and A$17 billion discussed in a draft-for-comment report produced in May 2016, which were reported by the ABC and other media.

Those draft figures did not take into account the reductions in pollution already achieved between 2009 and 2013. They also included entire tranches of measures, some of which overshot the targets. A full review identified these issues, and the new modelling gives a more accurate estimate of what it would cost to deliver the targets using the knowledge and technology available today.

A future for farming

Importantly, the research confirms that a well-managed agricultural sector can continue to coexist with a healthy reef through improvements to land management practices.

Even more heartening is the report’s finding that we can get halfway to the nitrogen and sediment targets by spending around A$600 million in the most cost-effective areas. This is very important because prioritising these areas enables significant improvement while allowing time to focus on finding solutions that will more cost-effectively close the remaining gap.

Among those priority solutions are improving land and farm management practices, such as adopting best management practices among cane growers to reduce fertiliser loss, and in grazing to reduce soil loss.

While these actions have been the focus of many water quality programs to date, much more can be done. For example, we can have a significant impact on pollutants in the Great Barrier Reef water catchments by achieving much higher levels of adoption and larger improvements to practices such as maintaining grass cover in grazing areas and reducing and better targeting fertiliser use in cane and other cropping settings. These activities will be a focus of the two major integrated projects that will result from the taskforce’s recommendations.

A new agenda

The new study, produced by environmental consultancy Alluvium and a range of other researchers (and for which I was one of the external peer reviewers), is significant because nothing on this scale involving the Great Barrier Reef and policy costings has been done before.

Guidelines already released by the taskforce tell us a lot about what we need to do to protect the reef. Each of its ten recommendations now has formal government agreement and implementation has begun.

Alluvium’s consultants and other experts who contributed to the study – including researchers from CQ University and James Cook University – were asked to investigate how much could be achieved, and at what price, by action in the following seven areas:

  1. Land management practice change for cane and grazing

  2. Improved irrigation practices

  3. Gully remediation

  4. Streambank repair

  5. Wetland construction

  6. Changes to land use

  7. Urban stormwater management

Those seven areas for potential action were chosen on the basis of modelling data and expert opinion as the most feasible to achieve the level of change required to achieve the targets. By modelling the cost of delivering these areas and the change to nutrient and sediments entering the reef, the consultants were able to identify which activities were cheapest through to the most expensive across five catchment areas (Wet Tropics, Burdekin, Mackay-Whitsunday, Fitzroy and Burnett Mary).

Alluvium’s study confirmed the water science taskforce’s recommendation that investing in some catchments and activities along the Great Barrier Reef is likely to prove more valuable than in others, in both an environmental and economic sense.

Some actions have much lower costs and are more certain; these should be implemented first. Other actions are much more expensive. Of the total A$8.2 billion cost of meeting the targets, two-thirds (A$5.59 billion) could be spent on addressing gully remediation in just one water catchment (the Fitzroy region). Projects with such high costs are impractical and highly unlikely to be implemented at the scale required.
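The “cheapest and most certain first” ordering described above amounts to a simple marginal-cost ranking: sort candidate actions by cost per unit of pollutant avoided, then fund them in that order until the target is reached. The sketch below illustrates the logic only; all figures are made-up placeholders, not the Alluvium study's numbers.

```python
# Greedy cost-effectiveness ordering: fund the cheapest reductions first.
# (action, cost in $m, pollutant reduction in arbitrary units) - placeholders.
actions = [
    ("practice change (cane)",     100, 50),
    ("practice change (grazing)",  150, 60),
    ("wetland construction",       900, 20),
    ("gully remediation",         5590, 80),
]

def cheapest_first(actions, target_reduction):
    """Pick actions in order of $ per unit reduced until the target is met."""
    plan, achieved, spent = [], 0, 0
    for name, cost, reduction in sorted(actions, key=lambda a: a[1] / a[2]):
        if achieved >= target_reduction:
            break
        plan.append(name)
        achieved += reduction
        spent += cost
    return plan, achieved, spent

plan, achieved, spent = cheapest_first(actions, target_reduction=100)
print(plan, achieved, spent)
```

Under this ordering, the low-cost practice changes are exhausted before anything like the A$5.59 billion gully-remediation line item is touched, which is exactly why the report recommends deferring the expensive measures until cheaper options run out.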

The Alluvium study suggests we would be wise not to invest too heavily in some costly repair measures such as wetland construction for nutrient removal just yet – at least until we have exhausted all of the cheaper options, tried to find other cost-effective ways of reaching the targets, and encouraged innovative landholders and other entrepreneurs to try their hand at finding ways to reduce costs.

The value of a healthier reef

The A$8.2 billion funding requirement between now and 2025 is large, but let’s look at it in context. It’s still significantly less than the A$13 billion that the Australian government is investing in the Murray-Darling Basin.

It would also be an important investment in protecting the more than A$5 billion a year that the reef generates for the Australian economy and for Queensland communities.

The immediate focus should be on better allocating available funds and looking for more effective solutions to meet the targets to protect the reef. More work is still needed to ensure we do so.

If we start by targeting the most cost-effective A$1 billion-worth of measures, that should get us more than halfway towards achieving the 2025 targets. The challenge now is to develop new ideas and solutions to deliver those expensive last steps in improving water quality. The Alluvium report provides a valuable long-term tool for ensuring that the most cost-effective interventions are chosen to protect the Great Barrier Reef.

This article was written with contributions from Geoff Garrett, Stuart Whitten, Steve Skull, Euan Morton, Tony Weber and Christine Williams.

Read more of The Conversation’s Great Barrier Reef coverage, including articles by experts including Jon Brodie and Ove Hoegh-Guldberg.


John Rolfe has previously received funding from the National Environmental Research Program and the National Environmental Science Program for economic studies evaluating the costs and benefits of reef protection.


Stopping land clearing and replanting trees could help keep Australia cool in a warmer future

Thu, 2016-08-11 15:53
Increasing land clearing could leave Australia hotter and drier. Wilderness Society

Land clearing is on the rise in Queensland and New South Wales, with land clearing laws being fiercely debated.

In Queensland in 2013–14, 278,000 hectares of native vegetation were cleared (1.2 times the size of the Australian Capital Territory). A further 296,000ha were cleared in 2014–15. These are the highest rates of deforestation in the developed world.

Land clearing on this scale is bad for a whole host of reasons. But our research shows that it is also likely to make parts of Australia warmer and drier, adding to the effects of climate change.

How do trees change the climate?

Land clearing releases greenhouse gases into the atmosphere, but the effect of land clearing on climate goes well beyond carbon emissions. It causes warming locally, regionally and even globally, and it changes rainfall by altering the circulation of heat and moisture.

Trees evaporate more water than any other vegetation type – up to 10 times more than crops and pastures. This is because trees have root systems that can access moisture deep within the soil. Crops and pastures have 70% of their roots in the top 30cm of the soil, while trees and other woody plants have 43% of their roots in the deeper part of the soil.

The increased evaporation and rough surface of trees creates moist, turbulent layers in the lower atmosphere. This reduces temperatures and contributes to cloud formation and increased rainfall. The increased rainfall then provides more moisture to soils and vegetation.

The clearing of deep-rooted native vegetation for shallow-rooted crops and pastures diminishes this process, resulting in a warmer and drier climate.

We can see this process at work along the “bunny fence” in southwest Western Australia, where there is a moister atmosphere and more clouds over native vegetation compared with nearby farming areas during summer.

Studies in Amazonia also indicate that as deforestation expands rainfall declines. A tipping point may be reached when deforestation reaches 30-50%, after which rainfall is substantially reduced. Complete deforestation results in the greatest decline in rainfall.

More trees, cooler moister climate

We wanted to know how land clearing could affect Australia’s climate in the future. We did this by modelling two scenarios for different amounts of land clearing, using models developed by CSIRO.

In the first scenario, crops and pasture expand in the semi-arid regions of eastern and southwest Australia. The second scenario limits crops and pastures to highly productive lands, and partially restores less productive lands to savanna woodlands.

We found that restoring trees to parts of Australia would reduce surface temperatures by up to 1.6℃, especially in western Queensland and NSW.

We also found that more trees reduced the overall climate-induced warming from 4.1℃ to 3.2℃ between 2050 and 2100.

Replanting trees could increase summer rainfall by 10% overall and by up to 15.2% in the southwest. We found soil moisture would increase by around 20% in replanted regions.

Our study doesn’t mean replanting all farmed land with trees, just areas that are less productive and less cost-effective to farm intensively. In our scenario, the areas that are restored in western Queensland and NSW would need a tree density of around 40%, which would allow a grassy understorey to be maintained. This would allow some production to continue such as cattle grazing at lower numbers or carbon farming.

Political and social challenges

Limiting land clearing represents a major challenge for Australia’s policymakers and farming communities.

The growing pressure to clear reflects a narrow economic focus on achieving short- to medium-term returns by expanding agriculture to meet the growing global demand for food and fibre.

However, temperatures are already increasing and rainfall is decreasing over large areas of eastern and southwest Australia. Tree clearing coupled with climate change will make growing crops and raising livestock even harder.

Balancing farming with managing climate change would give land owners on marginal land new options for income generation, while the most efficient agricultural land would remain in production. This would need a combination of regulation and long-term financial incentives.

The climate benefits of limiting land clearing must play a bigger part in land management as Australia’s climate becomes hotter and drier. Remnant vegetation needs to be conserved and extensive areas of regrowth must be allowed to regenerate. And where regeneration is not possible, we’ll have to plant large numbers of trees.


Clive McAlpine receives funding from the Australian Research Council and the Queensland Government.

Jozef Syktus receives funding from the Australian Research Council and the Queensland Government.

Leonie Seabrook receives funding from the Australian Research Council.


The Galileo gambit and other stories: the three main tactics of climate denial

Thu, 2016-08-11 06:05
Galileo was right, but that doesn't mean his fans are. Justus Sustermans/Wikimedia Commons

The recently elected One Nation senator from Queensland, Malcolm Roberts, fervently rejects the established scientific fact that human greenhouse gas emissions cause climate change, invoking a fairly familiar trope of paranoid theories to propound this belief.

Roberts variously claims that the United Nations is trying to impose world government on us through climate policy, and that CSIRO and the Bureau of Meteorology are corrupt institutions that, one presumes, have fabricated the climate extremes that we increasingly observe all over the world.

In the world of Malcolm Roberts, these agencies are marionettes of a “cabal” of “the major banking families in the world”. Given the parallels with certain strands of anti-Jewish sentiment, it’s perhaps an unfortunate coincidence that Roberts has reportedly relied on a notorious Holocaust denier to support this theory.

It might be tempting to dismiss his utterances as conspiratorial ramblings. But they can teach us a great deal about the psychology of science denial. They also provide us with a broad spectrum of diagnostics to spot pseudoscience posing as science.

The necessity of conspiracism

First, the appeal to a conspiracy among scientists, bankers and governments is never just a slip of the tongue but a pervasive and necessary ingredient of the denial of well-established science. The tobacco industry referred to medical research on lung cancer as being conducted by an “oligopolistic cartel” that “manufactures alleged evidence”. Some people accuse the US Central Intelligence Agency (CIA) of creating and spreading AIDS, and much anti-vaccination content on the web is suffused with conspiratorial allegations of totalitarianism.

This conspiratorial mumbo jumbo inevitably arises when people deny facts that are supported by an overwhelming body of evidence and are no longer the subject of genuine debate in the scientific community, having already been tested thoroughly. As evidence mounts, there comes a point at which inconvenient scientific findings can only be explained away by recourse to huge, nebulous and nefarious agendas such as the World Government or Stalinism.

If you are addicted to nicotine but terrified of the effort required to give up smoking, it might be comforting instead to accuse medical researchers of being oligopolists (whatever that means).

Likewise, if you are a former coal miner, like Malcolm Roberts, it is perhaps easier to accuse climate scientists of colluding to create a world government (whatever that is) than to accept the need to take coal out of our economy.

There is now ample research showing the link between science denial and conspiracism. This link is supported by independent studies from around the world.

Indeed, the link is so established that conspiracist language is one of the best diagnostic tools you can use to spot pseudoscience and science denial.

The Galileo gambit

How else can science dissenters attempt to justify their contrarian position? Another tactic is to appeal to heroic historical dissenters, the usual hero of choice being Galileo Galilei, who overturned the orthodoxy that everything revolves around the Earth.

This appeal is so common in pseudoscientific quackery that it is known as the Galileo gambit. The essence of this argument is:

They laughed at Galileo, and he was right.

They laugh at me, therefore I am right.

A primary logical difficulty with this argument is that plenty of people are laughed at because their positions are absurd. Being dismissed by scientists doesn’t automatically entitle you to a Nobel Prize.

Another logical difficulty with this argument is that it implies that no scientific opinion can ever be valid unless it is rejected by the vast majority of scientists. Earth must be flat because no scientist other than a Googling Galileo in Gnowangerup says so. Tobacco must be good for you because only tobacco-industry operatives believe it. And climate change must be a hoax because only the heroic Malcolm Roberts and his Galileo Movement have seen through the conspiracy.

Yes, Senator-elect Roberts is the project leader of the Galileo Movement, which denies the scientific consensus on climate change, favouring instead the opinions of a pair of retired engineers and the radio personality Alan Jones.

Any invocation of Galileo’s name in the context of purported scientific dissent is a red flag that you’re being fed pseudoscience and denial.

The sounds of science

The rejection of well-established science is often couched in sciency-sounding terms. The word “evidence” has assumed a particular prominence in pseudoscientific circles, perhaps because it sounds respectable and evokes images of Hercule Poirot tenaciously investigating dastardly deeds.

Since being elected, Roberts has again aired his claim that there is “no empirical evidence” for climate change.

But “show us the evidence” has become the war cry of all forms of science denial, from anti-vaccination activists to creationists, despite the existence of abundant evidence already.

This co-opting of the language of science is a useful rhetorical device. Appealing to evidence (or a lack thereof) seems reasonable enough at first glance. Who wouldn’t want evidence, after all?

It is only once you know the genuine state of the science that such appeals are revealed to be specious. Literally thousands of peer-reviewed scientific articles and the national scientific academies of 80 countries support the pervasive scientific consensus on climate change. Or, as the environmental writer George Monbiot has put it:

It is hard to convey just how selective you have to be to dismiss the evidence for climate change. You must climb over a mountain of evidence to pick up a crumb: a crumb which then disintegrates in the palm of your hand. You must ignore an entire canon of science, the statements of the world’s most eminent scientific institutions and thousands of papers published in the foremost scientific journals.

Accordingly, my colleagues and I recently showed that in a blind test – the gold standard of experimental research – contrarian talking points about climate indicators were uniformly judged to be misleading and fraudulent by expert statisticians and data analysts.

Conspiracism, the Galileo gambit and the use of sciency-sounding language to mislead are the three principal characteristics of science denial. Whenever one or more of them is present, you can be confident you’re listening to a debate about politics or ideology, not science.

The Conversation

Stephan Lewandowsky receives funding from the Australian Research Council, the Royal Society, and the Psychonomic Society.

Categories: Around The Web

Hunting, fishing and farming remain the biggest threats to wildlife

Thu, 2016-08-11 06:05
Snow leopards are just one of the species still threatened by hunting. from www.shutterstock.com

History might judge the Paris climate agreement to be a watershed for all humanity. If nations succeed in halting runaway climate change, this will have enormous positive implications for life on Earth.

Yet as the world applauds a momentous shift toward carbon neutrality and hope for species threatened by climate change, we can’t ignore the even bigger threats to the world’s wildlife and ecosystems.

Climate change threatens 19% of globally threatened and near-threatened species – including Australia’s critically endangered mountain pygmy possum and the southern corroboree frog. It’s a serious conservation issue.

Yet our new study, published in Nature, shows that by far the largest current hazards to biodiversity are overexploitation and agriculture.

The biggest threats to the world's wildlife. Sean Maxwell et al.

The cost of overexploitation and agriculture

We assessed nearly 9,000 species listed on the International Union for the Conservation of Nature’s Red List of Threatened Species. We found that 72% are threatened by overexploitation and 62% by agriculture.

Overexploitation (the unsustainable harvest of species from the wild) is putting more species on an extinction pathway than any other threat.

And the expansion and intensification of agriculture (the production of food, fodder, fibre and fuel crops; livestock; aquaculture; and the cultivation of trees) is the second-largest driver of biodiversity loss.

Hunting and gathering is a threat to more than 1,600 species, including many large carnivores such as tigers and snow leopards.

Unsustainable logging is driving the decline of more than 4,000 species, such as Australia’s Leadbeater’s possum, while more than 1,000 species, including southern bluefin tuna, are losing out to excessive fishing pressure.

Land change for crop farming and timber plantations imperils more than 5,300 species, such as the far eastern curlew, while the northern hairy-nosed wombat is one of more than 2,400 species affected by livestock farming and aquaculture.

The far eastern curlew is threatened by farming. Curlew image from www.shutterstock.com

The threat information used to inform our study is the most comprehensive available. But it doesn’t tell the complete story.

Threats are likely to change in the future. Climate change, for example, will become increasingly problematic for many species in coming decades.

Moreover, threats to biodiversity rarely operate in isolation. More than 80% of the species we assessed are facing more than one major threat.

Through threat interactions, smaller threats can indirectly drive extinction risk. Roads and energy production, for example, are known to facilitate the emergence of overexploitation, land modification and habitat loss.

But until we have a better understanding of how threats interact, a pragmatic course of action is to limit those impacts that are currently harming the most species.

By ensuring that major threats that occur today (overexploitation, agriculture and so on) do not compromise ecosystems tomorrow, we can help to ameliorate the challenges presented by impending climate change.

Getting it right

Overexploitation and agriculture demand a variety of conservation approaches. Traditional approaches, such as well-placed protected areas and the enforcement of hunting, logging and fishing regulations, remain the strongest defence against the ravages of guns, nets and bulldozers.

Achieving a truly effective protected area network is impossible, however, when governments insist on relegating protected areas to “residual” places – those with least promise for commercial uses.

Reducing the impacts of overexploiting forests and fish is also futile unless industries that rely on clearfell logging and illegal fishing transition to more environmentally sustainable practices.

Just as critical as traditional approaches are incentives for hunters, fishers and farmers to conserve threatened species outside designated conservation areas.

Australia’s Leadbeater’s possum remains threatened by logging. Greens MPs/Flickr, CC BY-NC-ND

For nations like Australia, our study shows a growing mismatch between environmental policy and outcomes for biodiversity. Environmental programs such as the once well-funded National Reserve System Strategy and Biodiversity Fund helped conserve wildlife on private and public land, and were fundamental to tackling the biggest prevailing threats to Australia's biodiversity. But these programs have either been discontinued or have little funding to support them at state and federal levels.

On top of this, land clearing – without doubt one of the largest threats to biodiversity – is on the increase across the country because laws have been repealed. Any benefits accrued by previous good environmental programs are being eroded.

If we are serious about tackling biodiversity loss in Australia, we need to target the biggest threats. This means efforts to reduce threats from agriculture and overexploitation of forests and fish must include durable environmental regulation.

This article was co-authored by Thomas Brooks, head of science and knowledge at the International Union for the Conservation of Nature.


Sean Maxwell receives funding from the ARC Centre of Excellence for Environmental Decisions.

James Watson receives funding from the Australian Research Council. He is Director of the Science and Research Initiative at the Wildlife Conservation Society.

Richard Fuller receives funding from the Australian Research Council.


Nine years after the Pasha Bulker storm, we're finally getting a handle on East Coast Lows

Wed, 2016-08-10 16:48

In June 2007, Australia was pummelled by five East Coast Lows. The most significant of them, which struck on June 8-9, is still referred to as the “Pasha Bulker” storm, after the 76,000-tonne bulk carrier that ran aground near Newcastle. The storm caused major flooding, strong winds, high seas and A$1.6 billion in damage, making it Australia’s eighth most expensive disaster in the last 50 years.

East Coast Lows (ECLs) have been important features of the eastern seaboard for centuries, with the first case studies published back in 1954. But by June 2007 it had been ten years since the last serious scientific look at these storms. The damage suffered that month made it clear how much we still didn’t know about these weather systems, let alone about how they might behave in the future.

Instead of a whole bunch of scientists going off and doing their own thing, we formed the Eastern Seaboard Climate Change Initiative, in which local universities and state and federal governments could work together to identify the biggest scientific questions for the eastern seaboard, and start to solve them.

Nine years and a slew of research papers later, we know a lot more about ECLs than we once did. We have built a strong research network that can expand our knowledge still further and put it into practice. Today, a special issue of the Journal of Southern Hemisphere Earth Systems Science highlights some of the things we’ve learned.

What do we know?

There are seven papers in the special issue, covering a broad range of topics.

Danielle Verdon-Kidd and her colleagues look back at the Pasha Bulker storm and reflect on the scale of the impacts, as well as issues for future flood planning, such as improved education about the dangers of entering floodwaters.

A group from the Bureau of Meteorology (including myself) has also developed a new online database of East Coast Lows over the past 60 years, to help emergency managers look back on the impacts of past storms or find out how many of the big events they remember were actually ECLs.

Going back still further, Stuart Browning and Ian Goodwin have looked at what sorts of ocean and atmospheric conditions influence East Coast Lows, as these storms tend not to be as strongly affected by big climate drivers such as La Niña. This research has helped to extend the record of East Coast Lows back to the 19th century and found that the numbers of ECLs can vary quite a lot over decades and longer. Interestingly, the past few decades (up to 2014) have been a period of relatively low activity.

Anthony Kiem and his colleagues have delved into the question of how coastal rainfall patterns and impacts can change, depending on the “type” of ECL that happens. This work, as well as the work by Browning and Goodwin, highlights how important it is to consider the different types of East Coast Lows – a storm that causes heavy rain in the Northern Rivers looks very different to one that brings downpours to Gippsland, and these might also change in different ways over time. This teases out important detail that can be washed out in studies that lump all storms in together.

Before we can use climate models to assess how East Coast Lows and their impacts may change on the eastern seaboard, we need to know whether our models are doing a good job. So Alejandro Di Luca and colleagues have assessed how well the NARCliM regional climate model ensemble is able to represent East Coast Lows. They found that regional models have real benefits over global climate models, particularly for the most extreme events.

Despite these promising results, studies led by Nadeeka Parana Manage and Natalie Lockart found that there is still a way to go before the regional models produce data of the quality needed for simulating river flows and dam levels, and how future changes to storm patterns might affect these.

So what’s next?

We know a lot more than we did nine years ago about things like how the upper atmosphere influences East Coast Lows, and how severe floods and East Coast Lows have changed over the past century. We are also starting to get a handle on how they may change in the future. Climate change is expected to reduce their frequency during the cool months May-October (which is when they currently happen most often), but potentially make them more common during the warmer months.

But there are still a lot of things we don’t know. The papers in this issue are a start, but research continues and our group has many more questions left to answer. These include how ECLs have changed in the more distant past; how sea surface temperatures influence their frequency and impacts; and how changes in ECLs and other climate processes can affect our water security.

A whole bunch of research is also about to start into how ECLs interact with other climate extremes now and into the future, as part of the NSW Government’s Climate Change Impacts and Extreme Climate Events research programs and the Australian government’s National Environmental Science Program.

So read the articles, have a taste and watch this space: many questions remain, and researchers from around the country are working together to answer them, helping us better understand the special, complex climate of the eastern seaboard of Australia.


Acacia Pepler receives funding from the Australian Research Council. The Eastern Seaboard Climate Change Initiative is spearheaded by the NSW Office of Environment and Heritage, and involves researchers from the Bureau of Meteorology, the University of New South Wales, Macquarie University and the University of Newcastle. The research was funded in part by the NSW Environmental Trust, NSW Department of Finance and Services, Hunter Water, and the Australian Research Council.


Rigs to reefs: is it better to leave disused oil platforms where they stand?

Wed, 2016-08-10 06:06
Can undersea oil rigs become homes? US Bureau of Ocean Energy Management

The global offshore oil and gas industry has installed a wide variety of infrastructure throughout our oceans, including tens of thousands of wells, thousands of platforms and many thousands of kilometres of seabed pipelines.

Many of these structures have been in service for several decades and are approaching retirement. The North Sea, for example, has more than 550 platforms and undersea production facilities, virtually all of which are set to be decommissioned in the next 30 years.

In Southeast Asia, the issue is even bigger: almost half of the region’s 1,700 offshore installations are more than 20 years old and approaching retirement.

What happens to old offshore oil and gas infrastructure?

After decommissioning and cleaning a platform, seabed structure or pipeline, its operators are faced with a choice: dismantle and remove it completely; leave it in place; or remove some of it while leaving the rest behind.

The choice depends largely on what is technically feasible, as well as what is desirable from an environmental, economic and societal perspective, and of course what is legally allowed.

The earliest relevant international law, the 1958 Geneva Convention on the Continental Shelf, requires the complete removal of disused marine infrastructure. But the United Nations Convention on the Law of the Sea, which has largely superseded it, is more lenient. It states that decisions should take into account “generally accepted international standards established … by the competent international organisation” – in this case the International Maritime Organisation (IMO).

The IMO’s 1989 guidelines allow structures to be left in place on a case-by-case basis. Due consideration must have been given to safety of navigation, rate of deterioration, risk of structural movement, environmental effects, costs, technical feasibility and risks of injury associated with removal.

The guidelines also refer to the possibility of “new use or other reasonable justification” for in situ disposal. This opens up some possibilities for how offshore platforms might take on a new life without being removed.

Is complete removal worthwhile?

Europe has so far tended to favour complete removal of offshore infrastructure, in line with international law. Safely recovering these ageing and vast structures from harsh environments is technically challenging, and the industry has developed some impressive technology such as the Pioneering Spirit, a specialised vessel constructed to lift steel platforms from the North Sea.

Impressive… but also expensive. kees torn/Wikimedia Commons, CC BY-SA

Complete removal is expensive, both to oil and gas companies and the taxpayer. It also leaves operators facing the problem of what to do with the recovered material. While some parts of the topsides of platforms can be refurbished if structurally sound, most of the material is not reusable. Some elements can be recycled, but much of it will inevitably end up in landfill.

From an environmental perspective, the notion of returning the seabed to its original state is undoubtedly born of the right intentions. But when engineered structures have been part of the marine environment for several decades, might it do more harm than good to remove them?

A new life for platforms

Artificial reefs are often deliberately placed in our oceans to provide habitat for marine life or sites for recreational diving. But many offshore oil and gas structures also fulfil these functions – for instance, by providing breeding sites for fisheries. Removing them might therefore harm these ecosystems.

Despite this, European law only allows artificial reefs to be created from new materials, rather than decommissioned infrastructure.

The United States, which has national laws that allow offshore infrastructure to be left in place, has established a "rigs to reefs" program administered by the Bureau of Safety and Environmental Enforcement. Under this program, more than 400 decommissioned rigs have been converted to permanent reefs since 1986.

Rigs cannot simply be left to rust in the ocean; projects like this require rigorous assessment before being approved. But the assessment criteria are different and typically less stringent than for the earlier production phase of the rig’s life, largely because there is no longer a risk of spills after decommissioning.

During their initial operating life, marine structures and pipelines must meet strict criteria that limit movement or deformation. This is to ensure that machinery operates correctly and containment systems do not release hydrocarbons into the marine environment. Strict regulations also apply to the removal of hydrocarbons and residues from the system during decommissioning and cleanup.

But once decommissioned, all that is required is that the structure is sufficiently stable on the seabed and will not break apart in ways that would harm the environment or pose a danger to shipping.

Leaving disused infrastructure in the ocean also raises the critical question of who bears ultimate responsibility for it. Should ownership stay with the original operator, or be transferred to the government? Liability for any damage that might occur in the future, and who should bear that risk, remains a live question.

Will it have a role after retirement? CSIRO, CC BY-NC-SA

What should Australia do?

Australia’s offshore oil and gas industry is less mature than those in Europe and the United States. As a result, the fate of decommissioned offshore infrastructure is still an emerging issue.

Australia’s current regulations favour complete removal. But the National Offshore Petroleum Safety and Environmental Management Authority is exploring the possibility of supporting an in situ decommissioning policy.

This would involve amending the law to allow certain new uses, as well as to resolve issues of decommissioning standards, safety and risk, liability and ownership. The lack of any established practice gives Australia a unique chance to show innovative leadership on this issue.

Developing an Australian version of the “rigs to reefs” policy would require input from engineers, natural scientists, environmental managers, oil and gas economists, lawyers and others, to work out precisely what is possible and preferable in different locations.

There is little doubt that pressures on the ocean environment will only increase. Growing populations will increase demand on fisheries and probably lead to the development of large offshore aquaculture projects, as well as escalation of shipping and ocean-based transport. Similarly, the demand for energy may drive broad implementation of wave energy and other marine renewables.

With the growing variety of industries set to use the oceans in future, now is the right time to take a wide-ranging look at how best to handle the structures that are already there.


Susan Gourvenec works for a research centre that receives funding from the State and Federal governments, as well as from a range of oil and gas operators and contractors, through joint research projects or contract testing.

Erika Techera receives funding from the Australian Research Council for a joint project related to marine species and oceans governance. She is a member of the Oceans Science Council of Australia (OSCA).


Australia needs better policy to end the alarming increase in land clearing

Tue, 2016-08-09 06:08

Land-clearing laws are being fiercely debated in both Queensland and New South Wales. These two states are responsible for the majority of cleared land in Australia’s recent history.

The latest assessment from Queensland shows that 296,000 hectares of vegetation was cleared in 2014-15. More than a third of this is remnant vegetation that has never been cleared before.

To put this in perspective, around 580,000ha of forest was cleared in Brazil over the same time. While this is twice the area recently cleared in Queensland, it’s worth remembering that the rate of clearing has been much higher in the past, before legislation first came into effect more than a decade ago.

Land-clearing rates were higher before laws were introduced. Queensland government, Land cover change in Queensland 2014–15, CC BY

In NSW, around 23,000ha of vegetation has been cleared for cropping and pasture since 2010, and 59% of this clearing is "unexplained": it may be clearing of regrowth, routine agricultural management exempt from legislation, or illegal clearing.

The science is clear about the detrimental effects of land clearing on the climate, native wildlife and soil health. More than 400 international and Australian scientists recently signed a declaration highlighting their concern about the rate of forest loss in Australia. Such a degree of coordination between scientists hasn't been seen since the original Brigalow Declaration in 2003.

Yet how to address land clearing effectively remains fiendishly complex. Land clearing is a deeply political issue in Australia, because any policy changes affect many people in the community.

Recently proposed policy changes in both NSW and Queensland have proved to be contentious. Farmers and environmental groups in NSW have highlighted concerns with Premier Mike Baird’s proposed Biodiversity Bill. Last week, Queensland farmers unhappy with Premier Annastacia Palaszczuk’s proposal to re-strengthen land-clearing laws marched on Parliament House to protest the changes.

Australia has the ability and resources to reduce land clearing, if it chooses. How might we do it?

1. Stop the policy flip-flop

Since the 1970s, state and federal governments have introduced at least 40 regulations, incentive schemes and policy frameworks related to vegetation management. One of the key concerns reported by farmers is the “policy flip-flop”, in which successive governments introduce land-clearing laws that are strong, then relaxed, then strong again.

These frequent policy changes create huge uncertainty for farmers who want to make long-term business decisions. They also mean that government resources are almost constantly devoted to designing new policies, rather than to ensuring that existing policies are effective.

For land clearing to be controlled over the long term, more resources need to be allocated to encouraging, supporting and enforcing compliance with vegetation laws.

2. We do need regulation

Strict controls on vegetation clearing are often deeply unpopular with landholders. Relying too heavily on regulation can also lead to poor compliance and unnecessary costs. However, the reality is that some form of “top-down” regulation will always be needed to protect native vegetation in the long term. History has shown this to be the case.

Before it introduced land-clearing controls in the 1980s, the South Australian government provided landholders with financial incentives to conserve native vegetation. Unfortunately, clearing did not decline, as the scheme attracted only landholders who had already planned not to clear their vegetation.

A combination of regulation and long-term financial incentives is needed. Unfortunately, most incentive schemes are small compared to the value of farming, and are usually short-term – such as the five-year package announced as part of the NSW Biodiversity Bill.

3. Put a price on carbon

Encouraging long-term protection of native vegetation requires a market signal, and a carbon price would do this. The federal government’s Emissions Reduction Fund doesn’t provide bang for buck, and there’s evidence it is being used to conserve vegetation that would never have been cleared anyway.

Increased clearing in Queensland may have effectively cancelled out the carbon emissions saved under the Direct Action plan.

We know it is possible for carbon farming to be a win-win for the climate and wildlife. Many parts of Australia need only a moderate carbon price to make restoring and conserving native vegetation a profitable business enterprise. Long-term policy certainty and a consistent message from federal and state governments are needed.

4. Self-regulation where appropriate

Over the past decade, there has been a trend towards self-regulation of vegetation management. For low-risk activities, it makes sense for landholders to be able to manage vegetation by simply notifying the government, rather than applying for a permit. This reduces costs for the landholder, and frees up government resources to monitor compliance and regulate high-risk clearing (where the proposed area to be cleared is large, or may affect threatened species habitat).

Self-assessable vegetation clearing codes have been introduced in New South Wales and Queensland, but have been criticised for enabling broad-scale clearing. Clearly, more work is needed to strike the right balance between managing environmental risks and minimising regulatory costs.

5. Rebuilding trust

The debate over land clearing in Australia is so heated and highly polarised that it can be difficult to see a path forward. There appears to be very little trust between some landholders and state governments, leading in some cases to tragic consequences.

Providing some long-term policy certainty and consistency between federal and state government messages will go a long way in helping to rebuild mutual trust. A price on carbon would allow landholders to generate income from sequestering carbon alongside farm businesses.

Reducing regulation in circumstances where the environmental risks are low while ensuring resources are devoted to supporting compliance can reduce costs for both landholders and governments without jeopardising the environment.

Australia needs to get land-clearing policy right, and soon. While the debate rages on, more vegetation is lost – and ultimately we all lose.


Megan C Evans receives funding from the National Environmental Science Programme (NESP) Threatened Species Recovery Hub, an Australian Postgraduate Award and a CSIRO top-up scholarship. She is a signatory to the recent Scientist's Declaration on Accelerating forest, woodland and grassland destruction in Australia.


Death by dingo: outsourcing pest control raises uncomfortable questions

Mon, 2016-08-08 06:08
Female dingo in Oxley Wild Rivers NP, New South Wales Guy Ballard, CC BY-NC

From cats and foxes to goats and cane toads, invasive animals are one of Australia’s biggest environmental problems. Over the past few weeks, one Queensland council has been trying a new approach to controlling goats on an offshore island: introducing dingoes from the mainland.

But since dingoes were moved to Pelorus Island to kill goats, passions have overflowed for and against.

Queensland RSPCA chief executive Mark Townend was notable among the opposition. He stressed that his organisation is not against feral animal control, but opposes the “very cruel method” of using dingoes to achieve it.

Some people have also strongly objected to the implantation of poison capsules, containing sodium fluoroacetate, into the Pelorus Island dingoes. This is intended to kill them after they have killed the goats.

In a rapidly changing world, people are trying new ways to save wildlife and help the environment. These strategies are forcing us to ask new, and sometimes difficult, ethical questions.

Changing world

Humans have created a global patchwork of degraded environments. Australia has the world’s worst mammal extinction record, as well as booming populations of invasive herbivores and ongoing problems with native habitat loss.

When it comes to managing invasive animals, Australians have often been quick to use lethal "pest" control. Methods include poisoning, trapping and shooting, used to protect livestock and wildlife by reducing pest populations until their impacts reach acceptable levels.

This approach does not sit well with many Australians and it has become common to see vocal opposition to wildlife culling programs that are perceived as indiscriminate, unnecessary or unacceptably cruel.

At the same time, there is increasing support for reintroducing native predators to help fix environmental problems.

Some experts in Australia have suggested that Australia should use dingoes to control invasive herbivores, red foxes and feral cats, following the example of our North American counterparts.

Some call it “rewilding”. Others simply see it as biological control. Regardless, a range of private and public land managers believe that dingoes should be Australia’s primary agents of ecosystem restoration.

Putting aside the debate about the likely success of such schemes, the issue of public acceptance remains. Proponents claim that using dingoes for environmental benefit is both “virtuous” and “free”. But the objections to the use of dingoes for feral goat control shows that there is still widespread dissatisfaction with getting dingoes to do our dirty work.

The ethics of killing

Every control technique has a welfare impact on wildlife, causing some pain and suffering.

We can measure objectively how well a technique works, but assessments of humaneness are always subjective. When we choose lethal control, we condemn animals to death.

Although it has been marketed as “compassionate conservation”, using dingoes for pest control means that other animals will be hunted, maimed and killed. This applies to the animals we want removed, such as goats, as well as to other wildlife that shares the habitat.

The future of using dingoes and other predators as pest controllers hinges on whether or not the public finds it acceptable. Do people draw a line between “natural” ways of killing invasive animals (with dingoes), and more artificial means (such as poison)?

We know, for instance, that dingo attacks can be extremely distressing for livestock producers.

Queensland RSPCA’s opposition to the Pelorus Island trial doesn’t mean an end to rewilding, but it is significant. It reminds us that welfare is an important issue for invasive animal control, regardless of whether humans or animals are doing the killing.

Some people find the notion of dingo-based pest control acceptable, even appealing, but others do not. Just as society expects that we carefully assess the use of poisons, traps or bullets, so too we should consider the welfare impacts of outsourcing death to dingoes and making them our tools for ecosystem management.

The Conversation

Guy Ballard is Project Leader for the Wild Canids in Agri-Ecosystems project, funded by the Invasive Animals Cooperative Research Centre.

Peter Fleming receives funding from Australian and NSW Governments, and the Invasive Animals CRC. He is not affiliated with any political or industry organisation or think tank.

Categories: Around The Web

Without action, Asia-Pacific ecosystems could lose a third of their value by 2050

Mon, 2016-08-08 06:08

Ecosystem services – the natural processes that allow Earth to sustain life and provide us with everything we have and see – are facing an uncertain future.

Between 1997 and 2011, the global value of ecosystem services declined by up to US$20 trillion per year as a result of changing land use. To put that in context, the world’s entire GDP is currently just under US$74 trillion.

Our research shows that, in the Asia-Pacific region, this downward trend is likely to continue unless there are significant policy changes. By 2050, we predict that ecosystem service values could drop by 34% from their 2011 base value of US$13 trillion.

But, more optimistically, we also forecast that ecosystems could grow in value by 24% by mid-century, if policies are put in place to safeguard these crucial environmental values.

An Asian century (of ecosystems)

The Asia-Pacific region has historically followed the global trend in ecosystem depletion. But the future doesn’t have to be like the past. The decisions we make as a society will determine what our world will look like in that future.

With that in mind, our research focused on a range of land-use scenarios to try to forecast the consequences of various social, environmental and economic policies.

We used these scenarios to derive estimates of land-use change (urban, cropland, forest, grassland, wetland, desert), population, GDP and other variables such as inequality up to the year 2050. Changes in total value of ecosystem services in these scenarios were estimated to be due to two factors: the change in area covered by each ecosystem type; and the change in the “unit value” – the total value of all the marketed and non-marketed ecosystem services, per area, per year of each ecosystem type due to degradation or restoration.
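The valuation approach described above multiplies each ecosystem type’s area by its per-area “unit value” and sums the results, so a scenario’s total can shift through either factor. A minimal sketch of that calculation, using entirely hypothetical placeholder figures (the ecosystem types, areas and unit values here are illustrative, not the study’s data):

```python
# Two-factor valuation sketch: total value = sum over ecosystem types
# of (area x unit value). All numbers below are hypothetical placeholders.

# For each ecosystem type: (area in million hectares, unit value in US$/ha/year)
baseline = {"forest": (100, 3000), "wetland": (20, 25000), "cropland": (150, 1500)}
scenario = {"forest": (90, 2800), "wetland": (18, 24000), "cropland": (160, 1400)}

def total_value(state):
    # Sum each type's area multiplied by its per-area unit value.
    return sum(area * unit for area, unit in state.values())

v0 = total_value(baseline)   # baseline total value
v1 = total_value(scenario)   # scenario total value (smaller/degraded forest and
                             # wetland, larger but lower-value cropland)
change_pct = 100 * (v1 - v0) / v0
print(f"Change in total ecosystem service value: {change_pct:.1f}%")
```

With these placeholder numbers the total falls even though cropland area grows, because both the area and the unit value of the higher-value ecosystems decline, which is the interaction the scenarios are designed to capture.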

In the Asia-Pacific region, Afghanistan shows the greatest potential losses and gains, as do other countries that are more susceptible to desertification. At the same time, these countries also have the greatest potential for reversing land degradation.

On the other hand, in this region, countries like Japan and New Zealand have the least potential for fluctuations in their ecosystem service values. This is because they are already highly developed and potentially have more stable climates.

Australia’s prospects

Australia, second only to China in ecosystem services value, also shows an extensive range of values among our four scenarios. Starting with a terrestrial ecosystem services value of US$3.4 trillion per year in 2011 (roughly three times Australia’s GDP that year), we forecast that by 2050 ecosystem services could grow in value by as much as 21%, or decrease by up to 29%.

This translates to either a gain of US$700 billion per year or a loss of US$980 billion per year – a figure that’s not far short of Australia’s current annual GDP.
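The dollar figures above follow directly from applying the forecast percentage changes to the 2011 base value. A quick check of that arithmetic:

```python
# Check of the figures quoted above: Australia's 2011 terrestrial ecosystem
# services value of US$3.4 trillion per year, with forecast changes of
# +21% (gain scenario) or -29% (loss scenario) by 2050.
base = 3.4e12          # US$ per year, 2011
gain = base * 0.21     # upper scenario
loss = base * 0.29     # lower scenario
print(f"gain ≈ US${gain / 1e9:.0f} billion/yr, loss ≈ US${loss / 1e9:.0f} billion/yr")
```

This reproduces the article’s rounded figures of roughly US$700 billion gained or US$980 billion lost per year.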

The scenarios used incorporate a range of world views and policies, and the impacts of these on the entire, integrated system, including population, energy use, equity, environmental change, climate change and more. Our research features a country-by-country breakdown of the outcomes of each scenario, although it is impossible to separate out the impact of individual policies, especially given the differences in each country.

The consequences and solutions

The loss of ecosystem services will be felt most strongly by the poorest in any society, as they depend most directly on ecosystem services. They are the first to feel the effects when those services begin to disappear, and the least able to replace or ameliorate the loss. Increasing ecosystem services, on the other hand, would increase sustainable human well-being.

Around the world, the focus on ecosystem services has been growing quickly. Recent major policy reforms in this direction include a White House memo directing US federal agencies to incorporate ecosystem services into their planning, investment and regulations.

Other countries have also begun to incorporate ecosystem services in their policies. The European Union has mandated all member countries to produce national ecosystem service assessments, for use in policy and decision-making.

At the international level, the United Nations has set up an Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, analogous in structure and function to the Intergovernmental Panel on Climate Change. The international Ecosystem Services Partnership has also been established to co-ordinate and facilitate the exchange of information and expertise across the world.

We have taken ecosystem services for granted for far too long. The UN Sustainable Development Goals, adopted last year by all UN countries, include specific calls to promote sustainable use of terrestrial ecosystems, to halt and reverse land degradation, to ensure clean water and food security, as well as to safeguard life both on land and in the oceans.

If we are taking these goals seriously, we need to put natural capital and ecosystem services “on the books” as a major contributor to sustainable well-being.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

State of the Climate 2015: global warming and El Niño sent records tumbling

Thu, 2016-08-04 15:17

The State of the Climate in 2015 report, led by the US National Oceanic and Atmospheric Administration, was released yesterday. Unfortunately, it paints a grim picture of the world’s climate last year.

For a second consecutive year the globe experienced its hottest year on record, beating the 2014 record by more than 0.1℃. From May 2015 onwards, each month set a temperature record for that particular month, a pattern that has yet to end.

The record-breaking temperature anomaly in 2015 (around 1℃ higher, on average, than what would be expected in a world without humans) was in large part due to human-caused climate change. A small fraction of the heat was because of a major El Niño event, which developed midway through 2015 and ran into this year.

During El Niño events we see warmer sea surface temperatures in the central and eastern Pacific Ocean. A resulting transfer of heat from the ocean into the lower atmosphere causes a temporary warming effect. In La Niña seasons, the opposite happens.

Overall, about 0.05-0.1℃ of the global temperature anomaly for 2015 was due to El Niño. The bulk of the remainder was due to climate change. So even if we hadn’t had an El Niño last year, 2015 would still have been one of the hottest years on record.

Of the 16 hottest years ever recorded, 15 have happened this century.

Extreme events around the world…

At regional scales we also saw many extreme events last year. The downward trend in Arctic sea ice continued, with the lowest annual maximum extent on record. Alaska’s winter was almost non-existent, with many Arctic mammals and fish being forced to change their behaviour and shift their habitats.

Many extreme heatwaves occurred in 2015. These included a deadly hot spell in India and Pakistan and severe heat events in Europe and North America. Combined, these events killed thousands of people.

In Europe, various summer heat records were set in Spain, the Netherlands, France and Britain, while Germany posted an all-time record temperature.

Seasonal-scale extreme heat occurred over many parts of the globe. There were many more warm days and nights than normal over much of Europe in summer, and in Russia and North America in spring.

Extreme events occurred around the world in 2015. NOAA NCEI

Across the world there were more tropical cyclones than normal, mainly due to increased cyclone activity across the Pacific basin, and many significant flood events. On the other hand, large areas suffered severe drought (14% of the land surface, up from 8% in 2014).

The Ethiopian drought devastated crops and affected millions of people. Parts of South America experienced the worst drought in 80 years. The western US drought continued, despite the fact that El Niño events usually bring this region some reprieve.

…including in Australia

In Australia, probably the most significant climate extreme we had was the record heat in October.

The country experienced its biggest monthly temperature anomaly on record – almost 3℃ above the historical national average. The frequency of very warm days was also well above average. This unusual early heat triggered bushfires across the southeast.

Even given the El Niño event (which normally warms up Australia in spring and summer), the maximum temperature records that were set were, for example, at least six times more likely in Melbourne than they would have been in the absence of human-caused climate change.

For 2015 as a whole, Australia experienced its fifth-warmest year on record. Nine out of 12 months were warmer than average.

A continuation of climate change trends

Besides the record heat, the world saw many other unwanted records tumble in 2015, providing ever more extensive evidence for the effect that humans are having on the climate. Greenhouse gas concentrations (the primary cause of our changing climate) rose to record high levels, with carbon dioxide concentrations passing the 400 parts per million mark at many sites. The year’s margin of increase in atmospheric carbon dioxide concentrations was also the largest on record.

Our influence on the climate can also be seen through record high globally-averaged sea levels and the highest globally-averaged sea surface and upper ocean temperatures on record.

The trend towards more heat extremes and fewer cold ones also continued. In fact, 2015 had about three times as many very warm days as very cold ones globally compared with the historical average.

A plethora of records was broken, with a human fingerprint being clear in many cases.

What’s next?

We already know that 2016 is very likely to overtake 2015 globally as the hottest year on record. As the El Niño peaked earlier this year we saw many extreme events around the world and in Australia. This included the devastating coral bleaching on the Great Barrier Reef, which would have been virtually impossible without human-caused climate change.

Unfortunately, in many ways, the climate of 2015 is not likely to stand out as especially unusual in a few years’ time. More record hot years are likely, with associated extreme weather events, as greenhouse gas concentrations continue to climb.

Only with rapid and substantial cuts to these emissions will it be possible to limit global warming to well below 2℃, a key aim of the Paris climate agreement, and reduce the likelihood of yet more climate records tumbling.

The Conversation

Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.

Sarah Perkins-Kirkpatrick receives funding from the Australian Research Council

Categories: Around The Web

FactCheck Q&A: as the climate changes, are 750 million refugees predicted to move away from flooding?

Thu, 2016-08-04 13:38
How does Peter Singer's figure of 750 million fit within the range of estimates on 'climate change refugees'? Q&A

The Conversation is fact-checking claims made on Q&A, broadcast Mondays on the ABC at 9.35pm. Thank you to everyone who sent us quotes for checking via Twitter using hashtags #FactCheck and #QandA, on Facebook or by email.

Excerpt from Q&A, August 2, 2016, watch from 1.12.

PETER SINGER: That is going to basically inundate every coastal city around the world, including, of course, all Australian major cities are coastal. It is going - estimated to cause something like 750 million refugees just moving away from that flooding. Never mind those who also because refugees because (indistinct)…

VIRGINIA TRIOLI: Some of those claims are contested, of course?

PETER SINGER: Well, they are contested but do you want to take the chance, right? – Peter Singer, Ira W. DeCamp Professor of Bioethics, Princeton University, speaking on Q&A with host Virginia Trioli, August 2, 2016.

Ethicist Peter Singer told Q&A that climate change-related sea level rises are “estimated to cause something like 750 million refugees just moving away from that flooding”.

It is beyond the scope of a FactCheck to say with any certainty what will happen in the future. And there is no single official data source on the numbers of people who migrate because of the impacts of climate change, partly because there is no legal definition of a “climate change refugee”. Furthermore, most such displacement occurs within countries, not across international borders, and is always due to a number of different factors. Finally, there is no systematic monitoring of such movement.

That said, we can check how Singer’s figure of 750 million fits within the range of estimates that exist on this question.

Checking the source

When asked by The Conversation for sources to support his statement, Peter Singer said:

Factchecking always welcome! My source for the figure is Climate Central and in terms of the possible extent of sea level rises, please see this paper by Hansen et al.

The figure I gave is near the top end of the Climate Central range, but remember that I agreed with Virginia Trioli that this is contested. I argued that if it is even a small chance, the stakes are too high to be worth taking the risk.

Climate Central is a group of scientists and journalists researching and reporting climate change and its effects. In 2015, the group said that:

Carbon emissions causing 4°C of warming — what business-as-usual points toward today — could lock in enough sea level rise to submerge land currently home to 470 to 760 million people, with unstoppable rise unfolding over centuries.

Predictions vary and uncertainties abound, but climate scientists say it is possible we may reach 4°C of warming by 2100 if insufficient effort is made to rein in emissions.

As Singer acknowledges, his figure of 750 million is at the upper end of estimates – and he readily agreed that estimates are contested.

Without detracting from Singer’s broader point about the human consequences of climate change, it is worth taking a closer look at the context, assumptions and methodologies behind some of these alarming-sounding figures.

What does Singer’s source say about climate refugees?

When Climate Central released its Mapping Choices report in 2015, the headline it used on its website was “New Report and Maps: Rising Seas Threaten Land Home to Half a Billion”.

But to be clear, Climate Central’s full report did not say that 750 million people would need to move away due to rising sea levels – in fact, unlike Singer, it didn’t use the term “refugees” at all.

Instead, it said only that under a 4°C warming scenario, there could be “enough sea level rise to submerge land currently home to 470 to 760 million people” (emphasis added).

Many people would indeed move in that scenario – but past experience from around the world means we can be confident that many would also stay and try to live with a changed environment.

The Climate Central report acknowledges that its estimates do not take adaptation strategies into account, noting:

Results do not account for present or future shoreline defences, such as levees, that might be built, nor for future population growth, decline or relocation.

A vast range of estimates – and plenty of guesswork

Some of the numerical estimates on climate-related displacement are based on crude methodologies, as explained in my 2012 book, Climate Change, Forced Migration, and International Law.

For example, in 1993 social scientist Norman Myers wrote a paper suggesting that 150 million people could be displaced by climate change by the mid-21st century. He had identified areas expected to be affected by sea-level rise, and then calculated the anticipated population of those areas in 2050. In subsequent work and interviews, he said the figure could be closer to 200 million or 250 million. Estimates ranging from 50 million to 600 million to even a billion have been cited by some.

The Observer published an article in 2010 headlined “Climate change will cost a billion people their homes, says report”.

However, that report misconstrued a paper by Dr François Gemenne – whose work is empirically based and well-reasoned – that referred to the Intergovernmental Panel on Climate Change’s (IPCC) comment that freshwater availability in a changing climate may adversely affect more than a billion people by the 2050s. That’s a different story from the one told in The Observer’s headline.

Many of these upper end estimates – and the methodologies used to calculate them – have been criticised by other researchers, who note that very big estimates often fail to account for adaptation.

The IPCC itself has said that:

Estimates of the number of people who may become environmental migrants are, at best, guesswork.

How much weather-related displacement of people have we seen so far?

Peter Singer’s comment was about future impacts of climate change. But what do we know about current and past climate-related movement?

The best statistics on this are published by the Internal Displacement Monitoring Centre (IDMC), the leading source of information on internal displacement whose role has been endorsed by the UN. It said in its Global Estimates 2015: People displaced by disasters report that:

Since 2008, an average of 22.5 million people have been displaced by climate- or weather-related disasters [each year].

Internal Displacement Monitoring Centre (IDMC)

These figures were also recognised in the Nansen Initiative’s Agenda for the Protection of Cross-Border Displaced Persons in the context of Disasters and Climate Change, endorsed by 109 States (including Australia) in late 2015, and by the UN Secretary-General’s report on refugees and migrants prepared for a high-level summit on large movements of refugees and migrants to be held in New York in September 2016.

Verdict

Are rising seas “estimated to cause something like 750 million refugees” to have to move, as Peter Singer said? Not according to the source he provided, which actually found that sea level rises under a 4°C warming scenario could submerge land currently home to 470 to 760 million people; the report didn’t say that all or most would subsequently become refugees.

As Singer acknowledged, his figure of 750 million people being affected by climate change-related flooding in future is at the upper end of estimates – and is contested. The methodologies and assumptions underpinning some of the upper end estimates have been critiqued by scholars, as they often do not adequately account for adaptation. – Jane McAdam

Review

In general, I and others in the migration field would strongly agree with the author’s sound critique of Singer’s assertion.

Human mobility in the context of climate change is complex. A more nuanced understanding of the issue is limited by a lack of agreement on legal definitions, by the methodological choices made to project numbers of environmental migrants and, importantly, by an understatement of the agency and adaptive capacities of people.

Communities in coastal and low-lying areas that may be affected by sea-level rise in the future are affected today by recurrent natural hazards, coastal erosion, land subsidence, and saltwater contamination of arable land.

Empirical studies, including from the United Nations University, have explored how migration contributes to livelihoods and household adaptation strategies.

Experts tend to agree that the types of movements that might fall under the moniker “climate migrant” are varied and complex. Robust estimates by the Internal Displacement Monitoring Centre fall short of accounting for people living in prolonged displacement, displaced across borders (generally agreed to be a minority), or migrating away from their homes due to the long-term effects of climate change (erratic weather, droughts, and the gradual loss of land). The last grouping may be the largest – and would be considered labour migration under current definitions.

The author’s section on weather-related displacement rightly adds an important dimension to a focus on sea-level rise, which is by no means the only cause of movement. An additional important point: climate change experts have largely been reluctant to attribute any individual weather event to climate change, thus making it difficult to attribute displacement due to climate- or weather-related disasters to climate change. – Julia Blocher

Have you ever seen a “fact” worth checking? The Conversation’s FactCheck asks academic experts to test claims and see how true they are. We then ask a second academic to review an anonymous copy of the article. You can request a check at checkit@theconversation.edu.au. Please include the statement you would like us to check, the date it was made, and a link if possible.

The Conversation

Jane McAdam receives funding from the Australian Research Council and the Research Council of Norway.

Julia Blocher has previously received funding through the project “High-End cLimate Impacts and eXtremes” (HELIX - http://helixclimate.eu/home), funded by the EU Seventh Framework Programme for research (FP7). She is an associate member of the Hugo Observatory at the University of Liege, an interdisciplinary research group exploring migration phenomena related to environmental factors and climate change. The Hugo Observatory is directed by Dr. François Gemenne, who is referred to by the other author in this article.

Categories: Around The Web

Government offers hope by telling CSIRO to reinvest in climate research

Thu, 2016-08-04 13:05
Public funding is vital for programs like CSIRO's research vessel RV Investigator, which is too expensive for universities to run. CSIRO, CC BY

The new instruction from Science Minister Greg Hunt to restore climate science as a “core activity” at Australia’s peak science body, the CSIRO, is a ray of hope for public good science.

Yesterday, Hunt told Fairfax Media he had issued a directive to CSIRO executives to add 15 jobs and A$37 million over ten years to CSIRO’s climate science research program.

The move follows months of uncertainty over CSIRO’s climate research capability, after chief executive Larry Marshall announced in February that 350 jobs would be lost from CSIRO, including cuts to the oceans and atmosphere division.

After widespread condemnation, losses to climate science capacity have reportedly been significantly reduced, although it is still unclear exactly how many and where the losses will be felt.

So what does the new development mean for CSIRO and Australia’s climate science?

The role of CSIRO

Reinstating 15 jobs is certainly a step in the right direction, even if they don’t make up for the previous cuts. But perhaps even more significant is the statement of intent – that the government wants climate science, and wants it to be done by CSIRO.

This is important because these government-funded agencies are well placed to carry out sustained observations and the accompanying development of climate models. Here in the university sector we focus mostly on “blue sky”, discovery-based research and training the next generation of researchers and PhDs. These are very important roles, but we can’t run marine research vessels or decades-long observation programs, because university research generally relies on three-year grant cycles.

The minister’s announcement is a very important cultural acknowledgement from the government that it needs to ensure that its publicly funded agencies underpin those important areas of climate monitoring and modelling.

Key investments

There are two key areas in which Australia needs to invest.

The first is sustained observations of the southern hemisphere’s oceans and atmosphere. As one of the few nations in the region with the capacity to monitor this vast area, Australia arguably has an obligation to make these measurements.

The second is developing next-generation climate models for Australia and the world. Northern hemisphere modelling groups, even though they do global modelling, have pressures from their own governments to focus on high-quality simulations of their own regions. Without Australia doing the same, there’s not the same pressure to have superbly accurate forecasts for this part of the world.

These two areas need to be secured via an appropriate scale of investment in climate science. Where this new money should go depends on exactly where the cuts have been made and what needs to be restored.

Government steps up

For some time now, CSIRO’s executive has been making moves away from public good research and towards an agenda of “innovation”.

While investment in public good climate research might not make you money this year or next, it can save vast amounts of money by, for example, avoiding poor investment in infrastructure. It is vital science that is needed to secure a resilient economy, a resilient environment and social well-being for all Australians.

This type of research is often undersold. Unfortunately, the culture in CSIRO over the past year seems to have been to sacrifice some of that public good science and focus on more lucrative research. This is important and beneficial science as well, but you can’t drop the public good.

Hunt’s new comments are important because they show the government is taking renewed responsibility for how CSIRO invests in research that helps the public.

This isn’t just about climate science; it’s about any area of public-good research that delivers what the community needs for societal well-being.

Restoring reputations

This is an important step towards restoring Australia’s international reputation in climate science. The science is always judged by the excellence of the work being done and papers published, which will take a while to materialise, but this announcement will be applauded around the world.

The cuts were condemned by thousands of international researchers as well as the World Climate Research Program of the World Meteorological Society and the director of a NASA-led atmospheric monitoring network.

CSIRO’s international reputation in climate science has been going down the gurgler ever since Royal Society Fellow Trevor McDougall, one of the most influential oceanographers Australia has produced, was cut in 2012 to worldwide condemnation. The recent cuts went further.

We often criticise ministers for what they do wrong, but the latest announcement is a real cause for hope. Until now the government had taken a hands-off approach, arguing that CSIRO is an independent statutory body that shouldn’t be interfered with.

That’s now been thrown out. This is public money, and the government is saying we need to get public-good value from it.

The Conversation

Matthew England receives funding from the Australian Research Council.

Categories: Around The Web

The solution to Australia's gas crisis is not more gas

Thu, 2016-08-04 06:10
Gas exports are driving massive growth in Australia's gas demand. Ken Hodge/Flickr, CC BY

Concern about higher and more volatile gas prices in southern and eastern Australia is spreading. Recent gas price spikes in South Australia have impacted on electricity prices and raised concerns about future prices for industry and households.

Average gas prices for large industrial consumers rose by 60% between 2010 and 2015, while household prices climbed by 20%. But prices vary a lot from state to state.

In industry, most gas is used for process heat, while in homes, space and water heating are the big gas consumers.

Gas also provides around 22% of Australian electricity, and around 45% in South Australia. The dramatic increase in liquefied natural gas (LNG) exports from Queensland has provoked fears of much bigger future price hikes. It has also made it difficult for major industrial users to negotiate reasonably priced new contracts.

Many are proposing the obvious, but wrong, solution: develop more gas production resources. But this path fails for several reasons.

We don’t need more gas

First, as global citizens, we must recognise that most of our existing economic fossil fuel resources must stay in the ground. Developing more gas supply will just make it harder for Australia to transition to low-carbon energy over the next few decades.

Second, the problem is not about gas supply. It is about the allocation of gas and management of demand for gas and electricity. The recently opened Queensland LNG export plants are tripling eastern Australian gas demand.

What industry could cope with that scale of change without a few hiccups?

Eastern Australia has plenty of gas. The problem is that most of it is being exported at prices lower than some Australians are paying. And the price volatility resulting from the present shambles is making life difficult for some Australian industries.

Eastern Australian gas consumption. AEMO

Third, this approach would involve falling into the trap set by the gas industry, to force governments to override community opposition to coal seam gas projects. This would be socially divisive and is unnecessary.

We also need to protect our gas industry from its own shortsighted and narrow world view. Gas companies are already facing financial challenges.

Winds of change

Our responses to the problems with gas need to be carefully considered, to recognise a reality that has evolved over many years and to factor in the global context.

Consider a few facts facing the gas industry in Australia.

Australian gas users are, on the whole, very inefficient in the way they use gas. Sustained low prices have meant we still have inefficient 50-year-old boilers, outdated process technologies and wasteful management of gas use. Gas hot water services still have pilot lights (which waste energy) and poor insulation. One study has suggested that east coast gas demand could decline if we focused on efficient gas use.

Gas demand peaks in cold weather, when direct gas heating and gas-fired electricity generation serving electric heating loads add to the underlying gas use of industry and households. This drives higher prices in winter.

Improving energy efficiency, particularly through high-efficiency electric technologies combined with renewable energy and storage, makes it increasingly attractive for households and some businesses to disconnect from gas, or at least to shift significant demand away from it.

The electricity industry has also discouraged gas-fired cogeneration plants (plants that produce electricity and heat for industrial processes) by undercutting prices and using its market power to make it difficult to connect and sell electricity into the grid.

This is despite cogeneration being more than 25% more efficient than our most efficient large gas-fired power generators, as it produces process heat as well as electricity instead of letting heat escape into the atmosphere. It is more than twice as efficient as many of our gas turbine power stations and coal power stations.
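The efficiency gap can be sketched with simple fuel-utilisation arithmetic. The percentages below are illustrative, typical values assumed for this sketch, not figures from the article:

```python
# Fraction of the gas's energy content converted to useful output
# (illustrative assumed values for each plant type).
ccgt_eff = 0.50        # efficient large combined-cycle plant: electricity only
open_cycle_eff = 0.33  # simple open-cycle gas turbine: electricity only
cogen_eff = 0.75       # cogeneration: electricity plus captured process heat

# Cogeneration's useful output per unit of gas, relative to each alternative.
print(round(cogen_eff / ccgt_eff, 2))        # vs best large generators
print(round(cogen_eff / open_cycle_eff, 2))  # vs open-cycle turbines
```

On these assumed figures, cogeneration delivers about 1.5 times the useful energy of even an efficient combined-cycle plant per unit of gas, and more than twice that of an open-cycle turbine, because the heat that other plants vent is put to work.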

In the recent South Australian electricity and gas crisis, the state's most efficient gas power station was not even used until the government intervened, because it was too expensive relative to other generators under the current market structure.

Finally, LNG export plants have locked themselves into long-term gas export contracts linked to the price of oil. The decline in oil prices has slashed their returns, and their share prices have fallen heavily.

They have created a serious problem for themselves and the Australian economy by failing to predict global oil price trends.

So what should we do?

In the short term, the government could help our gas industry to free up some of the gas now being exported.

There is a global glut of gas, so it should be possible to buy back some gas from the buyers of our LNG. Since this gas would not need to pass through the LNG plants or be shipped, it could be made available locally at a price well below the export price.

I don’t really think this is necessary if we are smart, but it provides a way of stabilising gas prices for local industry.

It is interesting to note that the government strongly rejected suggestions that some of our gas be “quarantined” for local use when concerns were emerging.

The core strategy will be multi-pronged.

First, an aggressive gas-efficiency and fuel-switching strategy must be implemented as quickly as possible. Some gas retailers are already moving, having realised they are better off keeping efficient customers who still use some gas than losing those customers entirely when they disconnect.

State energy efficiency schemes, such as the Victorian Energy Efficiency Target and NSW Energy Savings Scheme, have recently been broadened to include gas, as well as small to medium businesses, so they could be expanded.

Storage of gas, electricity and heat can smooth demand to reduce price volatility. Pumped hydro and mini-hydro systems in water supply pipes between large dams and local storages can generate electricity at times of high demand, rather than relying on gas-fired power stations.

Electricity market reform could reduce electricity demand and gas use by encouraging gas cogeneration (as well as renewable energy). This is because cogeneration is a very efficient way to use gas.

The Australian Renewable Energy Agency (ARENA) has recently published a major report on options for renewable energy to replace gas in industry. This could also be implemented.

Energy policymakers have made it clear they consider the gas market to be in serious failure mode. Rapid action could break the market power of old players.

It is really time that the gas industry developed and published a roadmap showing how it can be part of a zero-emission Australian economy.

The Council of Australian Governments Energy Council meets on August 19. Let’s hope it considers effective options, instead of band-aid solutions that will make the wounds fester.

The Conversation

Alan Pears has worked for government, business, industry associations, public interest groups and universities on energy efficiency, climate response and sustainability issues since the late 1970s. He is now an honorary Senior Industry Fellow at RMIT University and a consultant, as well as an adviser to a range of industry associations and public interest groups. His investments in managed funds include firms that benefit from growth in clean energy.
