The Conversation


Climate change will create new ecosystems, so let's help plants move

Tue, 2016-08-23 14:58
Australia's treeless alps are vulnerable to the spread of woody shrubs. Alps image from www.shutterstock.com

Australia’s ecosystems are already showing the signs of climate change, from the recent death of mangrove forests in northern Australia, to the decline in birds in eastern Australia, to the inability of mountain ash forests to recover from frequent fires. The frequency and size of these changes will only continue to increase in the next few years.

This poses a major challenge for our national parks and reserves. For the past 200 years the emphasis in reserves has been on protection.

But protection is impossible when the environment is massively changing. Adaptation then becomes more important. If we are to help wildlife and ecosystems survive in the future, we’ll have to rethink our parks and reserves.

A weedier world

Climate change is predicted to have a substantial effect on our plants and animals, changing the distribution and population of species. Some areas will become unfavourable to their current inhabitants, allowing other, often weedy, species to expand. There will likely be widespread losses in some ecosystems as extreme climate events take their toll, either directly by killing plants and animals, or indirectly by changing fire regimes.

While we can model some of these changes, we don’t know exactly how ecosystems will respond to climate change.

Australia has an extensive natural reserve system, and models suggest that much of this system is expected to be altered radically in the next few decades, resulting in the formation of totally new ecosystems and/or shifts in ecosystems.

Yet with rapid climate change, it is likely that ecosystems will fail to keep up. Seeds are the only way for plants to move, and seeds can only travel so far. The distribution of plants might only shift by a few metres a year, whereas the velocity of climate change is expected to be much faster.

As a result, our ecosystems are likely to become dominated by a low diversity of native and exotic invasive species. These weedy species can spread long distances and take advantage of vacant spaces. Yet the exact nature of changes is unknown, particularly where evolutionary changes and physiological adaptation will assist some species but fail others.

Conservation managers are concerned because with increasing weediness will come a loss of biodiversity as well as declines in the overall health of ecosystems. Plant cover will decrease, triggering erosion in catchments that provide our water reservoirs. Rare animal species will be lost because a loss of plant cover makes them more susceptible to predators. A cascade of changes is likely.

From conservation to adaptation

While climate change threats are acknowledged in reports, we continue to focus on conserving the state of our natural environments, devoting scarce resources to keeping out weedy species, viewing vegetation communities as static, and using offsets to protect these static communities.

One way of preparing for the future is to start the process of deliberately moving species (and their genes) around the landscape in a careful and contained manner, accepting that rapid climate change will prevent this process from occurring quickly enough without some intervention.

Overseas, plots covering several hectares have already been established with the aim of achieving this at a large scale. For instance, in western North America there is a plot network that covers 48 sites and focuses on 15 tree species, planted over a three-year period across sites spanning a temperature variation of 3-4°C.

In Australia, a small section of our reserve system, preferably areas that have already been damaged and/or disturbed, could be set aside for such an approach. As long as these plots are set up at a sufficiently large scale, they can act as nursery stock for the future. As fire frequency increases and exceeds some plants' survival capabilities, the surviving genes and species in these plots would then serve as sources for future generations. This approach is particularly important for species that set seed rarely.

Our best guesses about what will flourish in an area in the future will be wrong in some cases, right in others, but ongoing evolution by natural selection in the plots will help to sort out what really can survive at a particular location and contribute to biodiversity. With a network of plots established across a range of natural communities, our protected areas will become more adaptable for a future where many species and communities (along with the benefits they provide) could otherwise be lost entirely.

As in North America, it would be good to see plots set up along environmental gradients: from wet to dry heading inland, for example, and from cold to warm moving north to south or with changing altitude.

One place to start might be the Australian Alps. We could set aside an area at higher altitude and plant low-altitude grasses and herbs. These may help current plants compete against woody shrubs that are expected to move towards our mountain summits.

Lower down, we might plant more fire-tolerant species in mountain ash forests. Near the coast, we might plant species from further inland that are better at handling drier conditions.

The overall plot network should be seen as part of our national research infrastructure for biodiversity management. In this way, we can build a valuable resource for the future that can serve the general community and complement our current ecosystem monitoring efforts.

The Conversation

Ary Hoffmann receives funding from the Terrestrial Ecosystem Research Network and the Australian Research Council. He is a member of the IUCN Climate Change Specialist Group.

Categories: Around The Web

Australia's new focus on gas could be playing with fire

Tue, 2016-08-23 06:14

Gas is back on Australia’s agenda in a big way. Last week’s meeting of state and federal energy ministers in particular saw an extraordinary focus on gas in the electricity sector.

While the meeting promised major reform for the energy sector, the federal energy and environment minister, Josh Frydenberg, highlighted the need for more gas supplies and “the growing importance of gas as a transition fuel as we move to incorporate more renewables into the system”.

Gas is certainly a lower-carbon energy source than coal, but gas prices have soared as Australia begins shipping gas overseas.

So what might this mean for energy and climate policy?

Rising gas

In 2013-14 natural gas-fired generation rose to account for 22% of Australia’s electricity generation, although the figure falls to 12% in the National Electricity Market (NEM), which excludes Western Australia and the Northern Territory, both of which use a large amount of gas.

Among the NEM states, South Australia relies the most on gas-powered generation. This means that gas generators generally set the state’s average electricity price, which has usually been higher than those in the eastern states. Average electricity prices in Victoria, New South Wales and Queensland tend to be set more often by coal power generators than by gas.

Over the past couple of decades, the construction of interstate transmission lines has helped to smooth out the different prices among states by allowing exports from those with excess, and cheaper, power to those with shortfalls or more expensive power. On balance, the process has helped to provide more affordable and reliable power across the country.

For some years views of the role and future of gas in Australia have been mixed.

But in the United States, abundant natural gas at low prices prompted industry and politicians to welcome gas as a bridge between today’s coal-intensive electric power generation and a future low-carbon grid. The share of natural gas-fired electric generation capacity more than doubled from 19% in 1990 to 40% in 2014, while the share of actual generation from natural gas rose from 12% to 28% over the same period. Last year it accounted for a third of all US electricity generation.

Soaring prices

Yet in Australia, the renewable energy target has forced our energy supply towards renewable energy, namely wind and solar. Together with the absence of a carbon price and the high price of gas produced by the lucrative export market, there have been few reasons for growth in the role of gas to generate power in Australia. This all changed last month.

In July the average wholesale electricity price in South Australia was A$229 per megawatt-hour, compared with around A$60 in the other NEM states. The state’s spot price soared to A$8,898 on the evening of July 7. Low wind output, the darkness of night, high cold-weather electricity demand and the absence of coal plants after several shutdowns all handed strong pricing power to a few gas generators.

The price volatility attracted much alarm, although the Australian Energy Market Operator noted there were no system security or reliability issues, nor departures from normal market rules and procedures. Climate Councillor and former Origin Energy executive Andrew Stock concluded that “increasing reliance on high-priced gas is not a viable solution to reduce power prices or to tackle climate change”.

He argued that more gas power would push up prices even more, increase reliance on the state’s ageing obsolete gas-fuelled fleet and increase greenhouse emissions, including risks of fugitive methane emissions.

On the side of gas, Origin Energy chief executive Grant King pointed out: “South Australia’s electricity demand was met in full. The reality is that, while spot prices ran up, 99.99% of customers in South Australia did not pay one more cent for their electricity. So, from a reliability and affordability point of view, the market delivered.”

Similarly, Tristan Edis from the advisory group Green Markets noted: “In reality the wholesale electricity market as it is currently designed is doing precisely what you would want it to do to accommodate increasing amounts of renewable energy while also ensuring reliable supply of electricity.”

What energy system do we want?

The role of gas is now a conundrum, particularly if, as seems to be the case, Australia’s energy ministers see gas playing a bigger role in shoring up the electricity market.

How this would work is far from clear. Current energy and climate change policies combined with relatively high gas price forecasts suggest that the proportion of gas in the power generation mix is unlikely to rise significantly.

Yet gas plants that can provide backup for intermittent renewable sources such as wind and solar may very well be needed. How much will be needed, for how long and how it will be paid for will depend on how quickly a superior mix of generation and storage technologies with very low emissions emerges and what policy mix drives the transition.

One consequence of these changes must be recognised. Whatever mix of wind, solar and gas power begins to replace our coal-dominated supply sector will cost more. Without a carbon price, electricity is generated from existing sources at less than A$50 per megawatt-hour, while wind, solar and gas all cost upwards of A$80 per megawatt-hour.

In responding to the real or perceived recent crises in South Australia (and Tasmania), our political leaders need to abandon wishful thinking and laying blame to focus on delivering and explaining the energy system that we want and need.

The Conversation

Tony Wood owns shares in companies including in energy and resources through his superannuation fund.

Categories: Around The Web

New Zealand is letting economics rule its environmental policies

Mon, 2016-08-22 14:36
New water policies could cause even more harm to the already damaged Tukituki River. Phillip Capper/Wikimedia Commons, CC BY

Balancing the environment with development is tricky. One way for policymakers to include the value of ecosystems in development is to set limits for pollution and other environmental impacts, known as environmental bottom lines (EBLs). These limits can help embed the value of ecosystems in an economy. They also help protect natural assets in order to maintain a sustainable cash flow.

Unfortunately, bottom lines also create a risk that developments merely meet the limits without actually helping the environment. Bottom lines form a significant part of environmental policy in New Zealand, particularly in the areas of freshwater and greenhouse gas emissions.

Bottom lines should not have as much influence in New Zealand policy as they do. So how can we make better policy that actually helps the environment?

Setting a low bar

The New Zealand government is reviewing its National Policy Statement for Freshwater Management and is emphasising the need to maximise an economic return on fresh water as a commodity.

In addition, the statement identifies various bottom lines for local councils (such as maximum acceptable concentrations of pollutants and/or minimum water quality attributes), as well as mechanisms to protect minimum flows.

The combination of listing bottom lines while looking for the best economic return can lead to perverse outcomes. For example, the proposed Hawke’s Bay Ruataniwha Water Storage Scheme would secure water supply for intensifying farming, but would increase the risk of further harming the already ecologically crippled Tukituki River.

The bottom-line philosophy is so entrenched that environmental groups recently celebrated a ruling that developers could not pollute a river so badly that it would kill off organisms. A bare minimum standard must be met, but it is not something we should aspire to celebrate.

On the other hand, many regional councils are trying to do better than this by specifying goals for improving water quality in certain areas. The Rotorua Lakes and Lake Taupo are examples of central and local government working together to improve conditions.

Lake Taupo Sids1/Wikimedia Commons, CC BY

But without clear central government support, those councils that want to go beyond the bottom line and make more significant environmental improvements may end up facing legal action brought by those suffering real or imagined erosion of their property rights.

The same is true of greenhouse gases, particularly those related to transport development. The current benefit-cost approach to investment in roads is not assessed against national emission reduction targets. This leads (as one example) to nationally significant road projects being approved without accounting for increases in transport emissions.

While better roads increase fuel efficiency and so lower emissions per vehicle, they also generate more car use, meaning a net increase in emissions. Road transport emissions increased by 72% between 1990 and 2014.

True, the government has voiced support for electric cars and use of biofuels and also funded more public transport, walking and cycling, which will help reduce emissions. But overall the lack of joined-up thinking and a bottom-line approach – we will pollute, but only this much – protects economic growth rather than the environment.

While water quality and greenhouse emissions are less bad than they might have been with no policies at all, the bottom-line concept implies that ecosystems can be maintained at some measurable minimum acceptable standard, with the option of improvement when conditions allow.

Unless matched with clear timelines and goals to improve ecological health, the result is a continued trading down of ecosystem assets in order to boost economic ones.

Positive developments

An alternative to the bottom-line mindset would be to implement environmental policies that call for net positive ecological outcomes – so-called “positive development”.

This integrates ecological decline and improvement into economic decision-making. The human and ecological history of a place would be accounted for. You would look not only at whether the materials for, say, a building came from sustainable sources, but whether you were contributing to improving ecosystems.

For example, protecting and enhancing biodiversity is done in Australia and New Zealand to offset development impacts. The preference is not just to minimise harm, but to improve things.

In the same way that economic investments need to demonstrate a positive financial outcome, so positive development will require a demonstration of how human activity will contribute to improving ecological health – water quality, biodiversity, local and global air quality, and so on.

Attached to resource consents, this could mean that failure to demonstrate a net ecological benefit results in no permit. It shifts things from, say, just rehabilitating a mine site to requiring demonstrated improvement in its post-mining ecological value, or contributing to improving ecological values elsewhere.

As explained by Janis Birkeland in her 2008 book Positive Development, this approach goes beyond reducing use of materials, carbon and energy (the kind of outcomes attached to such initiatives as green buildings), to requiring improvements in total ecological health over the life cycle of a proposed development.

Applied to water quality, it would require developers to show how they would improve water quality and associated ecological values, rather than merely meeting minimum defined standards. And in terms of climate change, it would require proof that transport funding would result in a decline in emissions, rather than simply limiting the rate of increase.

What is needed is a government that is willing to go beyond requiring that development minimises harm to requiring that it does actual good.

The Conversation

Stephen Knight-Lenihan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Can buying up fishing licences save Australia's sharks?

Mon, 2016-08-22 06:03
Scalloped hammerhead, a species listed as endangered by the International Union for Conservation of Nature. Mark Priest, CC BY-SA

The World Wildlife Fund (WWF) recently raised over A$200,000 to buy shark fishing licences in Queensland’s waters. They estimate the licences, for operating nets in and around the Great Barrier Reef Marine Park, could have been used to catch 10,000 sharks each year.

Retiring these licences is a new development in Australian shark conservation, but may also limit locally caught seafood.

But do Australia’s sharks need saving, or can we eat them? It depends on where you look.

Sustainable sharks

Sharks in general are much more vulnerable to overfishing than other fish. Compared to most fish, sharks have far fewer offspring over their lifetimes. As a result, shark populations cannot tolerate the same levels of fishing that other fish can sustain.

Globally, there is great reason for concern over the status of sharks. About a quarter of all sharks and rays are threatened with extinction. The high value of shark fins in Asian markets drives a large and often unsustainable shark fishery that reaches across the globe.

Australia has an important role to play in combating this trend. Many species that are globally threatened can find refuge in the Great Barrier Reef Marine Park, which has an extensive system of protected areas and comparatively low fishing effort. Despite this potential safe haven, some species in Australia still rest on an ecological knife edge.

A white-tip reef shark in the Great Barrier Reef Marine Park. Christopher Brown

For example, the great and scalloped hammerheads (which the WWF says will benefit from the licence purchase) are both by-catch species in the Australian fishery and are listed by the International Union for Conservation of Nature as endangered.

Australian fishermen don’t head out to catch hammerheads intentionally; most people do not consider the meat palatable. However, their hammer-shaped head is easily entangled in nets. Therefore hammerheads may be highly susceptible to any increase in fishing pressure.

Commercial fishers are legally required to have a licence. By buying the licences, WWF can limit the number of active nets in the water.

However, not all shark species are as vulnerable to fishing as the iconic hammerhead. Several shark species in Australia are well-managed. For instance, the spot tail shark is fast-growing and has many young, making it relatively resilient to fishing pressure. Many Australians regularly enjoy these species with a side of chips.

Species targeted by Queensland’s shark fishery are likely sustainable. The latest fishery assessment published by the Queensland government in 2014 found that catches of most shark species were well within safe limits.

Supporting our local shark fisheries is therefore far better than importing shark from overseas where fisheries may be poorly managed.

But it is not all good news in Australia. Both the assessment and an independent review found that while Queensland’s shark catch likely is sustainable, we need to be cautious about allowing any increases.

Importantly, Queensland’s 2014 shark assessment relies on very limited data. A crucial fishery observer program was cut in 2012. The limited data mean that regulations for Queensland’s shark catches are set conservatively low. Any increase in catch is risky without an assessment based on higher-quality data.

Scientists use tag-and-release programs to track the movements and population size of sharks. But more direct fisheries data are needed. Samantha Munroe

A win for fishers and fish

Buying up licences in an uncertain fishery may be an effective way to prevent the decline of vulnerable species. Although buying licences is a new move for marine conservation groups in Australia, elsewhere it has proven an effective strategy for conservation and fisheries.

For instance, in California the conservation group Nature Conservancy bought fishing licences for rockfish, some species of which are endangered.

The Nature Conservancy now leases those licences back to fishers that promote sustainable fishing methods. The fishers themselves can charge a higher price for sustainable local catches of fish. What started as a move purely for conservation has had benefits for those employed in fisheries.

The lesson here is that conservation organisations can be the most productive when they work with, not against, fisheries. The recent shark licence purchase in Australia could be a great opportunity for fishers and conservation organisations to work together to maintain healthy ecosystems and fisheries.

But if Australians are serious about protecting sharks, there are other steps we still need to take. Queensland should reinstate the fishery observer program so we have reliable data to assess shark populations. For instance, currently we don’t know how many sharks are caught as by-catch in other fisheries.

A lemon shark seeks its fish prey in the shallow waters on Australia’s Great Barrier Reef. Lemon sharks are caught by our fisheries, but are not a target species. Megan Saunders

Shark control programs designed to protect bathers are also a threat to endangered shark populations. However, data on deaths from shark control in Queensland were not accounted for in the government’s catch limits.

Accounting for these missing deaths could make a serious dent in our sustainable catch, an independent review found.

There is an opportunity to address these issues in Queensland’s upcoming fisheries management reform. Have your say here.

If conservation groups can work with fisheries, a more consistent and sustainable shark-fishing strategy may emerge. Australians can continue to be proud of our efforts to protect marine life, but can still enjoy shark for dinner.

The Conversation

Christopher Brown receives funding from The Nature Conservancy and the Australian Research Council. He is affiliated with the Australian Marine Sciences Association and the Society for Conservation Biology.

Samantha Munroe receives funding from the Save Our Seas Foundation and the Australian Research Council. She is also a member of the Oceania Chondrichthyan Society.

Categories: Around The Web

A task for Australia's energy ministers: remove barriers to better buildings

Fri, 2016-08-19 06:13
Better, cleaner buildings could deliver a quarter of Australia's greenhouse gas reductions. Buildings image from www.shutterstock.com

Energy upgrades in Australia’s buildings could deliver a quarter of Australia’s 2030 emissions reduction target. Improving energy performance through improved building design, heating and cooling systems, lighting and other equipment and appliances could also deliver more than half of our National Energy Productivity Target.

Progress has been slow, however, and our research shows that delay leads to lost opportunities and billions in wasted energy costs.

The new federal environment and energy minister, Josh Frydenberg, has an opportunity here to demonstrate the potential of his new merged role. Today in Canberra, Australia’s energy ministers are meeting for the first time since the election through the COAG Energy Council.

One item on the agenda will be the National Energy Productivity Plan (NEPP). It aims to improve energy productivity by 40% by 2030, which means increasing the economic value produced from each unit of energy consumed, as sketched below.
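To make the productivity arithmetic concrete, here is a minimal sketch; the figures are illustrative round numbers, not the actual NEPP baseline.

```python
def energy_productivity(economic_output_dollars, energy_consumed_pj):
    """Energy productivity: economic value produced per unit of energy consumed."""
    return economic_output_dollars / energy_consumed_pj

# Illustrative round numbers: a 40% productivity improvement means the same
# economic output can be produced with 1/1.4 (about 71%) of the energy.
baseline_output, baseline_energy = 1_000.0, 100.0
baseline = energy_productivity(baseline_output, baseline_energy)  # 10.0
target = baseline * 1.4                                           # 14.0
energy_for_same_output = baseline_output / target
print(round(energy_for_same_output, 1))  # 71.4 units of energy, down from 100
```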

The NEPP contains a number of good measures relating to buildings. However, without stronger governance arrangements, more transparency and stronger and clearer public communication and engagement, there is a risk that these policy measures will simply slip between the cracks of multiple agencies, portfolios and jurisdictions in the building sector.

What can better buildings achieve?

Our research found buildings could help meet our climate and energy goals, as you can see in the charts below.

We found that improving energy efficiency in buildings could deliver 10% of our emissions target. Distributed energy (primarily rooftop solar) could achieve an extra 18%.

Potential contribution of built environment opportunities to 2030 national emissions target (MtCO2e) ClimateWorks Australia, May 2016

The energy efficiency improvements could reduce energy use by 202 petajoules, or half of what would be needed to achieve the energy productivity target.

Potential contribution of built environment energy efficiency opportunities to 2030 National Energy Productivity Target (PJ) ClimateWorks Australia, May 2016

The cost of delay

Despite the massive opportunity to reduce emissions from the building sector, overall progress to date has been slow.

Market leaders, particularly in the commercial office market, have achieved a radical change in their energy productivity and are recognised as global leaders in sustainable buildings. There are many examples of very high-performing or net-zero-emission buildings around Australia.

However, the market as a whole has improved its energy performance by only 2% over the past decade for commercial buildings, and by 5% for residential buildings. We are not currently on track.

Our report found that continuing to delay action to reduce emissions from buildings means we would lose a substantial amount of cost-effective options to improve energy performance. Many emissions reduction opportunities exist only for a certain period of time. For example, installing inefficient equipment instead of more efficient options effectively locks in excessive emissions for many decades into the future.

Just five years of delay could lead to A$24 billion in wasted energy costs and more than 170 million tonnes of lost emissions reductions by 2050. This is a very substantial loss, considering the current national emissions target aims to reduce emissions by 272 million tonnes by 2030.

Without additional action buildings would eventually consume more than half of Australia’s “carbon budget” by 2050. That would leave less than half for all other sectors of the economy, including emissions-intensive industries, transport, land and agriculture.

Cost of delay (MtCO2e) ClimateWorks Australia, May 2016

Stronger policy

To realise the emissions reduction potential in the building sector, strong policy will be required to tackle the barriers to better energy performance for buildings. Our report recommended five key solutions as part of an integrated policy suite.

First, develop a national plan to co-ordinate policy and emissions-reductions measures to extend gains made by market leaders across the entire building sector.

Second, introduce mandatory minimum standards for buildings, equipment and appliances aligned with the long-term goal of net zero emissions.

Third, develop incentives and programs to motivate and support higher energy performance in the short to medium term.

Fourth, reform the energy market to ensure it supports cost-effective energy efficiency and distributed energy.

Finally, we need a range of supporting data, information, training and education measures to enable informed consumer choice and support innovation, commercialisation and deployment of new technologies and business models.

Implementing these policy measures would set Australia on a pathway to zero-carbon buildings and unlock the large potential for buildings to deliver improved health outcomes and more liveable and productive cities.

Unblocking barriers

Unfortunately, the opportunity to reduce emissions from buildings is blocked by strong barriers that require co-ordination between the Commonwealth, states and territories.

To address the complexity of this task, the NEPP needs stronger governance arrangements, including a specified target or targets for buildings, to complement the overall 40% NEPP target, and more regular public reporting (there is no public review until 2020).

Stronger and clearer communication and engagement around the target and buildings’ energy performance within it would also help provide confidence and drive innovation and activity among households and businesses.

In addition, we need better co-ordination between the members of the Energy Council, and between the council and other government forums and agencies.

For example, the National Construction Code, which regulates minimum standards for new buildings and major refurbishments, is a critical policy lever. However, the code is overseen by the Building Ministers Forum, not the Energy Council, while a range of different state and territory bodies oversee enforcement of the standards.

There are similar issues around harmonising of different energy performance ratings across jurisdictions, co-ordinating training and accreditation of professionals throughout the building design and construction sector, and energy market reform to establish a level playing field for energy efficiency and decentralised renewable energy.

Co-ordination of these issues should be a major focus for the Energy Council. The new minister for environment and energy – as the minister responsible for delivering on both our national emissions reduction targets and on the productivity plan – is now in a unique position to lead these efforts. We encourage the COAG Energy Council to support him in this.

The Conversation

Eli Court is Implementation Manager at ClimateWorks Australia which receives funding from philanthropy and project-based income from federal, state and local government and private sector organisations. ClimateWorks received funding from the Australian Sustainable Built Environment Council for the Low Carbon, High Performance report referenced in this article.

Categories: Around The Web

Arrium's requiem - the events of July 7th

Thu, 2016-08-18 18:12

Mark July 7th as a red letter day in Australia’s tortuous path to decarbonisation - a day of special significance and opportunity.

The causes and consequences of the wild gyrations on the South Australian electricity wholesale market that day will be scrutinised for months, worrying regulators, politicians, businesses and commentators alike. The events, and how we interpret them, will have ongoing implications for future business investment decisions, for the survival of struggling businesses such as Arrium, and for how we meet the challenge of decarbonisation.

The events of July 7th will, no doubt, sharpen the minds of our energy ministers who are meeting Friday (19th August) in Canberra at the COAG Energy Council. Thankfully, recent statements by the federal minister Josh Frydenberg lend hope to the idea that rationality will trump ideology in any COAG outcome. However, recent history suggests it will take some time before the bipartisanship essential to realising the opportunity of our red-letter day emerges. Ever the optimist, I remain hopeful.

And in that hope, Dylan McConnell and I have prepared a rather lengthy analysis of those events in a report titled Winds of Change - An analysis of recent changes in the South Australian electricity market, available at this link. Here I summarise some key points we consider in that analysis.

What happened July 7th

July 7th was a calm, cold winter day across South Australia, as it was exactly one year before.

Demand for electricity reached a high of over 2183 megawatts in the early evening, well above the typical South Australian average of around 1300-1400 megawatts. The calm conditions meant the output of the 1575 megawatts of installed wind capacity fell to almost zero by mid-afternoon and contributed no more than 13 megawatts throughout the high-demand evening period. With upgrades on the Heywood interconnector into Victoria severely limiting the ability to import power, gas generators and a little bit of diesel were all that were available. With Engie’s Pelican Point station effectively mothballed (having earlier on-sold its gas supply into the gas market), AGL (Torrens A and Torrens B) and, to a lesser extent, Origin (Osborne and Quarantine stations) were in pivotal supplier positions at various stages across the day, meaning they were needed to meet demand. The capacity bid into the market topped out at 2413 megawatts.

The relevant data is captured in the images below. The first shows the dispatch by fuel type over the period 6th July through to 8th July. The second shows the dispatch by generator/wind farm averaged across 7th July. The third shows the contributions made by interconnector, and different fuels, along with wholesale prices for the period midday through to 11 pm on 7th July.

South Australian electricity market dispatch coloured by fuel type for the period 6th July - 8th July, 2016.

Dispatch by power station and fuel type averaged across the day for July 7th 2016 in South Australia. Stations dispatching less than 10 megawatts are not shown. ‘Other’ is distillate.

Time series for South Australia on July 7th 2016 from midday onwards. The top panel shows the 5-minute dispatch price (note the logarithmic scale). Panel b shows interconnector flows, with V-SA representing Heywood and V-S-MNSP1 representing Murraylink. The dark line and shaded region show the net imports, which averaged 151 MW over the period. Panels c, d and e show the output of gas-fired generators, wind and distillate generation during this period. In all panels, vertical tick marks at the top show periods where the 5-minute price exceeded $9,000/MWh.

Across the day, SA dispatch prices (which are resolved at 5-minute intervals) exceeded $10,000 per megawatt hour (MWh) on 24 occasions, and the volume-weighted price for the day was just above $1,400/MWh. The peak settlement price (resolved at the 30-minute interval) of just below $9,000/MWh occurred between 7:00pm and 7:30pm. For reference, the average wholesale price in South Australia is about $60/MWh.
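As a rough illustration of how those headline figures are derived, here is a minimal sketch assuming the NEM settlement rule in force at the time (the 30-minute settlement price is the simple average of the six 5-minute dispatch prices in the trading interval); the prices and volumes below are hypothetical, not the actual July 7th data.

```python
import statistics

def settlement_price(dispatch_prices):
    """30-minute settlement price: the average of the six 5-minute dispatch
    prices in the trading interval (NEM rule as of 2016)."""
    assert len(dispatch_prices) == 6
    return statistics.mean(dispatch_prices)

def volume_weighted_price(prices, volumes_mwh):
    """Volume-weighted average price across a set of intervals."""
    return sum(p * v for p, v in zip(prices, volumes_mwh)) / sum(volumes_mwh)

# Hypothetical trading interval with one extreme 5-minute spike near the cap
print(settlement_price([300, 300, 14000, 300, 300, 300]))           # ~2583.3 $/MWh
# Hypothetical daily average weighted towards high-demand, high-price periods
print(volume_weighted_price([60, 2000, 8900], [1300, 1800, 2100]))  # ~4301.5 $/MWh
```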

Settlement prices were above $2,000/MWh for most of the afternoon and evening. In the extreme trading interval between 7:00pm and 7:30pm, wind was dispatching only 13.5 megawatts and other generators displayed erratic dispatch patterns. For example, the output from the AGL Torrens Island plants fell by 90 megawatts soon after 7pm while prices remained near the price cap of $14,000/MWh.

To provide a reference frame, it is useful to compare the events of this year with the same period of last year, as shown below. The comparison is made all the more useful because July 7th was similarly calm in both years, with negligible wind generation, and a very similar demand profile since both were weekdays (peak demand on July 7th 2015 was 2133 megawatts). However, in 2015, Alinta’s brown coal Northern Power Station at Port Augusta was still operating, contributing around 300 megawatts, and the Heywood interconnector was fully operational, allowing imports to average around 530 megawatts, and up to 620 megawatts at peak. Together, they meant that for what were essentially equivalent conditions, some 600 megawatts less gas was needed on July 7th in 2015 compared with the same day a year later.

For the period midday through 11:00pm, prices on July 7th 2015 averaged only $112/MWh, with only one 5-minute price spike reaching above $500/MWh.

South Australian electricity market dispatch coloured by fuel type for the period 6th July - 8th July, 2015.

Dispatch by power station and fuel type averaged across the day for July 7th 2015 in South Australia.

Time series for South Australia on July 7th 2015 from midday onwards. The top panel shows the 5-minute dispatch price (note the logarithmic scale). Panel b shows interconnector flows, with V-SA representing Heywood and V-S-MNSP1 representing Murraylink. The dark line and shaded region show the net imports. Panels c, d and e show the output of gas-fired generators, wind and distillate generation during this period.

The key notable difference between the winters of 2015 and 2016 was the price of gas, which had gone through the roof, rising by 375% for the day of July 7th (335% for the week). But that was nothing compared to the wholesale electricity price outcomes, which had gone stratospheric, rising some 2000% over the intervals considered here.

Market power

Our wholesale energy-only market is deliberately structured so that scarcity events are valued way above the cost of fuel, which would typically amount to only a few hundred dollars per MWh for even the most expensive diesel or gas generator. This ensures that enough generation capacity is available to meet rare periods of high demand. For example, a gas peaking generator may be needed only a few days a year, and so needs to recoup prices in the thousands of dollars per MWh for the times it is dispatching in order to cover its long-run costs.
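A back-of-the-envelope sketch of that cost-recovery logic; the fixed cost and dispatch hours below are hypothetical, chosen only to show the scale involved.

```python
def required_margin(annual_fixed_cost_per_mw, dispatch_hours_per_year):
    """Average margin ($/MWh above fuel cost) a peaking generator needs during
    its few dispatch hours each year to recover its annual fixed costs."""
    return annual_fixed_cost_per_mw / dispatch_hours_per_year

# Hypothetical peaker: $100,000 per MW per year of fixed costs, dispatched 50 hours a year
print(required_margin(100_000, 50))  # 2000.0 $/MWh -- hence prices in the thousands
```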

To avert risks and ensure supply, participants normally engage via the contract market, rather than directly through the wholesale market. A variety of standard contracting arrangements are available. The one that reduces the risk of extreme price spikes is the cap contract, typically set with a strike price of $300/MWh. The cap contract provides a form of insurance to mitigate exposure to high-price events: the buyer pays a contract price to the supplier independent of the wholesale price outcome, and in return is refunded for any wholesale price outcomes above $300/MWh.
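A minimal sketch of how the refund leg of a $300/MWh cap contract settles (the contract premium is set aside, and the half-hourly spot prices below are hypothetical).

```python
def cap_payout(spot_prices, strike=300.0, volume_mwh=1.0):
    """Difference payments under a cap contract: for every settlement interval
    where the spot price exceeds the strike, the seller refunds the excess."""
    return sum(max(price - strike, 0.0) * volume_mwh for price in spot_prices)

# Hypothetical half-hourly settlement prices on a volatile evening
spot = [90, 310, 8898, 450]
print(cap_payout(spot))  # 8758.0 refunded per MWh of cap cover
```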

Importantly, it is rumoured that some large energy users in South Australia were either uncontracted or under-contracted on their supply this year. Any energy-intensive business exposed to the wholesale market on July 7th would have been in for one almighty shock.

The extent of contracting varies across the regions that make up the National Electricity Market (NEM), with South Australia reportedly having lower liquidity than other regions. That is consistent with issues to do with market power, or perceptions thereof, and there are certainly strong indications that market power is becoming a big problem in South Australia.

As noted above, key generators were in positions of pivotal supply on July 7th - by our estimates, AGL for around 87% of the day and Origin for 11%.

There are very good reasons for prices to rise during scarcity events, but how much they rise is dependent on competition. Pivotal suppliers are able to exercise market power to extract so called monopoly rents, and there is certainly some circumstantial evidence that suggests as much, perhaps partly motivated by a desire to force buyers back onto the contract market. Who knows?

A useful index for market concentration is the Herfindahl-Hirschman index (HHI). As the sum of the squares of the market percentage shares, it can rise to 10,000 for 100% concentration. The ACCC uses an HHI of 2000 as a threshold to flag competition concerns. The UK’s Office of Gas and Electricity Markets (OFGEM) regards an HHI exceeding 1000 in an electricity market as concentrated and above 2000 as very concentrated. With a current HHI value of 1243, OFGEM considers the UK wholesale electricity market somewhat concentrated. According to the Australian Energy Regulator, the regional NEM markets had HHI values in the range 1700-2000 in 2015. With the closure of Alinta’s Northern and the mothballing of Engie’s Pelican Point station, the effective HHI in South Australia in July this year was probably around 3300-3400, making it exceedingly concentrated.
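The HHI arithmetic itself is simple; here is a minimal sketch in which the market shares are illustrative, not the actual South Australian figures.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman index: the sum of squared market shares (in %).
    Ranges from near 0 (highly fragmented) up to 10,000 (a single supplier)."""
    return sum(share ** 2 for share in shares_percent)

# Illustrative market split 50/30/15/5 between four generators
print(hhi([50, 30, 15, 5]))  # 3650 -- "very concentrated" by the thresholds cited above
```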

It is important to note that the market concentration in South Australia is a consequence of a sequence of events, some dating as far back as 2000, when the state government put its generation assets up for sale. The then minister Rob Lucas refused a request from Origin to split the two gas-fired generating stations at Torrens Island (Torrens A and Torrens B), deciding on advice from Morgan Stanley to sell them as a bundle to TXU. One can only surmise the combination was worth more than the sum of the parts, presumably because it would give its new owner greater power. South Australians may now be paying the dues owed on that decision.

But other factors have also conspired. AGL, which acquired Torrens Island from TXU in 2007, was not responsible for Alinta’s decision to close Northern in May, or Engie’s mothballing of Pelican Point earlier in the year.

However, by virtue of these events AGL has found itself in a position of unprecedented market power. How it and other participants responded will no doubt be examined in excruciating detail in coming months, not least because AGL could find itself with similar power if Hazelwood or Yallourn were to exit Victoria.

What we can see is that margins on South Australian gas generation units have increased dramatically in recent times, rising from about $17/MWh to two to three times that, in a measure called the spark spread. As the figure below shows, despite the gas market price rises affecting all regions, South Australia was the only region to show an anomalous rise in the gas margin. A rise in margins together with a rise in volumes is not the trade-off one expects in an efficient market.

Top panel: the spark spread for gas generation in South Australia. The bottom panel compares the margins for Queensland, New South Wales and South Australia since February 2015. Prior to June 2016, the spark spread was relatively constant and broadly consistent across the three regions, with the exception of the late summer and early autumn period when the Queensland spark spread was elevated by a factor of about four. Prior to June 2016, the South Australian spark spread averaged $17/MWh. That value is comparable to the spark spread in other, completely unrelated jurisdictions such as the United Kingdom. We assume a typical combined-cycle gas turbine with a thermal efficiency of 50%. Analysis by Dylan McConnell.
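For readers who want the underlying arithmetic, a hedged sketch of the spark-spread calculation follows. The gas and electricity prices are placeholders chosen to reproduce the pre-June 2016 level cited above; the 50% thermal efficiency matches the assumption stated in the figure caption.

```python
def spark_spread(power_price_per_mwh, gas_price_per_gj, thermal_efficiency=0.5):
    """Gross margin of a gas generator in $/MWh: electricity price minus fuel cost.
    Heat rate = 3.6 GJ/MWh divided by efficiency, so a 50%-efficient plant
    burns 7.2 GJ of gas for each MWh sent out."""
    heat_rate_gj_per_mwh = 3.6 / thermal_efficiency
    return power_price_per_mwh - gas_price_per_gj * heat_rate_gj_per_mwh

# Placeholder prices: $80/MWh electricity and $8.75/GJ gas at 50% efficiency
print(spark_spread(80.0, 8.75))  # 17.0 $/MWh
```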

So what should we make of this?

A number of interdependent factors are playing out in the evolving South Australian electricity sector. The aggregate effect, which manifested dramatically on July 7th, reflects a complex interplay between legacy issues (including existing asset ownership), the increasing penetration of wind generation, competing developments in the gas market, and the absence of coordinated transitional arrangements.

The rise in wind generation in South Australia since 2006 has impacted in several ways. It has put downward pressure on wholesale prices, which declined in real terms in the period 2008 through 2015 (while also generating net Large-scale Renewable Energy Target certificates to the annual value of about $120 million). It is of note that over the last five years the wholesale market prices in South Australia have been pretty much on par with those in Queensland, which has no wind assets.

However, in so doing, wind generation has contributed to decisions to close brown coal generators, and increased South Australian dependence on imports and, in times of low wind output, gas. As one of the largest stations on the NEM in terms of its capacity relative to regional demand, the closure of Northern Power Station in May, 2016, has tightened the demand-supply settings, and that has reset the wholesale price clock upwards.

The dramatic increases in gas prices in the winter of 2016, driven by dynamics in the export market, have added at least $140-$200 million to the annual cost of South Australian electricity supply. It is important to note, however, that due to its historic reliance on gas generation, South Australia has always been exposed to movements in gas prices, even without the investment in wind. In pure energy terms, gas generation has been declining over the last decade, due in significant part to the addition of wind to the generation mix.

Despite the closure of the Northern Power station, gas dispatch has remained at near record low levels in seasonally-adjusted terms. What has changed dramatically since Northern’s closure is the concentration of market power. While there are legitimate reasons for power station owners to increase prices to reflect scarcity value, our analysis suggests that recent increases in wholesale prices have been well in excess of the reasonable market response, and reflect the extraction of monopoly rents. Such ‘opportunism’ has been encouraged by poor coordination of the system adjustments, such as mid-winter upgrades to Heywood interconnector.

Multiple options exist that address both of these issues to a greater or lesser extent. Open-cycle gas turbines (OCGT) are a cheap option to increase capacity and supply in peak periods, but do not improve competition or supply outside these periods and are still partially dependent on gas prices. Storage options can both increase capacity in peak periods and increase competition through daily arbitrage opportunities. Concentrating solar thermal (CST) further reduces the consumption of and reliance on gas, while providing capacity. Additional interconnection (above and beyond the current 190 MW expansion of Heywood) may also prove to be a viable solution.

In the longer term, South Australian experience points to the need to diversify low emissions generation and storage portfolios. As we necessarily decarbonise the national electricity system and increase renewable energy penetration, technologies such as storage and solar thermal will become increasingly necessary to provide for both peak capacity and reliability of supply.

The South Australian experience provides a salutary forewarning of the havoc that can ensue from lack of coordinated system planning in times of transition. It bears on the question of disorderly exit that will be faced in all markets requiring substantial decarbonisation, in part because of the scale of the fossil power stations that are displaced.

Finally, South Australia highlights the potential benefits for system wide oversight of transitional arrangements to avoid market power issues. The recent price rises in South Australia would have been much less extreme had Northern’s closure not occurred prior to completion of the upgrade to the Heywood interconnect at a time when Pelican Point was effectively mothballed, and demand was rising to meet the winter peak. As it transpired, the coincidence of all these factors contributed to a rapid and unprecedented rise in the concentration of market power.

And for the COAG meeting, need we emphasise that the benefits of market competition can only be realised if markets are competitive?

For the full details see our report “Winds of Change - An analysis of recent changes in the South Australian electricity market” available here, for which any credit should go to coauthor Dylan McConnell.

The Conversation

Disclosure

Mike Sandiford receives funding from the Australian Research Council for geological work.

Categories: Around The Web

A Trump presidency would spell disaster for the Paris climate agreement

Thu, 2016-08-18 06:28

The upcoming US presidential election could make or break the Paris climate agreement.

Unlike the previous Kyoto Protocol, the entire Paris Agreement (which is yet to enter into force) was shaped to allow the US to legally join through a presidential-executive agreement. The lack of binding targets for emissions cuts or financing means that the agreement needs only President Barack Obama’s approval, rather than ratification by the US Senate.

It was a politically expedient move that I predicted in a paper earlier last year. Clearly the world has learned, for better or for worse, from the experience of the Kyoto Protocol, which the US never ratified due to the politically divided Senate.

But watering down the treaty to allow US participation is a risky strategy. Rather than relying on strong rules or ambition, the Paris Agreement depends on legitimacy through universal participation. With enough countries on board there is hope that it could change investment and policy patterns across the world.

That legitimacy hinges on US participation, and Obama will not determine the continued involvement of the US. The November election will decide what role the US plays in the agreement.

The Trump card

I suggested in a recent paper that a presidency under a Republican candidate such as Donald Trump could be fatal to the Paris Agreement. The damage could be done on two counts: the US withdrawing from the agreement and/or rescinding its domestic actions and targets.

Trump has already been vocal about his intention to “cancel” or “renegotiate” the agreement. However, some have claimed that having the agreement enter into force before November would bind Trump to the agreement for at least three years (due to one clause in the agreement).

Entry into force essentially means the agreement becomes operational and has legal force under international law. This would require 55 countries accounting for at least 55% of global greenhouse gas emissions (so far 22 countries representing about 1% of global emissions have ratified). However, there are three problems with this simple analysis.
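The entry-into-force test is a simple dual threshold; a minimal sketch using the figures cited above:

```python
def paris_enters_into_force(parties_ratified, emissions_share_percent):
    """Paris Agreement entry-into-force clause: at least 55 Parties covering
    at least 55% of global greenhouse gas emissions must ratify."""
    return parties_ratified >= 55 and emissions_share_percent >= 55.0

# Status cited in the article (mid-2016): 22 countries, about 1% of emissions
print(paris_enters_into_force(22, 1.0))  # False
```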

First, it is unlikely that the Paris Agreement will enter into force before the inauguration of the next US president. While US ratification of Paris only requires the approval of Obama, for other countries the process is much more strenuous and time-consuming. Having 55 countries and at least three of the biggest emitters in the world ratify the agreement in the next six months is a high expectation. The Kyoto Protocol took eight years to go from agreement to entry into force.

Second, Trump could simply drop out of the United Nations Framework Convention on Climate Change, the overarching treaty under which Paris was created. This would take only a year and would lead to automatic withdrawal from Paris. Dropping out of the entire climate negotiations would generally seem like an extreme move. However, for a loose cannon like Trump it may be just another day in the White House.

Third, Trump would not need to withdraw officially to throw the agreement into chaos. Refusing to send a US delegation to the negotiations, or simply reneging on the US’s national climate target would do just as much, if not more, damage than withdrawing.

And let’s be clear, a Trump presidency would mean the US would miss its domestic climate targets.

Analysis by Climate Action Tracker suggests that the US would need additional measures to meet its pledge of reducing emissions by 26-28% on 2005 levels by 2025. This is still the case even if the Obama administration’s Clean Power Plan is carried out.

Trump would further weaken, rather than strengthen, climate action. The Republican platform on energy can be roughly summarised as “drill baby, drill”. It promises to approve the Keystone XL pipeline, maximise use of domestic fossil fuel reserves as part of an “all of the above energy policy” and rein in the powers of the Environmental Protection Agency (EPA).

Is there anything the Paris Agreement could do to stop a renegade US under Trump?

Unfortunately not. The agreement lacks any measures to deal with countries outside the agreement and has only a “non-adversarial and non-punitive” compliance mechanism. Paris has far fewer teeth than the Kyoto Protocol.

A rogue US missing its targets with no consequences could be a fatal blow to the legitimacy of Paris – it would showcase to the world just how weak the agreement truly is.

Clinton’s climate

It’s clear that Trump would be an unmitigated disaster for the Paris Agreement, but what would Clinton mean for the climate?

The impact of a presidency under Democratic candidate Hillary Clinton is more difficult to predict. In the short term it is likely to be business-as-usual for the climate talks.

Clinton is a supporter of the Paris agreement, having declared in her keynote speech to the Democratic National Convention: “I’m proud that we shaped a global climate agreement – now we have to hold every country accountable to their commitments, including ourselves.” Under Clinton, the US would remain a party to the agreement or legally adopt it if Obama had not yet done so.

The bigger question is whether Clinton will drive action to ensure the US increases its targets. The current Democratic platform gives hope that she will.

The platform calls for a second world war-style mobilisation to address climate change. It explicitly calls for a price on carbon and aims for 50% of US electricity generation to be from “clean energy” sources within a decade.

However, Clinton is by no means bound by the party platform, which has been moulded to appeal to supporters of the more pro-climate Bernie Sanders within the party. Clinton’s climate credentials have been called into question, particularly with her controversial support of fracking.

It is also uncertain how much Clinton could do without congressional support. Arguably, Obama is already pushing the limits of presidential powers. Indeed, the Clean Power Plan is being contested in the Supreme Court.

A Clinton administration will likely do little to hinder climate action, but it also looks unlikely to take the drastic action needed to put the world on track to limiting global warming to 1.5℃ or 2℃.

Ultimately, the reason for Paris’s success may prove to be its undoing. Relying on the goodwill of a single president is a short-sighted gamble. Come November, the world may once again have a heavy price to pay for investing so much hope in US leadership.

The Conversation

Luke Kemp has previously received funding from the Australian and German governments.

Categories: Around The Web

Neonicotinoids linked to wild bee and butterfly declines in Europe and US

Wed, 2016-08-17 10:48
Honeybees aren't the only wildlife affected by pesticides – wild bees and butterflies also feel the effect. Wild bee image from www.shutterstock.com

Two separate studies from the United States and England, both published today, show evidence that populations of butterflies and wild bees have declined in association with increased neonicotinoid use.

Neonicotinoids, or neonics, are pesticides applied to crops as seed treatments or sprays. Neonics have high selective toxicity for insects, meaning they are more toxic to insects than mammals. When insects eat the treated plants, the pesticides affect the insects' health, behaviour and reproductive success.

While there have been few studies in the natural environment until now, concerns about the ecological impact of neonics, including their possible link to bee declines, led the European Union to restrict their use in 2013. EU scientists are currently reviewing the ban, with recommendations expected next year.

What do the new studies tell us?

In the US study, researchers looked at 40 years of butterfly data in northern California. They found that populations declined dramatically in the late 1990s. Smaller butterfly species that produced fewer generations each year were the most affected.

These declines were associated with increasing use of neonics across the region, beginning in the mid-1990s. The data for neonics usage was obtained from US government pesticide use databases.

This study is an important contribution to our understanding of how neonics affect non-target insects in the wild.

The butterfly data was collected from four sites monitored by the same person (an expert entomologist) for up to four decades. This level of data integrity is quite rare in modern ecological studies. Long-term consistency in the data collection means that many of the effects of different observer skills or collection efforts have been minimised.

The second study looked at wild bee populations in the UK. The researchers focused on oilseed rape that had been seed-treated with the pesticide. Rape is a common source of neonics in agricultural environments and is also a highly attractive floral resource for many wild bees and other pollinator insects.

This study also uses high-quality data. It is based on 18 years of data for 62 bee species collected by the Bees, Wasps and Ants Recording Society, a specialist UK entomological society, as well as pesticide use data from the UK government. The researchers show that, over time, the negative effects of neonics exposure for wild bees outweighed the benefits of the crop as a food resource.

They provide the first evidence that neonic seed treatments are associated with national-scale declines in wild bees at the community level. Populations of species that are known to forage regularly on rape were affected three times more than species not seen on rape flowers.

Out of the lab

These studies are important contributions to science. Most previous studies showing negative effects of neonics on non-target insects have been conducted under short-term field conditions, or controlled conditions in a laboratory, using commercially bred bees (mostly European honey bees or bumblebees).

The evidence from these studies shows that while individual bees may not die immediately after exposure to the pesticides, sub-lethal effects on behaviour and health can affect their ability to pollinate crops, and impact the success of the colony as a whole.

These short-term, controlled studies tell us a lot about the biological and physiological effects of neonics on managed colonies of particular bee species. But they can’t tell us how neonics affect wild insects under natural conditions, or how consistent exposure might affect populations of other insect species over time.

The evidence from the studies published today comes from decades of data collected under natural conditions before and after neonics were introduced to the environment. It shows that neonics could affect the long-term persistence of wild pollinator communities.

Importantly, these studies also show that the biological traits of different species influence how neonics affect them. This means that results from studies of one species (e.g. commercially bred honey bees) cannot simply be extrapolated to other wild species.

What does this mean for Australia?

There is very little evidence of how neonics affect bees, or other non-target insects, in Australia. The Australian Pesticides and Veterinary Medicines Authority published a report in 2014 summarising the impact of neonics on honey bees in Australia.

The report concluded that there was a lack of consensus on the causes of honey bee declines in Europe and the US. It also stated that insecticides are not a significant issue in Australia, where honey bee populations haven’t declined in the way seen overseas.

However, it’s important to remember that a lack of scientific consensus on this issue exists because very little ecological evidence is available for scientists to answer these questions conclusively.

These new studies provide evidence from specific regions in the US and UK, so we can’t extrapolate the results to Australian conditions with certainty. However, they do leave us with an important reminder that long-term monitoring is essential when trying to understand ecological systems.

In Australia, neonics are approved for use as a seed treatment in a number of crops that are attractive to honey bees and other pollinator insects. This includes canola, corn, sunflower, cotton, kale and clover.

There are still major knowledge gaps in our understanding of these insecticides, but a recent review of evidence found that neonics can persist for years in the environment and affect biodiversity through multiple pathways.

Australia has over 1,800 native bee species, many of which provide free pollination services to Australian crops. Thousands of other beneficial insect species living on farms – such as flies, wasps, beetles and butterflies – can also be important pollinators and natural enemies of pests. However, it is impossible to know how neonics might affect them without more comprehensive ecological research.

The Conversation

Manu Saunders is affiliated with the Institute for Land Water & Society at Charles Sturt University. She is co-founder of the Wild Pollinator Count, a non-profit organisation aimed at wild pollinator conservation.

Categories: Around The Web

The rise of citizen science is great news for our native wildlife

Wed, 2016-08-17 06:10

Australia is renowned for its iconic wildlife. A bilby digging for food in the desert on a moonlit night, a dinosaur-like cassowary disappearing into the shadows of the rainforest, or a platypus diving for yabbies in a farm dam. But such images, though evocative, are rarely seen by most Australians.

As mammalogist Hedley Finlayson wrote in 1935:

The mammals of the area are so obscure in their ways of life and, except for a few species, so strictly nocturnal, as to be almost spectral.

For some species, our time to see them is rapidly running out. We know that unfortunately many native animals face considerable threats from habitat loss, introduced cats and foxes, and climate change, among others.

More than ever before, we need accurate and up-to-date information about where our wildlife persists and in what numbers, to help ensure their survival. But how do we achieve this in a place the sheer size of Australia, and with its often cryptic inhabitants?

How can we survey wildlife across Australia’s vast and remote landscapes? Euan Ritchie

Technology to the rescue

Fortunately, technology is coming to the rescue. Remotely triggered camera traps, for example, are revolutionising what scientists can learn about our furry, feathered, scaly, slippery and often elusive friends.

These motion-sensitive cameras can snap images of animals moving in the environment during both day and night. They enable researchers to keep an eye on their study sites 24 hours a day for months, or even years, at a time.

The only downside is that scientists can end up with millions of camera images to look at. Not all of these will even have an animal in the frame (plants moving in the wind can also trigger the cameras).

This is where everyday Australians can help: by becoming citizen scientists. In the age of citizen science, increasing numbers of the public are generously giving their time to help scientists process these often enormous datasets and, in doing so, are becoming scientists themselves.

A camera trap records a leaping frog while a dingo takes a drink at the waterhole in the background. Jenny Davis

What is citizen science?

Simply defined, citizen science is members of the public contributing to the collection and/or analysis of information for scientific purposes.

But, at its best, it’s much more than that: citizen science can empower individuals and communities, demystify science and create wonderful education opportunities. Examples of successful citizen science projects include Snapshot Serengeti, Birds in Backyards, School Of Ants, Redmap (which tracks changes in the distribution of Australian marine life), DigiVol (which digitises museum collections) and Melbourne Water’s frog census.

Through the public’s efforts, we’ve learnt much more about the state of Africa’s mammals in the Serengeti, what types of ants and birds we share our cities and towns with, changes to the distribution of marine species, and the health of our waterways and their croaking inhabitants.

In a world where there is so much doom and gloom about the state of our environment, these projects are genuinely inspiring. Citizen science is helping science and conservation, reconnecting people with nature and sparking imaginations and passions in the process.

Australian wildlife in the spotlight

A fantastic example of this is Wildlife Spotter, which launched August 1 as part of National Science Week.

Researchers are asking for the public’s help to identify animals in over one million camera trap images. These images come from six regions (Tasmanian nature reserves, far north Queensland, south central Victoria, Northern Territory arid zone, and New South Wales coastal forests and mallee lands). Whether using their device on the couch, tram or at the pub, citizen scientists can transport themselves to remote Australian locations and help identify bettongs, devils, dingoes, quolls, bandicoots and more along the way.

A Torresian Crow decides what to do with a recently shed snake skin. Jenny Davis

By building up a detailed picture of what animals are living in the wild and our cities, and in what numbers, Wildlife Spotter will help answer important questions including:

  • How many endangered bettongs are left?

  • How well do native predators like quolls and devils compete with cats for food?

  • Just how common are common wombats?

  • How do endangered southern brown bandicoots manage to survive on Melbourne’s urban fringe in the presence of introduced foxes, cats and rats?

  • What animals visit desert waterholes in Watarrka National Park (Kings Canyon)?

  • What predators are raiding the nests of the mighty mound-building malleefowl?

An endangered southern brown bandicoot forages in vegetation on the outskirts of Melbourne. Sarah Maclagan

So, if you’ve got a few minutes to spare, love Australian wildlife and are keen to get involved with some important conservation-based science, why not check out Wildlife Spotter? Already, more than 22,000 people have identified over 650,000 individual animals. You too could join in the spotting and help protect our precious native wildlife.

The Conversation

Euan Ritchie receives funding from the ARC, and is involved with the South-central Wildlife Spotter project.

Jenny Davis receives funding from the ARC and is involved in the NT Wildlife Spotter project.

Sarah Maclagan is involved with the South-central Wildlife Spotter project.

Jenny Martin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Solar households to lose subsidies, but it's a bright future for the industry

Tue, 2016-08-16 14:56
Windbacks to solar subsidies may encourage larger systems. Solar panel image from www.shutterstock.com

Solar households in Victoria, South Australia and New South Wales will this year stop receiving premium payments for the power they export to the electricity grid. In South Australia, some households will lose 16 cents per kilowatt-hour (c/kWh) from September 30. Some Victorian households will lose 25 c/kWh, and all NSW households will stop receiving premium payments from December 31.

These “feed-in tariffs” were employed to kick-start the Australian solar photovoltaic (PV) industry. They offered high payments for electricity fed back into the grid from roof-mounted PV systems. These varied from state to state and time to time.

For many householders, these special tariffs are ending. Their feed-in tariffs will fall precipitously to 4-8 c/kWh, which is the typical rate available to new PV systems. In some cases households may lose over A$1,000 in income over a year.

But while the windback may hurt some households, it may ultimately be a good sign for the industry.

What can households do?

At present, householders with high feed-in tariffs are encouraged to export as much electricity to the grid as possible. They will soon have an incentive to use that electricity themselves, displacing expensive grid electricity and so minimising their loss of income.

Reverse-cycle air conditioning (for space heating and cooling) uses a lot of power, but it can be programmed to operate during daylight hours, when solar panels are most likely to be generating electricity. The same applies to heating water, either by direct heating or through a heat pump. For heating water, solar PV is now competitive with gas, solar thermal and electricity from the grid.

Batteries, both stationary (for house services) and mobile (for electric cars), will also help control electricity use in the future.

A boost for the industry?

The ending of generous feed-in tariffs is likely to modestly encourage the solar PV industry. This is because many existing systems have a rating of only 1.5 kilowatts (kW), which could not have been increased without loss of the generous feed-in tariff.

Many householders will now choose to increase the size of their PV system to 5-10kW – in effect installing a new system, given how much larger typical installations are now than when the schemes began.

A new large-scale PV market is also opening on commercial rooftops. Many businesses have daytime electrical needs that are better matched to solar availability than are domestic dwellings.

This allows businesses to consume most of the power their panels produce on site and hence avoid high commercial electricity tariffs. The constraining factors in this market are often not technical or economic: many businesses rent their premises from landlords and tend to have short investment horizons. Business models are being developed to circumvent these constraints.

Rooftop PV also now has large potential to compete with retail electricity prices. The total cost of a domestic 10kW PV system is about A$15,000. Over a 25-year lifetime this yields an energy cost of about 7 c/kWh.

This is about one-quarter of the typical Australian retail electricity tariff, about half of the off-peak electricity tariff, and similar to the typical retail gas tariff. Rooftop PV delivers energy services to the home more cheaply than anything else and has the capacity to drive natural gas out of domestic and commercial markets.
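As a rough check on that arithmetic, the short sketch below annualises the capital cost with a capital recovery factor. The ~5% real discount rate and ~4 kWh per kilowatt per day yield are assumptions used here for illustration; they are not figures given in the article.

# Rough levelised cost of rooftop PV energy (a sketch, not the author's model).
def lcoe_cents_per_kwh(capex_dollars, size_kw,
                       yield_kwh_per_kw_per_day=4.0,   # assumed average yield
                       lifetime_years=25,
                       discount_rate=0.05):            # assumed real discount rate
    """Levelised cost of energy in c/kWh, ignoring O&M and panel degradation."""
    # The capital recovery factor spreads the upfront cost over the system lifetime.
    crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_years)
    annual_cost = capex_dollars * crf                         # $/year
    annual_energy = size_kw * yield_kwh_per_kw_per_day * 365  # kWh/year
    return 100 * annual_cost / annual_energy

print(lcoe_cents_per_kwh(15_000, 10))   # ~7.3 c/kWh for the A$15,000, 10kW case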

According to the Australian Bureau of Statistics, there are 9 million dwellings in Australia, and the floor area of new residential dwellings averaged 200 square metres over the past 20 years. Some of these dwellings are in multi-storey blocks, others have shaded roofs and, of course, south-facing roofs are less suitable than other orientations for PV.

However, if half the dwellings had one-third of their roofs covered in 20% efficient PV panels then 60 gigawatts (GW) could be accommodated. For perspective, this would cover 40% of Australian electricity demand. Commercial rooftops are a large additional market.
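The 60GW figure follows directly from the numbers above, as the sketch below shows. The 15% capacity factor and the ~200 terawatt-hours of annual Australian electricity demand used in the cross-check are assumptions added here, not figures from the article.

# Reproducing the ~60 GW rooftop estimate from the figures quoted above.
dwellings = 9e6                    # dwellings in Australia
roof_area_m2 = 200.0               # average dwelling floor/roof area, m^2
usable_fraction = 0.5 * (1 / 3)    # half the dwellings, one-third of each roof
panel_rating_kw_per_m2 = 0.20      # 20% efficient panels at 1 kW/m^2 rating sunlight

capacity_gw = dwellings * roof_area_m2 * usable_fraction * panel_rating_kw_per_m2 / 1e6
print(f"installable rooftop capacity: ~{capacity_gw:.0f} GW")    # ~60 GW

capacity_factor = 0.15             # assumed average output as a share of capacity
annual_twh = capacity_gw * 8760 * capacity_factor / 1000
print(f"annual output: ~{annual_twh:.0f} TWh, roughly 40% of ~200 TWh demand")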

Solar getting big

Virtually all PV systems in Australia are roof-mounted. However, this is about to change, because ground-mounted PV systems are becoming competitive with wind energy. The Queensland Solar 120 scheme, the Australian Capital Territory wind and PV reverse auctions and the Australian Renewable Energy Agency’s Large Scale Solar program all point to the declining cost of PV and wind.

Together, wind and PV constitute virtually all new generation capacity in Australia and half of the new generation capacity installed worldwide each year.

The total cost of a 10-50 megawatt PV system (1,000 times bigger than a 10kW system) is around A$2,100 per kilowatt (AC). A 25-year lifetime yields an energy cost of about 8 c/kWh. This is only a little above the cost of wind energy and is fully competitive with new coal or gas generators.
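The same annualisation used in the rooftop sketch above reproduces this 8 c/kWh figure, again assuming a ~5% real discount rate and, for sunny inland sites, a yield of about 5 kWh per kilowatt per day; both are assumptions, not figures from the article.

# Utility-scale PV energy cost, per kW of capacity (a sketch with assumed inputs).
capex_per_kw = 2_100                     # A$/kW (AC), from the article
rate, years = 0.05, 25                   # assumed real discount rate and lifetime
crf = rate / (1 - (1 + rate) ** -years)  # capital recovery factor
annual_kwh_per_kw = 5.0 * 365            # assumed yield for sunny inland sites
print(f"~{100 * capex_per_kw * crf / annual_kwh_per_kw:.1f} c/kWh")   # ~8 c/kWh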

Hundreds of 10-50MW PV systems can be distributed throughout sunny inland Australia close to towns and high-capacity powerlines. Australia’s 2020 renewable energy target is likely to be met with a large PV component, in addition to wind.

Wide distribution of PV and wind from north Queensland to Tasmania minimises the effect of local weather and takes full advantage of the complementary nature of the two leading renewable energy technologies.

The declining cost of PV and wind, coupled with the ready availability of pumped hydro storage, allows a high renewable electricity fraction (70-100%) to be achieved at modest cost by 2030.

The Conversation

Andrew Blakers receives research funding from the Australian Renewable Energy Agency, the Australian Indonesian Centre, the Australian Research Council, Excellerate Australia and private companies.

Categories: Around The Web

Our planet is heating - the empirical evidence

Tue, 2016-08-16 14:31

In an entertaining and somewhat chaotic episode of the ABC’s Q&A (Monday 15 August), pitting science superstar Brian Cox against climate contrarian, conspiracy theorist and now senator Malcolm Roberts, the questions of cause and effect and of empirical data were raised repeatedly in regard to climate change.

Watching, I pondered a question: what would it take to change my mind? After all, I should dearly love to be convinced that the climate was not changing, or that, if it were, it was not due to our unrelenting emissions of CO2 and other greenhouse gases. That would make things so much easier, all round.

So what would make me change my mind?

There are two elements to this question. The first is the observational basis, and the question of empirical data. The second relates to cause and effect, and the question of the greenhouse effect.

On the second, I will only add that the history of our planet is not easily reconciled without recourse to a strong greenhouse effect. If you have any doubt then you simply need to read my former colleague Ian Plimer.
As I have pointed out before, in his 2001 award-winning book “A Short History of Planet Earth”, Ian makes numerous references to the greenhouse effect, especially in relation to what all young geologists learn as the faint young sun paradox:

“The early sun had a luminosity of some 30 per cent less than now and, over time, luminosity has increased in a steady state.”

“The low luminosity of the early sun was such that the Earth’s average surface temperature would have been below 0°C from 4500 to 2000 million years ago. But there is evidence of running water and oceans as far back as 3800 million years ago.”

The question is, what kept the early Earth from freezing over?

Plimer goes on to explain: “This paradox is solved if the Earth had an enhanced greenhouse with an atmosphere of a lot of carbon dioxide and methane.”

With Ian often touted as one of the grand priests of climate contrarians, I doubt that Malcolm would consider him part of the cabal of global climate change conspiracists, though that would be ironic.

As a geologist, I need to be able to reconcile the geological record of a watery planet from time immemorial with the faint young sun hypothesis. And, as Ian points out, with nothing else on the menu, the greenhouse effect is all we have.

If the menu changes, then I will reconsider.

How about the empirical data?

Along with Brian Cox, I find it implausible that an organisation like NASA, with a record of putting a man on the moon, could or would fabricate data to the extent Malcolm Roberts insinuates. It is such palpable nonsense that it is something you might expect from an anti-vaxxer.

However, a clear message from the Q&A episode is that there is no way to convince Malcolm Roberts that the meteorological temperature data has not been manipulated to achieve a predetermined outcome. So he simply is not going to accept those data as being empirical.

However, the relevant data does not just include the records taken by meteorological authorities. It also includes the record preserved beneath our feet in the temperature logs from many thousands of boreholes across all inhabited continents. And the importance of those logs is that they are reproducible. In fact, Malcolm can go out and re-measure them himself if he needs convincing that they are “empirical”.

The idea that the subsurface is an effective palaeo-thermometer is a simple one that we use in everyday life – or used to, at least, prior to refrigeration – as it provides the logic for the cellar.

When we perturb the temperature at the surface of the earth, for example as the air temperature rises during the day, it sends a heat pulse downwards into the earth. The distance the pulse travels is related to its duration. As day turns to night and the surface cools, a cooling pulse follows, lagging behind, but eventually cancelling, the daily heating. These diurnal surface temperature perturbations produce a wave-like train of heating and cooling that can be felt, with diminishing amplitude, down to a skin depth of less than a metre beneath the surface, before all information is cancelled out and the extremes of both day and night are lost.

Surface temperatures also change on a seasonal basis from summer to winter and back again, and those temperatures propagate even further to depths of around 10 metres before completely cancelling [1].

On even longer cycles the temperature anomalies propagate much further, and may reach down to a kilometre or more. For example, we know that over the last million years the temperature on the earth has cycled in and out of numerous ice ages, on a cycle of about 100,000 years. Cycles on that timescale can propagate more than one kilometre into the earth, as we see in deep boreholes, such as the Blanche borehole near the giant Olympic Dam mine in South Australia. From our analysis of the Blanche temperature logs we infer a surface temperature amplitude of around 8°C over the glacial cycle.
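These characteristic depths follow from the “skin depth” of a periodic temperature signal diffusing into rock, d = √(κP/π), where P is the period of the cycle and κ the thermal diffusivity. The sketch below uses κ = 10⁻⁶ m²/s, a representative value assumed here (real rocks vary by a factor of a few); the signal is effectively gone after a few skin depths.

# Skin depths for periodic surface temperature signals diffusing into rock.
import math

kappa = 1e-6                       # thermal diffusivity, m^2/s (assumed value)
year_s = 3.156e7                   # seconds in a year
periods = {
    "daily cycle": 86_400.0,
    "annual cycle": year_s,
    "century-scale warming": 100 * year_s,
    "100,000-year glacial cycle": 1e5 * year_s,
}

for name, period_s in periods.items():
    d = math.sqrt(kappa * period_s / math.pi)   # e-folding depth of the signal
    print(f"{name}: skin depth ~ {d:,.1f} m")
# Prints roughly 0.2 m, 3 m, 30 m and 1,000 m, consistent with the depths above.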

So what do we see in the depth range of 20-100 metres that is sensitive to the last 100 years, and most relevant to the question of changing climate?

The image below shows the temperature log from a borehole that we purpose-drilled in Gippsland as part of the AuScope AGOS program.

Temperature log in the upper 70 metres of the Tynong AGOS borehole drilled and cored to a depth of 500 metres. The temperature logs shown here were obtained by Kate Gordon, as a student at the University of Melbourne.

The temperature profile shows various stages. Above the water table, at about 15 metres depth, infiltration of groundwater in the vadose zone means the temperatures in the borehole rapidly equilibrate with seasonal surface temperature changes. In winter, when this temperature log was obtained, the temperatures in this shallow zone trend towards the ambient temperature of around 12°C. In summer, they rise to over 20°C. Beneath the vadose zone, the temperature in the borehole responds to the conduction of heat influenced by two dominant factors: the changing surface temperature on timescales of decades to many hundreds of years, and the heat flow from the deeper, hot interior of the earth. During a rapid surface warming cycle lasting more than several decades, the normal temperature gradient, in which temperatures increase with depth, can be reversed, so that we get a characteristic rollover (with a minimum here seen at about 30 metres depth).
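To see how such a rollover arises, the sketch below adds the conductive anomaly from a recent step increase in surface temperature, ΔT·erfc(z/(2√(κt))), to a background geothermal gradient. The gradient, the size of the step and its timing are illustrative assumptions chosen here to produce a shallow minimum a few tens of metres down; they are not the Tynong values.

# A conductive geotherm perturbed by a recent step warming at the surface.
# Illustrative parameters only, not the measured Tynong values.
import numpy as np
from scipy.special import erfc

kappa = 1e-6            # thermal diffusivity, m^2/s (assumed)
gradient = 0.020        # background geothermal gradient, degC/m (assumed)
surface_temp = 12.0     # long-term mean surface temperature, degC (assumed)
step = 1.5              # step increase in surface temperature, degC (assumed)
t = 40 * 3.156e7        # time since the step, s (assumed: 40 years ago)

z = np.linspace(0.0, 120.0, 241)                        # depth, m
anomaly = step * erfc(z / (2.0 * np.sqrt(kappa * t)))   # downward-diffusing warmth
temperature = surface_temp + gradient * z + anomaly

i = np.argmin(temperature)
print(f"rollover minimum at ~{z[i]:.0f} m depth, T = {temperature[i]:.2f} C")
# Below ~100 m the profile relaxes back towards the undisturbed gradient.

The inversion described in the figure caption works in the opposite direction: starting from the measured profile, it recovers the surface temperature history and its uncertainties.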

Inversion of the Tynong temperature log for surface temperature change over the last 700 years, with uncertainties at the 95% confidence interval. The inversion, which is based on Fourier’s law of heat conduction, shows that we can be confident that the Tynong AGOS borehole temperature record is responding to a long-term heating cycle of 0.3-1.3°C over the last century at the 95% confidence level. The inversion shown here was performed by Kate Gordon.

In geophysics we use the techniques of inversion to identify causative signals, and their uncertainties, in records such as the Tynong borehole log, just as we do when estimating the value of buried ore bodies and hydrocarbon resources. As shown in the second image, the inversion of the Tynong temperature log for surface temperature change over the last 700 years is compelling. Not surprisingly, the uncertainties become larger as we go back in time. However, the inversion, which is based on Fourier’s law of heat conduction, shows we can be confident that the Tynong AGOS borehole record is responding to long-term surface warming of 0.3-1.3°C over the last century, at the 95% confidence level.

If there were just one borehole that showed this record, it would not mean much. However the characteristic shallow rollover is present in all the boreholes we have explored, and has been reported in many thousands of boreholes from all around the world.

The only way we know to sensibly interpret such empirical evidence is that the ground beneath our feet, down to a depth of around 50 metres or so, is now being heated from above. The physics that explains these observations dates back to Joseph Fourier, over 200 years ago, so it’s not exactly new or even contentious. In effect, the solid earth below is now absorbing heat from the atmosphere above, counter to the normal process of losing heat to it. However, if Malcolm can bring to the table an alternative physics to explain these observations, while not falling foul of all the other empirical observations that Fourier’s law of heat conduction admits, then I am happy to consider it, and put it to the test. (I suspect Brian Cox would be too, since all good physicists would relish the discovery of a new law of such importance as Fourier’s law.)

Perhaps the hyper-skeptical Malcolm thinks that somehow the global cabal of climate scientists has got into all these thousands of boreholes with an electrical heater to propagate the heat signal that artificially simulates surface heating. More fool me.

But, if he does, then I am perfectly happy to arrange to drill a new borehole and, along with him, measure the temperature profile, making sure we don’t let those pesky climate scientists get at the hole with their heating coils before we have done so.

And I’ll bet him we can reproduce the signal from Tynong shown above.

But I’ll only do it on the condition that Malcolm agrees that, when we do reproduce the signal, he will publicly acknowledge the empirical evidence of a warming world entirely consistent with NASA’s surface temperature record.

Malcolm, are you on? Will you take my bet, and use the Earth’s crust as the arbiter? And perhaps Brian will stream it live to the BBC?

The Conversation

Disclosure

Mike Sandiford receives funding from the Australian Research Council to investigate the thermal structure of the Australian crust.

Categories: Around The Web

Fishing, not oil, is at the heart of the South China Sea dispute

Tue, 2016-08-16 06:08

Contrary to the view that the South China Sea disputes are driven by a regional hunger for seabed energy resources, the real and immediate prizes at stake are the region’s fisheries and marine environments that support them.

It is also through the fisheries dimension of the conflict that the repercussions of the recent ruling of the arbitration tribunal in the Philippines-China case are likely to be most acutely felt.

It seems that oil is sexier than fish, or at least the lure of seabed energy resources has a more powerful motivating effect on policymakers, commentators and the media alike. However, the resources really at stake are the fisheries of the South China Sea and the marine environment that sustains them.

The real resource at stake

For a relatively small (around 3 million square kilometres) patch of the oceans, the South China Sea delivers an astonishing abundance of fish. The area is home to at least 3,365 known species of marine fishes, and in 2012, an estimated 12% of the world’s total fishing catch, worth US$21.8 billion, came from this region.

These living resources are worth more than money; they are fundamental to the food security of coastal populations numbering in the hundreds of millions.

Indeed, a recent study showed that the countries fringing the South China Sea are among the most reliant in the world on fish as a source of nutrients. This makes their populations especially susceptible to malnutrition as fish catches decline.

These fisheries also employ at least 3.7 million people (almost certainly an underestimate given the level of unreported and illegal fishing in the region).

This is arguably one of the most important services the South China Sea fisheries provide to the global community – keeping busy nearly 4 million people who would otherwise have few employment options.

But these vital resources are under enormous pressure.

A disaster in the making

The South China Sea’s fisheries are seriously over-exploited.

Last year, two of us contributed to a report finding that 55% of global marine fishing vessels operate in the South China Sea. We also found that fish stocks have declined 70% to 95% since the 1950s.

Over the past 30 years, the number of fish caught each hour has declined by a third, meaning fishers are putting in more effort for less fish.

This has been accelerated by destructive fishing practices such as the use of dynamite and cyanide on reefs, coupled with artificial island-building. The coral reefs of the South China Sea have been declining at a rate of 16% per decade.

Even so, the total amount of fish caught has increased. But the proportion of large species has declined while the proportion of smaller species and juvenile fish has increased. This has disastrous implications for the future of fishing in the South China Sea.

We found that, by 2045, under business as usual, each of the species groups studied would suffer stock decreases of a further 9% to 59%.

The ‘maritime militia’

Access to these fisheries is an enduring concern for nations surrounding the South China Sea, and fishing incidents play an enduring role in the dispute.

Chinese/Taiwanese fishing fleets dominate the South China Sea by numbers. This is due to the insatiable domestic demand for fish coupled with heavy state subsidies that enable Chinese fishers to build larger vessels with longer range.

Competition between rival fishing fleets for a dwindling resource in a region of overlapping maritime claims inevitably leads to fisheries conflicts. Fishing boats have been apprehended for alleged illegal fishing leading to incidents between rival patrol boats on the water, such as the one in March 2016 between Chinese and Indonesian vessels.

Fishing boats are not just used to catch fish. Fishing vessels have long been used as proxies to assert maritime claims.

China’s fishing fleets have been characterised as a “maritime militia” in this context. Numerous incidents have involved Chinese fishing vessels operating (just) within China’s so-called nine-dashed line claim but in close proximity to other coastal states in areas they consider to be part of their exclusive economic zones (EEZs).

The disputed South China Sea area. Author/American Journal of International Law

The Chinese Coast Guard has increasingly played an important role in providing logistical support such as refueling as well as intervening to protect Chinese vessels from arrest by the maritime enforcement efforts of other South China Sea coastal states.

Fisheries as flashpoint

The July 2016 ruling in the dispute between the Philippines and China demolishes any legal basis to China’s claim to extended maritime zones in the southern South China Sea and any right to resources.

The consequence of this is that the Philippines and, by extension, Malaysia, Brunei and Indonesia are free to claim rights over the sea to 200 nautical miles from their coasts as part of their EEZs.

This also creates a pocket of high seas outside any national claim in the central part of the South China Sea.

There are signs that this has emboldened coastal states to take a stronger stance against what they will undoubtedly regard as illegal fishing on China’s part in “their” waters.

Indonesia already has a strong track record of doing so, blowing up and sinking 23 apprehended illegal fishing vessels in April and live-streaming the explosions to maximise publicity. It appears that Malaysia is following suit, threatening to sink illegal fishing vessels and turn them into artificial reefs.

The snag is that China has vociferously rejected the ruling. There is every indication that the Chinese will continue to operate within the nine-dashed line and Chinese maritime forces will seek to protect China’s claims there.

This gloomy view is underscored by the fact that China has recently opened a fishing port on the island of Hainan with space for 800 fishing vessels, a figure projected to rise to 2,000. The new port is predicted to play an important role in “safeguarding China’s fishing rights in the South China Sea”, according to a local official.

On August 2, the Chinese Supreme People’s Court signalled that China had the right to prosecute foreigners “illegally entering Chinese waters” – including areas claimed by China but which, in line with the tribunal’s ruling, are part of the surrounding states' EEZs – and jail them for up to a year.

Ominously, the following day Chinese Defence Minister Chang Wanquan warned that China should prepare for a “people’s war at sea” in order to “safeguard sovereignty”. This sets the scene for increased fisheries conflicts.

Ways forward

The South China Sea is crying out for multilateral management, such as through a marine protected area or the revival of a decades-old idea of turning parts of the South China Sea, perhaps the central high-seas pocket, into an international marine peace park.

Such options would serve to protect the vulnerable coral reef ecosystems of the region and help to conserve its valuable marine living resources.

A co-operative solution that bypasses the current disputes over the South China Sea may seem far-fetched. Without such action, however, its fisheries face collapse, with dire consequences for the region. Ultimately, the fishers and fishes are going to be the losers if the dispute continues.

The Conversation

Clive Schofield served as an independent expert witness (provided by the Philippines) to the Arbitration Tribunal in the case between the Republic of the Philippines and the People’s Republic of China.

Rashid Sumaila receives funding from research councils in Canada, Belmont, Genome Canada/BC, ADM Capital Foundation, Hong Kong, Pew Charitable Trusts.

William Cheung received funding from ADM Capital Foundation to co-produce the report Boom or Bust - Future Fish in the South China Sea.

Categories: Around The Web

We have almost certainly blown the 1.5-degree global warming target

Mon, 2016-08-15 14:13

The United Nations climate change conference held last year in Paris had the aim of tackling future climate change. After the deadlocks and weak measures that arose at previous meetings, such as Copenhagen in 2009, the Paris summit was different. The resulting Paris Agreement committed to:

Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognising that this would significantly reduce the risks and impacts of climate change.

The agreement was widely met with cautious optimism. Certainly, some of the media were pleased with the outcome while acknowledging the deal’s limitations.

Many climate scientists were pleased to see a more ambitious target being pursued, but what many people fail to realise is that actually staying within a 1.5℃ global warming limit is nigh on impossible.

There seems to be a strong disconnect between what the public and climate scientists think is achievable. The problem is not helped by the media’s apparent reluctance to treat it as a true crisis.

The 1.5℃ limit is nearly impossible

In 2015, we saw global average temperatures a little over 1℃ above pre-industrial levels, and 2016 will very likely be even hotter. In February and March of this year, temperatures were 1.38℃ above pre-industrial averages.

Admittedly, these are individual months and years with a strong El Niño influence (which makes global temperatures more likely to be warmer), but the point is we’re already well on track to reach 1.5℃ pretty soon.

So when will we actually reach 1.5℃ of global warming?

On our current emissions trajectory we will likely reach 1.5℃ within the next couple of decades (2024 is our best estimate). The less ambitious 2℃ target would be surpassed not much later.

This means we probably have only about a decade before we break through the ambitious 1.5℃ global warming target agreed to by the world’s nations in Paris.

A University of Melbourne research group recently published these spiral graphs showing just how close we are getting to 1.5℃ warming. Realistically, we have very little time left to limit warming to 2℃, let alone 1.5℃.

This is especially true when you bear in mind that even if we stopped all greenhouse gas emissions right now, we would likely experience about another half-degree of warming as the oceans “catch up” with the atmosphere.

Parallels with climate change scepticism

The public seriously underestimates the level of consensus among climate scientists that human activities have caused the majority of global warming in recent history. Similarly, there appears to be a lack of public awareness about just how urgent the problem is.

Many people think we have plenty of time to act on climate change and that we can avoid the worst impacts by slowly and steadily reducing greenhouse gas emissions over the next few decades.

This is simply not the case. Rapid and drastic cuts to emissions are needed as soon as possible.

In conjunction, we must also urgently find ways to remove greenhouse gases already in the atmosphere. At present, this is not yet viable on a large scale.

Is 1.5℃ even enough to avoid “dangerous” climate change?

The 1.5℃ and 2℃ targets are designed to avoid the worst impacts of climate change. It’s certainly true that the more we warm the planet, the worse the impacts are likely to be. However, we are already experiencing dangerous consequences of climate change, with clear impacts on society and the environment.

For example, a recent study found that many of the excess deaths reported during the summer 2003 heatwave in Europe could be attributed to human-induced climate change.

Also, research has shown that the warm seas associated with the bleaching of the Great Barrier Reef in March 2016 would have been almost impossible without climate change.

Climate change is already increasing the frequency of extreme weather events, from heatwaves in Australia to heavy rainfall in Britain.

These events are just a taste of the effects of climate change. Worse is almost certainly set to come as we continue to warm the planet.

It’s highly unlikely we will achieve the targets set out in the Paris Agreement, but that doesn’t mean governments should give up. It is vital that we do as much as we can to limit global warming.

The more we do now, the less severe the impacts will be, regardless of targets. The simple take-home message is that immediate, drastic climate action will mean far fewer deaths and less environmental damage in the future.

This article is adapted from a blog post that originally appeared here.

The Conversation

Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.

Benjamin J. Henley receives funding from an ARC Linkage Project and is associated with the ARC Centre of Excellence for Climate System Science.

Categories: Around The Web

Survey: two-thirds of Great Barrier Reef tourists want to 'see it before it's gone'

Mon, 2016-08-15 06:16
Check it out while you can. Tourism Queensland/Wikimedia Commons, CC BY-SA

The health of the Great Barrier Reef (GBR) is declining – a fact that has not been lost on the world’s media.

The issue has made international headlines and attracted comment from public figures such as US President Barack Obama and British businessman Richard Branson.

Some media outlets and tourism operators have sought to downplay the effects, presumably to try to mitigate the impact on tourism. The industry provides roughly 65,000 jobs and contributes more than A$5 billion a year to the Australian economy.

But our research suggests that the ailing health of the GBR has in fact given tourists a new reason to visit, albeit one that doesn’t exactly promise a long-term future.

When we surveyed hundreds of GBR tourists last year, 69% of them said they had opted to visit the reef “before it is gone” – and that was before the latest bleaching generated fresh international headlines about its plight.

‘Last chance’ tourism

“Last chance tourism” (LCT) is a phenomenon whereby tourists choose to visit a destination that is perceived to be in danger, with the express intention of seeing it before it’s gone.

The media obviously play a large role in this phenomenon – the more threatened the public perceives a destination to be, the bigger the market for LCT.

There’s a vicious cycle at play here: tourists travel to see a destination before it disappears, but in so doing they contribute to its demise, either directly through on-site pressures or, in the case of climate-threatened sites such as the GBR, through greenhouse gas emissions. These added pressures increase the vulnerability of the destination and in turn push up the demand for LCT still further.

The GBR often features on lists of tourist destinations to see before they disappear, alongside places such as Glacier National Park, the Maldives and the Galapagos Islands.

While the media have proclaimed the reef to be an LCT destination, it has not previously been empirically confirmed that tourists are indeed motivated to visit specifically because of its vulnerable status.

Surveying reef tourists

We wanted to find out how many of the GBR’s holidaymakers are “last chance” tourists. To that end, we surveyed 235 tourists visiting three major tourism hotspots, Port Douglas, Cairns and Airlie Beach, to identify their leading motivations for visiting.

We gave them a suggested list of 15 reasons, including “to see the reef before it is gone”; “to rest and relax”; “to discover new places and things”, and others. We then asked them to rate the importance of each reason on a five-point scale, from “not at all” to “extremely”.

We found that 69% of tourists were either “very” or “extremely” motivated to see the reef before it was gone. This reason attracted the highest proportion of “extremely” responses (37.9%) of any of the 15 reasons.

This reason was also ranked the fourth-highest by average score on the five-point scale. The top three motivations by average score were: “to discover new places and things”; “to rest and relax”; and “to get away from the demands of everyday life”.

Our results also confirmed that the media have played a large role in shaping tourists' perceptions of the GBR. The internet was the most used information source (68.9% of people), followed by word of mouth (57%) and television (54.4%).

Airlie Beach, a great spot for some last-chance tourism. Damien Dempsey/Wikimedia Commons, CC BY

Our findings suggest that the GBR’s tribulations could offer a short-term tourism boost, as visitors flock to see this threatened natural wonder. But, in the long term, the increased tourism might exacerbate the pressure on this already vulnerable region – potentially even hastening the collapse of this ecosystem and the tourism industry that relies on its health.

This paradox is deepened further when we consider that many of the tourists in our survey who said they were visiting the reef to “see it before it is gone” nevertheless had low levels of concern about their own impacts on the region.

Where to from here?

We undertook our survey in 2015, before this year’s bleaching event, described as the most severe in the GBR’s history.

This raises another question: is there a threshold beyond which the GBR is seen as “too far gone” to visit? If so, might future more frequent or severe bleaching episodes take us past that threshold?

As the most important source of information for tourists visiting the GBR, the media in particular need to acknowledge their own important role in informing the public. Media outlets need to portray the reef’s current status as accurately as possible. The media’s power and influence also afford them a great opportunity to help advocate for the GBR’s protection.

Educating tourists about the threats facing the GBR is an important way forward, particularly as our research identified major gaps in tourists' understanding of the specific threats facing the GBR and the impacts of their own behaviour. Many survey respondents, for instance, expressed low levels of concern about agricultural runoff, despite this being one of the biggest threats facing the GBR.

Of course, tourism is just one element in a complex web of issues that affect the GBR and needs to be part of a wider consideration of the reef’s future.

The only thing that is certain is that more needs to be done to ensure this critical ecosystem can survive, so that tourists who think this is the last chance to see it can hopefully be proved wrong.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

How the entire nation of Nauru almost moved to Queensland

Mon, 2016-08-15 06:16
Nauru's parliament would have been rebuilt in Queensland, but with less power. CdaMVvWgS/Wikimedia Commons

Nauru is best known to most Australians as the remote Pacific island where asylum seekers who arrive by boat are sent. What is less well known is that in the 1960s, the Australian government planned to relocate the entire population of Nauru to an island off the Queensland coast.

The irony of this is striking, especially in light of continuing revelations that highlight the non-suitability of Nauru as a host country for refugees. It also provides a cautionary tale for those considering wholesale population relocation as a “solution” for Pacific island communities threatened by the impacts of climate change.

Extensive phosphate mining on Nauru by Australia, Britain and New Zealand during the 20th century devastated much of the country. The landscape was so damaged that scientists considered it would be uninhabitable by the mid-1990s. With the exorbitant cost of rehabilitating the island, relocation was considered the only option.

In 1962, Australia’s prime minister Robert Menzies acknowledged that the three nations had a “clear obligation … to provide a satisfactory future for the Nauruans”, given the large commercial and agricultural benefits they had derived from Nauru’s phosphate. This meant “either finding an island for the Nauruans or receiving them into one of the three countries, or all of the three countries”.

That same year, Australia appointed a Director of Nauruan Resettlement to comb the South Pacific looking for “spare islands offering a fair prospect”. Possible relocation sites in and around Fiji, Papua New Guinea, the Solomon Islands, and Australia’s Northern Territory were explored, but were ultimately deemed inappropriate. There weren’t enough job opportunities and there were tensions with the locals.

Fraser Island in Queensland was also considered, but the Australian government decided it didn’t offer sufficiently strong economic prospects to support the population. The Nauruans thought this was a convenient excuse (and archival materials show that the timber industry was fiercely opposed).

The Curtis solution

In 1963, Curtis Island near Gladstone was offered as an alternative. Land there was privately held, but the Australian government planned to acquire it and grant the Nauruans the freehold title. Pastoral, agricultural, fishing and commercial activities were to be established, and all the costs of resettlement, including housing and infrastructure, were to be met by the partner governments at an estimated cost of 10 million pounds – around A$274 million in today’s terms.

But the Nauruans refused to go. They did not want to be assimilated into White Australia and lose their distinctive identity as a people. Many also saw resettlement as a quick-fix solution by the governments that had devastated their homeland, and a cheap option compared with full rehabilitation of the island.

Australia also refused to relinquish sovereignty over Curtis Island. While the Nauruans could become Australian citizens, and would have the right to “manage their own local administration” through a council “with wide powers of local government”, the island would officially remain part of Australia.

Frustrated that what it perceived as a genuine and generous attempt to meet the wishes of the Nauruan people had been rejected, the Menzies government insisted it wouldn’t change its mind.

So the Nauruans stayed put.

Nauru’s phosphate industry has left the landscape scarred and useless for agriculture. CdaMVvWgS/Wikimedia Commons

The issue briefly resurfaced in 2003 when Australia’s foreign minister Alexander Downer once again suggested wholesale relocation as a possible strategy, given that Nauru was “bankrupt and widely regarded as having no viable future”. Nauru’s president dismissed the proposal, reiterating that relocating the population to Australia would undermine the country’s identity and culture.

Planned relocations in the Pacific

Today, “planned relocation” is touted as a possible solution for low-lying Pacific island countries, such as Kiribati and Tuvalu, which are threatened by sea-level rise and other long-term climate impacts.

But past experiences in the Pacific, such as the relocation of the Banabans in 1945 from present-day Kiribati to Fiji, show the potentially deep, intergenerational psychological consequences of planned relocation. This is why most Pacific islanders see it as an option of last resort. Unless relocation plans result from a respectful, considered and consultative process, in which different options and views are seriously considered, they will always be highly fraught.

Nauru today is at the highest level of vulnerability on the Environmental Vulnerability Index. The past destruction wrought by phosphate mining has rendered the island incapable of supporting any local agriculture or industry, with 90% of the land covered by limestone pinnacles.

It has a very high unemployment rate, scarce labour opportunities, and virtually no private sector – which is why the millions of dollars on offer to host Australia’s offshore processing centres were so attractive. These factors also illustrate why the permanent resettlement of refugees on Nauru is unrealistic and unsustainable.

Nauru’s future seems sadly rooted in an unhealthy relationship of co-dependency with Australia, as its territory is once again exploited, at the expense of the vulnerable. And as the story of Curtis Island shows, there are no simple solutions, whether well-intentioned or not.

This is an overview of a longer article published in Australian Geographer.

The Conversation

Jane McAdam receives funding from the Australian Research Council and the Research Council of Norway. She is engaged in several international policy processes aimed at developing strategies to address human mobility in the context of climate change and disasters.

Categories: Around The Web

People power is the secret to reliable, clean energy

Fri, 2016-08-12 17:06
Australia will likely have to close more coal power stations to meet climate targets. Coal power image from www.shutterstock.com

Australia’s energy watchdog, the Australian Energy Market Operator (AEMO), has issued a stark warning: more wind and solar power will demand new approaches to avoid interruptions to electricity supply.

In its annual Electricity Statement of Opportunities, released this week, AEMO indicated that the overall outlook for reliability has improved. So far, so good.

However, South Australia, Victoria, and New South Wales are potentially at greater risk of interruptions within ten years if the current trend of shutting down old coal-fired power stations accelerates, as we can expect from Australia’s efforts to meet national and international climate targets.

AEMO projections of supply. 2016 AEMO Electricity Statement of Opportunities

The threat of power blackouts is reliable headline fodder as seen in yesterday’s Australian Financial Review. But the solution to this very real challenge is not to cling to ageing fossil fuel power stations.

Rather, as AEMO Chief Operating Officer Mike Cleary put it:

possible solutions could include an increased interconnection across [the electricity market], battery storage, and demand side management services.

While there is much excitement about battery technology, it is the oft-forgotten human dimension that offers the greatest potential. We consumers, the so-called “demand side” of the market, can play a crucial role in reducing the strain on the electricity network, which will in turn make for more reliable power.

The biggest variability that the electricity sector has to contend with is not intermittent solar or wind generation output, but the ups and downs of power demand.

People power

Helping business and household consumers manage their demand for power (or “demand management”) is a win-win scenario – lower costs for electricity and a stronger electricity system. Demand management and energy efficiency are key elements in lifting Australia’s energy productivity. Lifting energy productivity means we do not need to slow down the transition to a low-carbon economy.

Research from the University of Technology Sydney’s Institute for Sustainable Futures (ISF) that supported GetUp!’s Homegrown Power Plan highlights that we can not only retire coal power to achieve our climate targets, but also shift entirely to 100% renewable electricity generation by 2030.

However, to do this affordably we need to get smarter about saving energy and supporting the grid. GetUp!’s plan factored in a target to double Australia’s energy productivity by 2030, as advocated by the Australian Alliance to Save Energy and ClimateWorks.

Despite the potential, neither AEMO nor any other institution is tasked with assessing demand management opportunities that would strengthen the network and promote renewables. Work is needed to understand this demand-side opportunity, just as AEMO’s latest report does for electricity supply.

It may also be time to revisit AEMO’s 2013 modelling on 100% renewables that did not factor in major energy productivity gains.

Switching up

The importance of demand management has been recognised since the dawn of the National Electricity Market in 1992. But this potential has never been properly tapped.

Happily, there are signs that this is finally changing. For example, the Australian Energy Regulator has announced a process to design a Demand Management Incentive Scheme. This will provide an incentive for electricity networks to help consumers reduce demand and cut energy costs.

The more progressive energy systems worldwide have already incorporated energy productivity into their energy policies and strategies. Germany is implementing its National Action Plan for Energy Efficiency as one of the twin pillars of its Energiewende (energy transition). And a 23% reduction in buildings' energy consumption by 2030 is one of the three key targets to achieve New York’s “Reforming the Energy Vision”.

The International Energy Agency (IEA) has also recommended energy efficiency improvements as its first measure to achieve peak energy emissions by 2020, in tandem with a US$130 billion increase in renewables investment. This “bridge” scenario was the IEA’s contribution to the 2015 Paris climate summit.

Policy measures recommended under the IEA Bridge Scenario. OECD/IEA 2015 World Energy Outlook Special Report 2015: Energy and Climate Change, IEA Publishing

Global energy-related GHG emissions reduction by policy measure in the Bridge Scenario relative to the INDC Scenario. OECD/IEA 2015 World Energy Outlook Special Report 2015: Energy and Climate Change, IEA Publishing

Time for Australia to get serious

It is now time for Australia to embrace the link between demand management, energy productivity and renewable energy. We need these to work together so that we can achieve our carbon reduction goals while protecting electricity security and economic growth.

We have taken a good first step, releasing a comprehensive National Energy Productivity Plan at the end of 2015. It is not as ambitious as the Homegrown Power Plan proposes (it seeks only a 40% improvement between 2015 and 2030), but it is a step in the right direction.

What's missing, as RMIT energy researcher Alan Pears points out, are the resources to make it happen: no additional funding has been allocated to deliver the plan's 34 recommended measures.

We can unlock Australia’s energy productivity potential. And we can have a clean, affordable and reliable electricity system. But this will not happen by accident.

Let’s encourage our utilities to engage energy consumers in providing the solutions. In other words, power to the people.

The Conversation

The Institute for Sustainable Futures at the University of Technology Sydney undertakes paid sustainability research for a wide range of government, NGO and corporate clients, including energy businesses.

The $8.2 billion water bill to clean up the Barrier Reef by 2025 – and where to start

Fri, 2016-08-12 06:04

In 2015, the Australian and Queensland governments agreed on targets to greatly reduce the sediment and nutrient pollutants flowing onto the Great Barrier Reef.

What we do on land has a real impact out on the reef: sediments can smother the corals, while high nutrient levels help to trigger more regular and larger outbreaks of crown-of-thorns starfish. This damage leaves the Great Barrier Reef even more vulnerable to climate change, storms, cyclones and other impacts.

Dealing with water quality alone isn’t enough to protect the reef, as many others have pointed out before. But it is an essential ingredient in making it more resilient.

The water quality targets call for sediment runoff to be reduced by up to 50% below 2009 levels by 2025, and for nitrogen levels to be cut by up to 80% over the same period. But so far, detailed information about the costs of achieving these targets has not been available.

Both the Australian and Queensland governments have committed more funding to improve water quality on the reef. In addition, the Queensland government established the Great Barrier Reef Water Science Taskforce, a panel of 21 experts from science, industry, conservation and government, led by Queensland Chief Scientist Geoff Garrett and funded by Queensland’s Department of Environment and Heritage Protection.

New work commissioned by the taskforce now gives us an idea of the likely cost of meeting those reef water quality targets.

This groundbreaking study, which drew on the expertise of water quality researchers, economists and “paddock to reef” modellers, has found that investing A$8.2 billion would get us to those targets by the 2025 deadline, albeit with a little more to be done in the Wet Tropics.

That A$8.2 billion figure is around half the A$16 billion to A$17 billion estimated in a draft-for-comment report produced in May 2016, which was reported by the ABC and other media.

Those draft figures did not account for the reductions in pollution already achieved between 2009 and 2013, and they counted the full cost of some measures even where these overshot the targets. A review process identified these issues, and the revised modelling now gives a more accurate estimate of what it would cost to deliver the targets using the knowledge and technology available today.

A future for farming

Importantly, the research confirms that a well-managed agricultural sector can continue to coexist with a healthy reef through improvements to land management practices.

Even more heartening is the report’s finding that we can get halfway to the nitrogen and sediment targets by spending around A$600 million in the most cost-effective areas. This is very important because prioritising these areas enables significant improvement while allowing time to focus on finding solutions that will more cost-effectively close the remaining gap.

Among those priority solutions are improving land and farm management practices, such as adopting best management practices among cane growers to reduce fertiliser loss, and in grazing to reduce soil loss.

While these actions have been the focus of many water quality programs to date, much more can be done. For example, we can significantly reduce pollutant loads in the Great Barrier Reef catchments by lifting adoption rates much higher and making larger improvements to practices such as maintaining grass cover in grazing areas and reducing, and better targeting, fertiliser use in cane and other cropping. These activities will be a focus of the two major integrated projects that will flow from the taskforce's recommendations.

A new agenda

The new study, produced by environmental consultancy Alluvium and a range of other researchers (and for which I was one of the external peer reviewers), is significant because nothing on this scale involving the Great Barrier Reef and policy costings has been done before.

Guidelines already released by the taskforce tell us a lot about what we need to do to protect the reef. Each of its ten recommendations now has formal government agreement and implementation has begun.

Alluvium’s consultants and other experts who contributed to the study – including researchers from CQ University and James Cook University – were asked to investigate how much could be achieved, and at what price, by action in the following seven areas:

  1. Land management practice change for cane and grazing

  2. Improved irrigation practices

  3. Gully remediation

  4. Streambank repair

  5. Wetland construction

  6. Changes to land use

  7. Urban stormwater management

Those seven areas for potential action were chosen, on the basis of modelling data and expert opinion, as the most feasible ways to deliver the level of change required to meet the targets. By modelling the cost of delivering each action and the resulting change in nutrients and sediments entering the reef, the consultants were able to rank activities from cheapest to most expensive across five catchment areas (Wet Tropics, Burdekin, Mackay-Whitsunday, Fitzroy and Burnett Mary).
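
The prioritisation logic can be illustrated with a minimal sketch: rank candidate actions by cost per unit of pollutant avoided, then fund them in order until the target is reached. The actions, costs and load reductions below are invented placeholders, not figures from the Alluvium study.

  # Rank candidate actions by cost-effectiveness ($m per tonne of pollutant
  # avoided) and accumulate spending until a reduction target is met.
  # All numbers are hypothetical, for illustration only.

  actions = [
      # (name, cost in $m, pollutant reduction in tonnes per year)
      ("Cane fertiliser management, Wet Tropics",  150, 1200),
      ("Grazing land management, Burdekin",        300, 1800),
      ("Urban stormwater upgrades",                400,  300),
      ("Constructed wetlands",                     900,  500),
      ("Gully remediation, Fitzroy",              2500,  900),
  ]

  target_reduction = 3000  # tonnes per year (hypothetical target)

  ranked = sorted(actions, key=lambda a: a[1] / a[2])  # cheapest abatement first

  spent, achieved = 0.0, 0.0
  for name, cost, reduction in ranked:
      if achieved >= target_reduction:
          break
      spent += cost
      achieved += reduction
      print(f"{name}: {cost / reduction:.2f} $m/t "
            f"(cumulative ${spent:.0f}m, {achieved:.0f} t/yr)")

Ranking of this kind is why the first few hundred million dollars buy a disproportionately large share of the pollutant reduction, while the final steps towards the target are far more expensive.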

Alluvium’s study confirmed the water science taskforce’s recommendation that investing in some catchments and activities along the Great Barrier Reef is likely to prove more valuable than in others, in both an environmental and economic sense.

Some actions have much lower costs and are more certain; these should be implemented first. Other actions are much more expensive. Of the total A$8.2 billion cost of meeting the targets, two-thirds (A$5.59 billion) could be spent on addressing gully remediation in just one water catchment (the Fitzroy region). Projects with such high costs are impractical and highly unlikely to be implemented at the scale required.

The Alluvium study suggests we would be wise not to invest too heavily in some costly repair measures such as wetland construction for nutrient removal just yet – at least until we have exhausted all of the cheaper options, tried to find other cost-effective ways of reaching the targets, and encouraged innovative landholders and other entrepreneurs to try their hand at finding ways to reduce costs.

The value of a healthier reef

The A$8.2 billion funding requirement between now and 2025 is large, but let’s look at it in context. It’s still significantly less than the A$13 billion that the Australian government is investing in the Murray-Darling Basin.

It would also be an important investment in protecting the more than A$5 billion a year that the reef generates for the Australian economy and for Queensland communities.

The immediate focus should be on better allocating available funds and looking for more effective solutions to meet the targets to protect the reef. More work is still needed to ensure we do so.

If we start by targeting the most cost-effective A$1 billion worth of measures, that should get us more than halfway towards achieving the 2025 targets. The challenge now is to develop new ideas and solutions to deliver those expensive final steps in improving water quality. The Alluvium report provides a valuable long-term tool for ensuring that the most cost-effective interventions are chosen to protect the Great Barrier Reef.

This article was written with contributions from Geoff Garrett, Stuart Whitten, Steve Skull, Euan Morton, Tony Weber and Christine Williams.

Read more of The Conversation’s Great Barrier Reef coverage, including articles by experts including Jon Brodie and Ove Hoegh-Guldberg.

The Conversation

John Rolfe has previously received funding from the National Environmental Research Program and the National Environmental Science Program for economic studies evaluating the costs and benefits of reef protection.

Stopping land clearing and replanting trees could help keep Australia cool in a warmer future

Thu, 2016-08-11 15:53
Increasing land clearing could leave Australia hotter and drier. Wilderness Society

Land clearing is on the rise in Queensland and New South Wales, with land clearing laws being fiercely debated.

In Queensland in 2013–14, 278,000 hectares of native vegetation were cleared (1.2 times the size of the Australian Capital Territory). A further 296,000ha were cleared in 2014–15. These are the highest rates of deforestation in the developed world.

Land clearing on this scale is bad for a whole host of reasons. But our research shows that it is also likely to make parts of Australia warmer and drier, adding to the effects of climate change.

How do trees change the climate?

Land clearing releases greenhouse gases into the atmosphere, but the effect of land clearing on climate goes well beyond carbon emissions. It causes warming locally, regionally and even globally, and it changes rainfall by altering the circulation of heat and moisture.

Trees evaporate more water than any other vegetation type – up to 10 times more than crops and pastures. This is because trees have root systems that can access moisture deep within the soil. Crops and pastures have 70% of their roots in the top 30cm of the soil, while trees and other woody plants have 43% of their roots in the deeper part of the soil.

The increased evaporation and rough surface of trees creates moist, turbulent layers in the lower atmosphere. This reduces temperatures and contributes to cloud formation and increased rainfall. The increased rainfall then provides more moisture to soils and vegetation.

The clearing of deep-rooted native vegetation for shallow-rooted crops and pastures diminishes this process, resulting in a warmer and drier climate.

We can see this process at work along the “bunny fence” in southwest Western Australia, where there is a moister atmosphere and more clouds over native vegetation compared with nearby farming areas during summer.

Studies in Amazonia also indicate that as deforestation expands rainfall declines. A tipping point may be reached when deforestation reaches 30-50%, after which rainfall is substantially reduced. Complete deforestation results in the greatest decline in rainfall.

More trees, cooler moister climate

We wanted to know how land clearing could affect Australia’s climate in the future. We did this by modelling two scenarios for different amounts of land clearing, using models developed by CSIRO.

In the first scenario, crops and pasture expand in the semi-arid regions of eastern and southwest Australia. The second scenario limits crops and pastures to highly productive lands, and partially restores less productive lands to savanna woodlands.

We found that restoring trees to parts of Australia would reduce surface temperatures by up to 1.6℃, especially in western Queensland and NSW.

We also found that more trees reduced the overall climate-induced warming from 4.1℃ to 3.2℃ between 2050 and 2100.

Replanting trees could increase summer rainfall by 10% overall and by up to 15.2% in the southwest. We found soil moisture would increase by around 20% in replanted regions.

Our study doesn’t mean replanting all farmed land with trees, just areas that are less productive and less cost-effective to farm intensively. In our scenario, the areas that are restored in western Queensland and NSW would need a tree density of around 40%, which would allow a grassy understorey to be maintained. This would allow some production to continue such as cattle grazing at lower numbers or carbon farming.

Political and social challenges

Limiting land clearing represents a major challenge for Australia’s policymakers and farming communities.

The growing pressure to clear reflects a narrow economic focus on achieving short- to medium-term returns by expanding agriculture to meet the growing global demand for food and fibre.

However, temperatures are already increasing and rainfall is decreasing over large areas of eastern and southwest Australia. Tree clearing coupled with climate change will make growing crops and raising livestock even harder.

Balancing farming with managing climate change would give land owners on marginal land new options for income generation, while the most efficient agricultural land would remain in production. This would need a combination of regulation and long-term financial incentives.

The climate benefits of limiting land clearing must play a bigger part in land management as Australia’s climate becomes hotter and drier. Remnant vegetation needs to be conserved and extensive areas of regrowth must be allowed to regenerate. And where regeneration is not possible, we’ll have to plant large numbers of trees.

The Conversation

Clive McAlpine receives funding from The Australian Research Council and the Queensland Government

Jozef Syktus receives funding from the Australian Research Council and the Queensland Government

Leonie Seabrook receives funding from the Australian Research Council.

The Galileo gambit and other stories: the three main tactics of climate denial

Thu, 2016-08-11 06:05
Galileo was right, but that doesn't mean his fans are. Justus Sustermans/Wikimedia Commons

The recently elected One Nation senator from Queensland, Malcolm Roberts, fervently rejects the established scientific fact that human greenhouse gas emissions cause climate change, invoking a familiar set of conspiracy theories to support this belief.

Roberts variously claims that the United Nations is trying to impose world government on us through climate policy, and that CSIRO and the Bureau of Meteorology are corrupt institutions that, one presumes, have fabricated the climate extremes that we increasingly observe all over the world.

In the world of Malcolm Roberts, these agencies are marionettes of a “cabal” of “the major banking families in the world”. Given the parallels with certain strands of anti-Jewish sentiment, it’s perhaps an unfortunate coincidence that Roberts has reportedly relied on a notorious Holocaust denier to support this theory.

It might be tempting to dismiss his utterances as conspiratorial ramblings. But they can teach us a great deal about the psychology of science denial. They also provide us with a broad spectrum of diagnostics to spot pseudoscience posing as science.

The necessity of conspiracism

First, the appeal to a conspiracy among scientists, bankers and governments is never just a slip of the tongue but a pervasive and necessary ingredient of the denial of well-established science. The tobacco industry referred to medical research on lung cancer as being conducted by an “oligopolistic cartel” that “manufactures alleged evidence”. Some people accuse the US Central Intelligence Agency (CIA) of creating and spreading AIDS, and much anti-vaccination content on the web is suffused with conspiratorial allegations of totalitarianism.

This conspiratorial mumbo jumbo inevitably arises when people deny facts that are supported by an overwhelming body of evidence and are no longer the subject of genuine debate in the scientific community, having already been tested thoroughly. As evidence mounts, there comes a point at which inconvenient scientific findings can only be explained away by recourse to huge, nebulous and nefarious agendas such as the World Government or Stalinism.

If you are addicted to nicotine but terrified of the effort required to give up smoking, it might be comforting instead to accuse medical researchers of being oligopolists (whatever that means).

Likewise, if you are a former coal miner, like Malcolm Roberts, it is perhaps easier to accuse climate scientists of colluding to create a world government (whatever that is) than to accept the need to take coal out of our economy.

There is now ample research showing the link between science denial and conspiracism. This link is supported by independent studies from around the world.

Indeed, the link is so established that conspiracist language is one of the best diagnostic tools you can use to spot pseudoscience and science denial.

The Galileo gambit

How else can science dissenters attempt to justify their contrarian position? Another tactic is to appeal to heroic historical dissenters, the usual hero of choice being Galileo Galilei, who overturned the orthodoxy that everything revolves around the Earth.

This appeal is so common in pseudoscientific quackery that it is known as the Galileo gambit. The essence of this argument is:

They laughed at Galileo, and he was right.

They laugh at me, therefore I am right.

A primary logical difficulty with this argument is that plenty of people are laughed at because their positions are absurd. Being dismissed by scientists doesn’t automatically entitle you to a Nobel Prize.

Another logical difficulty with this argument is that it implies that no scientific opinion can ever be valid unless it is rejected by the vast majority of scientists. Earth must be flat because no scientist other than a Googling Galileo in Gnowangerup says so. Tobacco must be good for you because only tobacco-industry operatives believe it. And climate change must be a hoax because only the heroic Malcolm Roberts and his Galileo Movement have seen through the conspiracy.

Yes, Senator-elect Roberts is the project leader of the Galileo Movement, which denies the scientific consensus on climate change, favouring instead the opinions of a pair of retired engineers and the radio personality Alan Jones.

Any invocation of Galileo’s name in the context of purported scientific dissent is a red flag that you’re being fed pseudoscience and denial.

The sounds of science

The rejection of well-established science is often couched in sciency-sounding terms. The word “evidence” has assumed a particular prominence in pseudoscientific circles, perhaps because it sounds respectable and evokes images of Hercule Poirot tenaciously investigating dastardly deeds.

Since being elected, Roberts has again aired his claim that there is “no empirical evidence” for climate change.

But “show us the evidence” has become the war cry of all forms of science denial, from anti-vaccination activists to creationists, despite the existence of abundant evidence already.

This co-opting of the language of science is a useful rhetorical device. Appealing to evidence (or a lack thereof) seems reasonable enough at first glance. Who wouldn’t want evidence, after all?

It is only once you know the genuine state of the science that such appeals are revealed to be specious. Literally thousands of peer-reviewed scientific articles and the national scientific academies of 80 countries support the pervasive scientific consensus on climate change. Or, as the environmental writer George Monbiot has put it:

It is hard to convey just how selective you have to be to dismiss the evidence for climate change. You must climb over a mountain of evidence to pick up a crumb: a crumb which then disintegrates in the palm of your hand. You must ignore an entire canon of science, the statements of the world’s most eminent scientific institutions and thousands of papers published in the foremost scientific journals.

Accordingly, my colleagues and I recently showed that in a blind test – the gold standard of experimental research – contrarian talking points about climate indicators were uniformly judged to be misleading and fraudulent by expert statisticians and data analysts.

Conspiracism, the Galileo gambit and the use of sciency-sounding language to mislead are the three principal characteristics of science denial. Whenever one or more of them is present, you can be confident you’re listening to a debate about politics or ideology, not science.

The Conversation

Stephan Lewandowsky receives funding from the Australian Research Council, the Royal Society, and the Psychonomic Society.

Hunting, fishing and farming remain the biggest threats to wildlife

Thu, 2016-08-11 06:05
Snow leopards are just one of the species still threatened by hunting. from www.shutterstock.com

History might judge the Paris climate agreement to be a watershed for all humanity. If nations succeed in halting runaway climate change, this will have enormous positive implications for life on Earth.

Yet as the world applauds a momentous shift toward carbon neutrality and hope for species threatened by climate change, we can’t ignore the even bigger threats to the world’s wildlife and ecosystems.

Climate change threatens 19% of globally threatened and near-threatened species – including Australia’s critically endangered mountain pygmy possum and the southern corroboree frog. It’s a serious conservation issue.

Yet our new study, published in Nature, shows that by far the largest current hazards to biodiversity are overexploitation and agriculture.

The biggest threats to the world’s wildlife. Sean Maxwell et al.

The cost of overexploitation and agriculture

We assessed nearly 9,000 species listed on the International Union for the Conservation of Nature’s Red List of Threatened Species. We found that 72% are threatened by overexploitation and 62% by agriculture.

Overexploitation (the unsustainable harvest of species from the wild) is putting more species on an extinction pathway than any other threat.

And the expansion and intensification of agriculture (the production of food, fodder, fibre and fuel crops; livestock; aquaculture; and the cultivation of trees) is the second-largest driver of biodiversity loss.

Hunting and gathering is a threat to more than 1,600 species, including many large carnivores such as tigers and snow leopards.

Unsustainable logging is driving the decline of more than 4,000 species, such as Australia’s Leadbeater’s possum, while more than 1,000 species, including southern bluefin tuna, are losing out to excessive fishing pressure.

Land change for crop farming and timber plantations imperils more than 5,300 species, such as the far eastern curlew, while the northern hairy-nosed wombat is one of more than 2,400 species affected by livestock farming and aquaculture.

The far eastern curlew is threatened by farming. Curlew image from www.shutterstock.com

The threat information used to inform our study is the most comprehensive available. But it doesn’t tell the complete story.

Threats are likely to change in the future. Climate change, for example, will become increasingly problematic for many species in coming decades.

Moreover, threats to biodiversity rarely operate in isolation. More than 80% of the species we assessed are facing more than one major threat.
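
As a minimal sketch of the tallying behind such percentages, the snippet below counts the share of species affected by each major threat, and the share facing more than one, using a made-up Red List-style dataset. The species and threat assignments are invented for illustration; the real analysis covered nearly 9,000 assessed species.

  from collections import Counter

  # Toy Red List-style records: species -> set of major threats.
  # All entries are invented placeholders, not real assessments.
  species_threats = {
      "species A": {"overexploitation", "agriculture"},
      "species B": {"overexploitation"},
      "species C": {"agriculture", "urban development"},
      "species D": {"overexploitation"},
      "species E": {"agriculture", "overexploitation"},
  }

  n = len(species_threats)
  threat_counts = Counter(t for threats in species_threats.values() for t in threats)

  # Share of species affected by each threat category.
  for threat, count in threat_counts.most_common():
      print(f"{threat}: {count}/{n} species ({count / n:.0%})")

  # Share of species facing more than one major threat.
  multi = sum(1 for threats in species_threats.values() if len(threats) > 1)
  print(f"Facing more than one major threat: {multi}/{n} ({multi / n:.0%})")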

Through threat interactions, smaller threats can indirectly drive extinction risk. Roads and energy production, for example, are known to facilitate the emergence of overexploitation, land modification and habitat loss.

But until we have a better understanding of how threats interact, a pragmatic course of action is to limit those impacts that are currently harming the most species.

By ensuring that major threats that occur today (overexploitation, agriculture and so on) do not compromise ecosystems tomorrow, we can help to ameliorate the challenges presented by impending climate change.

Getting it right

Overexploitation and agriculture demand a variety of conservation approaches. Traditional approaches, such as well-placed protected areas and the enforcement of hunting, logging and fishing regulations, remain the strongest defence against the ravages of guns, nets and bulldozers.

Achieving a truly effective protected area network is impossible, however, when governments insist on relegating protected areas to “residual” places – those with least promise for commercial uses.

Reducing impacts from overexploitation of forests and fish is also futile unless industries that employ clearfell logging and illegal fishing vessels transition to more environmentally sustainable practices.

Just as critical as traditional approaches are incentives for hunters, fishers and farmers to conserve threatened species outside designated conservation areas.

Australia’s Leadbeater’s possum remains threatened by logging. Greens MPs/Flickr, CC BY-NC-ND

For nations like Australia, our study shows a growing mismatch between environmental policy and outcomes for biodiversity. Environmental programs such as the once well-funded National Reserve System Strategy and the Biodiversity Fund were important because they helped conserve wildlife on private and public land, and were fundamental to tackling the biggest prevailing threats to Australia's biodiversity. But these programs either no longer exist or have little funding to support them at state and federal levels.

On top of this, land clearing – without doubt one of the largest threats to biodiversity in Australia – is on the rise because clearing laws have been repealed across the country. Any benefits accrued through previous good environmental programs are being eroded.

If we are to seriously tackle the threats to biodiversity in Australia, we need to recognise the biggest ones and act accordingly. This means that efforts to reduce threats from agriculture and from the overexploitation of forests and fish must include durable environmental regulation.

This article was co-authored by Thomas Brooks, head of science and knowledge at the International Union for the Conservation of Nature.

The Conversation

Sean Maxwell receives funding from the ARC Centre of Excellence for Environmental Decisions.

James Watson receives funding from the Australian Research Council. He is Director of the Science and Research Initiative at the Wildlife Conservation Society.

Richard Fuller receives funding from the Australian Research Council.
