The Conversation


Mount Isa contamination 'within guidelines' but residents told to clean their homes

Tue, 2017-02-21 13:48

After an 11-year wait, Mount Isa Mines has released the official report into the lead contamination that has blighted the city for decades.

The report, commissioned by the mine’s owner, Glencore, and produced by researchers at the University of Queensland, says that household dust contaminated by airborne lead from the mining and smelting operations is the dominant source of the city’s exposure.

In some respects this marks an important shift in the industry’s acceptance of the problem. Yet the report goes on to argue that Mount Isa residents are nevertheless responsible for keeping themselves, their houses and their children free from dust, thus putting the onus back on them to avoid exposure to the contamination.

A history of excuses

This is the latest iteration in the decade-long evolution of Mount Isa Mines’ arguments rebutting research that linked the contamination to its mining and smelting operations.

Back in 2007, when owned by Xstrata, Mount Isa Mines stated that the contamination was “naturally occurring”. We have previously termed this the “miner’s myth” – the idea that contamination surrounding a mine is a product of natural geology and weathering rather than the mining activity itself.

Before Mount Isa Mines was taken over by Glencore in 2013, the company admitted that Mount Isa was affected by “industrial mineralisation” (industry-speak for contamination from emissions), but also said that the contamination was partly due to natural sources in the city’s soils and rocks.

We and our colleagues have produced more than 20 studies documenting environmental contamination and its management in the Mount Isa region, dating back to 2005 when the Leichhardt River, which supplies drinking water to Mount Isa, was found to be contaminated with lead and other metals. Since then, we have detailed contamination in local sediments, water and soils, and used isotope fingerprinting to pinpoint the likely source; none of this research was mentioned in the new report.

Despite the welcome admission that the company is indeed contaminating Mount Isa, the report caveats this by saying that the risk of direct inhalation of lead emitted into the air is low. It states that exposure arises mainly when children come into contact with lead-contaminated surfaces in their homes – chiefly carpets. For Mount Isa residents, these comments do not fully capture the real challenges they face in protecting themselves and their families.

Passing the buck

The report offers the following advice to residents attempting to keep their exposure as low as possible:

  • keep a “clean home environment”

  • consider replacing carpets with timber or other hard floors, and clean them with phosphate-based agents

  • wash children’s hands frequently and before meals, and encourage very young children not to suck non-food items

  • wash all homegrown fruit and vegetables, and peel root vegetables, before cooking and/or eating.

The implied argument is essentially that, despite the contamination, if you do the right thing (such as keeping your house clean) there is no problem.

The obvious rebuttal to this is that if there were no industrial lead in the community, there would be no problem at all. The root cause of the issue is not the natural hand-to-mouth behaviours of children but the pervasive, persistent and permanent arsenic, cadmium and lead contamination that penetrates everything they touch: clothes, toys, food, floors and furnishings.

The rates of lead dust deposition are such that people living closest to the smelters would have to wash their backyards and indoor surfaces several times a day to keep toxic dust levels within acceptable guidelines. Cleaning one’s house more than once a day, especially while working or looking after little children, is nearly impossible to maintain even over a few days, never mind a lifetime. While the advice to keep houses, hands and surfaces clean is not unreasonable in itself, the evidence suggests that it is of little use in preventing lead exposure.

How serious is the exposure?

Mount Isa’s schoolchildren are performing well below the national average, according to standardised testing data from the first full year of school. Similar outcomes have been seen in Broken Hill, another of Australia’s major lead mining towns. Children in North Mount Isa, the area nearest the smelter, did worse than in other areas of the city.

Mount Isa’s children have an average blood lead level of about 35 parts per billion – about three times higher than normal. A 2015 study of children from Broken Hill and Port Pirie showed that an increase in blood lead from 10 to 100 parts per billion can reduce IQ by 13.5 points. Relevantly, low exposures cause proportionally more harm, which is why it is important for children to be protected from any lead contamination at all.
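The claim that low exposures cause proportionally more harm is commonly captured with a log-linear dose-response model in the blood lead literature. As a rough illustration only – the log-linear form is an assumption here, not a model stated in the report – fitting the quoted figures gives:

$$\Delta \mathrm{IQ} = -\beta \ln(B_2/B_1), \qquad \beta = \frac{13.5}{\ln(100/10)} \approx 5.9 \ \text{IQ points per log-unit of blood lead}$$

On this sketch, a doubling of blood lead from just 10 to 20 parts per billion would already cost about 5.9 × ln 2 ≈ 4 IQ points – nearly a third of the full 13.5-point loss, from only a ninth of the increment.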

The report is clear that exposure happens as a result of contamination released into the air, which later settles as dust:

The major source of lead exposure is via ingestion in the community and is from air particulates (<250µm diameter) that are on the ground from deposition as fallout.

However, it goes on to say that the mine cannot be directly faulted for this, because the average rate of airborne emissions is within the guidelines outlined in its environmental permit. The report suggests that its modelled blood lead values do not match the actual values on children because they may be exposing themselves to extra lead by ingesting dirt, or through other sources such as lead-based paint, leaded petrol, or lead-acid batteries.

But this rationale fails to take into account the short-term spikes in emissions, which cause depositions that accumulate in soils and dusts, which in turn cause elevated blood lead exposures in children. The question could easily be answered by comparing the isotopic composition of lead from blood samples with that from the mine’s emissions. Disappointingly, the Glencore report did not undertake this critical analytical step to link environmental sources to actual exposures in children.

Another setback

Authorities have been aware of lead emissions from the Mount Isa smelter since the early 1930s. It was always a fanciful notion to suggest that emissions were not finding their way across the city and into homes, and that the contamination was somehow natural.

Intensive air monitoring in the community has continued for at least the past 40 years. Blood lead surveys and internal memos, along with environmental assessments from various government agencies, have provided significant prior knowledge of the nature, extent and cause of the problem. In 2010, Queensland’s then chief health officer, Jeannette Young, told The Australian newspaper:

I do know the cause; it is emissions being released from the mine. If you think where it is coming from, it is coming from emissions from the smelter that are going up in the air and they are depositing across the town fairly evenly.

Thus, in this sense, the latest study merely represents confirmation of what many people already knew.

Yet despite this overdue acknowledgement of the problem, the report implies that Glencore is not taking full ownership of the issue. The overriding message to Mount Isa’s residents is that it falls to them to keep themselves free from dangerous contamination.

In this sense, this is yet another setback in improving the living conditions for the community of Mount Isa, particularly young children who are the most vulnerable to the adverse and life-long effects of lead exposure.


Mark Patrick Taylor is affiliated with: Broken Hill Lead Reference Group. LEAD Group Inc. (Australia). NSW Government Lead Expert Working Group - Lead exposure management for suburbs around the former Boolaroo (NSW) Pasminco smelter site, Dec 2014–ongoing. Appointed by NSW Environment Minister to review NSW EPA’s management of contaminated sites, October 2015–ongoing. Macquarie’s VegeSafe project receives funding support via voluntary donations from the public, and cash and in-kind support for a broader evaluation of the use and application of field portable XRFs from Olympus Australia Pty Ltd and the National Measurement Institute, North Ryde, Sydney. In addition, MP Taylor has previously provided evidence-based expert reports and advice for Slater and Gordon Lawyers in regard to their court action against Mount Isa Mines.

Chenyin Dong is funded by the international Macquarie University Research Excellence Scholarship (iMQRES) and New South Wales Environmental Protection Authority scholarship (MQ9201600680).

Paul Harvey receives funding from a Macquarie University Research Excellence Scholarship (MQRES).


Labor's climate policy could remove the need for renewable energy targets

Tue, 2017-02-21 05:23

The federal Labor Party has sought to simplify its climate change policy. Any suggestion of expanding the Renewable Energy Target has been dropped. But there is debate over whether the new policy is actually any more straightforward as a result.

One thing Labor did confirm is its support for an emissions intensity scheme (EIS) as its central climate change policy for the electricity sector. This adds clarity to the position the party took to the 2016 election and could conceivably remove the need for a prescribed renewable energy target anyway.

An EIS effectively gives electricity generators a limit on how much carbon dioxide they can emit for each unit of electricity they produce. Power stations that exceed the baseline have to buy permits for the extra CO₂ they emit. Power stations with emissions intensities below the baseline create permits that they can sell.
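As a minimal sketch of the scheme’s cash flows (the baseline, permit price and plant intensities below are illustrative assumptions, not values from Labor’s policy):

```python
# Illustrative settlement under an emissions intensity scheme (EIS).
# Generators above the baseline must buy permits for their excess CO2;
# generators below it create permits they can sell.
# All figures are assumed for illustration.

BASELINE_T_PER_MWH = 0.8   # assumed baseline intensity, tonnes CO2 per MWh
PERMIT_PRICE = 15.0        # assumed permit price, dollars per tonne CO2

def eis_cash_flow(intensity_t_per_mwh: float, output_mwh: float) -> float:
    """Net permit cost (positive) or permit revenue (negative), in dollars."""
    excess_t = (intensity_t_per_mwh - BASELINE_T_PER_MWH) * output_mwh
    return excess_t * PERMIT_PRICE

# A brown-coal unit at ~1.2 t/MWh pays for the 0.4 t/MWh above the baseline;
# a wind farm at 0 t/MWh earns credits for the full 0.8 t/MWh below it.
print(round(eis_cash_flow(1.2, 1000.0)))  #  6000 dollars payable
print(round(eis_cash_flow(0.0, 1000.0)))  # -12000 dollars receivable
```

The per-megawatt-hour gap between the two plants is the cost differential described next.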

An EIS increases the cost of producing electricity from emissions-intensive sources such as coal generation, while reducing the relative cost of less polluting energy sources such as renewables. The theory is that this cost differential will help to drive a switch from high-emission to low-emission sources of electricity.

The pros and cons of an EIS, compared with other forms of carbon pricing, have been debated for years. But two things are clear.

First, an EIS with bipartisan support would provide the stable carbon policy that the electricity sector needs. The sector would be able to invest with more confidence, thus contributing to security of supply into the future.

Second, an EIS would limit the upward pressure on electricity prices, for the time being at least.

These reasons explain why there was a brief groundswell of bipartisan support for an EIS in 2016, until the Turnbull government explicitly ruled it out in December.

Moving targets

Another consideration is whether, with the right policy, there will be any need for firm renewable energy targets. This may help to explain Labor’s decision to rule out enlarging the existing scheme or extending it beyond 2020.

If we had a clear policy to reduce emissions at lowest cost, whether in the form of an EIS or some other scheme, renewable energy would naturally increase to whatever level is most economically efficient under those policy settings. Whether this reaches 50% or any other level would be determined by the overall emissions-reduction target and the relative costs of various green energy technologies.

In this scenario, a separately mandated renewable energy target would be simply unnecessary and would probably just add costs with no extra environmental benefit. Note that this reasoning would apply to state-based renewable energy policies, which have become a political football amid South Australia’s recent tribulations over energy security.

An EIS is also “technology agnostic”: power companies would be free to pursue whatever technology makes the most economic sense to them. Prime Minister Malcolm Turnbull explicitly endorsed this idea earlier this month.

Finally, an EIS would integrate well with the National Electricity Market, a priority endorsed by the COAG Energy Council of federal, state and territory energy ministers. State and territory governments may find this an attractive, nationally consistent alternative that they could support.

Strengths and weaknesses

A 2016 Grattan Institute report found that an EIS could be a practical step on a pathway from the current policy mess towards a credible energy policy. Yet an EIS has its weaknesses, and some of Labor’s reported claims for such a scheme will be tested.

In the short term, electricity prices would indeed rise, although not as much as under a cap-and-trade carbon scheme. It is naive to expect that any emissions-reduction target (either the Coalition’s 26-28% or Labor’s 45%) can be met without higher electricity costs.

Another difficulty Labor will have to confront is that setting the initial emission intensity baseline and future reductions would be tricky. The verdict of the Finkel Review, which is assessing the security of the national electricity market under climate change policies, will also be crucial.

Despite media reports to the contrary, Chief Scientist Alan Finkel and his panel have not recommended an EIS. Their preliminary report drew on earlier reports noting the advantages of an EIS over an extended renewable energy target or regulated closure of fossil-fuelled power stations, but also the fact that cap-and-trade would be cheaper to implement.

Labor has this week moved towards a credible climate change policy, although it still has work to do and its 45% emissions-reduction target will still be criticised as too ambitious. Meanwhile, we’re unlikely to know the Coalition government’s full policy until after it completes the 2017 Climate Change Policy Review and receives the Finkel Review’s final report.

Australians can only hope that we are starting to see the beginnings of the common policy ground that investors and electricity consumers alike so urgently need.


Tony Wood holds shares in energy and resources companies through his superannuation fund.


The anatomy of an energy crisis – a pictorial guide, Part 2

Mon, 2017-02-20 19:37

In the second in my series on the crisis besetting the National Electricity Market (NEM) in eastern Australia, I look at the tightening balance of supply and demand.

Australia’s NEM is witnessing an unprecedented rise in spot, or wholesale, prices as market conditions tighten in response to a range of factors.

Volume-weighted NEM spot prices by season from 2005 on. Note the extraordinarily elevated spot prices for the summer of 2017.

As shown above, spot prices are typically highest in summer, due in large part to the way extreme heat waves stretch demand. The historical summer average across the NEM is around $50/MWhour. As recently as 2012, summer prices were as low as $30/MWhour. With only a few days to go in the 2017 summer, prices are averaging a staggering $120/MWhour on a volume-weighted basis. Many factors have played a role, including hot weather, and the drivers vary from state to state.

In South Australia, the high prices have been accompanied by a series of rolling blackouts, culminating on 8th February. Spot prices are more than double last summer’s, on a volume-weighted basis, and three times those of the summer before. Volatility has increased markedly, as evidenced by the way the volume-weighted price has diverged from the average spot price.

Average spot prices (RRP) and volume-weighted prices (VWP) for the summer quarter in South Australia since 2000. The VWPs, shown in the lighter shades, are higher than the RRPs, because periods of high spot prices generally correlate with the increased volumes associated with high-demand events. The difference between the VWP and RRP is a measure of price volatility, which has increased from negligible in the summer of 2012 to significant in the summer of 2017. Note that for 2017, the data extend only up to February 18th, the time of writing. Note also that in the summers of 2013 and 2014, the carbon tax applied at the wholesale level. In that period, the effective price for a coal generator like Northern was reduced by around $20/MWhour relative to market prices.
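For readers unfamiliar with the two measures, the distinction is simple: the RRP averages the spot price over all trading intervals equally, while the VWP weights each interval’s price by the volume of energy traded in it:

$$\mathrm{RRP} = \frac{1}{N}\sum_{t=1}^{N} p_t, \qquad \mathrm{VWP} = \frac{\sum_{t=1}^{N} p_t\, q_t}{\sum_{t=1}^{N} q_t}$$

where p_t is the spot price and q_t the volume in trading interval t. The VWP rises above the RRP precisely when high prices coincide with high volumes, which is why the gap between the two serves as a volatility measure.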

But the price rises and security issues have not been restricted to South Australia, with Queensland and New South Wales experiencing steeper rises in percentage terms. Current Queensland volume-weighted prices are averaging $200/MWhour, some 300% above the long-term summer average.

Average spot prices (RRP), and volume-weighted prices (VWP) in lighter shades, for the summer quarter in Queensland since 2000.

On the 12th February new demand records were set in Queensland, with prices averaging $700/MWhour across the day. New South Wales narrowly averted load shedding on 10th February as temperatures and spot prices soared. So far, the exception has been Victoria, where summer prices have remained relatively subdued, at levels not far above the recent average.

Average spot prices (RRP), and volume-weighted prices (VWP) for the summer quarter in the four mainland regions in the NEM from 2012 on.

Demand and temperature

Demand for electrical power varies over a range of time-scales, from daily and weekly to seasonal, as well as with longer-term economic trends.
A key determinant of how much power is needed on any given day is the maximum daily temperature. As shown below, the maximum daily demand marks out a characteristic boomerang shape when plotted against maximum daily temperature. The boomerang bottoms out at temperatures of around 25°C, when air conditioning loads are at a minimum.

Boomerang pattern of maximum daily demand in South Australia and maximum daily temperature in Adelaide, by financial year (FY13-14 through FY16-17). Data sourced from the Bureau of Meteorology and from AEMO. Days with average spot prices above $500/MWhour (or about 10 times the NEM average) are identified by larger dots and are encircled. Recent days of exceptional spot prices across the NEM are also highlighted. The figures discriminate between weekdays and weekends, and exclude the Christmas - New Year period, when demand deviates from normal because of low industrial, commercial and public sector loads.

As illustrated above, demand increases significantly in response to heating loads as the weather cools below 20°C, and cooling loads as the weather warms above 30°C. The difference in demand across the weather cycles can be substantial. For example, in South Australia the maximum daily demand varies from around 1500 megawatts on a day with a maximum temperature of 25°C to around 3000 megawatts during heatwaves when temperatures exceed 40°C. With minimum daily loads under 1000 megawatts, this implies well over half the generation capacity in South Australia is needed only to meet peak demand on extreme days, with much of it sitting idle waiting for extreme hot weather events. To recoup costs in an energy-only market like the NEM, such peaking capacity demands that extreme pricing accompany its dispatch. In reality, to manage risks, such capacity is normally hedged with cap contracts at around $300/MWhour.
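The boomerang can be caricatured with a piecewise-linear model. The sketch below is anchored only on the South Australian figures quoted above (about 1500 megawatts at 25°C and about 3000 megawatts at 40°C); the heating-side slope and the flat bottom are assumptions for illustration.

```python
# Rough piecewise-linear caricature of the SA demand "boomerang":
# flat bottom around 25C, heating loads below 20C, cooling loads above 30C.

def peak_demand_mw(tmax_c: float) -> float:
    base_mw = 1500.0                              # trough demand near 25C (from the text)
    if tmax_c <= 20.0:
        return base_mw + 50.0 * (20.0 - tmax_c)   # assumed 50 MW per degree of heating load
    if tmax_c >= 30.0:
        return base_mw + 150.0 * (tmax_c - 30.0)  # 150 MW/degree reaches ~3000 MW at 40C
    return base_mw                                # flat bottom of the boomerang

for t in (15, 25, 35, 40, 45):
    print(t, peak_demand_mw(t))                   # -> 1750, 1500, 2250, 3000, 3750 (MW)
```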

Similar patterns apply in other states, although in percentage terms the range is less severe. In Queensland the increase between 25°C and 40°C days is about 2000 megawatts, or approximately 30%.

Boomerang pattern of maximum daily demand in Queensland and maximum daily temperature in Brisbane, by financial year (FY13-14 through FY16-17). Note the extreme conditions on Sunday 12th February.

A comparison of the figures above shows some subtle but important differences between the South Australian and Queensland markets. Notably, the diagrams show that annual demand in Queensland has been rising progressively over the last four years, while it has been static in South Australia. The extreme weather of Sunday 12th February set a new demand record in Queensland, well above any previous weekend day. In contrast, the 8th February peak in South Australia was lower than previous peaks. To understand why spot prices spiked to similar levels in the different regions requires a deeper dive into the local market conditions.

South Australian market dynamics

One reason for seasonal variability in prices is the natural variability in weather conditions, particularly the frequency and intensity of heat waves. As illustrated below, the 2017 summer in Adelaide has been rather normal in terms of weather extremes, with so far only six days above 40°C, compared to seven last summer and thirteen in the 2014 summer. To date, the mean maximum is around 29.7°C, more-or-less spot on the average over the last five years. As such, weather variability would not seem to be the key factor driving the recent dramatic rise in spot prices.

Proportional distribution of daily maximum temperatures in Adelaide (Kent Town) for the summer quarter, coloured by year. Data sourced from the Bureau of Meteorology.

The most significant change in the South Australian market last year was the closure in May of its last coal-fired power plant - Alinta’s 520 megawatt capacity Northern Power Station. Along with questions about long-term coal supply, Alinta’s decision to close had a lot to do with the low spot prices back in 2015.

Back then, spot prices were suppressed on the back of a fall in both domestic and industrial demand, as well as the addition of new wind farms into the supply mix. As shown below, the rapid uptake of solar PV in South Australia had reduced the demand for grid-based services, especially during summer, limiting price volatility and affecting generator revenue streams via a lowering of forward contract prices. In combination, these conditions made for a significant excess in generating capacity, or capacity overhang.

The plot of averaged demand by time of day, for the summer quarter, helps illustrate the way the uptake of domestic solar PV has reduced demand for grid-based electricity, cutting midday demand by ~30% (~500 megawatts) on average. Note that, as shown below, the demand on peak days is much higher, approaching 3000 megawatts.

Despite the falling average demand, and a changing load distribution, the peak demand during the recent heat wave reached 3045 megawatts in the early evening of 8th February (at 6 pm Eastern Australian Standard Time). That was 340 megawatts lower than the all-time South Australian peak of 3385 megawatts, set on 31st January 2011. The peak on February 8th was accompanied by a spot price of $13160/MWhour.

As above, but also showing the demand profile for the extreme day of 8 February 2017 (black dashed line), when South Australia suffered rolling blackouts due to load shedding, and the all-time high (red dashed line).

With the closure of Northern, any comparison with previous peak demand events should factor in the demand previously served by the Northern Power Station. Before its closure, Northern contributed around 420 megawatts of power on average over the summer months. Without that supply available this year, the February 8th peak effectively exceeded the previous peak by around 80 megawatts in adjusted terms.
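Putting the quoted figures together makes the adjustment explicit:

$$3045\ \text{MW (8 Feb 2017 peak)} + 420\ \text{MW (Northern's average summer supply)} = 3465\ \text{MW}$$

which exceeds the 3385 MW record of 2011 by the 80 megawatts cited above.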

Relative or adjusted peak demand records for South Australia, accounting for the load served by the Northern Power Station prior to its closure in May 2016.

Queensland market dynamics

Queensland has experienced a hot summer, with the maximum daily temperature in Brisbane reaching 37°C for the first time since 2014, and an average daily maximum of 31.2°C (at the time of writing). That is about one degree above the average of recent years. However, with only four days with a maximum temperature above 35°C, compared to five in the summer of 2015, weather effects seem unlikely to fully account for the extraordinary rise in spot prices this summer.

Proportional distribution of daily maximum temperatures in Brisbane for the summer quarter, coloured by year. Data sourced from the Bureau of Meteorology.

In detail, the Queensland market differs from the other regions in the NEM inasmuch as it is the only region to have experienced significant demand growth in recent years. Mapping the change in demand over the years, by time of day, helps reveal the drivers of market tightening, as shown below, first in absolute terms and then in relative terms normalised against 2014.

Queensland demand loads in megawatts by time of day for the summer quarter, for select years from 2010 to 2017.

Queensland demand anomalies in megawatts by time of day for the summer quarter, normalised against the summer quarter of 2014.

Between 2009 and 2014, summer demand fell by about 400 megawatts (or 6%), with the greatest change occurring in the middle of day. This pattern is akin to the signal in South Australia shown above, and reflects how the growing deployment of domestic rooftop PV was revealed to the market as a demand reduction.

Since 2014, demand has grown appreciably across all times of day, skewed somewhat towards the evening. Relative to 2014, demand is up by almost 800 megawatts across the board, and by as much as 1200 megawatts at 9 pm. The 800 megawatt base shift in demand can be attributed in large part to new industrial loads associated with the commissioning of the LNG export gas processing facilities at Curtis Island.

In terms of extreme events, it is notable that February 12th this year set a new Queensland demand record of 9368 megawatts at 5.30 pm (at the half hour settlement period), with a spot price of $9005/MWhour. This is extraordinary given it was a Sunday, a day which normally sees demand several percentage points below corresponding weekdays with similar temperature conditions.

Peak demand characteristics in Queensland highlighting the events of February 12th, when a new peak demand record was set at the 5.30 pm half hour settlement period.

What’s different about Victoria?

Victoria is the exception to the trend of rising spot prices, with the summer prices of 2017 not much above the long-term average. In part, the relatively subdued prices can be attributed to the absence of extreme heat in southern Victoria so far this summer. The mean maximum daily summer temperature in Melbourne stands at about 27°C, slightly below the average of the previous five years. So far there have been no days with temperatures above 40°C, compared to eight in 2014 and four in 2016.

Proportional distribution of daily maximum temperatures at Melbourne Airport for the summer quarter, coloured by year. Data sourced from the Bureau of Meteorology.

The dominant factor subduing Victorian market prices is likely to be the ongoing fall in demand. In the year to 18th February, demand in Victoria fell by 200 megawatts. This follows a persistent reduction in demand that has seen a fall of almost 500 megawatts over the last three years, equivalent to 9% of average demand. As shown below, the contrast with Queensland is stark, and reflects significant reductions in industrial demand stemming from the closure of the Point Henry aluminium smelter in August 2014 (Point Henry consumed up to 360 megawatts) and, more recently, reduced demand from the Portland smelter on the back of damage caused by an unscheduled power outage on December 1st, 2016. While power capacity in Victoria was reduced by the closure of the 150 megawatt Anglesea coal-fired power plant in August 2015, the cumulative demand reduction over the last decade has led to a substantial capacity overhang. All that is set to change with the closure of the 1600 megawatt Hazelwood power station, slated for the end of March.

Average demand for the year ending February 18th, for Victoria (coloured in blue) and Queensland (coloured in maroon).

Some emerging issues

The figures shown in the previous sections reveal that peak demand events are stretching the power capacity of the NEM in unprecedented ways, for a variety of reasons. The tightening in the demand-supply balance is driving steep price rises that, if sustained, will have widespread repercussions. For example, a $20/MWhour rise in the Queensland spot price translates to a notional annual market value of around $1 billion, which must eventually flow through the contract markets. With summer prices already more than $100/MWhour above last year’s, the additional costs to be passed on to energy consumers may well tally in the many billions of dollars.
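The arithmetic behind that notional figure implies annual Queensland consumption of roughly 50 terawatt-hours (50 million megawatt-hours); the volume is implied rather than stated above:

$$\$20/\text{MWh} \times 50{,}000{,}000\ \text{MWh/year} = \$1\ \text{billion per year}$$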

In South Australia, the market tightening follows substantially from the reduced supply stemming from the closure of the Northern Power Station.

In Queensland, the market tightening is being driven substantially by industrial loads such as the new LNG gas processing facilities. To the extent that the LNG industry is a significant driver, it is a heavy excise to pay for the privilege of exporting our gas resource. It has the makings of a policy nightmare, should the royalties from our LNG exports be outweighed by the cumulative cost impacts passed on via our electricity markets.

It is important to note that the electricity market is designed so that prices fluctuate significantly in response to the normal capacity cycle, as capacity is added to or removed from the market following rises and falls in demand. In small markets, such as South Australia, the spot price fluctuations over the capacity cycle can be extreme, because the capacity of an individual large power plant can represent a large proportion of the native demand.

Although not large in terms of total capacity by Australian standards, Northern’s 520 megawatt power rating represented around 40% of South Australia’s median demand. That made Northern one of Australia’s most significant power stations relative to the size of its regional market. Its withdrawal has dramatically and abruptly reduced the capacity overhang in South Australia. Spot prices were always going to rise as a consequence, because that is the way the market was designed. In addition, Northern’s closure has increased South Australia’s reliance on gas generation, and it has concentrated market power in the hands of the remaining generators, both of which have had additional price impacts beyond the normal market tightening.

In both Queensland and South Australia, the rises in spot prices are signalling the growing tightness of the market, which under normal circumstances would serve to drive investment in new capacity. The lessons of Northern show that any new capacity in South Australia will need to be responsive to the changing pattern of demand, unless the market rules are changed.

Further, both regions face questions about the adequacy of competition. Both are subject to the impacts of parallel developments in the gas markets, which have made gas much more expensive. In the case of Queensland this is greatly exacerbated by the extra demand from the LNG gas production facilities.

Finally, these insights are important for predicting how the markets will react to the impending closure of the 1600 megawatt Hazelwood Power Station in Victoria. These are all topics I hope to consider in following posts in this series.


Mike Sandiford receives funding from the Australian Research Council and ANLECR&D (Australian National Low Emissions Coal Research & Development).


The 20th century saw a 23-fold increase in natural resources used for building

Mon, 2017-02-20 05:09
There has been a rapid increase in the amount of resources tied up in buildings. Shutterstock

The volume of natural resources used in buildings and transport infrastructure increased 23-fold between 1900 and 2010, according to our research. Globally, there are now 800 billion tonnes of natural resource “stock” tied up in these constructions, two-thirds of it in industrialised nations alone.

This trend is set to continue. While industrialised countries have lost some momentum, emerging economies are growing rapidly, China especially. If all countries were to catch up to the per capita level of the industrialised nations, this would quadruple the amount of natural resources tied up in the built environment.

In Australia, 70% of the buildings and infrastructure that will be used in 2050 have not yet been built. Constructing all of this will require a huge amount of natural resources and will severely impact the environment.

To avoid this, we need to work to build more efficiently and waste less of our resources. Our buildings need to last longer and become the inputs of future construction projects at the end of their lifetime.

The impact of the expansion

Continuing the massive expansion of natural resource consumption would not only require vast quantities of new raw materials, it would also result in considerable environmental impact. It would require massive changes in land use for quarrying sand and gravel, and more energy for extraction, transport and processing. And, if we do not change course, more raw material use now means more waste later.

All of this will be accompanied by a large rise in carbon dioxide emissions, making it much harder to achieve the climate goals agreed in Paris. Cement production alone, for example, is responsible for about 5% of global carbon emissions.

Building sustainability

It is certainly possible to build more sustainably. This requires us to use natural resources more efficiently, reducing the amount of materials and emissions related to economic activities. One strategy for achieving this is to create a more circular economy, which emphasises re-use and recycling. A circular economy turns consumption and production into a loop.

Currently, only 12% of materials used for buildings and infrastructure come from recycling. In part, this is due to the fact that globally, four times more materials are used in building than are released as demolition waste. This has, of course, to do with the scale and speed at which some countries are building.

Yet the potential for recycling is very large. Buildings and infrastructure are ageing and in the next 20 years alone there could be as much as 270 billion tonnes of demolished material globally. This is equivalent to the volume accrued over the previous one hundred years. This material will either have to be disposed in landfill, at very high cost, or it could be reused.

As we noted, 70% of the buildings and infrastructure that will be used in Australia in 2050 have not yet been built. This signals massive investment in new materials but also very large amounts of demolition waste from today’s infrastructure.

The opportunity

There is a window of opportunity for more sustainable building if we decouple economic growth from increased use of natural resources. We can do this by improving the quality and use of existing infrastructure and buildings, extending lifespans, using better design, and planning for recycling and reuse.

Better quality building materials and better design can extend the lifetime of buildings, resulting in lower maintenance costs and saving primary materials, energy and waste. Eco-industrial parks and industrial clusters as well as sharing of information about waste flows can establish new relationships among industries where the waste of one production process can become the input of another process.

This doesn’t just make environmental sense. There are potentially large economic gains to be had from more efficient use of resources. This includes increased employment, increased productivity and less need for government subsidies.

Achieving a transition to long-lived buildings, infrastructure and products will require new business models and new skills. It depends on skilling and re-skilling existing and new workers in the construction and manufacturing industries. Some of these changes are not going to happen spontaneously, but will benefit from well-designed policy that rewards resource efficiency and sustainability.

But first, we need more information about stocks and flows of materials throughout the economy, to allow governments and business leaders to plan for the necessary innovation.


Heinz Schandl receives funding from United Nations Environment and the United Nations Centre for Regional Development (UNCRD). He is a member of the UN Environment International Resource Panel (IRP) and president-elect of the International Society for Industrial Ecology (ISIE).

Fridolin Krausmann receives funding from the Austrian Science Foundation and the European Commission research fund.


Australia's electricity market is not agile and innovative enough to keep up

Fri, 2017-02-17 05:12

On the early evening of Wednesday, February 8, electricity supply to some 90,000 households and businesses in South Australia was cut off for up to an hour. Two days later, all electricity consumers in New South Wales were warned the same could happen to them. It didn’t, but apparently only because supply was cut to the Tomago aluminium smelter instead. In Queensland, it was suggested consumers might also be at risk over the two following days, even though it was a weekend, and again on Monday, February 13. What is going on?

The first point to note is that these were all very hot days. This meant that electricity demand for air conditioning and refrigeration was very high. On February 8, Adelaide recorded its highest February maximum temperature since 2014. On February 10, western Sydney recorded its highest ever February maximum, and then broke this record the very next day. Brisbane posted its highest ever February maximum on February 13.

That said, the peak electricity demand in both SA and NSW was some way below the historical maximum, which in both states occurred during a heatwave on January 31 and February 1, 2011. In Queensland it was below the record reached last month, on January 18.

Regardless of all this, shouldn’t the electricity industry be able to anticipate such extreme days, and have a plan to ensure that consumers’ needs are met at all times?

Much has already been said and written about the reasons for the industry’s failure, or near failure, to do so on these days. But almost all of this has focused on minute-by-minute details of the events themselves, without considering the bigger picture.

The wider issue is that the electricity market’s rules, written two decades ago, are not flexible enough to build a reliable grid for the 21st century.

Vast machine

In an electricity supply system, such as Australia’s National Electricity Market (NEM), the amount of electricity supplied must precisely match the amount being consumed in every second of every year, and always at the right voltage and frequency. This is a big challenge – literally, considering that the NEM covers an area stretching from Cairns in the north, to Port Lincoln in the west and beyond Hobart in the south.

Continent-sized electricity grids like this are sometimes described as the world’s largest and most complex machines. They require not only constant maintenance but also regular and careful planning to ensure they can meet new demands and incorporate new technologies, while keeping overall costs as low as possible. All of this has to happen without ever interrupting the secure and reliable supply of electricity throughout the grid.

Until the 1990s, this was the responsibility of publicly owned state electricity commissions, answerable to their state governments. But since the industry was comprehensively restructured from the mid-1990s onwards, individual states now have almost no direct responsibility for any aspect of electricity supply.

Electricity is now generated mainly by private-sector companies, while the grid itself is managed by federally appointed regulators. State governments’ role is confined to one of shared oversight and high-level policy development, through the COAG Energy Council.

This market-driven, quasi-federal regime is underpinned by the National Electricity Rules, a highly detailed and prescriptive document that runs to well over 1,000 pages. This is necessary to ensure that the grid runs safely and reliably at all times, and to minimise opportunities for profiteering.

The downside is that these rules are inflexible, hard to amend, and unable to anticipate changes in technology or economic circumstances.

Besides governing the grid’s day-to-day operations, the rules specify processes aimed at ensuring that “the market” makes the most sensible investments in new generation and transmission capacity. These investments need to be optimal in terms of technical characteristics, timing and cost.

To borrow a phrase from the prime minister, the rules are not agile and innovative enough to keep up. When they were drawn up in the mid-1990s, electricity came almost exclusively from coal and gas. Today we have a changing mix of new supply technologies, and a much more uncertain investment environment.

Neither can the rules ensure that the closure of old, unreliable and increasingly expensive coal-fired power stations will occur in a way that is most efficient for the grid as a whole, rather than most expedient for individual owners. (About 3.6 gigawatts of capacity, spread across all four mainland NEM states and equalling more than 14% of current coal power capacity, has been closed since 2011; this will increase to 5.4GW and 22% when Hazelwood closes next month.)

Finally, one of the biggest drivers of change in the NEM over the past decade has been the construction of new wind and solar generation, driven by the Renewable Energy Target (RET) scheme. Yet this scheme stands completely outside the NEM rules.

The Australian Energy Markets Commission – effectively the custodian of the rules – has been adamant that climate policy, the reason for the RET, must be treated as an external perturbation, to which the NEM must adjust while making as few changes as possible to its basic architecture. On several occasions over recent years the commission has successfully blocked proposals to broaden the terms of the rules by amending the National Electricity Objective to include an environmental goal of boosting renewable energy and reducing greenhouse emissions.

Events in every state market over the past year have shown that the electricity market’s problems run much deeper than the environmental question. Indeed, they go right to the core of the NEM’s reason for existence, which is to keep the lights on. A fundamental review is surely long overdue.

The most urgent task will be identifying what needs to be done in the short term to ensure that next summer, with Hazelwood closed, peak demands can be met without more load shedding. Possible actions may include establishing firm contracts with major users, such as aluminium smelters, to make large but brief reductions in consumption, in exchange for appropriate compensation. Another option may be paying some gas generators to be available at short notice, if required; this would not be cheap, as it would presumably require contingency gas supply contracts to be in place.

The most important tasks will address the longer term. Ultimately we need a grid that can supply enough electricity throughout the year, including the highest peaks, while ensuring security and stability at all times, and that emissions fall fast enough to help meet Australia’s climate targets.


Hugh Saddler is a member of the Board of the Climate Institute.


Global clean energy scorecard puts Australia 15th in the world

Thu, 2017-02-16 12:01
The World Bank has highlighted steps to improve sustainable energy investment.

Australia ranks equal 15th overall in a new World Bank scorecard on sustainable energy. We are tied with five other countries in the tail-end group of wealthy OECD countries – behind Canada and the United States and just one place ahead of China.

Called the Regulatory Indicators for Sustainable Energy (RISE), the initiative provides benchmarks to evaluate clean energy progress, and insights and policy guidance for Australia and other countries.

RISE rates country performance in three areas - renewable energy, energy efficiency, and access to modern energy (excluding advanced countries), using 27 indicators and 80 sub-indicators. These include things like legal frameworks, building codes, and government incentives and policies. The results of the individual indicators are turned into an overall score.

The majority of wealthy countries score well in the scorecard. But when you drill down into the individual areas, the story becomes more complex. The report notes that “about half the countries with more appropriate policy environments for sustainable energy are emerging economies,” for example.

The RISE ranking. RISE report

The report relies on data up to 2015. So it does not account for recent developments such as the Paris climate conference, the Australian National Energy Productivity Plan, the widespread failure to enforce building energy regulations, and the end of Australia’s major industrial Energy Efficiency Opportunities program under the Abbott government.

Furthermore, Australian electricity demand growth has recently re-emerged after five years of decline.

But the World Bank plans to publish updated indicators every two years, so over time the indicators should become a valuable means of tracking and influencing the evolution of global clean energy policy.

Australia

Australia’s ranking masks some good, bad and ugly subtleties. For example, Australia joins Chile and Argentina as the only OECD high-income countries without some form of carbon pricing mechanism. Even the United States, whose EPA uses a “social cost of carbon” in regulatory action, and has pricing schemes in some states, meets the RISE criteria.

Australia also ranks lower than the United States for renewable energy policy, at 24th. This is due to scoring poorly in incentives and regulatory support, carbon pricing, and mechanisms supporting network connection and appropriate pricing. But we are saved somewhat by having a legal framework for renewables, and strong management of counter-party risk. It’s not clear how recent political uncertainty, and the resulting temporary collapse of investment in large renewable energy projects, may affect the score.

I have argued in the past that Australia is missing out on billions of dollars in savings through its lack of ambition on energy efficiency. Yet we rate equal 13th on this criterion, compared with 24th on renewable energy. It seems that many other countries are forgoing even more money than us.

In energy efficiency, we score highly for incentives from electricity rate structures, building energy codes and financing mechanisms for energy efficiency. Our public sector policies and appliance minimum energy standards also score well. Our weakest areas are lack of carbon pricing and monitoring, and information for electricity consumers. National energy efficiency planning, incentives for large consumers and energy labelling all do a bit better. Of course, these ratings are relative to a low global energy efficiency benchmark.

The rest of the world

Much of the report focuses on developing countries. There is a wide spread of activity here, with some countries almost without policies, and others like Vietnam and Kazakhstan doing well, ranking equal 23rd. China ranks just behind Australia’s cluster at 21st.

RISE shows that policies driving access to modern energy seem to be achieving results. The report suggests that 1.1 billion people do not have access to electricity, down from an estimated 1.4 billion a few years ago. A significant contributor to this seems to be the declining cost of solar panels and other renewable energy sources, and greater emphasis on micro-grids in rural areas.

The report highlights the importance of strategies that integrate renewables and efficiency. But it doesn’t mention an obvious example. The viability of rural renewable energy solutions is being greatly assisted by the declining cost and large efficiency improvement in technologies such as LED lighting, mobile phones and tablet computers. The overall outcome is much improved access to services, social and economic development with much smaller and cheaper renewable energy and storage systems.

The takeaway


RISE finds that clean energy policy is progressing across most countries. However, energy efficiency policy is well behind renewable energy. “This is another missed opportunity”, say the report’s authors, “given that energy efficiency measures are among the most cost-effective means of reducing a country’s carbon footprint.” They also note that energy efficiency policy tends to be fairly superficial.

Australia’s ranking on renewable energy policy is mediocre, while our better energy efficiency ranking is relative to global under-performance. The Finkel Review and Climate Policy Review offer opportunities to integrate renewables and energy efficiency into energy market frameworks. The under-resourced National Energy Productivity Plan could be cranked up to deliver billions of dollars more in energy savings, while reducing pressure on electricity supply infrastructure and making it easier to achieve ambitious energy targets. And RISE seems to suggest we need a price on carbon.

The question is, in a world where action on clean energy is accelerating in response to climate change and as a driver of economic and social development, will Australia move up or slip down the rankings in the next report?


Alan Pears has worked for government, business, industry associations, public interest groups and universities on energy efficiency, climate response and sustainability issues since the late 1970s. He is now an honorary Senior Industry Fellow at RMIT University and a consultant, as well as an adviser to a range of industry associations and public interest groups. His investments in managed funds include firms that benefit from growth in clean energy.


Climate change doubled the likelihood of the New South Wales heatwave

Thu, 2017-02-16 05:10
Emergency crews tackle a bushfire at Boggabri, one of dozens across NSW during the heatwave. AAP Image/NEWZULU/Karen Hodge

The heatwave that engulfed southeastern Australia at the end of last week has seen heat records continue to tumble like Jenga blocks.

On Saturday February 11, as New South Wales suffered through the heatwave’s peak, temperatures soared to 47℃ in Richmond, 50km northwest of Sydney, while 87 fires raged across the state amid catastrophic fire conditions.

On that day, most of NSW experienced temperatures at least 12℃ above normal for this time of year. In White Cliffs, the overnight minimum was 34.2℃, breaking the station’s 102-year-old record.

On Friday, the average maximum temperature right across NSW hit 42.4℃, beating the previous record of 42℃. The new record stood for all of 24 hours before it was smashed again on Saturday, as the whole state averaged 44.02℃ at its peak. At this time, NSW was the hottest place on Earth.

A degree or two here or there might not sound like much, but to put it in cricketing parlance, those temperature records are the equivalent of a modern test batsman retiring with an average of over 100 – the feat of outdoing Don Bradman’s fabled 99.94 would undoubtedly be front-page news.

And still the records continue to fall. At the time of writing, the northern NSW town of Walgett remains on target to break the Australian record of 50 days in a row above 35℃, set just four years ago at Bourke Airport.

Meanwhile, two days after that sweltering Saturday we woke to find the fires ignited during the heatwave still cutting a swathe of destruction, with the small town of Uarbry, east of Dunedoo, all but burned to the ground.

This is all the more noteworthy when we consider that the El Niño of 2015-16 is long gone and the conditions that ordinarily influence our weather are firmly in neutral. This means we should expect average, not sweltering, temperatures.

Since Christmas, much of eastern Australia has experienced a constant flux of extreme temperatures. This increased frequency of heatwaves shows a strong trend in observations, which is set to continue as the human influence on the climate deepens.

It is all part of a rapid warming trend that over the past decade has seen new heat records in Australia outnumber new cold records by 12 to 1.

Let’s be clear, this is not natural. Climate scientists have long been saying that we would feel the impacts of human-caused climate change in heat records first, before noticing the upward swing in average temperatures (although that is happening too). This heatwave is simply the latest example.

What’s more, in just a few decades’ time, summer conditions like these will be felt across the whole country regularly.

Attributing the heat

The useful thing scientifically about heatwaves is that we can estimate the role that climate change plays in these individual events. This is a relatively new field known as “event attribution”, which has grown and improved significantly over the past decade.

Using the Weather@Home climate model, we looked at the role of human-induced climate change in this latest heatwave, as we have for other events before.

We compared the likelihood of such a heatwave in model simulations that factor in human greenhouse gas emissions, compared with simulations in which there is no such human influence. Since 2017 has only just begun, we used model runs representing 2014, which was similarly an El Niño-neutral year, while also experiencing similar levels of human influence on the climate.

Based on this analysis, we found that heatwaves at least as hot as this one are now twice as likely to occur. In the current climate, a heatwave of this severity and extent occurs, on average, once every 120 years, so is still quite rare. However, without human-induced climate change, this heatwave would only occur once every 240 years.
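In the language of event attribution, this comparison is a simple ratio of occurrence probabilities, or equivalently of return periods:

$$RR = \frac{P_{\text{with human influence}}}{P_{\text{natural climate only}}} = \frac{1/120}{1/240} = 2$$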

In other words, the waiting time for the recent east Australian heatwave has halved. As climate change worsens in the coming decades, the waiting time will reduce even further.

Our results show very clearly the influence of climate change on this heatwave event. They tell us that what we saw last weekend is a taste of what our future will bring, unless humans can rapidly and deeply cut our greenhouse emissions.

Our increasingly fragile electricity networks will struggle to cope, as the threat of rolling blackouts across NSW showed. It is worth noting that the large number of rooftop solar panels in NSW may have helped to avert such a crisis this time around.

Our hospital emergency departments also feel the added stress of heatwaves. When an estimated 374 people died from the heatwave that preceded the Black Saturday bushfires, the Victorian Institute of Forensic Medicine resorted to storing bodies in hospitals, universities and funeral parlours. The Victorian heatwave of January 2014 saw 167 more deaths than expected, along with significant increases in emergency department presentations and ambulance callouts.

Infrastructure breaks down during heatwaves, as we saw in 2009 when railway lines buckled under the extreme conditions, stranding thousands of commuters. It can also strain Australia’s beloved sporting events, as the 2014 Australian Open showed.

These impacts have led state governments and other bodies to investigate heatwave management strategies, while our colleagues at the Bureau of Meteorology have developed a heatwave forecast service for Australia.

These are likely to be just the beginning of strategies needed to combat heatwaves, with conditions currently regarded as extreme set to be the “new normal” by the 2030s. With the ramifications of extreme weather clear to everyone who experienced this heatwave, there is no better time to talk about how we can ready ourselves.

We urgently need to discuss the health and economic impacts of heatwaves, and how we are going to cope with more of them in the future.

We would like to acknowledge Robert Smalley, Andrew Watkins and Karl Braganza of the Australian Bureau of Meteorology for providing observations included in this article.


Sarah Perkins-Kirkpatrick receives funding from the Australian Research Council.

Andrew King receives funding from the ARC Centre of Excellence for Climate System Science.

Matthew Hale receives funding from the Australian Research Council.


How the warming world could turn many plants and animals into climate refugees

Wed, 2017-02-15 14:43
The Flinders Ranges were once a refuge from a changing climate. Shutterstock

Finding the optimum environment and avoiding uninhabitable conditions has been a challenge faced by species throughout the history of life on Earth. But as the climate changes, many plants and animals are likely to find their favoured home much less hospitable.

In the short term, animals can react by seeking shelter, whereas plants can avoid drying out by closing the small pores on their leaves. Over longer periods, however, these behavioural responses are often not enough. Species may need to migrate to more suitable habitats to escape harsh environments.

During glacial times, for instance, large swathes of Earth’s surface became inhospitable to many plants and animals as ice sheets expanded. This resulted in populations migrating away from or dying off in parts of their ranges. To persist through these times of harsh climatic conditions and avoid extinction, many populations would migrate to areas where the local conditions remained more accommodating.

These areas have been termed “refugia” and their presence has been essential to the persistence of many species, and could be again. But the rapid rate of global temperature increases, combined with recent human activity, may make this much harder.

Finding the refugia

Evidence for the presence of historic climate refugia can often be found within a species’ genome. Populations expanding out of a refugium will generally be smaller than the parent population that remains within it. The expanding populations will therefore tend to lose genetic diversity, through processes such as genetic drift and inbreeding. By sequencing the genomes of multiple individuals within different populations of a species, we can identify where the hotbeds of genetic diversity lie, thus pinpointing potential past refugia.
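
To give a flavour of this approach, below is a minimal sketch of how genetic diversity might be compared between populations using expected heterozygosity (2pq for a biallelic site). The populations and allele frequencies are invented for illustration; this is not the study’s actual pipeline.

```python
# Hypothetical sketch: compare genetic diversity between populations
# using mean expected heterozygosity at biallelic sites.

def mean_heterozygosity(allele_freqs):
    """Mean expected heterozygosity (2pq) across sites."""
    return sum(2 * p * (1 - p) for p in allele_freqs) / len(allele_freqs)

refugium = [0.50, 0.40, 0.45, 0.55, 0.50]  # invented allele frequencies
expanded = [0.90, 0.10, 0.95, 0.85, 0.80]  # diversity lost in expansion

print(f"Refugium H_e: {mean_heterozygosity(refugium):.2f}")  # higher
print(f"Expanded H_e: {mean_heterozygosity(expanded):.2f}")  # lower
```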

My colleagues and I recently investigated population genetic diversity in the narrow-leaf hopbush, a native Australian plant that got its common name from its use in beer-making by early European Australians. The hopbush has a range of habitats, from woodlands to rocky outcrops on mountain ranges, and has a wide distribution across southern and central Australia. It is a very hardy species with a strong tolerance for drought.

We found that populations in the Flinders Ranges have more genetic diversity than those to the east of the ranges, suggesting that these populations are the remnants of an historic refugium. Mountain ranges can provide ideal refuge, with species only needing to migrate short distances up or down the slope to remain within their optimal climatic conditions.

In Australia, the peak of the last ice age led to drier conditions, particularly in the centre. As a result, many plant and animal species gradually migrated across the landscape to southern refugial regions that remained more moist. Within the south-central region, an area known as the Adelaide Geosyncline has been recognised as an important historic refugium for several animal and plant species. This area encompasses two significant mountain ranges: the Mount Lofty and Flinders ranges.

Refugia of the future

In times of increased temperatures (in contrast to the lower temperatures experienced during the ice age), retreats to refugia at higher elevations or towards the poles can provide respite from unfavourably hot and dry conditions. We are already seeing these shifts in species distributions.

But migrating up a mountain can lead to a literal dead end, as species ultimately reach the top and have nowhere else to go. This is the case for the American pika, a cold-adapted relative of rabbits that lives in mountainous regions of North America. It has disappeared from more than one-third of its previously known range as conditions have become too warm in many of the alpine regions it once inhabited.

Further, the almost unprecedented rate of global temperature increase means that species need to migrate at rapid rates. Couple this with the destructive effects of agriculture and urbanisation, leading to the fragmentation and disconnection of natural habitats, and migration to suitable refugia may no longer be possible for many species.

While evidence for the combined effects of habitat fragmentation and climate change is currently scarce, and the full effects are yet to be realised, the predictions are dire. For example, modelling the twin impact of climate change and habitat fragmentation on drought sensitive butterflies in Britain led to predictions of widespread population extinctions by 2050.

Within the Adelaide Geosyncline, the focal area of our study, the landscape has been left massively fragmented since European settlement, with estimates of only 10% of native woodlands remaining in some areas. The small pockets of remaining native vegetation are therefore left quite disconnected. Migration and gene flow between these pockets will be limited, reducing the survival chances of species like the hopbush.

So while refugia have saved species in the past, and poleward and up-slope shifts may provide temporary refuge for some, if global temperatures continue to rise, more and more species will be pushed beyond their limits.

The Conversation

Matt Christmas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

End of the road? Why it might be time to ditch your car

Wed, 2017-02-15 05:08

The average car is stationary 96% of the time. That’s a fairly consistent finding around the world, including in Australia. A car is typically parked at home 80% of the time, parked elsewhere 16% of the time, and on the move just 4% of the time. And that doesn’t include the increasing time we spend at a standstill in traffic.

Bill Ford, executive chair of the Ford Motor Company, says we’re heading for “global gridlock”. And he’s not alone in saying we cannot simply keep adding more cars to our roads.

The funny thing is that while we own more cars than ever, we’re actually using them less. You might think that’s a good thing; that we’re responding to worsening congestion and health, debt and environmental damage by opting to drive fewer kilometres.

But the problem is, we’re still choking our cities and harming our health, finances and environment by continuing to waste our resources on these increasingly dormant vehicles.

It’s not just the car itself that’s wasted. Consider the resources and infrastructure – both private and public – needed to design, mine, manufacture, ship, sell, fuel, move, store, secure, insure, regulate, police, maintain, clean, repair and dispose of all these cars.

David Owen, a staff writer with The New Yorker, has called cars “consumption amplifiers”. They are emblematic of a hyper-consumerist lifestyle that doesn’t really make us any happier.

Our declining car use gives us an opportunity. If we can adjust our car ownership patterns to match our actual needs, we can plan our lives and cities in ways that don’t revolve around a mode of transport that no longer serves us like it used to.

Fast cars?

By default, we still think of cars as fast and convenient. It might appear that way on the street, but the overall reality is quite different.

For a start, cars are a woefully inefficient way to transport a person from A to B. Typically, only around 20% of the energy from fuel combustion is converted into motion.

If we assume that the average car weighs roughly 20 times as much as its driver, we can estimate that for a single-occupant car journey, with no significant other cargo, the effective fuel efficiency drops to just 1% (adding a passenger only raises this to 2%). And that’s before we take into account the broader resource and infrastructure requirements, as mentioned above, for that journey to take place.
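
The arithmetic behind those figures is simple enough to check, as this sketch shows (it uses the article’s own assumptions of 20% engine efficiency and a 20:1 car-to-driver mass ratio):

```python
# Reproducing the back-of-envelope estimate in the text.
engine_efficiency = 0.20    # ~20% of fuel energy becomes motion
mass_ratio = 20             # car weighs ~20x its driver

# Fraction of fuel energy that goes into moving the occupant(s):
one_occupant = engine_efficiency * 1 / (mass_ratio + 1)
two_occupants = engine_efficiency * 2 / (mass_ratio + 2)
print(f"Single occupant: ~{one_occupant:.0%}")    # ~1%
print(f"With a passenger: ~{two_occupants:.0%}")  # ~2%
```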

The urban car isn’t terribly fast either. Research shows that when we take into account not only the time in transit but also the time spent working to pay for the car and its operation, the car’s average “effective speed” in cities is generally well under 13km per hour. This has been called the “urban speed paradox”. As cyclist and author Greg Foyster has pointed out, “your typical commuting cyclist can beat that without breaking a sweat”.

These and other factors have resulted in what’s called “peak car”. The average distance travelled per person by car has been declining for more than a decade. Commuting distances and average urban driving speeds have also peaked and the rate of new licences is plummeting.

Ford Motor Company’s future trends manager, Sheryl Connelly, has suggested that cars no longer symbolise freedom to this generation in the way they did to baby boomers. The rise of car-sharing schemes has also caused renting to lose its stigma. Young people now prize access over ownership.

Yet, for too many of us, a privately owned car remains the default for almost every transport task. There are times when cars are useful, but for general urban commuting, based on what we’ve seen above, it is like using a chainsaw to carve butter.

Expanding the transport toolkit

Many urban areas around the world are seeing a rapid shift away from private cars as the dominant form of transport. Areas of some cities are even going car-free while reallocating old road space to public or active transport, or back to nature.

In Australia, the City of Port Phillip has devised a plan to halt the growth in car ownership, even as the city’s population doubles, by converting hundreds of parking spots into car-share bays. Each share-car is reported to take up to 14 cars off the road, while cutting the costs of personal mobility by up to 60%.

One local resident was reported as saying the recent addition of a car-share spot at the end of his family’s street had prompted them to sell their rarely used car. “Now that there is a really good number of cars close by, we can make that move to going completely car-free.”

Then there’s the rapid development of other shared transport such as bike-share programs. By 2014, the number of cities with bike-share programs had increased to 850, up from only 68 in 2007.

Alongside all this are new planning models for activity centres, integrated transport networks, and carless or near-carless residential developments.

All the while, speed limits are decreasing, free public transport (at point of access) is increasing, and automobile and business associations are advocating for heavy investment in active and public transport.

Transport in 2017 and beyond

None of this is meant to demonise cars or their drivers, or to suggest that no one should own a car. What I am saying is that the model of everyone owning their own car is best relegated to the 20th century. This leads to the question of what the optimal level of car ownership might be, where we achieve the transport benefits without the waste, damage and expense.

What if in 2017 we focused on developing our personal and collective toolkits beyond the chainsaw, to do a better job of moving ourselves around?

You might get to know your local matrix of transport options better, from walking, cycling and skating routes to public transport, shared transport (car-share, ride-share, bike-share, taxis) and rented transport (cars, trucks, motorbikes, bicycles). Over time, you could then home in on how they work best together.

More of us could consider placing our cars in peer-based car-share or ride-share programs (informal or formal). Or we could even choose to sell our cars, and opt into one of the above schemes as a user rather than provider.

Peak car is upon us, and with it comes the opportunity to choose new models of urban transport that better match our current needs for quality, sustainable living. It is vital work. And like any good tradie, we need to make sure we have the right tools for the job.

The Conversation

Anthony James does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Want electricity reform? Start by giving power back to the states

Tue, 2017-02-14 14:44

In 1999, Australians were paying some of the lowest electricity prices in the world. Now they are among the highest. What went wrong?

Back then, the electricity network in the southern and eastern states of Australia had just been reformed to create a regional wholesale market, called the National Electricity Market. Some states – Victoria and then South Australia – privatised their industry. All states then progressively deregulated their retail electricity markets, and transferred the regulation of their remaining network monopolies to two quasi-federal regulatory agencies, the Australian Energy Regulator and the Australian Energy Markets Commission.

These reforms replaced the state governments’ electricity commissions – derided by some as Soviet-style relics – with what was purported to be a dynamic new arrangement of competition and private risk-taking.

The reforms were bolstered by reports by the Industry Commission (now the Productivity Commission) predicting that even though electricity prices were already low, they would fall further as the pressure of competition drove the industry to become more efficient and customer-focused.

The exact opposite happened. The sector’s productivity has declined sharply, after tens of billions of dollars were spent on network infrastructure – particularly substations – that is not used at anything like its full capacity, even at the peak of an Australian summer.

But the failures are not just in the regulation of networks. Our retail markets compare very unfavourably with those in other countries, and our wholesale electricity markets seem to be cornered regularly – most recently in South Australia on February 8, when a lack of available generation led regulators to cut the power to some 90,000 customers.

Besides not being cheaper, the system is also no greener or more reliable. The amount of greenhouse emissions per unit of electricity produced has shown little change, and as South Australia has shown, the system can’t always keep the lights on.

Australia is blessed with a surplus of every conceivable energy resource and no shortage of technical and managerial skill. How did it come to this?

Passing the buck

The common factor underlying these failures is accountability. Officials use the phrase “all care and no responsibility” to describe the situation in which politicians become as skilled in finger-pointing as they are in showing empathy for those suffering through power blackouts.

The latest manifestation of this is the mis-characterisation of Australia’s electricity problem as one of renewables versus fossil fuels. In this view, the solution is to turn back the clock to last century’s high-emission technologies (such as coal), despite the clear risk to the private sector of doing so.

What can sensibly be done to get us out of this mess? The real problem is not renewables – it’s poor governance.

Fixing governance problems is hard, but it’s clear which direction we should take. It needs to be made obvious who should be strung up when things go wrong, or covered in glory when they go right. This clarity will in turn deliver the accountability needed to anticipate and solve problems, rather than the buck-passing and blame-dodging we’re seeing now.

The state model

There are lessons to be learned from other comparable federal countries, including Germany, the United States and Canada. They too have regional power markets and retail competition, but they have avoided the bickering between federal and state governments seen in Australia.

Their electricity networks (except interconnectors) and their retail markets are overseen by the states and provinces – as used to be the case in Australia.

When accountability is clearly established, we will know where the buck stops when the lights go out or prices become unaffordable. But under Australia’s current quasi-federal system, there is an irresistible temptation to point fingers and obfuscate if things go wrong.

Politicians past and present created this problem, and they must now rise above it. The immediate task is not to tinker with existing institutions, but instead to make some fundamental changes.

The starting point should be to recognise that electricity supply is the province (under our Constitution) of the states and territories, not the Commonwealth. It would be better to get on with fixing our own back yards than idly waiting and wishing, often without good reason, for “national coordination”.

We should reassign oversight of networks and retail markets back to the states and territories, as used to be the case. Regional transmission interconnection and market operation should continue to be federally coordinated, but the primary responsibility for pricing and reliability must rest with the states. The states might choose to delegate the oversight of various issues to central entities, but these entities must be clearly answerable to those states under the terms of their delegation.

In some respects these will be major changes, and in others, mainly a change of mindset and orientation. But for too long now we have been pushing a model of governance that does not reflect our constitutional responsibilities, and is at odds with the approach adopted in other federal countries.

It has failed and it is time to change. Other nations’ experience can give us confidence that if we make changes we can look forward to vibrant electricity markets that actually work in customers’ best interests.

The Conversation

Bruce Mountain does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

We need a comprehensive housing approach to deal with heatwaves

Tue, 2017-02-14 05:08
We can learn a lot from Queenslanders. Shutterstock

Heatwaves across much of the country this summer have revealed a serious problem with our national housing stock.

Stressed electricity networks that can’t guarantee supply have led to politicians advising people not to go home, but to go to the movies instead. The underlying problem is that our houses aren’t built to mitigate the health risks of this kind of heat.

We are using air conditioning as a band-aid instead of identifying the cause and seriousness of the condition. Australia’s continued lack of planning to solve the problem is a risky strategy.

But imagine a future where we can reliably depend on our dwellings to help us “keep our cool”. A future where we don’t have to rely on free air conditioning at the local shopping centre, and where heatwaves don’t overstress our hospitals, electricity networks, or bank accounts.

A staged and comprehensive approach can create such a future – one that would improve our individual, family and national resilience.

Smarter design and construction

Rather than being seduced by the property market’s surface bling, we need to pay more attention to the quality of the building envelope – the roof, walls, windows and floor. We can manage unwanted heat inside our homes in two key ways.

The first is to stop the heat getting in. Many aspects of a home’s design (orientation, eaves, external shading and landscaping) and construction materials (roof colour and coating, insulation, glass and window type) can help control how hot it gets inside. Guides on these design features are available at the government’s Your Home website.

The second is to have strategies for managing the heat that does get in. Again, this can be done through good design (with clerestory windows, solar chimneys, roof vents, and so on) and by using the right materials. Opening and closing your house in response to the outside temperature is also important.

For example, some houses combine aspects of traditional Queenslander architecture – deep eaves, shady verandas, casement windows and louvres – with modern materials like high-performance insulation and tinted low-e glass; dense internal materials such as rammed earth; and night-time ventilation. These homes rarely surpass 30℃, despite their southeast Queensland location.

Combining Queenslander design with new materials works magic! Wendy Miller
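
As a rough illustration of the first strategy, the conductive heat gain through a surface can be estimated with the standard formula Q = U × A × ΔT. The U-values below are generic textbook figures, not measurements from these homes.

```python
# Rough illustration of why the envelope matters: Q = U * A * dT.
area_m2 = 150        # ceiling area
delta_t = 15         # hot roof space vs 25C inside, in kelvin

u_values = {
    "uninsulated ceiling": 2.0,   # W/m2.K, generic figure
    "insulated ceiling": 0.3,     # W/m2.K, with bulk insulation
}

for label, u in u_values.items():
    q_watts = u * area_m2 * delta_t
    print(f"{label}: ~{q_watts:.0f} W of heat gain")
# ~4500 W vs ~675 W: insulation cuts this load by roughly 85%.
```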

Sometimes mechanical assistance may be required, but rather than thinking that you need to air-condition the whole house, strategies such as “cooling the occupant” or creating a “safe retreat” – similar to that of a bushfire or cyclone shelter – are worth considering.

Better ratings

It is difficult to recognise the best design and construction – built to protect against extreme heat – when you see it. The star rating of Australian homes is one attempt to communicate this. It indicates how a specific house design and its materials determine internal temperature.

While a good start, the rating system is based on past average weather patterns. What would be better is using current or even future weather data. And knowing the expected temperature of each room in the house would help to find cost-effective solutions for improving the performance of new and existing homes.

Perhaps there is even a need for a “stress test” – giving the house a “heat index” colour code similar to the weather bureau’s forecasts for heatwaves.

Do our homes need a heat risk rating? Wendy Miller

On top of this we need to know that the dwelling in question has actually been built to the standards indicated by the design. Transparent and consistent inspection practices need to be implemented, but are practically non-existent across Australia today.

Leadership from government and industry

Some of the blame for the situation can be put on ideological differences about the role of government. For instance, building regulation is seen as “red tape” rather than consumer protection. The division of powers between governments also complicates the situation.

Despite these challenges, a few barriers should be addressed as a matter of urgency.

The community needs to understand that the current building requirements, which vary by state and by dwelling type, are inadequate. They certainly do not guarantee a house with safe indoor temperatures throughout the year.

Greater transparency is needed. In particular, “concessions” that allow the minimum standard to be further reduced should be removed from the star rating because these have no impact on internal temperatures.

Information about the performance standard of each dwelling needs to be made available to everyone in every property transaction. We need to know more about the buildings we live in – preferably before we buy or rent.

The last step is to acknowledge that housing, health and energy issues are all strongly linked. In extreme weather these are also linked to disaster management and emergency services.

Can we fix it?

Governments have already embarked on several projects, including restructuring our health system, transitioning our electricity market, updating our National Construction Code, and refining our disaster management and emergency response strategy.

But the reforms must be holistic. Policies, regulation and infrastructure planning and expenditure in any one of these sectors can lead to unintended consequences in the others. A “one system” approach would create significant economic, social and environmental opportunities for everyone.

So, can we create a better future? If our politicians, and the associated industries, have the skills, foresight and courage to put your home – our homes – into these discussions, yes we can!

The Conversation

Wendy Miller has received funding from the Australian Research Council, the National Climate Change Adaptation Research Facility, the NSW Office of Environment and Heritage and the South Australia Department of State Development.

Categories: Around The Web

The anatomy of an energy crisis - a pictorial guide, Part 1

Mon, 2017-02-13 10:15
What energy crisis?

Who could forget the energy “crises” that affected electricity supply across south-eastern Australia last year?

First came the Tasmanian crisis, following the Basslink outage in December 2015. With hydro storage dams at record lows following a drought, on the back of aggressive storage withdrawals during the carbon tax years, Tasmania enforced drastic measures to ensure supply. Thankfully, flooding winter rains, together with the eventual restoration of Basslink in June, helped resuscitate the apple isle’s energy supply. Tasmania’s hydro storages now stand at around 40% of full capacity, more than double the level at the same time last year.

Tasmanian hydropower storage capacity shows a strong seasonal trend, filling with winter rains and drawing down during the summer and early autumn. Exchanges with Victoria via Basslink help provide security of supply, which was compromised by the outage in December 2015, when storages were already dangerously low on the back of the 2015 drought conditions and the aggressive drawdown of storages during the carbon-tax years to capitalise on higher mainland spot prices.

July saw the first of a sequence of crises in South Australia that followed, and were in many eyes attributable to, the closure of the state’s last coal-fired power plant at Port Augusta in May 2016.

With gas prices at record highs, and South Australia effectively isolated due to upgrades on the main interconnector into Victoria, spot prices skyrocketed, culminating on July 7, a cold, windless winter day. Energy consumers that had not contracted supply were at the whims of traders. Prices averaged over $1400/MWhour for the day and around $520/MWhour for the week, almost 800% above the average for that time of year.

Graphical summary of electrical power generation, demand, spot prices in, and exchange between, each of the five regions comprising the National Electricity Market. The period shown is the week of July 3-9, 2016, during the first South Australian energy crisis. Over the week, interconnector flows from Victoria into South Australia were restricted to an average of 225 MW, or about 40% of full capacity, due to upgrade works. On July 7, at the height of the crisis, the flow was limited to 166 MW. VWP = volume weighted price in $/MWhour. TOTAL.DEMAND = regional demand in MW. DISPATCH.GEN = regional generation in MW. NETINTERCHANGE = net exports (positive) or imports (negative) in MW.

All that was superseded by the events of September, when extreme winds played havoc with the South Australian transmission system, toppling transmission lines in the mid north. Poorly understood default control settings automatically disconnected wind farms, leading to the interconnector tripping and a state-wide blackout. Unanticipated problems in restarting the system exacerbated the pain.

Finally, failure of a transmission line in south-west Victoria on December 1 led to a power loss at the aluminium smelter in Portland. The damage to “frozen” pot lines has jeopardised the smelter’s ongoing viability. Because the smelter is the state’s largest energy consumer and one of its biggest regional employers, the political fallout has been intense.

After the NEM’s “annus horribilis”

With 2016 very much the National Electricity Market’s (NEM) “annus horribilis”, pundits awaited the summer of 2017 with bated breath. With high gas prices, frighteningly intense summer heat, a fragile and ageing energy supply system, and increasing concerns about market rules, the scene was set for “interesting times”. Whatever was to transpire, it was always going to be inflamed by political point-scoring – the one commodity that seems rarely in short supply.

And so it would prove to be, even in the northern states of Queensland and New South Wales, which had hitherto largely escaped the wrath of Electryone.

The summer of 2017 has seen extraordinary price rises beset the spot market across the country, particularly in New South Wales and Queensland. Further blackouts in South Australia, and market interventions to avert them in New South Wales, have done little to assuage concern that our electrical power system is no longer fit for purpose. Queensland prices in 2017 have been some 400% above the historical average for this time of year.

Graphical summary of NEM operations for the period 1st January - 11th February 2017.

With the summer far from finished, our politicians remain hard at it, pointing fingers and apportioning blame, doing almost anything and everything except the one thing in shortest supply – embracing bipartisanship. A glimmer of hope is to be found in comments from Chief Scientist Alan Finkel, who has been charged with leading a review of the security of our National Electricity Market.

What is our NEM?

To provide some guide to what is happening to the NEM, and why, I have compiled a few pictures that illustrate elements of its basic anatomy. This is designed as background. In following posts in this series I will focus on the details of recent events that have so heightened the political heat.

The NEM comprises five interconnected regional jurisdictions – one for each state along the eastern seaboard, plus South Australia. For each region, the market operator AEMO runs a 5-minute-interval, energy-only dispatch ‘pool’, or spot market. The objective is to balance supply with demand in a way that minimises cost, based on the bids submitted by generators. It is a complicated process. The dispatch price for each interval is set by the bid of the last offer needed to meet demand, and settlement prices are determined as the average of dispatch prices aggregated over half-hourly intervals.
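
To make the ‘pool’ mechanics concrete, here is a highly simplified merit-order sketch for a single region and a single interval, with no network constraints or losses; the bids and the demand figure are invented.

```python
# Simplified merit-order dispatch for one interval in one region.
bids = [  # (generator, capacity in MW, bid price in $/MWh)
    ("brown_coal", 1500, 5),
    ("black_coal", 1000, 35),
    ("gas_ccgt",    400, 80),
    ("gas_peaker",  300, 300),
]

def dispatch(demand_mw):
    """Dispatch cheapest bids first; the last offer needed sets the price."""
    remaining, price, schedule = demand_mw, 0.0, {}
    for gen, cap, bid in sorted(bids, key=lambda b: b[2]):
        take = min(cap, remaining)
        if take > 0:
            schedule[gen] = take
            price = bid        # marginal, price-setting offer
            remaining -= take
    return schedule, price

schedule, price = dispatch(2600)
print(schedule)                             # coal runs flat out, CCGT tops up
print(f"Dispatch price: ${price:.0f}/MWh")  # set by gas_ccgt at $80
```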

Pictorial of the generation structure on the NEM, as of early 2017. The top half shows the five regions comprising the NEM, the bottom half the power as generated and dispatched by fuel type, progressing from fossil fuels on the left through to renewables on the right. For the period shown (1/1/2017-11/2/2017), black coal contributed 55.6% of supply (at a capacity factor of 68%), brown coal 22.7% (cf = 79%), natural gas 11% (cf = 24%), hydro 5.4% (cf = 14%) and wind 4.6% (cf = 29%). Units are in MW. Note that gas is the only fuel source common to all regions, but its contribution varies significantly, from over 50% in South Australia to just a few percent in Victoria.

While the focus of the dispatch ‘pool’ is least-cost electricity supply, AEMO also operates several ancillary markets to ensure the requirements for safe grid operation are met. These include the provision of reserve supply and frequency control, normally sourced from synchronous generators such as large coal plants.

AEMO also has regulatory powers to intervene in the market by demanding generation be made available when the total bid capacity is insufficient. When demand exceeds total capacity, or when capacity cannot be brought online in a timely fashion, AEMO can authorise load-shedding, re-balancing demand to match the available generation.

Normally, large electricity consumers will contract power supply via the contract market, rather than directly through the spot market. This insures consumers against the extreme price volatility allowed on the spot market, where prices can range between -$1,000 and $14,000/MWhour. For comparison, the standard domestic retail tariff is about $250/MWhour, or $0.25/kWhour.
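
As a sketch of how such contracts work, consider one common instrument, a $300/MWh cap: the seller pays out whenever the spot price exceeds the strike, in exchange for a fixed premium. The premium figure here is invented for illustration.

```python
# Illustrative $300/MWh cap contract (premium figure invented).
strike, premium = 300, 12   # $/MWh
for spot in [-1000, 40, 300, 2000, 14000]:
    payout = max(spot - strike, 0)           # paid by the cap seller
    effective = min(spot, strike) + premium  # buyer's net cost
    print(f"spot ${spot:>6}: cap pays ${payout:>6}, "
          f"effective cost ${effective:>4}/MWh")
# The buyer's cost never exceeds strike + premium, however wild the spot.
```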

The bid strategies of power plants reflect differences in their cost structures and performance characteristics. For example, fuel costs for brown coal generators are very low, but they are best operated at constant load. In contrast, gas plants can generally ramp much more readily, but at much higher cost. As a consequence, in Victoria gas is used almost exclusively to meet peaks in demand, as illustrated in the three graphics below.

Dispatch in Victoria for the period 8/2/2017-10/2/2017, coloured by fuel source. Also shown are the Victorian demand (brown line), available generation bid into the market (top black line), and net exports as negative (bottom black line).

Brown coal power generation in Victoria for the period 8/2/2017-10/2/2017, coloured by power station.

Natural gas generation in Victoria for the period 8/2/2017-10/2/2017, coloured by power station.

Typically a large base-load generator, such as a brown coal plant, will bid much of its capacity into the spot market at its short-run cost, to ensure a slice of the action. In contrast, peaking power plants will bid at prices well above marginal cost, anticipating that they will be required only very occasionally. Forward contracts of various kinds help insure base-load generators’ revenue streams against spot prices below their long-term cost of production, and help ensure peaking plants are available when needed.

Renewables such as wind dispatch at the whims of the weather and, because of negligible short-run marginal costs, bid their output at very low prices. As a price taker, wind generation tends to drive spot prices lower, affecting the viability of other generators. As shown below, and to be discussed in more detail in a following post, recent events in South Australian dispatch highlight the challenges the market faces when wind power output correlates poorly with demand.

Dispatch in South Australia for the period 8/2/2017 through 10/2/2017, coloured by fuel source. Also shown are the South Australian demand (brown line), available generation bid into the market (top black line), and net imports (bottom white line). Blackouts on February 8 occurred when the local dispatch curve hit the available generation. At that time there was no more capacity ready to be dispatched, so AEMO instigated load-shedding. (Note that not all capacity in South Australia was bid into the market at this time.)

Finally, rooftop PV is not dispatched onto the grid, but rather is “revealed” to the market as a reduction in demand.

Why are spot prices rising?

In theory, the spot market is designed to encourage competition that ensures prices provide generators with a revenue stream linked to their long-run marginal cost of production. If prices depart from that level, competitive market principles should ensure the system re-balances, either through investment in new generation or the withdrawal of old. Of course, competition needs to be supported by an adequate diversity in ownership.

And so shifts in spot prices, signalled via the contract markets, are designed to reflect the balance of demand and supply. The years 2009-2014 were characterised by persistent reductions in demand across the NEM, in part due to the growing penetration of solar PV. At the same time, the addition of new wind farms to meet the Renewable Energy Target contributed to a growing oversupply in the market, reflected in very subdued spot prices. For example, from 2010-2014 Victorian spot prices averaged about $35/MWhour, after factoring out the carbon tax. While that price is above the cost of production for existing Victorian brown coal generators, it would be well nigh impossible to obtain financing for any new large-scale generation at prices less than about two to three times that.

Since 2014, demand has risen in Queensland, due in part to the commissioning of new LNG gas processing facilities at Curtis Island. Reductions in generation capacity in Victoria and South Australia, due to the closure and/or mothballing of several fossil plants (Anglesea in Victoria, and Northern and Pelican Point in South Australia), have significantly tightened the supply-demand balance. Consequently, spot prices are on the rise across the NEM.

Why do spot prices vary between regions?

Spot prices averaged about $60/MWhour across the NEM last year, but vary by region and by season.

As shown in the diagrams above, the make-up of generation in each of the five regions varies considerably, leading to different cost structures. Similarly, differences in demand profiles lead naturally to differences in the generation fleet. Finally, there are differences in market competition.

Limited interconnection capacity, along with differences in regional demand and generation portfolios, occasionally leads to large separations in spot market prices. In times of very high demand during summer heatwaves and winter cold snaps, or when supply is constrained by infrastructure outages (power plant or transmission) or fuel supply and cost issues, spot prices can be extremely volatile.

Annual variations in spot prices for the period 1st January through 11th February, for each of the four mainland regions. Red numbers show the average for the years prior to 2017.

Historically, South Australia has had the highest prices and Victoria the lowest. This reflects South Australia’s much higher proportion of gas in the generation mix, its proportionally larger daily and seasonal cycle between minimum and maximum demand and, arguably, competition issues. As illustrated below, peak demand in South Australia can exceed 250% of the median, compared with around 150% in Queensland. A greater relative proportion of peaking generation capacity means higher average spot prices. Competition has been a particular issue in South Australia since the closure of the Northern Power Station, as it is in Queensland.
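
The peak-to-median comparison behind that claim is straightforward to compute, as in this sketch (the half-hourly demand figures are invented stand-ins, not AEMO data):

```python
# Peak-to-median demand ratios; the demand series are invented.
import statistics

demand = {
    "SA":  [1100, 1200, 1250, 1300, 1500, 1800, 2400, 3600],  # MW
    "QLD": [5600, 5900, 6000, 6100, 6300, 6700, 7400, 9300],  # MW
}

for region, series in demand.items():
    ratio = max(series) / statistics.median(series)
    print(f"{region}: peak is {ratio:.0%} of median demand")
# A peakier region needs proportionally more rarely-used peaking
# capacity, which pushes average spot prices higher.
```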

Annual demand in South Australia and Queensland, in MW in the top panel, and as a percentage of median demand in the bottom panel. Note the recent rise in demand in QLD, due in large part to the recent commissioning of LNG plants. The bottom panel highlights the much greater daily and seasonal variability in demand in SA, where maximum demand occasionally exceeds 2.5 times the median; QLD peaks are only about 1.5 times the median. The boxes show the 25-75% quartile range, with a notch at the median. Outliers more than 1.5 times the IQR are shown by dots.

How well suited is our market?

It is important to realise that while the physical characteristics of any power system are governed by the laws of physics, the market itself is a construct – just one of many ways of matching supply and demand. As an energy-only ‘pool’, there are questions about how well our NEM is suited to providing a cost-effective, secure and environmentally acceptable energy supply. In particular, there is very little incentive for demand-side management. Moreover, the power system does not operate in isolation, and needs to be considered alongside policy settings in the gas and water markets, as well as climate policy. In the following posts in this series I intend to address some of these issues, with examples drawn from our recent experience on the NEM.

The Conversation

Disclosure

Mike Sandiford receives funding from ARC and ANLEC.

Categories: Around The Web

How drones can help fight the war on shark attacks

Mon, 2017-02-13 05:09

Following an unprecedented series of shark attacks off Australian beaches, the need to find practical solutions is intensifying.

Aerial drones could be an important tool for reducing the risk of shark attacks on our beaches within the coming years. Here’s how it would work. Drones would fly autonomously over beaches, continuously scanning for sharks with image-recognition software.

If a shark is detected, real-time video will be instantly sent to beach authorities, such as lifeguards. If it is a dangerous shark, appropriate action can be taken to ensure public safety, such as sounding alarms and clearing people from the water.
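
As a sketch of that decision loop (not the researchers’ actual software; the detector here is a stub and the species list merely illustrative):

```python
# Sketch of the detect-and-alert loop; the detector is a stub.
from dataclasses import dataclass

DANGEROUS = {"great white", "bull", "tiger"}

@dataclass
class Detection:
    species: str
    confidence: float

def stub_detector(frame):
    """Placeholder for the image-recognition model."""
    return [Detection("great white", 0.92)]

def patrol_step(frame, detector=stub_detector):
    for d in detector(frame):
        # Stream the sighting to beach authorities in real time.
        print(f"Detected {d.species} ({d.confidence:.0%}); video sent to lifeguards")
        if d.species in DANGEROUS and d.confidence > 0.8:
            print("Dangerous species: sound alarms, clear the water")

patrol_step(frame=None)
```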

Like other shark bite mitigation measures, this cannot completely eliminate the possibility of a shark attack. However, it could help to reduce the risk to an acceptable level for the majority of beach users.

Importantly, the drone-based approach to shark bite mitigation does not harm sharks or other marine wildlife, such as whales, dolphins, rays and sea turtles, unlike more controversial shark control measures such as mesh nets or baited drum lines.

A surfer has a close encounter with a great white shark, as seen by a drone.

Testing drones

As part of the NSW government’s A$16 million Shark Management Strategy, researchers from the NSW Department of Primary Industries (NSW DPI) and Southern Cross University (SCU) have demonstrated that drones can reliably detect sharks off Australian beaches.

NSW DPI researchers have also compared the costs and benefits of marine wildlife sightings between drones and helicopters, as well as established environmental conditions suitable for drones to provide effective shark detection capabilities.

This summer, a team of SCU and DPI researchers completed an intensive drone trial on five important beaches in NSW to verify that drones will work in the long term. As part of the trial, drones performed six 20-minute patrols each morning on each beach for every day of the school holidays.

Researchers monitoring drone footage spotted great white, bull, whaler, mako and hammerhead sharks off NSW beaches. They also saw many dolphins, sea turtles and less dangerous shark species, such as shovel-nosed sharks.

These trials included experiments comparing “people versus machines” by evaluating the utility of automated flight paths and shark recognition software.

Drone captures a great white shark cruising the shallows of Northern NSW.

Automating the drone-based approach

The overall objective of this research is to develop a fully automated drone-based shark surveillance system in the near future.

We envisage that a team of aerial drones could run continuous shark detection missions during the hours when most people are on our beaches.

When required, each drone will automatically take off, patrol for sharks, land itself and charge up again, ready for the next mission. If a drone detects a shark, it can alert beach authorities.

Their response will vary depending on the species of shark detected and its location. This will be immediately apparent from the live video feed and location data they receive. As well as tracking sharks, the drones will also be fitted with sirens and lights to contribute to any emergency actions.

Great white shark off a beach in Northern NSW.

Problems to solve

There are still at least five major challenges to overcome before establishing a fully functional automated drone-based shark surveillance system. But these could be gradually overcome within the next few years.

Civil aviation regulations

Aviation regulations restrict the use of fully automated drones in most airspace. We could overcome this problem by modifying the law or establishing restricted zones over beaches where drones can fly.

Public safety concerns

We need to minimise the risk of injury as a result of drone failure, by making sure their flight components are failsafe and having flight paths clear of beachgoers. We also need airspace safety systems to ensure that drones are grounded when emergency and other aircraft are in the vicinity.

Public privacy concerns

A drone-based shark surveillance system would require public acceptance. For this, beachgoers need to be aware of the sorts of data being collected by the drones, and to rest assured that this does not breach privacy legislation.

Reliable hardware

Although aerial drones can already automatically take off, fly routes, land and charge themselves, it is not clear how reliably this technology will stand up to the Australian beach environment. To be effective, we will need drones that can reliably function under heavy workloads in coastal conditions. Similarly, data transfer platforms also need to be fast and reliable.

Purpose-designed software

Image analysis software needs to be further developed to automatically detect sharks with a high level of accuracy. Customised software will also need to be developed to coordinate the missions of a team of drones and to ensure seamless video streaming to the portable wireless devices of beach authorities and users.

In terms of the hardware and software challenges, there are a number of research groups racing towards solutions with the goal of commercialising their products. Once an automated drone-based technology for shark bite mitigation is in place, it should be possible to solve issues regarding legislation, safety and privacy.

Given the current rate of technological development and the falling costs of commercially available drones, fully automated drones could be reducing the risk of shark attacks on Australian beaches within five years. However, for many nervous beachgoers, this may not be soon enough.

The Conversation

Brendan Kelaher receives funding from the NSW Department of Primary Industries for two PhD students working on shark projects.

Andrew Colefax receives project funding for his PhD from the NSW Department of Primary Industries (NSW DPI). He also receives additional work from the NSW DPI.

Paul Butcher works for NSW Department of Primary Industries. He receives funding from the NSW and Commonwealth Governments. He is an Adjunct Associate Professor at Southern Cross University.

Vic Peddemors receives funding from the NSW Government, the Australian Research Council and the Fisheries Research and Development Corporation (FRDC) on behalf of the Australian Government.

Bob Creese does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Categories: Around The Web

Why did energy regulators in South Australia deliberately turn out the lights?

Fri, 2017-02-10 13:05
High gas prices have left Adelaide's Pelican Point power station running at less than half its capacity. Peripitus/Wikimedia Commons, CC BY-SA

Last Wednesday evening, shortly after 6pm local time, around 90,000 homes and businesses in South Australia were deliberately disconnected from the electricity grid for up to an hour. In what is becoming a familiar pattern, this event provoked politicians and political actors to release a stream of claims and counter-claims about what happened and what should be done about it.

So why did it actually happen? At the start of the day, electricity was being supplied by a combination of wind power, the two interconnectors from Victoria, and a modest amount of local gas generation. As the day heated up (the temperature in Adelaide hit a maximum of 42℃), demand grew, wind generation fell away, and the volume of electricity supplied by gas generators increased rapidly.

Half-hourly total state electricity consumption reached its maximum for the day between 5.00pm and 5.30pm, by which time rooftop solar was supplying about 9% of the total. This is a very common pattern on hot days in the state.

As the sun went down, total consumption went down but solar generation went down faster. This is also very common and in theory there is more than enough capacity to meet this level of demand from gas-fired generators plus the interconnectors.

In practice, however, not all of South Australia’s gas generation was available on the day, meaning that supply was not sufficient to meet demand. This happened shortly after 6.30pm local time, not helped by the fact that the maximum temperature arrived very late in the day, boosting the demand for after-work air conditioning.

Switched off

To prevent potentially widespread damage to the entire system, which might have triggered even more widespread blackouts, the Australian Energy Market Operator exercised its authority to instruct SA Power Networks (the local “poles-and-wires” distributor) to start a series of rolling disconnections of blocks of consumers – a tactic known as “load-shedding”.

Unfortunately, although the demand was only lowered by 3%, it affected a large number of consumers. It was about 40 minutes before the underlying demand had fallen to the point where available sources of generation could supply all the electricity that was required, at which time all customers were reconnected.

There are two reasons why this was deemed necessary. First, the peak demand for grid electricity was the highest in three years. Second, the amount of gas generation available on Wednesday was about 20% less than the nominally available capacity. Had the full capacity been available, the blackouts would have easily been avoided. It is this fact that has particularly angered the South Australian government, which is once again facing political derision for failing to keep the lights on.

The largest single part of the unavailable capacity is 240 megawatts – roughly 8% of the state’s total gas generation – at Pelican Point power station. Pelican Point is the highest-efficiency, lowest-emission thermal power station in South Australia. But nearly two years ago its owner, the French multinational Engie (which also owns the Hazelwood coal station in Victoria), announced that the rising cost of gas had made it too expensive to run at full capacity. Since then Pelican Point has operated only intermittently, and never at more than half of its nameplate capacity.

What a gas

High gas prices are the direct result of the huge demand for gas by the three export LNG plants at Gladstone, in Queensland. Gas that might notionally have been used to supply electricity for South Australians is instead being shipped to customers in Asia.

Meanwhile, smaller amounts of nominally available gas-fired electricity were also offline in South Australia on Wednesday. We are unlikely to know why until the official reports on the incident are published.

More importantly, however, making more gas generation capacity available is only a short-term fix and does not seriously address the changes needed to maintain, in the words of the National Electricity Objective, a secure, reliable and affordable supply of electricity.

What kinds of changes will be required? A good starting point would be to acknowledge the role that rooftop solar is already playing in reducing peak demand for electricity from the grid. On Wednesday, the peak demand for grid-supplied electricity was about two hours later and 4% lower than it would have been if no one had solar panels.
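
A rough sketch of that calculation: add estimated rooftop solar output back onto grid demand to recover the underlying demand, then compare the two peaks. The hourly profiles below are invented for illustration, not the actual observations.

```python
# How rooftop solar shifts and shaves the grid peak (invented profiles).
hours       = [12, 13, 14, 15, 16, 17, 18, 19]
grid_demand = [2400, 2500, 2600, 2700, 2800, 2950, 2900, 2800]  # MW
solar       = [ 700,  650,  550,  450,  300,  150,   30,    0]  # MW

underlying = [g + s for g, s in zip(grid_demand, solar)]

g_peak, u_peak = max(grid_demand), max(underlying)
print(f"Grid peak: {g_peak} MW at {hours[grid_demand.index(g_peak)]}:00")
print(f"Underlying peak: {u_peak} MW at {hours[underlying.index(u_peak)]}:00")
# The grid peak arrives later and lower than the underlying peak.
```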

The need for load-shedding could have been completely avoided with the help of technologies that are already available for power consumers to reduce their own demand. For more than a decade, demand-side participation (which gives consumers more influence over the timing and quantity of their own electricity use) and direct load control (which involves reducing specific customers’ demand at certain times) have both been talked about, reported on, trialled, and instituted in only a desultory way. They have never been taken seriously by either industry participants or their regulators.

Large-scale electricity storage has emerged only recently because of significant cost reductions. These are just some of the likely components of a low-emission, 21st-century electricity supply system.

Almost the only positive action which governments have taken on these matters in recent times has been to establish the review by Chief Scientist Alan Finkel. The real test for the politicians will be whether they understand and act decisively on what Finkel and his colleagues have to say.

The Conversation

Hugh Saddler is a member of the Board of the Climate Institute.

Categories: Around The Web

Delving through settlers' diaries can reveal Australia's colonial-era climate

Fri, 2017-02-10 05:13

To really understand climate change, we need to look at the way the climate behaves over a long time. We need many years of weather information. But the Bureau of Meteorology’s high-quality instrumental climate record only dates back to the start of the 20th century.

This relatively short period makes it hard to identify what is natural climate change and what is human-induced, particularly when it comes to things like rainfall. We really need data that go further back in time.

Natural records of climate such as tree rings and ice cores can tell us a lot about pre-industrial climate. But they too need to be verified in some way, matched against some other form of data.

So, we went hunting for some. Over two years, we looked through newspapers, manuscripts, government documents and early settlers’ diaries from Sydney, Melbourne, Adelaide and Tasmania. We took thousands of photos of letters, journals, tables and graphs. We rediscovered handwritten observations from farmers, convicts, sailors and reverends across southeastern Australia, stretching all the way back to European settlement in 1788.

Rummaging around in libraries might not seem like the best way to understand what’s been happening with our climate. But weather diaries kept by dedicated observers in the 1800s are proving important for climate research.

While there are still many observations to be rescued, the records we’ve found so far have already called into question the stability of the relationship between El Niño, La Niña and rainfall in southeastern Australia.

The records

We collected 39 different sources of weather data covering 1788–1860, with continuous observations from the mid-1830s. The numbers we’ve found so far paint a dramatic picture of the weather and climate experienced by Australia’s colonial settlers.

For example, Thomas Lempriere, who ran the Port Arthur penal settlement, recorded the harsh Tasmanian winters he suffered in the 1830s. Surgeon William Wyatt in Adelaide noted heatwaves and snowfall during the 1840s. And William Dawes, Australia’s first meteorologist, diligently observed the first drought encountered by Australia’s English settlers in 1790 and 1791.

Weather diaries kept by Reverend William Clarke in Sydney in the 1840s, now at the State Library of New South Wales. Author supplied.

Connecting past and present

While the observations taken by these “weather people” are valuable insights into the climate of the past, observations made more than 150 years ago are not quite the same as those taken today. Many of the instruments were not kept in the best locations. John Pascoe Fawkner, one of Melbourne’s early settlers, even stored his thermometer in a cellar!

Differences in exposure, observation techniques and instruments also mean that it’s difficult to use these observations to quantify the exact size of the temperature change since the First Fleet arrived.

However, old weather records can still tell us a lot about year-to-year climate variations. Historical rainfall observations, for example, are less prone to large biases, because rain gauges are less complex than, say, a thermometer or barometer. By using a combination of instrumental and documentary information, we can tell the story of our climate over a much longer time scale than ever before.

Flagstaff Hill in Melbourne 1858, by George Rowe. On the right you can see the weather observer taking his daily observations on the white platform, with a rain gauge behind him. State Library of Victoria

Australia’s climate is almost manic in its ability to swing between droughts and floods. Combining our rescued weather observations with modern data from similar locations means we can see this in southeastern Australia’s rainfall over the past 170 years.

Periods of low rainfall stand out, such as the mid-1840s, the Federation Drought at the turn of the 20th century, the World War II Drought in the early 1940s, and the Millennium Drought from 1997 to 2009. There are also clear times of high rainfall, including the 1870s, 1890s and 1970s.

Rainfall, and prolonged wet and dry periods, in two regions of southeastern Australia from 1840 to 2010. Adapted from Ashcroft et al. 2016.

Most of these periods are associated with El Niño and La Niña events: dry conditions in southeastern Australia are generally linked to El Niño, while wet years often coincide with La Niña. However, this is not always the case. Previous studies have found a breakdown in the relationship in the mid-20th century, and natural palaeoclimate records suggest a similar breakdown in the early 1800s.

Understanding these periods might help us better understand how El Niño and La Niña events might change in the future. But what do the observations from the weather people say?

We compared our historical rainfall data to previous El Niño/La Niña events and found a weakening in the relationship during 1920–1940 and 1835–1850. The breakdown was especially clear in data from the southern part of our study region. This is the first time the 19th-century breakdown has been seen in Australia using instrumental data.
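
To illustrate the kind of analysis involved, here is a sliding-window correlation between rainfall and an ENSO index. Both series below are synthetic stand-ins, not the rescued observations, and statistics.correlation requires Python 3.10 or later.

```python
# Sliding-window correlation between an ENSO index and rainfall.
# Both series are synthetic stand-ins for illustration.
import random
import statistics

random.seed(1)
years = list(range(1840, 2011))
enso = [random.gauss(0, 1) for _ in years]               # stand-in index
rain = [-0.6 * e + random.gauss(0, 0.8) for e in enso]   # La Nina -> wetter

def sliding_correlation(x, y, width=21):
    """Correlation in overlapping 21-year windows."""
    return [statistics.correlation(x[i:i + width], y[i:i + width])
            for i in range(len(x) - width + 1)]

corrs = sliding_correlation(enso, rain)
print(f"Correlation ranges from {min(corrs):.2f} to {max(corrs):.2f}")
# Windows where the correlation weakens towards zero would flag
# breakdown periods like 1835-1850 and 1920-1940.
```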

The hunt continues

Of course, the next question is why? Why does the impact of El Niño and La Niña on Australian rainfall change over time? What happened in the mid-1800s? It might be El Niño’s cranky uncle, the Interdecadal Pacific Oscillation, or perhaps strange behaviour in the atmosphere around Antarctica.

We’re still not sure. But the weather observations taken by dedicated settlers more than 150 years ago are helping us answer these questions. Until then, the hunt continues.

Linden Ashcroft has received funding from the Australian Research Council.

David Karoly receives funding from the Australian Research Council Centre of Excellence for Climate System Science and an ARC Linkage grant. He is a member of the Climate Change Authority and the Wentworth Group of Concerned Scientists.

Joelle Gergis receives funding from the Australian Research Council.

Droughts and flooding rains already more likely as climate change plays havoc with Pacific weather

Thu, 2017-02-09 04:57

Global warming has already increased the risk of major disruptions to Pacific rainfall, according to our research published today in Nature Communications. The risk will continue to rise over coming decades, even if global warming during the 21st century is restricted to 2℃ as agreed by the international community under the Paris Agreement.

In recent times, major disruptions have occurred in 1997-98, when severe drought struck Papua New Guinea, Samoa and the Solomon Islands, and in 2010-11, when rainfall caused widespread flooding in eastern Australia and severe flooding in Samoa, and drought triggered a national emergency in Tuvalu.

These rainfall disruptions are primarily driven by the El Niño/La Niña cycle, a naturally occurring phenomenon centred on the tropical Pacific. This climate variability can profoundly change rainfall patterns and intensity over the Pacific Ocean from year to year.

Rainfall belts can move hundreds and sometimes thousands of kilometres from their normal positions. This has major impacts on safety, health, livelihoods and ecosystems as a result of severe weather, drought and floods.

Recent research concluded that unabated growth in greenhouse gas emissions over the 21st century will increase the frequency of such disruptions to Pacific rainfall.

But our new research shows that even the emissions cuts we have agreed to may not be enough to stop the risk of rainfall disruption from growing as the century unfolds.

Changing climate

In our study we used a large number of climate models from around the world to compare Pacific rainfall disruptions before the Industrial Revolution, during recent history, and in the future to 2100. We considered different scenarios for the 21st century.

One scenario is based on stringent mitigation in which strong and sustained cuts are made to global greenhouse gas emissions. This includes in some cases the extraction of carbon dioxide from the atmosphere.

In another scenario emissions continue to grow, and remain very high throughout the 21st century. This high-emissions scenario results in global warming of 3.2-5.4℃ by the end of the century (compared with the latter half of the 19th century).

The low-emissions scenario, despite its cuts, still results in 0.9-2.3℃ of warming by the end of the century.

Increasing risk

Under the high-emissions scenario, the models project a 90% increase in the number of major Pacific rainfall disruptions by the early 21st century, and a 130% increase during the late 21st century, both relative to pre-industrial times. The latter means that major disruptions will tend to occur every four years on average, instead of every nine.
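
As a quick arithmetic check on those figures (unit conversion only, not additional modelling): a 130% increase multiplies the pre-industrial frequency by a factor of 2.3, so

\[
f \approx 2.3 \times \tfrac{1}{9}\ \text{yr}^{-1} \approx 0.26\ \text{yr}^{-1} \approx \tfrac{1}{3.9\ \text{yr}},
\]

consistent with one major disruption roughly every four years.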

The increase in the frequency of rainfall disruption in the models arises from an increase in the frequency of El Niño and La Niña events in some models, and an increase in rainfall variability during these events as a result of global warming. This boost occurs even if the character of the sea-surface temperature variability arising from El Niño and La Niña events is unchanged from pre-industrial times.

Although heavy emissions cuts lead to a smaller increase in rainfall disruption, unfortunately even this scenario does not prevent some increase. Under this scenario, the risk of rainfall disruption is projected to be 56% higher during the next three decades, and to remain at least that high for the rest of the 21st century.

The risk has already increased

While changes to the frequency of major Pacific rainfall disruptions appear likely in the future, is it possible that humans have already increased the risk of major disruption?

It seems that we have: the frequency of major rainfall disruptions in the climate models had already increased by around 30% relative to pre-industrial times prior to the year 2000.

As the risk of major disruption to Pacific rainfall had already increased by the end of the 20th century, some of the disruption actually witnessed in the real world may have been partially due to the human release of greenhouse gases. The 1982-83 super El Niño event, for example, might have been less severe if global greenhouse emissions had not risen since the Industrial Revolution.

Most small developing island states in the Pacific have a limited capacity to cope with major floods and droughts. Unfortunately, these vulnerable nations could be exposed more often to these events in future, even if global warming is restricted to 2℃.

These impacts will add to the other impacts of climate change, such as rising sea levels, ocean acidification and increasing temperature extremes.

This research was supported by the National Environmental Science Programme and the Australian Climate Change Science Programme.

Brad Murphy, Christine Chung, François Delage, and Hua Ye do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.

A wolf in dogs' clothing? Why dingoes may not be Australian wildlife's saviours

Thu, 2017-02-09 04:57
Dingoes are often promoted as a solution to Australia's species conservation problems. Dingo image from www.shutterstock.com

Dingoes have often been hailed as a solution to Australia’s threatened species crisis, particularly the extreme extinction rate of the country’s small mammals.

But are dingoes really the heroes-in-waiting of Australian conservation? The truth is that no one knows, although our recent research casts a shadow over some foundations of this idea.

The notion of dingoes as protectors of Australian ecosystems was inspired largely by the apparently successful reintroduction of wolves into Yellowstone National Park in the United States. But Australia’s environments are very different.

Cascading species

To understand the recent excitement about wolves, we need to consider an ecological phenomenon known as “trophic cascades”. The term “trophic” essentially refers to food, and thus trophic interactions involve the transfer of energy between organisms when one eats another.

Within ecosystems, there are different trophic levels. Plants are typically near the base; herbivores (animals that eat plants) are nearer the middle; and predators (animals that eat other animals) are at the top.

The theory of trophic cascades describes what happens when something disrupts populations of top-order predators, such as lions in Africa, tigers in Asia, or Yellowstone’s wolves.

In Yellowstone, the wolves’ decline allowed herbivores, such as elk, to increase. In turn, the growing elk population ate too much of the shrubby vegetation alongside rivers, which, over time, changed from being mostly willow thickets to grassland. Then beavers, herbivores that rely on willows, went locally extinct. This in turn affected the ecology of the local streams.

Wolves play a key role in Yellowstone’s ecosystems. Wolf image from www.shutterstock.com

Without beavers to engineer dams, local waterways changed from a series of connected pools to eroded gutters, with huge flow-on effects for smaller aquatic animals and plants.

Now, the reintroduction of wolves appears to have reduced the impact of elk on vegetation, some riparian areas have regenerated, some birds have returned and there are signs of beavers coming back. That said, wolf reintroduction has not yet fully reversed the trophic cascade.

Comparing apples with quandongs

Sturt National Park, in the New South Wales outback, has been nominated as an experimental site for reintroducing dingoes. Recently, we compared the environment of Sturt with Yellowstone to consider how such a reintroduction might play out.

These regions are clearly very different. Both are arid, but that is where the similarity ends. Yellowstone has a stable climate and nutrient-rich soils, sits at high altitude and features diverse landscapes. Precipitation in Yellowstone hasn’t dropped below 200mm per year in more than a century.

Herds of bison in Yellowstone National Park. Helen Morgan

Yellowstone’s precipitation falls largely as heavy winter snow. Each spring the snowmelt flows in huge volumes into rivers, streams and wetlands across the landscape. This underpins a predictable supply of resources which, in turn, triggers herbivores to migrate and reproduce every year.

These predictable conditions support a wide range of carnivores and herbivores, including some of North America’s last remaining “megafauna”, such as bison, which can tip the scales at over a tonne. Yellowstone also has many large predators – wolves, grizzly bears, black bears, mountain lions, lynx and coyotes all coexist there – along with a range of smaller predators too.

Predators in Yellowstone can be sure that prey will be available at particular times. The environment promotes stable, strong trophic links, allowing individual animals to reach large sizes. This strong relationship between trophic levels means that when the system is perturbed – for instance, when wolves are removed – trophic cascades can occur.

Unlike Yellowstone, arid Australia is dry, flat, nutrient-poor and characterised by one of the most extreme and unpredictable climates on Earth. The yearly rainfall at Sturt reaches 200mm just 50% of the time.

Australia’s Sturt Desert has a highly unpredictable climate. Helen Morgan

Australia’s arid ecosystems have evolved largely in isolation for 45 million years. In response to drought, fire and poor soils, arid Australia has evolved highly specialised ecosystems, made up of species that can survive well-documented “boom and bust” cycles.

Unlike the regular rhythm of Yellowstone life, sporadic pulses of water and fire override the trophic interactions between species: between plants and herbivores, and between predators and their prey. Our native herbivores travel in response to patchy and unpredictable food sources in boom times. But however good the boom, the bust is certain to follow.

Unpredictable but inevitable drought weakens trophic links between predators, herbivores and plants. Individuals die from lack of water; populations shrink and can only recover when rain comes again.

Our arid wildlife is very different from Yellowstone’s too. Our megafauna are long gone. So too are our medium-sized predators, such as thylacines.

Today, arid Australia’s remaining native wildlife is characterised by birds, reptiles and small mammals, along with macropods that are generally much smaller than the herbivores in Yellowstone.

Our predators are small and mostly introduced species, including dingoes, foxes and cats. None is equivalent to wolves, mountain lions or bears, which can reach more than three times the weight of the largest dingo. Wolves are wolves, and dingoes are dogs.

Wolves in dingo clothes?

What does all this mean for Australia? Yellowstone’s stable climate means that there are strong and reliable links between predators, prey and plants. By comparison, arid Australia’s climate is dramatically unstable.

This raises the question of whether we can reasonably expect to see the same sorts of relationships between species, and whether dingoes are likely to help restore Australia’s ecosystems.

We should conduct experiments to understand the roles of dingoes and the impacts of managing them. How we manage predators, including dingoes, should be informed by robust knowledge of local ecosystems, including predators’ roles within them.

What we shouldn’t do is expect that dingoes will necessarily help Australia’s wildlife, based on what wolves have done in snowy America. The underlying ecosystems are very different.

Many people are inspired by the apparently successful example of wolves returning to Yellowstone, but in Australia we should tread carefully.

Rather than trying to prove that dingoes in Australia are just as beneficial as wolves in Yellowstone, we should seek to understand the roles that dingoes really play here, and work from there.

Helen Morgan receives funding from the Keith and Dorothy Mackay Travelling Scholarship (University of New England), the Holsworth Wildlife Endowment Trust and the Invasive Animals CRC.

Guy Ballard receives funding from the Invasive Animals Cooperative Research Centre, NSW Local Land Services and the NSW National Parks & Wildlife Service.

John Thomas Hunter does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

Crisis, what crisis? How smart solar can protect our vulnerable power grids

Wed, 2017-02-08 05:11

Some commentators seem to be worried that our electricity networks are facing an impending voltage crisis, citing fears that renewables (rooftop solar panels in particular) will threaten the quality of our power supply.

These concerns hinge on the fact that solar panels and other domestic generators can push up voltages, potentially making it harder for network companies to maintain stability across the grid. But what is less well understood (and far less reported) is the massive potential for local generation to actually improve the quality of our power, rather than hinder it.

A new report from our Networks Renewed project aims to show how technologies such as “smart inverters” can help to manage voltage at the household scale, rather than at substations. This would improve the quality of our power and flip the potential problem of household renewables into a solution.

Why all the fuss about voltage?

Electricity from our power points should be at roughly 230 volts, without deviating too far above or below. It fluctuates throughout the day, depending on how much power is being used.

Here’s an analogy: think of water flowing through pipes. The power lines are the pipes themselves, and the voltage is like the water pressure in the pipes – that is, the amount of force pushing the water (or electricity) along. Using large amounts of power causes the voltage to drop, rather like when the washing machine comes on while you’re having a shower; all of a sudden the pressure drops because other appliances are using the water too.

Pressure is also affected by how close the appliance is to the source. For instance, if your washing machine and shower were connected right at the foot of the dam, instead of at the end of several miles of pipes, you could have them both switched on and not notice a drop in pressure.

For an electrical distribution system, this means that the houses farthest away from the substation are the most susceptible to sagging (lower) voltage when large amounts of power are being used.
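
A toy calculation makes this pattern visible. The sketch below walks down a radial feeder, subtracting the resistive voltage drop across each span of line; every figure in it is invented for illustration, and none comes from a real network.

```python
# Voltage sag along a radial feeder: each span carries the current for
# every house downstream of it, so drops accumulate toward the line's end.
# All figures are illustrative, not real network parameters.

SOURCE_VOLTAGE = 230.0   # volts at the substation (nominal)
SPAN_RESISTANCE = 0.05   # ohms per span between houses (assumed)
HOUSE_CURRENT = 10.0     # amps drawn by each house (assumed)

def feeder_voltages(n_houses: int) -> list[float]:
    """Approximate voltage at each house; house 1 is closest to the substation."""
    voltages = []
    v = SOURCE_VOLTAGE
    for i in range(n_houses):
        downstream_current = (n_houses - i) * HOUSE_CURRENT
        v -= downstream_current * SPAN_RESISTANCE  # Ohm's law drop across the span
        voltages.append(v)
    return voltages

for house, v in enumerate(feeder_voltages(10), start=1):
    print(f"house {house:2d}: {v:6.1f} V")
```

Running it, the first house sits at about 225 V while the tenth sags to roughly 202 V: exactly the end-of-line droop that operators have to manage.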

Voltage management has always been an issue for grid operators, particularly in rural locations where the power lines are longer. Low voltage on long power lines often means dim and flickering lights for residents at the end of the line.

On the flip side, overvoltages can damage sensitive electronic equipment – a bit like when the water pressure pops your garden hose off the tap.

These fluctuations can become a problem for power companies when the voltage goes outside the allowable range.

How does solar power affect voltage?

Our electricity networks were not originally built for lots of local generation sources like rooftop solar panels or small wind turbines. Until recently, power has generally flowed only in one direction, from a large (usually coal-fired) power station to consumers.

The growing number of household solar panels on the network has changed this landscape, and power now flows both ways. Solar panels can make managing the grid more complex, because the voltage rises where they are generating power.

A small voltage increase is not a problem when there is enough demand for electricity. But when nobody is home in the neighbourhood, the solar power might lift the voltage beyond the upper limit.

In this case, the circuit protectors in the generator will probably trip and the solar panels will be cut off, to protect the network. This also means that the household won’t have access to (or get paid for!) the solar power it is generating.
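
A back-of-the-envelope estimate shows how export pushes the voltage up. The sketch uses the standard approximation ΔV ≈ (R·P + X·Q)/V with an assumed long rural feeder impedance and a typical +10% upper limit; none of the figures are real network values.

```python
# Rough voltage rise at a solar connection point, using the common
# approximation dV ~= (R*P + X*Q) / V. All figures are assumed.

R, X = 1.2, 0.9          # ohms: impedance of a long rural feeder (assumed)
V_NOMINAL = 230.0        # volts
V_UPPER_LIMIT = 253.0    # volts: a typical +10% upper limit on 230 V

def local_voltage(p_export_watts: float, q_vars: float = 0.0) -> float:
    """Approximate voltage with P watts exported; negative Q means absorbing vars."""
    return V_NOMINAL + (R * p_export_watts + X * q_vars) / V_NOMINAL

v = local_voltage(5000.0)  # a 5 kW system exporting at midday, low local demand
print(f"{v:.1f} V -> {'inverter trips' if v > V_UPPER_LIMIT else 'within limits'}")
```

With these assumed figures, a 5 kW export lifts the local voltage to about 256 V, past the limit, and the inverter disconnects. The X·Q term is the lever the rest of the article turns to: absorbing reactive power (negative Q) pulls the voltage back down without curtailing the solar export.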

Any customer-owned generator can affect the voltage – including solar, batteries, or diesel generators. But we tend to hear about solar because it is by far the most popular means of local generation; Australia now has more than 1.5 million homes with rooftop solar, and that figure is rising rapidly.

While some people might see this as an issue, sometimes the solution lies in the problem itself. In this case, new solar systems can offer a much more sophisticated way to manage grid voltage.

The innovation: smart inverters can control solar and batteries to help stabilise voltage on the grid.

How can solar become the solution?

Traditionally, voltage management solutions are fairly blunt, affecting tens or even hundreds of properties at a time, despite the fact that conditions might be quite different at each property. The equipment used – replete with technical-sounding names such as “on-load tap changers” and “line-drop compensators” – is expensive and is often located within transformers at substations. All of this electrical engineering kit adds to the cost of energy for customers.

However, new solar and battery systems now have the intelligence to manage voltage in a cheaper and more targeted way, through their “smart” inverters. These new technologies may provide the missing link to new renewable and reliable energy sources.

This is how it works: residential solar, batteries and other generators are connected to the grid through inverters that now have embedded IoT (internet of things) communications technology. These smart inverters allow the network to “talk” to the local generator and request support services, including through what’s called reactive power (see graphic below).

Reactive power can help to raise and lower the voltage on the network, improving the quality of our power including the voltage stability. For more technical detail see our newly released report on the potential for smart inverters to help manage the grid.

Smart inverters can export or absorb both real and reactive power.
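
To make this concrete, here is a minimal sketch of a generic volt-VAR droop curve, the kind of rule a smart inverter can follow. The setpoints and rating are illustrative assumptions, not the settings used by the project partners.

```python
# Generic volt-VAR droop: inject reactive power when local voltage is low,
# absorb it when voltage is high, and do nothing in a dead band around
# nominal. Setpoints and the rating below are illustrative only.

Q_MAX = 2000.0  # vars the inverter can inject or absorb (assumed rating)

def volt_var_setpoint(v: float) -> float:
    """Reactive power command in vars: positive injects (raises voltage),
    negative absorbs (lowers voltage)."""
    if v <= 216.0:
        return Q_MAX                       # deep undervoltage: full injection
    if v <= 225.0:
        return Q_MAX * (225.0 - v) / 9.0   # ramp injection down to zero
    if v <= 235.0:
        return 0.0                         # dead band around nominal
    if v <= 244.0:
        return -Q_MAX * (v - 235.0) / 9.0  # ramp absorption up
    return -Q_MAX                          # deep overvoltage: full absorption

for v in (212.0, 222.0, 230.0, 240.0, 250.0):
    print(f"{v:5.1f} V -> {volt_var_setpoint(v):+7.0f} var")
```

The dead band stops the inverter from fighting normal fluctuations, and the linear ramps mean the response scales smoothly with how far the voltage has strayed from nominal.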

All this is only possible if network businesses are open to new, proactive ways of operating - as demonstrated by our Networks Renewed project partners United Energy in Victoria and Essential Energy in New South Wales.

This means a shift in thinking from the traditional passive customer model – we deliver energy to you! – to a more dynamic and collaborative one in which customers can actually help to manage the grid as well as using and generating power.

Sure, transitioning an entire energy system is no mean feat, but it offers an opportunity to build a better, more resilient electricity system that includes more renewable energy.

If we are smart, we will not need to trade off our climate impact against the dependability of our electricity system. We just need to be open to new ways of solving old problems.

The Institute for Sustainable Futures (ISF) at the University of Technology Sydney undertakes paid sustainability research for a wide range of government, NGO and corporate clients, including energy businesses. The Networks Renewed project is funded by the Australian Renewable Energy Agency (ARENA) and the NSW and Victorian state governments, in partnership with Essential Energy, United Energy, Reposit Power, SMA Australia, and the Australian PV Institute. Lawrence McIntosh is also a partner at PV Lab Australia, a solar panel quality assurance business, and serves as the part time Principal Executive Officer of SolarShare, a community owned solar project in Canberra, ACT.

Dani Alexander is a member of the Institute for Sustainable Futures (ISF), which undertakes paid sustainability research for a wide range of government, NGO and corporate clients, including energy businesses.

Australia's universities are not walking the talk on going low-carbon

Wed, 2017-02-08 05:11
Australia's universities are great at green innovation, but not so good at going low-carbon themselves. PrinceArutha/Wikimedia Commons, CC BY-SA

Australian universities have a proud tradition in researching, teaching and advocating the science of climate change. The famous statistic that 97% of climate scientists agree that humans are altering the climate is courtesy of researchers at the University of Queensland. Nine of the nation’s 43 universities have been ranked “well above world standard” in environmental science, and many of the leading public voices on climate policy – such as Ross Garnaut, Will Steffen and Tim Flannery – are university professors.

The science these universities (and many others around the world) have produced is very clear. Keeping average global temperatures within 2℃ of pre-industrial levels, as per the Paris climate agreement, will require cutting emissions of carbon dioxide (and other long-lived greenhouse gases) by 40-70% from 2010 levels by 2050, and reaching near-zero emissions by 2100 (see section 3.4 here).

What’s less clear is what Australian universities are actually doing about it in practical terms. Universities exist to do three things: teach, research and engage. Climate change permeates all three endeavours, and these days many academics have lost any previous reticence about expressing forthright views on political questions such as the government’s emissions targets or renewable energy policies.

Anyone who followed Australian politics during Tony Abbott’s years as opposition leader and then prime minister will recall the fierce debates over the carbon tax, direct action, and the axing of the Climate Commission. Those with good memories will remember the furious argument that erupted around the Australian National University’s decision to divest from seven resources companies.

Universities clearly know what the science says and what society needs to do about it. But it is evidently easier to say what needs to be done than to do something about it. This contrast between words and actions is shown clearly by Australian universities’ collective response to climate change.

Promises, promises

Of the 43 Australian universities, three (RMIT, UTS and CSU, the last of which remains Australia’s only carbon-neutral university) have committed to absolute reductions in carbon emissions. A further 12 have pledged to reduce carbon emissions but have sprinkled their commitments with riders, such as reducing emissions per “gross floor area”, which would allow total emissions to grow as the university expands and is inconsistent with the need to cut carbon in absolute terms.

To compile these data, I looked at all Australian universities’ 2015 annual reports, forward-looking corporate strategies, and historic mission-based compacts (performance agreements with the Commonwealth). Clearly, it is possible for universities to have a carbon target that is not mentioned in these reports, but my logic is that these documents give a clear picture of the organisation’s priorities and spending.

Worryingly, 11 universities make no mention at all of carbon-reduction policies anywhere in these documents.

The picture is no rosier for those nine universities (ANU, Griffith, JCU, Macquarie, Canberra, Melbourne, Queensland, UTS and UWA) whose environmental science has received the highest rating. Only Melbourne and Queensland mention carbon in their corporate strategies; the other seven are silent.

The same is true for 10 of the 12 universities whose researchers were involved in compiling the Intergovernmental Panel on Climate Change’s landmark Fifth Assessment Report. And if it’s not in the strategy it seems unlikely to be a priority for the university.

Ten Australian universities consume enough energy to be required to publish their emissions data under the National Greenhouse and Energy Reporting Act 2007. Data from the Clean Energy Regulator show that their emissions increased by 4.6% between 2010-11 and 2014-15.

Lead by example

This poses two tricky questions for universities. First, why don’t universities act more decisively on the implications of their own climate research, while they are urging society to do so? Second, in a networked economy where knowledge is king, how will universities manage to partner with businesses to drive down greenhouse emissions, if they can’t even successfully do it themselves?

Universities are not short of funds to demonstrate how to build a low-carbon future, but they are short of partners. Currently Australian universities are at the bottom of the OECD’s rankings for fostering business partnerships and innovation. Yet the opportunities are there.

My analysis of universities’ 2015 reports shows that universities have committed to spending more than A$1.5 billion in property, plant and equipment capital works during 2016 alone (2016 annual reports have not yet been released). For comparison, the Australian Research Council awarded less than A$100 million between 2011 and 2013 for universities to research the built environment and design, meaning that it would take the ARC 50 years to match what universities spent on their own property in 2016.
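
A quick check of that arithmetic: less than A$100 million over three years averages at most roughly A$33 million a year, so

\[
\frac{A\$1.5\times10^{9}}{A\$33\times10^{6}\ \text{yr}^{-1}} \approx 45\ \text{years},
\]

which rounds to the 50-year figure.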

Yet in spite of this huge outlay, only eight universities have committed to using their campuses as “living labs” to apply their research or to help deliver teaching and research in this field.

All universities talk of the need to forge external partnerships with government, communities and business. Yet looking at the detail, just 17 universities – fewer than half – have committed to working collaboratively across their own internal divisions. It should be no surprise that universities are so poor at partnering with external organisations if they can’t manage it within their own organisations.

Evidence-based spending?

All of this suggests that most Australian universities are failing to take proper account of their own climate science in choosing how to run themselves. Remarkably, 25% of universities do not mention greenhouse emissions anywhere in their public reports, corporate strategies or mission-based compacts.

Fewer than 20% of Australian universities are using their campus development to deliver teaching and research outcomes, or as a living lab to innovate. Only one university has committed to doing this in the future.

Yet meanwhile, universities have spent more than A$1.5 billion during 2016 (according to their 2015 annual reports) on their built environments. If this infrastructure spend is not used also to drive teaching and research outcomes, or to showcase how to adopt research, then it is being spent inefficiently.

If this money is being spent in a way that doesn’t help Australia hit its climate targets, or help the world live up to the Paris Agreement, then this spending is not evidence-based. And if spending and research are not evidence-based, we really do need to worry about what tomorrow brings.

This article is based on a presentation given at the World Renewable Energy Congress in Perth on February 6.

Universities Australia deputy chief executive Catriona Jackson responds:

Australia’s universities have a wide range of energy-saving and lower-carbon initiatives.

There are a significant number of projects and programs in place across the Australian university sector aimed at greater sustainability. Many of those initiatives have been recognised through programs such as the Green Gown Awards.

But one of the challenges for universities in modernising facilities to meet higher environmental standards is having an ongoing source of infrastructure funding.

That’s yet another reason why we’re strongly against the closure of the $3.7 billion Education Investment Fund, which has funded major building works on Australia’s university campuses.

If we want smarter buildings and cleaner technology – let alone cutting-edge research and teaching facilities – an infrastructure fund is vital.

Mike Burbridge receives a PhD scholarship from the Co-operative Research Centre for Low Carbon Living and is currently a PhD student at Curtin University.

The environment needs billions of dollars more: here's how to raise the money

Tue, 2017-02-07 05:15
Australia: there's a lot of it to look after. Thomas Schoch/Wikimedia Commons, CC BY-SA

Extinction threatens iconic Australian birds and animals. The regent honeyeater, the orange-bellied parrot, and Leadbeater’s possum have all entered the list of critically endangered species.

It is too late for the more than 50 species that are already extinct, including bettongs, various wallabies, and many others. Despite international commitments, policies and projects, Australia’s biodiversity outcomes remain unsatisfactory.

A 2015 review of Australia’s 2010-2050 Biodiversity Conservation Strategy found that it has failed to “effectively guide the efforts of governments, other organisations or individuals”.

Insufficient resourcing is one cause of biodiversity loss, and the challenge is immense. Australia must tackle degradation and fragmentation of habitat, invasive species, unsustainable use of resources, the deterioration of the aquatic environment and water flows, increased fire events, and climate change.

This all requires money to support private landholders conducting conservation activities, to fund research, to manage public lands, and to support other conservation activities conducted by governments, industry, and individuals.

So where can we find the funds?

How much money is needed?

We have estimated that Australia’s biodiversity protection requires an equivalent investment to defence spending – roughly 2% of gross domestic product.
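
In round numbers, taking Australia’s GDP at about A$1.6 trillion (a mid-2010s figure, assumed here purely for illustration), that works out to:

\[
0.02 \times A\$1.6\times10^{12} \approx A\$32\ \text{billion per year}.
\]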

Of course, such estimates are up for debate, given that how much money is required depends on what we want the environment to look like, which methods we use, and how well they work. Other studies (see also here and here) point to a similar conclusion: far more money is needed to achieve significantly better outcomes.

Apart from government funding, private landholders, businesses, communities, Indigenous Australians, and non-government organisations contribute significantly to natural resource management. We were unable to quantify their collective cash and in-kind contributions, as the information is not available. But we do know that farmers spend around A$3 billion each year on natural resource management.

Nonetheless, the erosion of environmental values indicates that the level of spending required to meet conservation targets far exceeds the amount currently being spent. The investment required is similar to the value of agriculture in Australia.

Conservation doesn’t come cheap. JJ Harrison/Wikimedia Commons, CC BY-SA

Unfortunately, the concentration of wealth and labour sets a limit to what any given community can pay.

Despite a high GDP per person and very wealthy cities, Australia has fewer than 0.1 people per hectare and a wealth intensity (GDP per hectare) of less than US$2,000 due to the sparse population and income of rural Australia.
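
Those figures follow from rough numbers: Australia spans about 770 million hectares, with roughly 24 million people and a GDP near US$1.2 trillion, so

\[
\frac{24\times10^{6}}{7.7\times10^{8}\ \text{ha}} \approx 0.03\ \text{people per ha}, \qquad
\frac{\mathrm{US}\$1.2\times10^{12}}{7.7\times10^{8}\ \text{ha}} \approx \mathrm{US}\$1{,}600\ \text{per ha}.
\]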

Australia’s rural population has declined sharply, from over 18% in 1960 to around 10% today. Other countries (for example in Europe) are not limited to the same degree. Even China has a greater rural resource intensity than Australia.

Rural incomes are often volatile, but environmental investments need to be sustained. The history of Landcare highlights that private landholders have struggled to secure a reliable investment basis for sustainably managing the environment.

Can government pay what is required?

If Australia is serious about the environment, we need to know who will pay for biodiversity protection (a public good). This is especially true given that it is not feasible for rural (particularly Indigenous) landholders and communities to invest the required amount.

Will government be the underpinning investor? The federal government’s current spending program on natural resource management was initiated in 2014 with an allocation of A$2 billion over four years.

This was split between the second National Landcare Program, the (now-defunded) Green Army, the Working on Country program, the Land Sector Package, the Reef 2050 plan, the Great Barrier Reef Foundation, and the Whale and Dolphin Protection Plan.

As well as federal funding, the state, territory, and local governments invest in public lands, bushfire mitigation, waste management, water management, environmental research and development, biodiversity programs, and environmental policies. Local and state government departments together spend around A$4.9 billion each year on natural resource management.

The problem is that government spending on natural resource management cannot be significantly increased in the near future, due to fiscal pressures and the focus on reducing budget deficits.

Show us the money

At a time when Australia is reconsidering many aspects of its environmental policies, we should address the strategy for funding natural resource management.

It should be possible to leverage more private spending on the environment, preferably as part of a coordinated strategy. Diverse, market-based approaches are being used around the world.

For example, we could use market instruments such as biodiversity banking to support landholders in protecting biodiversity.

Taxation incentives, such as a generous tax offset for landholders who spend money on improving the environment, can be a very powerful catalyst and could be crucial for meeting environmental investment needs.

Evidence suggests that integrating a variety of mechanisms into a coordinated business model for the environment is likely to be the most efficient and effective approach. But this will not happen unless Australia faces the fiscal challenge of sustainability head-on.

Australia needs an innovative investment plan for the environment. By combining known funding methods and investment innovation, Australia can reduce the gap between what we currently spend and what the environment needs.

Without a more sophisticated investment strategy, it is likely that Australia will continue on the trajectory of decline.

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
