
The Key Principles and

Defining Features of
NABERS

VERSION 1.0

September 2014
Cover photo: Perth Skyline from Kings Park. Photographed by Neale Cousland, downloaded from Shutterstock
under a standard license.

This document was prepared by Common Capital Pty Ltd for the Office of Environment and Heritage in its role as
NABERS National Administrator.

The Office of Environment and Heritage (OEH) has compiled this document in good faith, exercising
all due care and attention. OEH shall not be liable for any damage which may occur to any person or
organisation taking action or not on the basis of this publication. Readers should seek appropriate
advice when applying the information to their specific needs. This document may be subject to
revision without notice and readers should ensure they are using the latest version.

Published by
Office of Environment and Heritage
59 Goulburn Street
PO Box A290
Sydney South NSW 1232

Ph: (02) 9995 5000 (switchboard)


Ph: 131 555 (environment information and publications requests)
Fax: (02) 9995 5999
TTY: 133 677 then ask for 131 555
Speak and Listen users: 1300 555 727 then ask for 131 555

Email: nabers@environment.nsw.gov.au

Website: www.nabers.gov.au

OEH 2014/0710

September 2014

© 2014 State of NSW and Office of Environment and Heritage


EXECUTIVE SUMMARY

The NABERS program has been very successful in Australia, with more than 70% of the national
office market using NABERS Energy to measure and promote their energy and greenhouse efficiency.
Combined, rated buildings saved more than 380,000 tonnes of greenhouse gas emissions and 1.6
million litres of water in 2012/13 alone.

The success of the NABERS Energy for offices tool has driven demand for an expansion of the
NABERS methodology to cover different building types and environmental issues, and beyond
Australian borders. In 2013 the first international adaptation of NABERS was launched in NZ.

Central to all NABERS ratings are a set of key principles that guide decisions on the design,
development and delivery of rating programs. While individual ratings may use different metrics and
approaches based on the issue at hand, they have common goals when it comes to the design of the
metrics, the method of measuring ratings, and the governance of the program.

Drawing from the evolution of NABERS over time, this paper identifies the key principles that underpin
the NABERS tools, and considers how these principles have been applied to shape the defining
features of NABERS. Further, it sets out a template to guide the development of comparable rating
tools in Australia and beyond.

The key principles of NABERS are divided into metrics, including the calculation methodologies and
rating scales that make up a NABERS rating; methods, including the system for managing the rating
process, rules and quality assurance; and governance, including responsibilities for the oversight of
the scheme, and stakeholder engagement principles. These are outlined below.

Metrics
  Key principle 1: NABERS measures actual impact, not intent
  Key principle 2: NABERS is relevant to building operations
  Key principle 3: NABERS ratings are meaningful
  Key principle 4: NABERS ratings are simple and easy to perform

Method
  Key principle 5: NABERS ratings are reliable

Governance
  Key principle 6: NABERS management is trustworthy
  Key principle 7: NABERS development is collaborative

Table 1 NABERS key principles

The intent of this paper is not to provide an exhaustive list of NABERS design features; rather, it aims
to describe the features that define and differentiate NABERS ratings and explain the thinking behind
them. It includes a consideration of the core elements of the program design and a range of features
that must be present for ratings in the NABERS family.

A ready reckoner is attached below for easy reference to the key design features that need to be
considered during the rollout of NABERS ratings.



READY RECKONER FOR THE DESIGN OF NABERS RATINGS

CORE FEATURES – the proposed rating:

☐ Relies upon externally validated performance data (eg energy bills)
☐ Does not give credit to design intent or management strategies
☐ Is a benchmark of comparable buildings
☐ Adjusts for operating conditions to measure real efficiency
☐ Is aligned with responsibilities within the building
☐ Measures building rather than district efficiency
☐ Reflects the range of building performance
☐ Measures substantive environmental impacts from the building
☐ Is implemented in a suitable market
☐ Is low cost
☐ Is assessed using public, plain English rules
☐ Is delivered by competent individuals
☐ Is subject to both random and targeted audits
☐ Is informed by industry experts

DESIRABLE FEATURES – the rating has also considered whether it:

☐ Is based on a full year of data, and is valid for 12 months
☐ Is based on measured environmental impact not consumption data
☐ Is measured in a familiar device
☐ Uses existing data where possible
☐ Has coverage matching market standards
☐ Is measured by assessors that are independent of the rated building
☐ Is administered independently of industry
☐ Works closely with industry partners to complement other approaches



TABLE OF CONTENTS

Executive Summary
Ready reckoner for the design of NABERS ratings
Introduction
The evolution of NABERS principles
The NABERS key principles and defining features
   Principle 1: NABERS measures actual impact, not intent
      Feature 1.1 NABERS ratings rely upon externally validated performance data
      Feature 1.2 NABERS ratings are based on a full year of data, and are valid for 12 months
      Feature 1.3 NABERS ratings do not give credit to design intent or management strategies
      Feature 1.4 NABERS ratings are based on measured environmental impact rather than consumption data
   Principle 2: NABERS is relevant to building operations
      Feature 2.1 NABERS is a benchmark of comparable buildings
      Feature 2.2 NABERS adjusts for operating conditions to measure real efficiency
      Feature 2.3 NABERS ratings are aligned with responsibilities within the building
      Feature 2.4 NABERS measures building rather than district efficiency
   Principle 3: NABERS ratings are meaningful
      Feature 3.1 NABERS ratings are measured in a familiar device
      Feature 3.2 NABERS rating scales reflect the range of building performance
      Feature 3.3 NABERS measures substantive environmental impacts from the building
      Feature 3.4 NABERS ratings are implemented in suitable markets
   Principle 4: NABERS ratings are simple and easy to perform
      Feature 4.1 NABERS ratings use existing data where possible
      Feature 4.2 NABERS rating coverage matches market standards
      Feature 4.3 NABERS ratings are low cost
   Principle 5: NABERS ratings are reliable
      Feature 5.1 NABERS ratings are assessed using public, plain English rules
      Feature 5.2 NABERS ratings are delivered by competent individuals
      Feature 5.3 NABERS ratings are subject to random and targeted audits
      Feature 5.4 NABERS assessors should be independent of the rated building
   Principle 6: NABERS management is trustworthy
      Feature 6.1 NABERS is administered independently of industry
   Key principle 7: NABERS development is collaborative
      Feature 7.1 NABERS development and delivery is informed by industry experts
      Feature 7.2 NABERS works closely with industry partners to complement other approaches
Conclusion



INTRODUCTION

The vision of the National Australian Built Environment Rating System (NABERS) is to “support a
more sustainable built environment through a relevant, reliable and practical measure of building
performance”1. The program has certainly been successful in driving improved environmental
efficiencies from commercial buildings in Australia. The NABERS Energy for offices tool has had a
particularly strong influence on the market. In 2013 alone, re-rated office buildings achieved a
reduction of 380,000 tonnes of greenhouse gas emissions and saved
1.6 million litres of water. More than 70% of the national office market has now used NABERS to
measure and communicate their energy efficiency and emissions.2

This success has driven demand for an expansion of the NABERS methodology to cover different
building types and environmental issues, and beyond Australian borders. In 2013 the first
international adaptation of NABERS was launched in NZ.

This paper documents the key principles that underpin NABERS tools, and is intended as a template
to guide the development of comparable tools in Australia and beyond. Each key principle leads to a
number of defining features that should be present for any NABERS tool.

The key principles of NABERS cover the principles behind NABERS metrics, including the calculation
methodologies and rating scales that make up a NABERS rating; the method of obtaining NABERS
ratings, including the systems for managing the rating process, rules and quality assurance; and the
governance structure for NABERS management, including responsibilities for the oversight of the
scheme, and stakeholder engagement principles.

The paper first outlines how the key NABERS principles have evolved since the program's
establishment, showing that NABERS as we know it has been shaped over time by the maturity
and growth of both the program and the property market. It then expands on the key
principles themselves, and shows how scheme administrators have interpreted and applied these
principles to create the features that define NABERS.

THE EVOLUTION OF NABERS PRINCIPLES

The National Australian Built Environment Rating System (NABERS) was originally launched as the
Australian Building Greenhouse Rating (ABGR) system in 1999. Many of the key principles supporting
NABERS ratings have been in place since the inception of the ABGR, with some evolution over time
as the scheme has matured and evolved.

The scheme has evolved over five main phases, reflecting the differing needs of customers and
stakeholders over time. Each phase has therefore been characterised by the evolution of core
principles that define NABERS. Throughout this period, the overarching goal of NABERS (and
predecessor ratings) has been to provide a system that will encourage building owners and occupants
to take steps to reduce their environmental impact.

1 NABERS 2013, NABERS Strategic Plan 2013-2018, p8
2 NABERS 2012-13 Annual Report



Phase 1: focus on relevance and practicality (1999 – 2001)

In 1998 the NSW Government determined that efforts to reduce greenhouse gas emissions should
include measures for non-residential buildings, and commissioned a review of international
approaches to inform the development of a local program. The review “Rating energy efficiency of
non-residential buildings: a path forward for New South Wales” sets out the founding principles of what
would become the Australian Building Greenhouse Rating scheme.

The initial phase in the evolution of the NABERS program focused on designing and implementing a
rating system to benchmark the environmental impact of buildings that was performance rather than
design based; that was relevant to Australian office buildings; and that provided meaningful
information. The initial NABERS design and delivery process was very customer focused, with close
engagement helping to shape the final metrics. However, the NABERS method and governance
structures were less well defined in this early stage.

Key principle 1: NABERS measures actual impact, not intent
Key principle 2: NABERS is relevant to building operations
Key principle 3: NABERS ratings are meaningful
Key principle 4: NABERS ratings are simple and easy to perform

Figure 1 NABERS key principles - phase one

Phase 2: focus on reliability (2002 – 2005)

The early rollout of ABGR was delivered by a small pool of engineers working under contract to the
Government. With little documentation to guide them, these engineers established their own methods
for gathering and applying information to the rating, which affected the consistency of rating results.

To address these issues and respond to a rapid increase in demand for ratings, in 2002 the ABGR
scheme expanded significantly to incorporate independent Accredited Assessors accompanied by a
well-documented set of rules, training and quality assurance checks. As the scheme became
available across Australia, a new National Steering Committee was also established to advise on the
future direction of the program. This codified many of the key principles that were developed during
the first phase, and introduced a new key principle; that NABERS ratings should be credible and
reliable.3

Key principle 5: NABERS ratings are reliable

Figure 2 NABERS key principles - phase two

Phase 3: focus on expansion (2006 – 2010)

In 2005 the NSW Government was awarded a tender to commercialise the NABERS program, which
had been developed until that point by the Australian Commonwealth Government. NABERS was
intended to apply the successful ABGR model to a range of environmental impacts and buildings
types.

The NSW Government developed criteria to assess the suitability of each of the newly developed
metrics under a broader environmental rating system as described in Table 2 below.

3 NABERS 2006, National Australian Built Environment Rating System National Steering Committee Terms of Reference



Relevance: The objective of NABERS for offices is to provide a framework for improvement in the
environmental performance of office buildings. NABERS revolves around a benchmark of the actual
environmental performance of offices. NABERS must be independent, credible and consistent. To
drive improved performance, it must be relevant to the intended market and easily understood.

Realism: NABERS must provide a realistic rating scale that recognises and rewards current
performance levels, and encourages and promotes best practice.

Practicality: NABERS must be easy to use. It should make use of data currently collated by building
owners and managers and complement existing management practices. If there are environmental
issues affecting an existing office building for which data is not currently collated, there should be
broad consultation to determine the best method for gathering additional data. Consideration should
be made of a transition to better management practices. The cost of a NABERS rating must be
reasonable, and the rating should be able to be conducted in a reasonable time frame with minimal
resources.

Table 2 Criteria for NABERS metrics4

These principles guided an overhaul of the newly developed metrics, and the development of a
number of new NABERS metrics in the following years. NABERS ratings are now in place to measure
energy, water, waste and indoor environment quality in office buildings, energy and water performance
in hotels and shopping centres, and the energy performance of data centres.

Phase 4: focus on governance (2010 – 2013)

In 2010 the Australian Government introduced the Building Energy Efficiency Disclosure Act (2010),
compelling building owners to disclose a NABERS Energy rating for their premises at the point of sale
and lease.

The introduction of mandatory NABERS ratings led to increased scrutiny from industry regarding
the governance and management of the scheme, including separate reviews by two industry bodies5,6.
The national rollout of NABERS had been administered and driven to that point by the NSW
Government with support from other state governments. To provide certainty that the administration
of the scheme would pay adequate attention to issues throughout Australia, the Council of Australian
Governments committed to the “enhancement of the national governance framework of NABERS
Energy”7. This was to be achieved by installing the Australian Government as chair of the NABERS
National Steering Committee (in place of the NSW Government) and clarifying that the NSW
Government was to develop and deliver NABERS “under the direction of” the National Steering
Committee. Alongside these changes, the National Steering Committee formed a new Stakeholder
Advisory Committee to improve industry participation in NABERS governance processes.

4 Department of Energy, Utilities and Sustainability 2005, NABERS Office Trial, NSW Government, November 2005, pp.4-5
5 AIRAH 2012, Industry Survey Report – NABERS rating tools
6 The Fifth Estate 2013, "PCA calls for NABERS overhaul. Again", September 2013
7 Council of Australian Governments 2009, National Strategy on Energy Efficiency, p24



The Stakeholder Advisory Committee meets alongside the National Steering Committee and provides
advice on all major scheme decisions.

The key principles for the governance of NABERS were tested during this phase, and key principles
for scheme management were informed by a series of reviews and recommended changes. In
essence, NABERS governance structures ensure that the management of the scheme is trustworthy,
and that NABERS development is collaborative. These basic principles had largely been in place
since the outset of the rating system, but the scheme required a renewed focus and revision of
activities to test that its administration delivered on these principles.

Key principle 6: NABERS management is trustworthy


Key principle 7: NABERS development is collaborative

Figure 3 NABERS key principles - phase four

Phase 5: focus on consolidation and integration (2013 – present)

Following these reviews of the appropriate administration and purpose of NABERS, the scheme is
now in a period of consolidation. The NABERS Strategic Plan 2013-2018 summarises many of the
key principles noted above, and sets out a broad range of objectives and actions to enhance the
system over the next five years. According to the Strategic Plan, NABERS provides a “a trusted,
reliable and easy-to-use metric of the actual environmental performance of a building that compares
buildings on a like-for-like basis” (key principles 1, 3-6), “a star rating scale that recognises and
rewards current performance levels, and encourages and promotes best practice” (key principle 2), “a
common language through which industry and government can communicate and improve building
sustainability” (key principle 3), and “an independent benchmark to support industry and government
decision making.” (key principles 6, 7).8

THE NABERS KEY PRINCIPLES AND DEFINING FEATURES

The key principles of NABERS set the overall direction of the scheme and inform decisions by
administrators on scheme content and management. During design and delivery, each of these
principles translates into a number of features that define NABERS.

The defining features of NABERS may vary from location to location, and between environmental
issues. This paper identifies the core NABERS features and analyses which of them must form a part
of any NABERS tool, and which features are ideally present but may not be possible for every
NABERS tool depending on factors such as market conditions or data availability.

PRINCIPLE 1: NABERS MEASURES ACTUAL IMPACT, NOT INTENT

The initial focus of the ABGR scheme was the development of a rating system focused on “actual
energy consumption and greenhouse gas emissions of buildings (rather than design intent)".

8 NABERS 2013, NABERS Strategic Plan 2013-2018, p3



This was considered important as "energy-efficient designs do not perform at the level claimed (often
because of changes made during construction, poor commissioning, etc), and that energy consumption
varies over time, depending on maintenance and usage."9 All NABERS ratings have followed this key
principle.

As NABERS is outcomes based, it is also technology neutral. By relying solely on measured results,
there are no prescriptive methods to improve ratings and no “picking winners” – any initiative that
actually reduces the measured impact of the building will also improve the rating. This increases the
longevity and simplicity of the rating, and means the market can rely on NABERS ratings as a true
measure of environmental performance rather than a statement of good intentions.

FEATURE 1.1 NABERS ratings rely upon externally validated performance data

This feature must be present for ratings covering utility provided services (energy, water), or where
other reliable third party data is available.

Externally validated data is less susceptible to manipulation or selective inclusions that would impact
on the rating results. This principle extends to all data obtained for a NABERS rating, not only to utility
data. For example, data such as occupancy may be validated by signed agreements between
landlords and tenants, and leased area in a building validated through surveyor certification.

Using external data improves the reliability of ratings and makes use of existing infrastructure,
reducing overall rating cost and complexity. During the design of a NABERS rating, acceptable data
sources for the rating must be considered with external validation preferred in each case.

NABERS ratings should also be based on measured rather than estimated data. Where estimates are
necessary due to missing data (for example, energy metering or retailer changes), clear principles
should be developed to ensure a consistent estimation approach. These estimates should not cover
the start or end of a twelve-month rating period to avoid overly significant impacts on the final rating.
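As one illustration of a consistent estimation approach, the sketch below pro-rates the average daily
consumption of the adjacent, fully measured billing periods across a metering gap. This is a simplified
assumption for illustration only; the function name, data structure and method are hypothetical and
are not the official NABERS estimation rules.

```python
from datetime import date

def estimate_missing_kwh(gap_start: date, gap_end: date,
                         adjacent_periods: list[tuple[date, date, float]]) -> float:
    """Estimate consumption for a metering gap by pro-rating the average daily
    consumption of the adjacent, fully measured billing periods.

    adjacent_periods: (period_start, period_end, measured_kwh) tuples.
    """
    measured_days = sum((end - start).days for start, end, _ in adjacent_periods)
    measured_kwh = sum(kwh for _, _, kwh in adjacent_periods)
    daily_average = measured_kwh / measured_days
    return daily_average * (gap_end - gap_start).days

# Example: a one-month gap between two quarterly bills.
estimate = estimate_missing_kwh(
    date(2013, 4, 1), date(2013, 5, 1),
    [(date(2013, 1, 1), date(2013, 4, 1), 90_000.0),
     (date(2013, 5, 1), date(2013, 8, 1), 95_000.0)],
)
print(f"Estimated consumption for the gap: {estimate:.0f} kWh")
```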

Feature 1.2 NABERS ratings are based on a full year of data, and are valid for
12 months

This feature is a desirable element of NABERS ratings, and is mandatory if metering systems are in
place for the rated element.

A twelve-month rating cycle has a number of benefits. A full year of data ensures that seasonal
effects do not affect the final rating. A validity of 12 months ensures that published ratings are recent
and are a trustworthy indicator of current building performance. The twelve-month rating and renewal
cycle means that if a building wishes to retain a NABERS rating, its performance is being monitored
at all times – there are no "slack years" in which performance is not measured and may consequently
be left to decay.

A longer data collection period (say a three-year cycle) would give a more accurate long-term picture
of building performance, but would mean that performance improvements have a dampened effect on
the rating until the cycle is completed and data from before the improvement is no longer counted. It
would also preclude newer buildings from being rated for a lengthy period.

However, it may not be practical to gather twelve months of data for the measured issue. Where the
infrastructure exists and the data is already collected – such as energy or water bills – the rating must
be based on the full twelve months of data.

9 Pears, A. 1998, Rating energy efficiency of non-residential buildings: a path forward for New South Wales, Sustainable Solutions, p7



When developing metrics and measurement standards for other issues (such as waste or indoor
environment), the developers should consider whether the necessary infrastructure could be
realistically implemented, and seek to transition the market where possible. Point-in-time
measurements should be used only as a transitional strategy, or for the measurement of issues that
will not have a significant impact upon the final rating.

Feature 1.3 NABERS ratings do not give credit to design intent or management
strategies

This feature must be present for all NABERS ratings. All NABERS ratings must be based on
measured environmental impacts, not on the method for reducing environmental impacts.

Complementary services may be developed within the NABERS suite to support improved
environmental performance. In Australia this has included Commitment Agreements, under which
newly designed buildings may commit to achieve a particular NABERS rating when operational, and
use this target to inform design decisions; diagnostic tools which help building owners or occupants
analyse their rating to better understand where to focus their improvement efforts; and other
documents and reports that support improved management strategies and designs.

This suite of complementary services seeks to improve understanding of the building in operation, and
supports real reductions in environmental impact. For example, rather than allow continued promotion
of “design” ratings showing the intended performance of the building throughout its operational life,
NABERS Commitment Agreements expire when the first NABERS rating for the building is published.
This ensures that the market understands and can value the actual performance of the building, rather
than the design intent.

Feature 1.4 NABERS ratings are based on measured environmental impact rather than consumption data

This feature is a desirable element of NABERS ratings but is not mandatory. For example, the
NABERS Energy rating tool is based upon the greenhouse gas emissions arising from the energy
consumption in a building. This means that the ratings are directly relevant for environmental
reporting and better reflect community concern about environmental impacts rather than raw energy
consumption information.
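A minimal sketch of this idea follows: metered consumption is converted to a single greenhouse gas
figure using fuel-specific emission factors. The factors shown are illustrative placeholders only, not
the coefficients used by NABERS Energy.

```python
# Illustrative emission factors (kg CO2-e per kWh); placeholders, not NABERS values.
EMISSION_FACTORS_KG_CO2E_PER_KWH = {
    "electricity": 0.9,   # assumed grid factor for the example
    "natural_gas": 0.2,   # assumed factor per kWh of gas consumed
}

def annual_emissions_kg(consumption_kwh: dict[str, float]) -> float:
    """Sum emissions across fuels: kWh consumed multiplied by a fuel-specific factor."""
    return sum(kwh * EMISSION_FACTORS_KG_CO2E_PER_KWH[fuel]
               for fuel, kwh in consumption_kwh.items())

# Example: a building using 1,000,000 kWh of electricity and 200,000 kWh of gas.
print(annual_emissions_kg({"electricity": 1_000_000, "natural_gas": 200_000}))
# -> 940000.0 kg CO2-e per year
```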

However, in some circumstances it may not be possible to translate consumption data into the actual
environmental impact – it may, for example, be difficult to translate energy consumption into
greenhouse gas emissions. Alternatively, the measured issue may contribute to a broad range of
environmental impacts, and have more meaning to the community at large as a consumption measure
– for example waste.

Furthermore, when implementing NABERS ratings, developers should consider the local context and
existing environmental reporting standards. Previously employed NABERS metrics may not have
relevance to a local market. For example, the local energy market may have decarbonised, so that a
primary energy metric better matches local environmental impacts than a greenhouse gas rating.

In any case, the chosen metric should, as far as possible, reflect the full environmental impact of the
rated issue in a manner that has relevance to the local market, while giving consideration to
consistency with other NABERS metrics. For NABERS Energy tools in particular, ratings should not
be based on delivered energy consumption. Delivered energy consumption does not adequately
reflect the overall environmental impact of different fuels, particularly network sourced electricity in
which the primary environmental impacts occur at remote generation sites.

NABERS Key Principles and Defining Features v.1.0 11


The final decision on measurement units for any NABERS metric should consider:

• local expectations and existing reporting standards on that environmental issue
• precedent in other NABERS tools
• data availability, and
• the impact of the chosen metric on comparability with other NABERS ratings.

PRINCIPLE 2: NABERS IS RELEVANT TO BUILDING OPERATIONS

The rating scheme was developed to allow like-for-like comparisons between buildings and make the
rating as meaningful as possible to the rated building. To achieve this goal, the rating scale would be
“individualized for the building being assessed” by making adjustments for “variations in climate,
occupancy and use, power intensity of equipment etc".10

Furthermore, the ratings themselves are designed to match responsibilities for energy management.
Multi-tenanted commercial buildings are relatively common in Australia, with a high degree of
consistency in the services provided by building landlords. Separate energy metering is typically
installed in Australian office buildings so that tenants pay their own energy costs directly to energy
retailers, rather than to landlords.

Reflecting this separation of landlord and tenant energy costs, ABGR was designed to separately
allow landlords to rate the efficiency of the “base building” services they provide, and tenants to rate
the efficiency of the energy they consumed in their “tenancy”. This important early design decision
ensured that each responsible party in a building could independently understand the impact of their
own activities. For single-tenanted buildings, or where a high degree of cooperation exists between
landlord and tenants, a “whole building” rating can be undertaken which combines landlord and tenant
energy costs.

FEATURE 2.1 NABERS IS A BENCHMARK OF COMPARABLE BUILDINGS

This feature must be present for NABERS ratings. A successful NABERS rating allows buildings that
consider themselves peers to benchmark themselves against each other. This facilitates competition
between like buildings, and promotes improved behaviour.

In Australia, the like-for-like comparison in NABERS has enabled knowledge transfer throughout
industry, as competitors learn from each other and adopt successful approaches. It has driven waves
of innovation as leaders strive to outperform each other. The overall impact has been significant. The
success of NABERS is measured not only in the number of buildings rated, but in the improvements
measured by those buildings over time. Perhaps the most startling result of NABERS has been the
dramatic shift in the common definition of a "good", energy-efficient building. The original
developers of NABERS considered 4 stars "not easily achievable" and 5 stars achievable only through
"exceptional design and operation"11. Until 2006 this was certainly the case, with only 5% to 15%
of rated buildings achieving 4 stars or higher. However, these ratings are now commonplace - 1422
buildings were rated using NABERS Energy in 2012/13, with a median result at 4 stars and more than
20% achieving 5 stars or higher.12

10 ibid, p8
11 Exergy 2002, The Technical Derivation of the Australian Building Greenhouse Rating Scheme, p4
12 OEH 2013, NABERS Annual Report 2012-13



Figure 4 Distribution of NABERS office ratings over time13

To ensure that ratings adequately compare buildings, an analysis of the best available data showing
the current range of market performance should underpin the rating scale and calculations. Ideally,
the rating will be based upon a detailed analysis of high quality data from a statistically significant
sample of the target market. This data may need to be gathered specifically for the establishment of
NABERS ratings, through partnerships with industry associations and/or utilities.

In some cases, the best available data may not be of sufficient quality to have confidence in the rating
scale or calculation metrics. If so, it may be appropriate to pilot the rating and use data from
initial ratings to support further refinement of the tool, or to reconsider whether the market conditions
are suitable for the implementation of a NABERS rating. For example, the NABERS Waste tool for
offices relies upon waste audits to generate data for the rating. The cost of these audits has been
subject to some criticism14, despite the developers of the tool seeing the audit process as an interim
measure until better quality data became available from waste management companies15. While the
cost relates to the poor measurement of waste collection, which is an issue for the market overall
rather than a specific NABERS problem, criticism such as this risks damaging the market for NABERS
altogether and may hinder rather than stimulate improved data collection standards by industry.

FEATURE 2.2 NABERS ADJUSTS FOR OPERATING CONDITIONS TO MEASURE REAL EFFICIENCY

This feature must be present for NABERS ratings. NABERS ratings must reflect the efficiency of the
building in operation, and not be dependent on factors outside the control of the building.

NABERS is only meaningful if those obtaining ratings feel that the ratings are a fair reflection of the
efficiency of their building, rather than a reflection merely of the services they are providing. For
example, two identical buildings side-by-side should be measured on the same scale. However, if one
building is larger than another, it will have a larger environmental footprint.

13 ClimateWorks July 2013, Tracking progress towards a low carbon economy: 4. Buildings, p27
14 see for example Jewell, September 2013, "PCA calls for NABERS overhaul. Again", The Fifth Estate
15 Department of Energy, Utilities and Sustainability 2005, pp.4-5



If the size difference is not taken into account, ratings will merely reflect this difference rather than
compare building efficiency. To allow for comparison, the size difference should be taken into account
in NABERS metrics.

Generally, NABERS ratings should seek to compare buildings based on the fundamental function they
are providing – for example, number of occupants housed, or size of occupied office space.
Allowances should be made for factors outside the control of the building that will lead to a higher
environmental impact – for example, the local climate, or extended hours of operation. These factors
should be accounted for using empirical evidence from operating buildings, and where possible should
use methods consistent with those in other NABERS ratings. If empirical evidence is not available, it
may be possible to use secondary evidence such as energy modelling to quantify the impact of the
factor being adjusted for.
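The toy example below illustrates the adjustment idea: measured consumption is compared against an
expected figure built from factors outside the building's control, so that buildings with different sizes,
hours or climates can still be compared on efficiency. The coefficients and the simple multiplicative
model are invented for this sketch and do not reflect the actual NABERS normalisation method.

```python
# A toy illustration of adjusting for operating conditions. All coefficients are
# invented for the example and are not NABERS coefficients.
def expected_kwh(net_lettable_area_m2: float, weekly_hours: float,
                 climate_factor: float) -> float:
    base_intensity = 100.0           # assumed kWh per m2 per year at 50 h/week
    hours_adjustment = weekly_hours / 50.0
    return net_lettable_area_m2 * base_intensity * hours_adjustment * climate_factor

def efficiency_index(measured_kwh: float, area: float, hours: float,
                     climate: float) -> float:
    """Below 1.0 means the building used less than expected for its conditions."""
    return measured_kwh / expected_kwh(area, hours, climate)

# Two 10,000 m2 buildings; the second runs longer hours in a hotter climate, so
# its higher consumption does not necessarily mean lower efficiency.
print(efficiency_index(900_000, 10_000, 50, 1.0))    # -> 0.9
print(efficiency_index(1_150_000, 10_000, 60, 1.1))  # -> ~0.87
```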

Extra services and facilities provided beyond the fundamental function of the building should generally
be considered environmental costs rather than genuine differences in operating conditions. No
allowance should be made in the rating metrics for these added value services – the associated extra
environmental impacts will reduce the rating for those buildings. This allows building owners and
occupants to make informed decisions on whether to continue to offer these added services in light of
the associated environmental impact.

For example, consider the buildings above – if the larger building is larger because it has a particularly
large lobby, rather than having a larger functional space – that is, it does not house any more
occupants or tenants – the size difference should not be allowed for as there is no difference in
fundamental function.

This feature has a number of practical benefits for management and use of the rating scheme.

• Each additional building feature taken into account in a NABERS rating adds to the rating cost.
Generally, developers should seek to define a rateable building type by the smallest number of
comparable features to minimise the rating cost and complexity, while allowing for meaningful
comparisons between buildings.
• A simpler baseline metric is more easily communicated and more easily understood by rating users.
As the basis for comparison between buildings becomes more complicated, the ability for users to
intuitively understand the rating diminishes, and comparisons become less meaningful.
• Across the market, the number of added-value options and extra services provided by different
buildings is likely to be very large, and not easily identified during the development of the rating. The
comparison should be made at the most fundamental level to avoid continual changes to the rating
scale as more of these features come to light and need to be taken into account.

FEATURE 2.3 NABERS RATINGS ARE ALIGNED WITH RESPONSIBILITIES WITHIN THE BUILDING

This feature is mandatory for NABERS ratings where a split in responsibilities for the management of
the environmental issue is clearly defined, the impacts of each party are significant, and these impacts
are separately measured.

One of the factors often quoted as important in the success of NABERS Energy in the commercial
office market is its unique ability to differentiate between landlord and tenant responsibilities. The
reason for this success is that a well-defined and consistent split in responsibilities between landlords
and tenants already exists across the Australian office market. Most office buildings incorporate a
standard set of services provided by landlords, and have discrete billing that identifies the energy
costs of providing these services. By separately comparing the efficiency of landlord-provided
services from tenancy operations, NABERS Energy matches the existing relationships in that market
and allows each party to separately understand their performance.



NABERS Energy also includes a whole-building metric which incorporates the energy consumption of
both parties, but this rating is not widely adopted outside owner-occupied buildings (such as
Government-owned offices).

The separation of landlord from tenant energy use has facilitated new interactions between these
parties, and allows both demonstration of best practice by each party and a means of holding each
other accountable. A major driver of the early success of NABERS was a requirement that
Government agencies only rent space in buildings that achieve a high NABERS base building rating.
As the Government is one of the largest tenants in the Australian office market this policy inspired
many large building owners to improve their base building ratings to achieve these standards.

However, this clear split in both responsibilities for service provision, and in the measured utility cost of
providing these services, may not be available for all building types or environmental indices. For
example, the NABERS Water tool for offices does not distinguish base building from tenancy
consumption. This is due to both a lack of sub-metering, and a meaningful reason to distinguish the
responsible parties. As the vast majority of water consumption in Australian offices is generally due to
landlord services (common area facilities and air conditioning), landlords have assumed responsibility
for the entire building water consumption and a split in ratings would not be meaningful.

Even where the split is clearly defined, it may not be well measured. Prior to the establishment of
NABERS, in some Australian states it was not common practice to accurately measure tenant energy
use through utility grade sub-metering systems, making separate base building and tenancy ratings
impossible. As NABERS became embedded in market systems in other jurisdictions, and building
owners and tenants recognised the value of obtaining ratings that reflect their responsibilities, building
owners invested in accurate submetering where necessary to allow appropriate measurement. Along
with facilitating NABERS ratings, this gave owners and tenants the ability to measure and understand
building performance, and provided the information necessary to support investments in energy
savings actions.

Furthermore, the credibility and relevance of these split ratings depends upon a consistent split in
responsibilities within buildings across the whole market, to ensure buildings are compared on a like-
for-like basis. Establishing multiple ratings within a building means that small changes to the
responsibilities of the parties in the building can impact significantly on NABERS ratings. This split
must be monitored, and the rules for NABERS ratings maintained to ensure all ratings are measured
on the same basis.

During the early rollout of NABERS, some participants sought ways to manipulate their ratings by
making tenants responsible for energy consumption that the market normally considered a landlord
cost. For example, the primary air conditioning in a building is generally a landlord cost. Seeking to
improve ratings, a new building owner did not install primary air conditioning in tenant spaces, and
suggested that, since tenants would then install their own air conditioning, any air conditioning energy
use was a tenant responsibility. Allowing this split would mean that NABERS ratings for this building
were not comparable to the rest of the market. Administrators must be alert to this potential situation
for all ratings. NABERS assessors and auditors must be well-trained to apply the rules appropriately,
and to flag when the rules need clarification.

In new markets where a split between landlords and tenants is not clearly distinguished, developers of
a NABERS tool should give careful consideration to market conditions and the costs and benefits prior
to imposing a split.

The costs of imposing a split where none currently exists are significant. It will require additional
direct rating costs such as sub-metering, and additional indirect costs such as training for the market
and assessors to ensure consistency in rating scope. In this case, NABERS will need to establish a
common understanding of the assignment of responsibility for each service present in the building.



The benefits of imposing a split in the ratings may outweigh these costs if a significant component of the
measured environmental impact is outside the control of the party currently taking responsibility by
paying associated bills. For example, the energy ratings of multi-tenanted commercial buildings
should generally seek to separately account for landlord and tenant consumption, as both will
contribute significantly to the overall energy consumption of the building. Without separate
accounting, it is likely that neither party will seek to improve their efficiency, as the benefits of taking
action will be obscured by the activities of the other party.

FEATURE 2.4 NABERS MEASURES BUILDING RATHER THAN DISTRICT EFFICIENCY

This feature must be present in NABERS ratings, unless the scope of the rating is clearly defined as
a multi-building or district-type metric.

NABERS ratings are most effective when they align closely with the party most responsible for the
measured environmental impact, and the entity they are measuring is clearly established. For all
NABERS ratings to date, this entity is an individual building. Unless a rating is explicitly established to
measure the efficiency of groups of buildings, or local regions etc, the scope of all NABERS ratings
should be clearly limited to the boundaries of the building itself.

This feature has implications during the delivery of any NABERS ratings, as some buildings in any
market will be linked to those around them, either by shared services such as district cooling or
heating, common facilities such as shared lobbies or car parks, or common management or ownership
structures for multiple sites. NABERS should seek, where possible, to separate such buildings based
on identifiable boundaries, and allocate shared spaces or services in a manner that reflects the
relative “ownership” of that building on the shared space or service. This may be based on
proportional size of linked buildings, or measured use of a particular service, or some other readily
identifiable split that adequately describes “ownership” of the environmental impact for each linked
building.
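A simple sketch of such an allocation is shown below, apportioning a shared service to the linked
buildings in proportion to floor area. The allocation basis is an assumption for illustration; metered
use or another defensible measure of "ownership" may be preferable where available.

```python
# A toy allocation of a shared service (e.g. a shared chiller or car park) to the
# linked buildings it serves, in proportion to floor area. Any other defensible
# "ownership" basis (metered use, occupancy) could be substituted.
def allocate_shared_kwh(shared_kwh: float, areas_m2: dict[str, float]) -> dict[str, float]:
    total_area = sum(areas_m2.values())
    return {name: shared_kwh * area / total_area for name, area in areas_m2.items()}

print(allocate_shared_kwh(300_000, {"Tower A": 20_000, "Tower B": 10_000}))
# -> {'Tower A': 200000.0, 'Tower B': 100000.0}
```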

Ratings may be applied to groups of buildings as an exception to this rule only where such a split
would be prohibitively difficult, or would make the rating more confusing to users. Buildings rated
together will be less comparable to others rated under the scheme, significantly reducing the impact of
the rating itself.

However, future NABERS developers may wish to apply the NABERS methodology to new
regional-level ratings. This feature would clearly not apply in that case, as the rating would be
associated with the rated region rather than with individual buildings.

PRINCIPLE 3: NABERS RATINGS ARE MEANINGFUL

Ratings are intended as a communication device that translates technical data about the performance
of a building into a more accessible metric. A core principle of a successful rating program is that the
communication itself should be immediately intuitive and should not need translation for non-experts.
The ABGR system was established as a five star rating scale to be “consistent with widely-accepted
programs with which many Australians are familiar"16.

16 Pears, A. 1998, p7



“benchmarks for good performance that appear totally out of the reach of real buildings”, and aimed to
provide “useful information for all buildings rather than just a target for best practice”17.

The adjustments and calculations incorporated into the final ABGR scale were designed to make the
scale “relevant and meaningful” by ensuring that:

 A small number of poorly performing buildings should lie outside the scheme, but the vast majority
should rate within the scales provided.
 An "average" building should perform at two stars.
 Excellent buildings should be able to attain four stars, but this should not be easily achievable.
 Five stars should be attainable through exceptionally good design and operation involving market-
leading levels of innovation.
 The three rating types should be self-consistent, so that a three star tenancy plus a three star base
building equals a three star whole building.18
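One way to read the last point, assuming the base building and tenancy benchmarks are expressed in
the same units and the whole building benchmark is simply their sum (the figures below are invented
for illustration and are not the actual ABGR or NABERS benchmark values):

```python
# Invented 3-star benchmark intensities (kg CO2-e per m2 per year), purely to
# illustrate the additivity that makes the three rating types self-consistent.
BASE_BUILDING_3_STAR = 100.0
TENANCY_3_STAR = 80.0
WHOLE_BUILDING_3_STAR = BASE_BUILDING_3_STAR + TENANCY_3_STAR  # 180.0

# A building whose base building and tenancy each sit exactly at their 3-star
# benchmarks therefore sits exactly at the whole building 3-star benchmark.
building_base = 100.0
building_tenancy = 80.0
assert building_base + building_tenancy == WHOLE_BUILDING_3_STAR
```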

While these specific details may not be relevant for other building types or rating metrics, the
principles behind them are consistent across the NABERS portfolio. The current NABERS strategic
plan explains this principle as “a realistic rating scale that recognises and rewards current
performance levels, and encourages and promotes best practice.”19

FEATURE 3.1 NABERS RATINGS ARE MEASURED IN A FAMILIAR DEVICE

This feature is desirable for NABERS ratings. In Australia, NABERS ratings are communicated on a
star rating scale that has been widely adopted for other energy efficiency measures such as appliance
ratings. However, NABERS may have been equally effective had an alternative device been chosen
at the outset. Many benchmarking schemes have developed simple devices to communicate the
performance of a product, such as ticks of approval, award levels such as gold/silver/bronze,
percentile scores, thumbs up/down, etc. As a general rule, each of these tools is already familiar or
can be easily interpreted, and users can see at a glance where the rated product sits in comparison to
others.

International developers of NABERS ratings should take the local context into account before
determining an appropriate scale, and particularly the metrics used for other environmental
benchmarks. Consistent communication devices between different schemes should reinforce market
familiarity and ensure that the ratings for each scheme convey greater meaning to their audience.

Features that support the success of the NABERS star rating scale include:

• An obvious difference between ratings – no high-level numeracy skills are required to interpret the
rating results. This impact would be reduced if there were too many rating levels.
• Clearly showing the rank of the rated building, as the viewer can see at a glance how far the rating is
from the top and bottom of the scale.
• Allowing gaps between rating levels that reflect the accuracy of the rating, so building ratings do not
generally change due to temporal fluctuations in conditions but reflect step changes in efficiency.

17 Exergy 2002, The Technical Derivation of the Australian Building Greenhouse Rating Scheme, p5
18 Exergy 2002, The Technical Derivation of the Australian Building Greenhouse Rating Scheme, p4
19 NABERS 2013, NABERS Strategic Plan 2013-2018, p3



The last point is important in the ongoing communication of NABERS ratings. There is pressure even
in the mature Australian market to communicate NABERS ratings to a higher degree of accuracy (eg
decimalised ratings). Such a refined rating scale would give a misleading impression of the precision
of NABERS ratings. The designers of NABERS benchmarks should seek to make them as accurate
as possible, but the final rating depends on metering standards, data availability and the ability to
account for matters outside the control of the building such as climate. The rating scale should be set
at a point that reflects the rating precision – that is, the allowable variation in a rating due to these
factors should be less than the smallest rating difference (in Australia, smaller than half a star).

FEATURE 3.2 NABERS RATING SCALES REFLECT THE RANGE OF BUILDING PERFORMANCE

This feature must be present for NABERS ratings. The distribution of the market on the rating scale
is vital to the successful delivery of NABERS. If properties are not able to identify their position on the
NABERS scale, and the position of their peers and competitors, the rating will not be meaningful and
will not deliver improved environmental outcomes.

To achieve this distribution, the majority of any targeted market should be placed somewhere on the
rating scale. Only a small number of buildings should rate worse than the lowest rating, or better than
the highest.

Designers of NABERS rating scales should also consider the following factors:

High ratings drive innovation. The highest ratings should be “stretch targets” that are achievable
only through innovative approaches and a concerted effort to lift design and operational standards. As
a rule of thumb no more than 5% of the market should achieve the highest rating. If a large proportion
of the market is able to achieve the highest ratings, the rating will become a barrier to further
innovation as there will be little incentive to improve beyond the rating scale. In this case, a new
stretch target should be implemented for high ratings either by extending or adjusting the rating scale.

Buildings should be able to progress along the scale. Designers should ensure that most of the
market achieves a rating, and there are not a large number of buildings performing worse than the
lowest rating. The worst rating should not be set at a high level as this will discourage very poor
buildings from making any improvement – noting that buildings with the lowest ratings can deliver the
greatest environmental improvements. Inefficient buildings should clearly see their progress along the
rating scale to the higher ratings.

The initial market average should be between 2 and 3 stars. On the establishment of the scheme,
the current average performance should lie between 2 and 3 stars. As the market improves the
average rating will also slowly improve. To date NABERS has extended rating scales rather than
adjusting them to suit the higher performing market to retain comparability between ratings over time.

Rating divisions larger than 5% are desirable. This allows for a balance between precision in the
rating scale and the accuracy of the rating. The rating divisions should also not be too large as this
makes progress more difficult, discouraging users from improving their performance.

Rating scales should be reviewed regularly. A regular review ensures that the scale is still
meaningful to the market. This may include annual reporting on the distribution of rated buildings
along the scale, and analysis of the proportion of high / average / low ratings to ensure the
considerations above still hold for that market. An extension or adjustment of the scales may be
warranted when any of these considerations are no longer met.
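As an illustration of how these considerations might be checked against market data, the sketch below
places a sample of building intensities on an invented star scale and reports the share of the market in
each band. The thresholds and sample data are hypothetical and are not NABERS benchmarks or
distributions.

```python
# A toy check of a proposed scale: place sample building intensities (lower is
# better, e.g. kg CO2-e/m2/year) on the scale and report the share in each band.
from collections import Counter

STAR_THRESHOLDS = [          # (stars awarded, maximum intensity for that band)
    (5, 60.0), (4, 90.0), (3, 120.0), (2, 150.0), (1, 180.0),
]

def stars(intensity: float) -> int:
    for star, max_intensity in STAR_THRESHOLDS:
        if intensity <= max_intensity:
            return star
    return 0  # performs worse than the lowest rating

def scale_distribution(intensities: list[float]) -> dict[int, float]:
    counts = Counter(stars(x) for x in intensities)
    return {star: counts[star] / len(intensities) for star in range(6)}

# Review check: the top band should be a stretch target (roughly 5% or less of
# the market) and only a small tail should fall below the lowest rating.
sample = [55, 85, 95, 100, 105, 110, 115, 118, 122, 125,
          128, 130, 135, 140, 142, 145, 155, 160, 175, 190]
print(scale_distribution(sample))
```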



FEATURE 3.3 NABERS MEASURES SUBSTANTIVE ENVIRONMENTAL
IMPACTS FROM THE BUILDING

This feature must be present in NABERS ratings. NABERS ratings will only be meaningful to
building owners, occupants and the general public if they are related to substantive issues for both the
building and the community. During the development and commercialisation of NABERS, many
different environmental indicators were considered. Issues such as landscape diversity and
stormwater runoff were not considered relevant or meaningful for inclusion in the final rating, either
because there was little an operating building could do to influence or manage the issue, because they
were not considered relevant for commercial building operations, or because the potential benefits of
improved ratings were not considered substantive enough to warrant the development of a rating system.

In order to be successful, NABERS ratings should only be implemented if the building owner or
occupant can manage the measured environmental impact and take steps to improve – otherwise the
ratings will be static and deliver no environmental benefit.

FEATURE 3.4 NABERS RATINGS ARE IMPLEMENTED IN SUITABLE MARKETS

This feature must be present in NABERS ratings. The success of NABERS ratings depends on the
suitability of the target market for a benchmark approach. NABERS ratings are a means to an end:
improved environmental outcomes. If the rating is not likely to drive improved performance, it is not
worth implementing.

When developing a new rating, at least the following factors should be tested to assess the suitability
of the market for a NABERS rating:

Existing level of maturity: If the market is not currently measuring or taking responsibility for an
environmental impact, a voluntary rating system is unlikely to be effective in driving improved
performance. A basic level of awareness of the issue and of the importance of the building’s role in
the issue is required before a simple metric such as a rating will make any difference. Prior to rolling
out a rating in such a market, developers should consider general awareness-raising and alternative
approaches to improvement (for example, if the issue is a significant concern, minimum standards
may be a better alternative).

Building performance is relevant to third parties: A key strength of NABERS is its ability to
translate technical performance data into easily communicated and intuitive information. This will
have the greatest impact if building performance in the target market is of interest to parties other than
the building owner – for example, to building tenants, corporate investors, or the general public for high-profile buildings. In this case, the rating becomes a powerful communication device that can add value to the property by informing these third parties of improved performance and the consequent reduced costs and lower regulatory or environmental risk. In less exposed markets, a simpler alternative may be in-house performance benchmarks for buildings to target, rather than a full third-party assured NABERS rating with its associated cost.

Market competition: Competition between building owners and/or occupants drives rating uptake and innovation, as competitors strive to outperform each other. Diversity in ownership between buildings also ensures that industry benchmarks have a place in setting market-wide standards. In markets with limited competition, ratings may have less impact.

A sufficient pool of comparable buildings: NABERS can only be meaningful and relevant if there
is a basis for comparison between buildings. This may not exist in the target market – for example,
large industrial sites have quite specific environmental impacts that may not be comparable between
industries or between sites. The market also needs to be of a sufficient size to make comparison
between buildings meaningful – there is little point in creating a NABERS rating system for a market of
two comparable buildings.

PRINCIPLE 4: NABERS RATINGS ARE SIMPLE AND EASY TO PERFORM

The cost of obtaining a rating, and the process involved, are also key components of NABERS. NABERS is a
measure of current performance, and is only the first step to reducing environmental impacts. On
obtaining a NABERS rating, building owners or occupants will need to invest time and resources into
understanding how to improve their building and in undertaking these improvements. As such, ratings
should be obtainable at a low cost to avoid encroaching on the resources available for making
improvements to the building.

The indirect costs of ratings should also be minimized. Ratings should be able to be conducted in a
reasonable time frame with minimal resources. This is achieved by using existing data and current
industry standards where possible to avoid the need to generate specific data for NABERS purposes.

NABERS rating calculations should also be freely available for self-assessments. Building owners
and occupants are able to use the NABERS rating calculator to estimate their performance, and see
for themselves how the rating is calculated. A public calculator also allows them to run scenarios for
their building to see how much effort is involved to reach higher rating levels, or the impacts of
different operating arrangements on their rating. This level of free access to the rating system
improves broader industry and public awareness and understanding of NABERS, leading to greater official adoption of the rating and more opportunities for improved environmental performance.
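
The sketch below illustrates the kind of scenario analysis a public calculator makes possible. The star thresholds are invented for the example and are not the published NABERS benchmarks, which vary by tool, location and operating conditions.

# Hypothetical star bands: (stars, maximum intensity in kg CO2-e per m2 per year).
THRESHOLDS = [
    (6.0, 40), (5.5, 55), (5.0, 70), (4.5, 85),
    (4.0, 100), (3.5, 115), (3.0, 130),
]

def star_rating(intensity):
    """Return the highest star band whose (invented) threshold is met."""
    for stars, limit in THRESHOLDS:
        if intensity <= limit:
            return stars
    return 0.0  # below the lowest band shown here

def reduction_to_next_band(intensity):
    """How far the intensity must fall to reach the next half star."""
    current = star_rating(intensity)
    tighter_limits = [limit for stars, limit in THRESHOLDS if stars > current]
    if not tighter_limits:
        return 0.0  # already at the top of this illustrative scale
    return intensity - max(tighter_limits)

measured = 95.0  # e.g. a building's measured emissions intensity
print(star_rating(measured), reduction_to_next_band(measured))

Running the same calculation with different operating assumptions shows the effect of each scenario on the rating, which is exactly the kind of exploration free access to the calculator supports.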

Maintaining a low cost for NABERS ratings may mean making trade-offs between data availability and rating accuracy during the design and development of the scheme. The overall accuracy
of the rating should be maintained within target limits. In particular, the accuracy of the rating must
always be better than the smallest difference between rating levels, so that any variations in ratings
due to data or calculation accuracy do not change rating results.

FEATURE 4.1 NABERS RATINGS USE EXISTING DATA WHERE POSSIBLE

This feature is desirable for any NABERS rating. As far as possible, NABERS ratings should use
data that is already gathered by building owners or tenants, and avoid the need to generate new data.
This simplifies the rating process, reduces rating costs, and ensures that the rating reflects existing
market practices for understanding the issue being measured.

However, existing data should only be used if it is of sufficient quality. For example, utility grade
energy and water consumption data is generally reliable and accurately reflects the actual energy and
water use in the building over the billed period. However, data on other issues may be unavailable, or
if available, may be of a poor quality. For example, data provided on waste generation by Australian
office buildings is generally not attributable to individual buildings, or is based upon capacity metrics
such as the number of bins lifted or size of waste bins rather than the actual waste collected from a
building. In such cases an interim measurement standard may need to be introduced to encourage
better quality data collection. If it is not practical for the rating developer to develop or implement such
a standard, the development of that particular rating should be deferred until reliable data is available, with clear communication to the market that an improved measurement standard is a necessary
prerequisite for the rating system. Adoption of improved metrics will have benefits beyond the rating,
as accurate measurement will allow for improved accountability and management of the issue by the
responsible parties.

FEATURE 4.2 NABERS RATING COVERAGE MATCHES MARKET STANDARDS

This feature is desirable for NABERS ratings. All NABERS ratings require a basis for comparison
between buildings, which means including or excluding particular parts of each building to make them
comparable. The coverage of NABERS ratings should reflect the common practice for buildings of the
rated type to minimize exclusions and associated rating costs.

For example, a mixed office/retail building will not be directly comparable to a standalone office
building without some adjustment either to the scope of the rating as it applies to the building, or to the
rating scale so that the differences between the buildings are taken into account.

To date, NABERS ratings have managed these differences by clearly defining the scope of the rated
building type, matching the common industry practice for that building, and excluding any parts of
rated buildings that do not fit this scope. For example, the NABERS Energy for Offices rules include
both office spaces and “office support facilities”, which occupy space that could be used as an office and are for the exclusive use of tenants, and exclude other spaces such as public retail.
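
A minimal sketch of this scoping step is shown below, assuming a simplified space classification; the published NABERS rules define the actual inclusions and exclusions in far more detail.

# Illustrative scope only: the real rules define precisely what counts as
# "office" or "office support" space for an office rating.
INCLUDED_USES = {"office", "office support"}

def rated_area(spaces):
    """Sum the floor area of spaces that fall within the rating scope.

    `spaces` is a list of (use, area_m2) tuples. Anything not expressly
    included is left out of the rated area in this simplified sketch.
    """
    return sum(area for use, area in spaces if use in INCLUDED_USES)

building = [
    ("office", 12_000),
    ("office support", 800),   # tenant-exclusive space, e.g. storage
    ("public retail", 1_500),  # excluded from an office rating
]
print(rated_area(building))    # 12800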

This requires a regularly maintained rule set so that the scope of the rating is clear in any situation. To
manage this process, and to ensure that participants actively seek appropriate rulings, assessors must take the more conservative decision wherever a building feature is not expressly covered by the NABERS rules. That is, in cases of doubt assessors will use the information that reduces the NABERS
rating. A simple “ruling” process has been established so that the NABERS administrators can easily
amend the rules. In this way, participants can be confident that ratings are obtained on a consistent
and fair basis.

The application of the NABERS rules should be regularly reviewed to ensure they reflect common
practice. For example, if most buildings of a certain type are making a particular exclusion, this
suggests that the excluded feature is actually a normal part of that sort of building. In this case, the
rule should be reconsidered as it is adding unnecessary cost and complexity to the rating.

Future developers may choose to take an alternative approach of adjusting rating scales to account
for mixed-use buildings, so that a mixed office/retail building is compared to other like buildings based
on both the office and retail scales. Under this approach ratings would be easier to perform, but the
development of appropriate metrics would be more difficult. This approach would be consistent with
the NABERS principles.

FEATURE 4.3 NABERS RATINGS ARE LOW COST

This feature must be present for NABERS ratings. Additional rating costs generally reduce the funding available for buildings to lower their environmental impact. The development of NABERS
metrics and rules for acceptable data involves weighing up the costs and benefits of quantifying a
particular operating condition or seeking a higher degree of accuracy in the methodology.

To date, the NABERS metrics and data rules have been developed so that:

• Ratings are accurate to within 5% of the measured environmental impact (which equates to less than half a star)

• Issues outside the control of the building owner are only taken into account in ratings if they are likely to impact ratings by more than 5%.

• Estimates are acceptable for minor issues when the cumulative impact of these estimates is shown to be less than 5%, as illustrated in the sketch below.[20]
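
A minimal sketch of this cumulative-estimate check follows; the estimate names and impact figures are hypothetical, not drawn from the NABERS data rules.

# Each value is the estimated share of the measured impact covered by an
# estimate rather than measured data (hypothetical figures).
ESTIMATE_IMPACTS = {
    "after-hours HVAC apportionment": 0.02,
    "unmetered car park lighting": 0.015,
    "shared plant allocation": 0.01,
}

def estimates_acceptable(impacts, threshold=0.05):
    """Return whether the cumulative impact of all estimates is below the threshold."""
    cumulative = sum(impacts.values())
    return cumulative < threshold, cumulative

ok, cumulative = estimates_acceptable(ESTIMATE_IMPACTS)
print(f"cumulative estimate impact: {cumulative:.1%} -> acceptable: {ok}")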

In offices, this means factors such as building size, climate, hours of occupancy, and equipment
density (for tenancy ratings only) are taken into account, but other information such as occupant
density is not, as the cost and complexity of obtaining the data and quantifying the impact on the rating outweigh any benefit. The list of factors varies between building types, depending on the likely influence on rating results and the cost of gathering the data needed to measure this impact.
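
Purely to illustrate the shape of such a calculation, the sketch below adjusts an expected-energy benchmark for the factors listed above using invented coefficients; the published NABERS benchmark equations for each tool and climate zone differ and are considerably more detailed.

def expected_energy_kwh(floor_area_m2, weekly_hours, climate_factor,
                        equipment_w_per_m2=None):
    """Hypothetical benchmark energy for an office, adjusted for the factors
    the text lists: size, hours of occupancy, climate and (for tenancy
    ratings) equipment density. All coefficients are invented."""
    base = 90.0 * floor_area_m2              # 90 kWh/m2/year at reference conditions
    hours_adjustment = weekly_hours / 50.0   # scale against a 50-hour reference week
    expected = base * hours_adjustment * climate_factor
    if equipment_w_per_m2 is not None:       # tenancy ratings only
        expected *= 1.0 + equipment_w_per_m2 / 100.0
    return expected

# Measured consumption is then compared against this expectation; using
# significantly less than the benchmark corresponds to a higher rating.
print(expected_energy_kwh(10_000, weekly_hours=55, climate_factor=1.05))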

PRINCIPLE 5: NABERS RATINGS ARE RELIABLE

The reliability of NABERS ratings is fundamental to the reputation and credibility of the scheme.
NABERS ratings will only lead to improved environmental outcomes if the owners and occupants of rated buildings are confident that their rating properly describes their level of efficiency. The rating will not succeed if it is unreliable, or if individual building ratings change over time without any change in the underlying building efficiency.

FEATURE 5.1 NABERS RATINGS ARE ASSESSED USING PUBLIC, PLAIN ENGLISH RULES

This feature must be present for NABERS ratings. Consistent ratings depend on easily understood
rules. Rules should be publicly available and written in plain English with as little jargon as possible,
so that the owners and occupants of rated premises have access and can better understand the basis
of their ratings.

Widespread awareness of rating rules reduces rating costs, as building owners and occupants know
what data is required for ratings and are better prepared, taking steps to gather the data before official
ratings and ensuring it is in the correct formats. It also strengthens the meaning of ratings, as the basis of comparison between buildings is well known and understood. This increases the likelihood that
NABERS ratings become a common language for users across the market to understand and
communicate building efficiency.

Furthermore, well written rules are more easily and consistently applied during official ratings.
NABERS aims to reduce any differences in interpretation as much as possible, so that the rating result
depends only on the NABERS rules, not the person applying the rules.

FEATURE 5.2 NABERS RATINGS ARE DELIVERED BY COMPETENT INDIVIDUALS

This feature is mandatory for any NABERS system. Certified ratings should be assessed by people
that have proven their competence in applying the NABERS rules.

In Australia, this proof of competence involves completing a detailed training course, passing a rigorous exam on the theoretical application of the rules, and passing a practical assessment through live ratings under the supervision of an experienced Assessor. Further “on-the-job” support is provided by
the NABERS administrators should any Assessor be unsure of the application of the rules in a
particular situation. As a result of this comprehensive training and support package, Australian
NABERS rating results are highly consistent and do not vary between different Accredited Assessors.

[20] See Office of Environment and Heritage 2013b, NABERS Energy and Water for offices: Rules for collecting and using data, OEH 2013/004, State of NSW, February 2013.

FEATURE 5.3 NABERS RATINGS ARE SUBJECT TO RANDOM AND
TARGETED AUDITS

This feature must be present for NABERS ratings. Random audits maintain the quality of ratings
across the board by checking that individual assessors are applying the rules appropriately. The audit
regime should involve repercussions for failed audits, up to and including dismissal from the scheme
where audits show that an assessor is not able to maintain the required competence.

While most audits should be randomly assigned to ensure the overall quality of the rating scheme,
targeted audits also form an important part of the NABERS quality regime, checking that assessors who have a particularly strong potential conflict of interest, or who have failed previous audits, are applying the rules appropriately.
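
A minimal sketch of how such an audit sample might be drawn is shown below; the 10% random share and the record fields are illustrative assumptions rather than NABERS policy.

import random

def select_audits(ratings, random_share=0.10, seed=None):
    """Pick ratings for audit: all targeted cases plus a random sample.

    `ratings` is a list of dicts describing certified ratings. The targeting
    criteria follow the text (a declared conflict of interest, a previously
    failed audit); the random share is an illustrative figure.
    """
    rng = random.Random(seed)
    targeted = [r for r in ratings
                if r.get("conflict_declared") or r.get("failed_previous_audit")]
    remaining = [r for r in ratings if r not in targeted]
    sample_size = min(len(remaining), max(1, round(random_share * len(remaining))))
    return targeted + rng.sample(remaining, sample_size)

ratings = [
    {"id": 1, "conflict_declared": True},
    {"id": 2, "failed_previous_audit": True},
    {"id": 3}, {"id": 4}, {"id": 5},
]
print([r["id"] for r in select_audits(ratings, seed=1)])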

FEATURE 5.4 NABERS ASSESSORS SHOULD BE INDEPENDENT OF THE RATED BUILDING

This feature is desirable for any NABERS system. Independent assessors are less likely to be
affected by conscious or unconscious biases that may affect rating results. However, the practical
implementation of this requirement may be difficult, as it is likely that assessors, who are generally
professionals from the property or environmental services sector, have some relationship with most buildings, whether as an employee, service provider or competitor. While full independence is desirable,
in practice most NABERS ratings to date have instead required disclosure of any potential conflict of
interest by an assessor. This disclosure informs both the rating customer and NABERS administrators
of potential biases in the rating, and may be used for targeted audit purposes.

PRINCIPLE 6: NABERS MANAGEMENT IS TRUSTWORTHY

The management of NABERS is independent of the property industry, which avoids the possibility of
vested interests introducing biases to the rating metrics. This approach ensures that ratings are
trustworthy to all stakeholders, both the direct users of the rating system and those such as investors
and tenants who use ratings to externally assess building performance and inform investment
decisions.

In Australia this independence is guaranteed by Government management of the rating system.


Government involvement elevates the importance of the rating system and adds legitimacy. It builds
on a core role of government to provide standards for measurement. Government backing also
ensures the rating can be trusted by those within and outside the industry, making it more likely that
the rating becomes a common and definitional language for environmental efficiency. Furthermore,
direct Government management ensures that complementary programs can partner with NABERS to
deliver shared goals, rather than compete for market share. In Australia this approach has seen the
growth of industry-backed design tools referencing NABERS, government and industry funding to
support building efficiency upgrades (measured by improvements in NABERS ratings), and a thriving
environmental services industry using NABERS to ground their advice and as a simple measure to
communicate technical efficiency improvements.

FEATURE 6.1 NABERS IS ADMINISTERED INDEPENDENTLY OF INDUSTRY

This feature must be present for NABERS ratings. NABERS should be independently governed.
While NABERS rules must be informed by practitioners in order to be practical and relevant,
independent establishment and maintenance of the measurements and rating scale ensures that they
are free from vested interests, and are meaningful to both direct users and the broader community that
relies on the ratings as a measure of building efficiency.

Government administration is preferable, but other administrators may be appointed in different
jurisdictions based on local government structures, available resources and policy context.

If the administrator of a NABERS scheme is not a government entity, it should be established independently of the property industry users of the scheme and be self-funding where possible. Fully
transparent decision-making is essential for any administrator that may be subject to vested interests
to ensure impartiality of the rating scheme and remove any perception of bias. A trusted scheme
administration is vital for a trustworthy and meaningful rating.

PRINCIPLE 7: NABERS DEVELOPMENT IS COLLABORATIVE

Industry involvement is instrumental to the success of NABERS. Industry expertise is necessary to inform the application of the NABERS key principles to a particular building type and environmental issue, and inside knowledge is critical if NABERS ratings are to be genuinely relevant, reliable and practical – only industry practitioners know what information is available and which measurement standards are currently in place, and this knowledge is vital when building a new, simple and credible rating.

Furthermore, when customers of the future NABERS tool are involved in its development they have a
greater understanding of the basis of the tool. They have a stake in the decisions made during
development, and a sense of ownership over the final product. This means they are more likely to
make use of the tool in their operations, leading to greater uptake of the rating system and deeper
savings from an informed and engaged customer base.

FEATURE 7.1 NABERS DEVELOPMENT AND DELIVERY IS INFORMED BY INDUSTRY EXPERTS

This feature must be present for NABERS ratings. Industry expertise is needed to effectively
implement most of the defining features of any NABERS tool. Developing a NABERS rating requires
a detailed understanding of existing data and measurement standards, management practices and
alignment of responsibilities applying to the rated building and issue.

Industry expertise may be included through permanent or temporary advisory committees, consultations and other forms of broad engagement, through contracted research projects, or by many
other means. The development of any NABERS rating must involve a partnership between the rating
developers and targeted industry.

FEATURE 7.2 NABERS WORKS CLOSELY WITH INDUSTRY PARTNERS TO COMPLEMENT OTHER APPROACHES

This is a desirable feature of any NABERS system. As NABERS is an outcomes-based rating, it is readily incorporated into other rating systems as a measure of the environmental impact. This
complementary approach strengthens both schemes. The complementary scheme benefits by using
NABERS as an independent verification of the actual impact of the building. NABERS benefits from
wider adoption of the rating, and broader use as the “common language” to communicate the
measured environmental impact.

However, when embedding NABERS in other approaches, the NABERS developers should take care
to ensure that the defining features of the NABERS rating are not compromised, and the rating is not
diminished.

CONCLUSION

NABERS is underpinned by a set of key principles and defining features that, combined, have
successfully transformed the environmental performance of the Australian property market. This
paper documents these principles to assist future rating tool developers as they expand the NABERS
methodology to other sectors and international markets.
