FLOOD ESTIMATION
Australian Rainfall and Runoff
A Guide To Flood Estimation
The Australian Rainfall and Runoff: A guide to flood estimation (ARR) is licensed under the
Creative Commons Attribution 4.0 International Licence, unless otherwise indicated or
marked.
Third-Party Material
The Commonwealth of Australia and the ARR’s contributing authors (through Engineers
Australia) have taken steps to both identify third-party material and secure permission for its
reproduction and reuse. However, please note that where these materials are not licensed
under a Creative Commons licence or similar terms of use, you should obtain permission
from the relevant third-party to reuse their material beyond the ways you are legally
permitted to use them under the fair dealing provisions of the Copyright Act 1968.
If you have any questions about the copyright of the ARR, please contact:
arr_admin@arr.org.au
c/o 11 National Circuit,
Barton, ACT
ISBN 978-1-925848-36-6
With nationwide applicability, balancing the varied climates of Australia, the information and
the approaches presented in Australian Rainfall and Runoff are essential for policy decisions
and projects involving:
• infrastructure such as roads, rail, airports, bridges, dams, stormwater and sewer
systems;
• town planning;
• mining;
• developing flood management plans for urban and rural communities;
• flood warnings and flood emergency management;
• operation of regulated river systems; and
• prediction of extreme flood levels.
However, many of the practices recommended in the 1987 edition of ARR have become
outdated, and no longer represent industry best practice. This fact, coupled with the greater
understanding of climate and flood hydrology derived from the larger data sets now available
to us, has provided the primary impetus for revising these guidelines. It is hoped that this
revision will lead to improved design practice, which will allow better management, policy
and planning decisions to be made.
In addition to the update, 21 projects were identified with the aim of filling knowledge gaps.
Funding for Stages 1 and 2 of the ARR revision projects was provided by the now
Department of the Environment. Stage 3 was funded by Geoscience Australia. Funding for
Stages 2 and 3 of Project 1 (Development of Intensity-Frequency-Duration information
across Australia) has been provided by the Bureau of Meteorology. The outcomes of the
projects assisted the ARR Editorial Team with the compiling and writing of chapters in the
revised ARR. Steering and Technical Committees were established to assist the ARR
Editorial Team in guiding the projects to achieve desired outcomes.
Assoc Prof James Ball, ARR Editor
Mark Babister, Chair, Technical Committee for ARR Revision Projects
Related Appointments:
ARR Project Engineer: Monique Retallick
ARR Admin Support: Isabelle Testoni
Assisting TC on Technical Matters: Erwin Weinmann, Dr Michael Leonard
Editors: James Ball
Mark Babister
Rory Nathan
Bill Weeks
Erwin Weinmann
Monique Retallick
Isabelle Testoni
Peter Coombes
Steve Roso
This document is a living document and will be regularly updated in the future.
In the development of this guidance, as discussed in Book 1 of ARR 1987, it was recognised
that knowledge and information availability are not fixed and that future research and
applications will develop new techniques and information. This is particularly relevant in
applications where techniques have been extrapolated from the region of their development
to other regions and where efforts should be made to reduce large uncertainties in current
estimates of design flood characteristics.
Therefore, where circumstances warrant, designers have a duty to use other procedures and
design information more appropriate for their design flood problem. The Editorial team of
this edition of Australian Rainfall and Runoff believe that the use of new or improved
procedures should be encouraged, especially where these are more appropriate than the
methods described in this publication.
Care should be taken when combining inputs derived using ARR 1987 and methods
described in this document.
The ARR team has been working hard on finalising ARR since the 2016 version was released.
The team has received a lot of feedback from industry and practitioners, ranging from
substantial comments to minor typographical errors. Much of this feedback has now been addressed.
Where a decision has been made not to address the feedback, advice has been provided as to
why this was the case.
A new version of ARR is now available. ARR 2019 is a result of extensive consultation and
feedback from practitioners. Noteworthy updates include the completion of Book 9,
reflection of current climate change practice and improvements to user experience, including
the availability of the document as a PDF.
Key updates in ARR 2019
• Climate change: ARR 2016 reflected best practice as of 2016; ARR 2019 is updated to reflect current climate change practice and policies.
• PMF chapter: ARR 2016 updated the guidance provided in 1998 to include current best practice; ARR 2019 makes minor edits and reflects differences required for use in dam studies and floodplain management.
• Examples: examples are now included for Book 9.
• Figures: figures have been updated to reflect practitioner feedback.
List of Figures
1.2.1. Australian Rainfall and Runoff Preferred Terminology
1.2.2. Different Types of Uncertainty, Aleatory and Epistemic, in the Context of Design Flood Estimation
1.2.3. Impact of Uncertainty on a Design Flood Estimate for Two Design Cases
1.3.1. Illustration of Stochastic Influence of Hydrologic Factors on Flood Peaks and the Uncertainty in Flood Risk Estimates Associated with Observed Flood Data
1.3.2. Illustration of Relative Efficacy of Different Approaches for the Estimation of Design Floods
1.4.1. The Data Cycle
1.4.2. Standard Rain Gauge (Source: Bureau of Meteorology)
1.4.3. Tipping Bucket Rain Gauge (Source: Bureau of Meteorology)
1.4.4. Typical Rating Curve
1.4.5. Loop in Rating Curve
1.4.6. Stage-Discharge Relationship Zones
1.4.7. Annual Maximum Series
1.4.8. Concept of RTK GPS Technique of Field Survey
1.4.9. Aerial Photograph Example (Region A)
1.4.10. Sample of Processed Photogrammetry data set (Region A)
1.4.11. Sample of Processed Photogrammetry data set (Region A detail)
1.4.12. Sample of Raw ALS data set (Region A)
1.4.13. Sample of Processed ALS data set (Region A)
1.4.14. Global Greenhouse Gas Emissions Scenarios for the 21st Century (from IPCC, 2007)
1.4.15. From (IPCC, 2001)
1.5.1. Components of Flood Risk (After McLuckie (2012))
1.5.2. Map Showing Different AEP Flood Extents Including an Extreme Event
1.5.3. Map of Flood Extents
1.5.4. Map of Flood Extents and Flood Function
1.5.5. Map of Flood Extents and Flood Hazard
1.5.6. Map of Flood Extents and Flood Emergency Response Classification
1.5.7. Map of Variation in Constraints Across the Floodplain
1.5.8. Example of Estimated Average Risk to a Community Due to Flooding
1.5.9. Indicative Stage Damage Curve for some Residential House Types
1.5.10. Example of Flood Damage Curve for a Range of AEP Flood Events
List of Tables
1.2.1. Sources of Uncertainty in Design Flood Estimation
1.3.1. Summary of Common Procedures used to Directly Analyse Flood Data
1.3.2. Summary of Recommended Rainfall-Based Procedures
1.3.3. Summary of Advantages and Limitations of Common Procedures used to Directly Analyse Flood Data
1.3.4. Summary of Advantages and Limitations of Common Rainfall-Based Procedures
1.4.1. Typical Accuracies of Field Survey
1.4.2. Typical Swath Values
1.5.1. Example Qualitative Risk Matrix
1.5.2. Infrastructure types and potential Effective Service Life
1.6.1. Design Flood Annual Exceedance Probabilities
1.6.2. Central Slopes Cluster
1.6.3. East Coast Cluster
1.6.4. Monsoonal North Cluster
1.6.5. Murray Basin Cluster
1.6.6. Rangelands Cluster
1.6.7. Southern Slopes Cluster
1.6.8. Southern and South-Western Flatlands
1.6.9. Wet Tropics
Chapter 1. Introduction
James Ball, Mark Babister, Monique Retallick, Erwin Weinmann
Chapter Status: Final
Date last updated: 14/5/2019
1.1. General
While previous editions of Australian Rainfall and Runoff have served the engineering
profession and the general community well, in the period since the release of the previous
edition, a number of developments have arisen that necessitate the production of a new
edition. These developments include the many recent advances in knowledge regarding
flood processes, the increased computational capacity available to engineering hydrologists,
expanding knowledge and application of hydroinformatics, improved information about
climate change and the use of stochastic inputs and Monte Carlo methods.
The intention during the development of this new edition has been to provide appropriate
guidance addressing these issues. In many situations, the guidance provided in this edition
of Australian Rainfall and Runoff requires an enhanced knowledge of flood generation and
the design process. The guidance developed has maintained the aim of Australian Rainfall
and Runoff which is to provide the best available information on design flood estimation in a
manner suitable for use by Australian practitioners with varying levels of knowledge about
the design flood problem, flood processes, and engineering hydrology.
Development of guidance for inclusion in Australian Rainfall and Runoff consistent with the
aims previously stated poses the question of a definition for the design flood problem.
Design flood estimation remains a problem for many engineering projects. Advice is required
regarding design flood characteristics for a wide range of design situations.
The flood characteristic of most importance depends on the nature of the problem under
consideration, but typically it is one of the following:
• Flow rate - commonly the peak but other flood flows may be needed for particular projects;
• Level - commonly the peak but other flood levels may be needed for particular projects;
• Volume - the volume of flood hydrographs is required for the design of many hydraulic
structures designed to retain part of the flood hydrograph for flood mitigation purposes;
• Rate of rise - needed for planning associated with operational flood management, such
as the preparation of evacuation routes; or
• System failure - this may be failure of a dendritic network within a catchment, the failure of
a transport route crossing multiple catchments, or the failure of some other system due to
the occurrence of one or more flood events.
While all of these flood characteristics have been noted as being of interest to flood
practitioners, the dominant characteristic of concern, historically, has been the peak flood
flow. The peak flood flow was also the main focus of the previous edition of Australian
Rainfall and Runoff (Pilgrim, 1987).
In this edition of Australian Rainfall and Runoff, many of the recommended practices focus
on the prediction of peak design flows and prediction of full hydrographs. Since publication
of the last edition of Australian Rainfall and Runoff, it has been recognised that this focus on
flows provided insufficient guidance on other flood characteristics. For the holistic planning,
design and operation of flood management systems, flood characteristics other than peak
flow will also be relevant. For example, the design flood storage for the many retarding
basins located in urban areas is usually a flood volume issue rather than a peak flow issue.
As a result, other recommendations in this edition of Australian Rainfall and Runoff focus on
all flood characteristics that may be of interest in design flood estimation.
This approach is consistent with the aims of Engineers Australia's National Committee on
Water Engineering when they resolved that a revision of Australian Rainfall and Runoff was
needed by the profession and the wider community. These aims can be stated broadly as
being:
• to collect, review and evaluate available design procedures, and to update the document
to include the best available methods and design data Australia wide;
• to provide guidance on the concepts involved in the recommended procedures and their
application;
• to carry out those research activities necessary to meet the above objectives.
Therefore, where circumstances warrant, designers have a duty to use other procedures and
design information more appropriate for their design flood problem. The authorship team of
this edition of Australian Rainfall and Runoff believe that the use of new or improved
procedures should be encouraged, especially where these are more appropriate than the
methods described in this publication. Assessment of the relative merits of new procedures
and design information should be based on the following desirable attributes:
While most of the procedures presented in the guidelines require software for their
implementation, the role of Australian Rainfall and Runoff is not to endorse particular
software packages but rather to provide details of the procedures to be incorporated in flood
estimation software packages. However, enabling software is provided to allow site-specific
design data to be extracted from databases (e.g. for the design rainfall database and the
regional flood frequency estimation). These databases will be updated when warranted by
the availability of significant amounts of new or revised information.
1.2. Contents
While the presentation and formats of Australian Rainfall and Runoff have varied between
the editions, the focal aim has remained one of providing information relevant to design flood
estimation in a form readily accessible to practitioners.
This edition of Australian Rainfall and Runoff has followed the same philosophy and has
grouped information on different aspects of design flood estimation into separate books. The
aim of this is to allow easy updating of components in the future. A total of 9 Books has been
prepared for this edition of Australian Rainfall and Runoff with the following contents:
This book provides a general introduction to Australian Rainfall and Runoff with an emphasis
on the need for the revision and the basic philosophy for the application of the guidelines. It
gives a brief introduction to terminology used within the document, discusses fundamental
issues and basic approaches to flood estimation, data related aspects inclusive of its
management and data uncertainty, risk based design and dealing with climate change.
This book discusses the importance of design rainfall for flood estimation, and includes
discussion of differences between historical and design rainfalls, and issues associated with
the development of rainfall models for design flood estimation in Australian Rainfall and Runoff.
It provides the basis for the recommended Intensity Frequency Duration relationships,
design spatial patterns of rainfall and design temporal patterns of rainfall. Also considered in
this book are continuous rainfall sequences, inclusive of the stochastic generation of
alternative design storm sequences.
This book provides a general introduction to peak flow estimation based on flood frequency
analysis, as well as covering specific technical aspects of this topic area. The first of the
technical chapters provides guidelines for Flood Frequency Analysis at a specific site,
illustrated by a range of examples. The second deals with Regional Flood Frequency
Estimation techniques and describes the application of a tool developed to readily provide
peak flow frequency estimates for any location in Australia.
This book deals with general concepts and issues in catchment modelling for design flood
estimation. The first chapter discusses the need for catchment simulation and introduces
general catchment simulation concepts. The next chapter discusses key hydrologic
processes contributing to floods and how they are represented in modelling systems. This
chapter is followed by a discussion of the types of catchment modelling systems (event and
continuous) and the need for integrating hydrologic, and hydraulic components of the
system. The final chapters deal with the treatment of joint probability issues and uncertainty
in the outputs of simulation models.
The focus of this book is the hydrologic models necessary for prediction of design flood
hydrographs. The first chapter gives a general introduction to concepts presented in this
book while the remaining chapters deal with the modelling of particular components of the
flood formation process. The first of the technical chapters deals with the different types of
hydrologic models used to represent the runoff generation and runoff routing phases of the
flood formation process. The final two chapters deal with baseflow and losses for design
flood estimation and provide design data for these important inputs to flood hydrograph
estimation.
This book is concerned with the basic aspects of hydraulics. It is worth noting that the
material presented in this book is not a replacement for the many textbooks in this area, nor
does it cover all the information necessary for the application of hydraulic principles in
design flood estimation. The chapters in this book present information relevant to the
hydraulic modelling of river reaches, floodplains and structures for design flood estimation,
the application of software for numerical modelling of flood hydrographs, blockage of
hydraulic structures and interaction of coastal and catchment flooding. A tool has been
developed to assist practitioners in assessing the interaction of coastal and catchment
flooding. Also included in this book is guidance on designing for the safety of people and
vehicles. The people safety information presented includes a discussion of the importance of
the demographics in assessing safety.
This book provides discussion of major issues in the practical application of catchment
modelling systems to different flood estimation problems, including establishment of
catchment modelling systems, calibration and validation of model parameters and dealing
with uncertainty in model outputs.
This book provides information and guidelines for the special design applications where
floods of low Annual Exceedance Probabilities need to be estimated. Examples of these
design applications include the sizing of spillways for large dams, design of major structures
located in the floodplain and flood risk management in situations where very large flood
damages or significant risk to life from flooding could be expected. Floods in the range of
very rare to extreme events are generally estimated by the methods described in Book 8,
Chapter 2 to Book 8, Chapter 7 but a number of special considerations and additional design
data are required, as described in Book 8. This book includes an overview of the procedures
available for estimating very rare to extreme floods, estimation of design rainfall and rainfall
excess for rarer events, and special requirements for the models used to generate flood
hydrographs for very rare to extreme flood events. The application of these special
procedures is illustrated by a number of examples.
This book first provides a general introduction to urban drainage systems and the philosophy
adopted in Australian Rainfall and Runoff. It then discusses urban drainage approaches,
changes to the natural hydrologic cycle resulting from urbanisation and how these changes
impact on design flood estimation in urban environments, and use of storage facilities from
on-site storage to detention (retention) basins to large flood mitigation dams. An important
aspect of this discussion relates to limitations of the Rational method and the changes in
approach necessary for consideration of volume-based problems rather than peak flow
based problems.
1.3. References
Pilgrim, D.H. (ed) (1987) Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers, Australia, Barton, ACT.
Chapter 2. Fundamental Issues
James Ball, Mark Babister, Monique Retallick, Fiona Ling, Mark Thyer
2.1. Introduction
This chapter introduces important concepts of probability and statistics with respect to flood
estimation, and defines the recommended terminology for these probability concepts. The
chapter also discusses the difference between design and actual events, conversion of
rainfall of a given probability to a flood of the same probability, risk-based design and dealing
with uncertainty in flood estimates. Much of the text from the 1987 edition of Australian
Rainfall and Runoff is still relevant and has formed the basis for the information provided in
some of the following sections.
2.2. Terminology
2.2.1. Background
Probability concepts are fundamental to design flood estimation and appropriate terminology
is important for effective communication of design flood estimates. Terms commonly used in
the past have included "recurrence interval", "return period", and various terms involving
"probability". It is common for these terms to be used in a loose manner, and sometimes
quite incorrectly. This has resulted in misinterpretation by the profession, the general
community impacted by floods, and other stakeholders.
In considering the terminology that should be used in this edition of Australian Rainfall and
Runoff, the National Committee on Water Engineering's three major concerns were:
• Clarity of meaning;
It is believed that irrespective of the terms used, it is critical that all stakeholders have a
common interpretation of the terms. Furthermore, it is important that stakeholders
understand that the terms refer to long term averages. This means, for a given climatic
environment, that the probability of an event of a given magnitude being equalled or
exceeded in a given period of time (for example, one year) is unchanged throughout the life
of the structure or the drainage network. Furthermore, it is not uncommon for an event to
occur more than once in a single year.
Additionally, given the wet and dry phases that occur in many regions of Australia, these
events are likely to be clustered in time. The occurrence of these wet and dry climatic
phases highlights the misleading and inappropriate interpretation that flood events occur at
regular intervals, as implied by "recurrence interval" and "return period".
Flood events generally are random occurrences, and the period between exceedances of a
given event magnitude usually is a random variate, the properties of which are assumed to
be constant in time for a given location and climatic environment. The adopted terminology
reflects this fundamental concept and is intended to convey a clear and precise
interpretation.
The two approaches used when describing probabilities of flood events in previous editions
of Australian Rainfall and Runoff were:
• Annual Exceedance Probability (AEP) - the probability of an event being equalled or
exceeded within any year. Usually the AEP is derived from an Annual Maximum Series
(AMS), where the largest value in each year of record is extracted.
• Average Recurrence Interval (ARI) - the average time period between occurrences
equalling or exceeding a given value. Usually the ARI is derived from a Peak over
Threshold Series (PoTS), where every value over a chosen threshold is extracted from the
period of record.
Details of AMS and PoTS and the background to these alternative techniques for extracting
flood series from recorded data are presented in Book 3, Chapter 2. Included in this
discussion are the assumptions necessary for conversion of one probability terminology to
the other using the Langbein formula (Langbein, 1949).
Using the Langbein formula, in probability terms, there is little practical difference for events
rarer than 10% AEP. Historically, however, there has been a reluctance to convert from the
approach used for derivation of the design flood estimate. Furthermore, terminology was
attached to particular design flood estimation techniques; for example, when AMS were used
to derive design flood estimates, the resultant probability was expressed as an AEP while
when a PoTS was used for the same purpose, the resultant probability was expressed as an
ARI.
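The Langbein conversion referred to above takes the following standard form, which assumes that exceedances occur as a Poisson process (see Book 3, Chapter 2 for the underlying assumptions):

\[
\mathrm{AEP} = 1 - \exp\left(-\frac{1}{\mathrm{ARI}}\right),
\qquad \text{equivalently} \qquad
\mathrm{ARI} = \frac{-1}{\ln\left(1 - \mathrm{AEP}\right)}.
\]

For example, an ARI of 10 years corresponds to an AEP of 1 - exp(-0.1), or about 9.5%, which differs little from the nominal 10%; this is the sense in which the distinction has little practical effect for events rarer than 10% AEP.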
In many situations, this distinction between an ARI and an AEP was imprecise as the design
flood prediction methodology adopted did not explicitly note the use of either an AMS or a
PoTS in the methodology. As a result, use of ARI and AEP was considered to be
interchangeable. This interchangeable use often resulted in confusion.
The National Committee on Water Engineering believes that within Australian Rainfall and
Runoff a terminology should be used which, while being technically correct, is consistent
with other uses. Furthermore, the terminology adopted should be easily understood both by
the profession and by other stakeholders within the community.
The interaction of the profession with the community and the increased public participation in
decision making means that terminology needs to be clear not only to the profession but also
to the community and other stakeholders, other professions involved in flood management,
and to the managers of flood-prone land. This need has resulted in a move away from the
terminology adopted in the 1987 Edition of Australian Rainfall and Runoff towards a clear
and unambiguous terminology supported by the National Committee on Water Engineering
of Engineers Australia and the National Flood Risk Advisory Group (NFRAG, a reference
group under the Australian and New Zealand Emergency Management Committee). All
parties believe that terminology involving annual percentage probability best conveys the
likelihood of flooding and is less open to misinterpretation by the public.
As shown in the third column of Figure 1.2.1, the term Annual Exceedance Probability (AEP)
expresses the probability of an event being equalled or exceeded in any year in percentage
terms, for example, the 1% AEP design flood discharge. There will be situations where the
use of percentage probability is not practicable; extreme flood probabilities associated with
dam spillways are one example of a situation where percentage probability is not
appropriate. In these cases, it is recommended that the probability be expressed as 1 in X
AEP where 100/X would be the equivalent percentage probability.
For events more frequent than 50% AEP, expressing frequency in terms of annual
exceedance probability is not meaningful and can be misleading, as probability is constrained to a
maximum value of 1.0 or 100%. Furthermore, where strong seasonality is experienced, a
recurrence interval approach would also be misleading. An example of strong seasonality is
where the rainfall occurs predominately during the Summer or Winter period and as a
consequence flood flows are more likely to occur during that period. Accordingly, when
strong seasonality exists, calculating a design flood flow with a 3 month recurrence interval
is of limited value as the expectation of the time period between occurrences will not be
consistent throughout the year. For example, a flow with the magnitude of a 3 month
recurrence interval would be expected to occur or be exceeded 4 times a year; however, in
situations where there is strong seasonality in the rainfall, all of the occurrences are likely to
occur in the dominant season.
Consequently, events more frequent than 50% AEP should be expressed as X Exceedances
per Year (EY). For example, 2 EY is equivalent to a design event with a 6 month recurrence
interval when there is no seasonality in flood occurrence.
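The EY terminology can be related to AEP using the same Poisson-process assumption that underlies the Langbein conversion. The short sketch below is illustrative only and is not an ARR procedure:

```python
import math

def aep_from_ey(ey: float) -> float:
    """Annual Exceedance Probability implied by an average of `ey`
    exceedances per year, assuming occurrences follow a Poisson process
    with no seasonality."""
    return 1.0 - math.exp(-ey)

# 2 EY (roughly a 6 month recurrence interval) corresponds to an AEP of about 86%,
# which is why frequent events are better described in EY than in AEP terms.
print(f"2 EY   -> AEP = {aep_from_ey(2.0):.0%}")
print(f"0.5 EY -> AEP = {aep_from_ey(0.5):.0%}")
```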
Different users of Australian Rainfall and Runoff, in general, will use different segments of
the relationship between flood magnitude and exceedance probability. To reduce confusion
that may arise from switching between different terminologies, it is recommended that
consistent terminology in accordance with one of the columns of Figure 1.2.1 be used within
an industry segment.
These expressions of estimated frequencies relate directly to the particular time period for
which data have been analysed and frequencies determined with no consideration given to
the long term effects of climatic change. Nonetheless, the adopted terminology is considered
to be equally applicable to both stationary and non-stationary climatic environments, as there
is no requirement for the annual exceedance probabilities to be constant over time.
Consequently, where flood characteristics are changing as result of long term climatic
change, the AEP of a flood characteristic for a future time period may be different or,
conversely, a flood characteristic magnitude corresponding to a given AEP may change.
A design flood is a probabilistic or statistical estimate, being generally based on some form
of probability analysis of flood or rainfall data. An Annual Exceedance Probability is
attributed to the estimate. This applies not only to normal routine design, but also to probable
maximum estimates, where no specific probability can be assigned but the intention is to
obtain a design value with an extremely low probability of exceedance. In the flood
estimation methods based on design rainfalls, the probability relationship between design
rainfall events and design flood events is not a direct one. Occurrence of a rainfall
event when the catchment is wet might result in a very large flood, while occurrence of the
same rainfall event when the catchment was dry might result in relatively little, or even no
runoff. For the design situation, the combinations of factors that produce a
flood event are not known and must be assumed, often implicitly in the design values that
are adopted.
The approach to estimating an actual (or historic) flood from a particular rainfall event is
quite different in concept and is of a deterministic nature. All causes and effects are directly
related to the specific event under consideration. The actual antecedent conditions
prevailing at the time of occurrence of the rain are directly reflected in the resulting flood and
must be allowed for in its estimation. No real information on flood
probability can be gained from consideration of a single actual flood event.
Although the differences in these two types of events are often not recognised, they have
three important practical consequences. The first is that a particular procedure might be
appropriate for analysing actual flood events but quite unsuitable for probabilistic
design flood events.
The second concerns the manner in which values of parameters are derived from recorded
data, and the manner in which designers regard these values and apply them. If actual
floods are to be estimated, values for use in the calculations should be derived from
calibration on individual observed events. If design floods are to be estimated, the values
should be derived from statistical analyses of data from many observed floods.
The third practical consequence concerns the manner in which parameters are viewed by
designers and analysts. For example, design initial losses for bursts can be very different
from event initial losses derived from actual events, yet practitioners still often compare them
without understanding the differences.
However, each of the processes represented in a model that converts rainfall to runoff and
forms a flood hydrograph at the point of interest introduces an element of joint probability, resulting in
the fundamental problem that the true probability of the derived flood characteristic may be
obscure, and its magnitude may be biased with respect to the true flood magnitude with the
same probability as the design rainfall, especially at the low probabilities of interest in
design.
Since publication of ARR 1987 (Pilgrim, 1987) there has been a steady shift towards
methods that better account for the stochastic nature of how floods of different magnitude
and exceedance probabilities are generated. Procedures of different complexity to deal with
this fundamental issue are discussed in Book 1, Chapter 3.
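The joint probability issue outlined above can be illustrated with a deliberately simplified Monte Carlo sketch. The rainfall and loss distributions below are hypothetical and chosen only for illustration; they are not ARR design values:

```python
import numpy as np

rng = np.random.default_rng(2019)
n = 500_000

# Hypothetical event rainfall depths (mm) and event-to-event initial losses (mm).
rain = rng.gumbel(loc=60.0, scale=25.0, size=n)
loss = rng.uniform(10.0, 60.0, size=n)
runoff = np.maximum(rain - loss, 0.0)   # crude rainfall-excess surrogate for flood magnitude

# "True" 1% AEP runoff when rainfall and loss variability are simulated jointly.
q_joint = np.quantile(runoff, 0.99)

# Runoff obtained by applying a single representative (median) loss to the 1% AEP
# rainfall, i.e. assuming the design rainfall maps directly to a flood of the same AEP.
q_naive = np.quantile(rain, 0.99) - np.median(loss)

print(f"1% AEP runoff, joint simulation:   {q_joint:.0f} mm")
print(f"1% AEP rainfall minus median loss: {q_naive:.0f} mm")
```

In this sketch the two estimates generally differ, which is the sense in which applying fixed representative values of the other inputs to a design rainfall does not automatically produce a flood estimate of the same probability.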
In cases where the interest is principally on the accurate estimation of the AEP that
corresponds to a specified flood magnitude (e.g. the flood level at which a particular flood
protection structure is expected to fail), an expected AEP (or expected probability) quantile
should be used. The use of such a quantile ensures that, on average, its AEP equals the
true value. In cases where the mean-squared-error in the flood magnitude is to be minimized
for a given AEP, expected parameter quantiles should be used.
The difference between these quantile estimates is typically not of significance when there is
little or no extrapolation of the observed range of data, and especially if the skew is small.
However, if extrapolation is required and high skews are involved, the difference can be
appreciable. The methods in Book 3, Chapter 2 describe how to estimate these quantiles.
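The difference between the two kinds of quantile estimate can be illustrated with a parametric bootstrap sketch. The example below assumes a Gumbel flood frequency distribution and a synthetic 30 year record; it illustrates the concept only and is not the procedure specified in Book 3, Chapter 2:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
p = 0.01                                             # target AEP (1%)
record = stats.gumbel_r.rvs(loc=100, scale=40, size=30, random_state=rng)

# Bootstrap the fitted parameters to represent parameter (epistemic) uncertainty.
boot_params = []
for _ in range(2000):
    resample = rng.choice(record, size=record.size, replace=True)
    boot_params.append(stats.gumbel_r.fit(resample))

# Quantile evaluated at the single best-fit parameter set.
loc_hat, scale_hat = stats.gumbel_r.fit(record)
q_fit = stats.gumbel_r.isf(p, loc_hat, scale_hat)

# Expected-probability quantile: the flow whose exceedance probability, averaged
# over the bootstrap parameter sets, equals the target AEP.
def mean_exceedance_minus_p(q):
    return np.mean([stats.gumbel_r.sf(q, lo, sc) for lo, sc in boot_params]) - p

q_exp_prob = optimize.brentq(mean_exceedance_minus_p, 0.5 * q_fit, 5.0 * q_fit)

print(f"Quantile at fitted parameters: {q_fit:.0f}")
print(f"Expected-probability quantile: {q_exp_prob:.0f}")
```

The expected-probability quantile is typically the larger of the two, and the difference grows as the record shortens or the degree of extrapolation increases.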
This problem is receiving increasing attention from transport managers and is discussed in
detail in Book 1, Chapter 5.
For the first time, ARR has been based completely on Australian data to better reflect
Australia’s variable landscape, including a national database of extreme flood hazards. A
major task of the current ARR update was assembling a national databases of rainfall and
streamflow data for developing inputs and methodologies. ARR 1987 (Pilgrim, 1987) used
600 pluviograph rainfall gauges (continuously recording rain gauges) with greater than
6 years of data and 7500 daily rainfall gauges with over 30 years of record. ARR 2016 uses
almost 30 years of extra rainfall and streamflow data, including data from over 2200
pluviographs and over 8000 daily rainfall gauges. Over 900 streamflow gauges were
analysed. Over 100,000 storm events were analysed. This data provides a valuable
resource for the development of future methodologies.
Major improvements have been made to design flood estimation methods, and the national
databases will allow the use and parameterisation of more complex methods. Major
advances will continue that will allow us to leverage the limited data we can afford to collect
across the nation. Many projects have opened the eyes of researchers and
practitioners to what could be done with more time, money and the still limited data
available. The data sets developed as part of this update should be enhanced and applied
for future improvements.
Book 1, Chapter 4 provides a summary of the types of data used for flood estimation.
Research projects funded as part of this edition of ARR investigated the following aspects:
• How climate change will affect flooding and the factors influencing flooding;
• How to incorporate climate change into the investigation methodologies used by the
engineering profession to estimate design floods;
• Updating of the methodology in Australian Rainfall and Runoff so that the outcomes from
climate change research (e.g. regional dynamic downscaling) can be incorporated easily
into the investigation methodology as the science and results become available.
The impacts of climate change on design flood estimation are discussed in detail in Book 1,
Chapter 5, Section 10. More detail can be found in the ARR Climate Change Research Plan
and ARR Project 1: Climate Change Synthesis report (Bates and Westra, 2013; Bates et al.,
2015).
The major areas where climate change will impact flooding are:
• Antecedent conditions;
A warming climate leads to an increase in the water holding capacity of the air, which causes
an increase in the atmospheric water vapour that supplies storms, resulting in more intense
precipitation. This effect is observed, even in areas where total precipitation is decreasing
(Trenberth, 2011). Indeed, some of the largest impacts of climate change are likely to result
from a shift in the frequency and strength of climatic extremes, including precipitation (White
et al., 2010). It is likely that the frequency of heavy precipitation will increase by the end of
the 21st century, particularly in the high latitudes and tropical regions and there is likely to be
an increase in heavy rainfalls associated with tropical cyclones (IPCC, 2012).
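The water-holding-capacity argument above is often quantified through the Clausius-Clapeyron relation; the approximate figure below is general background rather than a value drawn from these guidelines:

\[
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L_v}{R_v T^2} \approx 6 \text{ to } 7\% \text{ per } ^\circ\mathrm{C}
\]

where \(e_s\) is the saturation vapour pressure, \(L_v\) the latent heat of vaporisation and \(R_v\) the specific gas constant for water vapour. Near surface temperatures this corresponds to roughly a 7% increase in the moisture-holding capacity of the air per degree of warming.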
There have been many studies globally that have found increases in the intensity or
frequency of extreme precipitation events (Bates et al., 2008; Westra et al., 2013). It is likely
that since the 1970s the frequency of heavy precipitation events has increased over most
areas (Bates et al., 2008). From 1950 to 2005, extreme daily rainfall intensity and frequency
has increased in north-western and central Australia and over the western tablelands of New
South Wales, but decreased in the south-east and south-west and along the central east
coast (CSIRO and Australian Bureau of Meteorology, 2007). Projections analysed by CSIRO
and Australian Bureau of Meteorology (2007) showed that an increase in daily precipitation
intensity is likely under climate change. The study found that the highest 1% of daily rainfalls
tends to increase in the north of Australia and decrease in the south, with widespread
increases in summer and autumn, but not in the south in winter and spring when there is a
strong decrease in mean precipitation (CSIRO and Australian Bureau of Meteorology, 2007).
The increases in precipitation are more evident in sub-daily rainfalls and major changes in
the intensity and temporal patterns of sub-daily rainfalls can be expected by the end of the
21st century (Westra et al., 2013). In a study of downscaled outputs from climate models,
Abbs and Rafter (2008) found that by 2070 the models projected an increase of an average
of 40% in intensity for 24 and 72 hour events around the Queensland-New South Wales
border, and an increase of more than 70% in the two hour rainfall events in the high terrain
inland from the Gold Coast.
Projections of potential evapotranspiration over Australia show increases by 2030 and 2070.
The largest projected increases are in the north and east, where the change by 2030 ranges
from little change to a 6% increase, with the best estimate being a 2% increase. By 2070,
the A1FI scenario gives increases of 2% to 10% in the south and west with a best estimate
of around 6%, and a range of 6% to 16% in the north and east with a best estimate around
10% (CSIRO and Australian Bureau of Meteorology, 2007).
Observed rates of sea level rise vary around Australia, with rates on the north
and north-west coasts of Australia of about 9 mm/yr, and a rate of 2 to 4 mm/yr on the south-
eastern and eastern Australian coastline (Church et al., 2012).
• Raise awareness of the various sources of uncertainty in common techniques for design
flood estimates.
The causes of these uncertainties are that practitioners are required to: (1) Use
mathematical algorithms to represent the complexity of catchment processes that transform
rare rainfall into rare flood events. (2) Calibrate and validate these algorithms using
measurements of the catchment processes that are highly uncertain. It is widely acknowledged
that there is significant spatial variation in catchments and temporal and spatial variation in
the antecedent catchment wetness and rainfall events that drive significant flood events.
Practitioners use hydrologic models, which are simplified mathematical conceptualisations to
represent these complex spatially and temporally distributed hydrological processes. These
hydrologic models are calibrated to measurements of data on variables such as rainfall,
evaporation and flow. It is widely acknowledged that these data can have significant
measurement errors (refer to Book 1, Chapter 4). Rainfall is spatially heterogeneous,
however, typically there are only a small number of rainfall gauges in a given catchment.
Streamflow is based on river height (stage) measurements and a rating curve, which can be
difficult to reliably estimate for large flood events. Typically these uncertainties are ignored
in the design flood estimation process.
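As background to the rating curve point above, a common (though not the only) functional form is a power law fitted to gaugings at low to moderate stages and then extrapolated to flood stages, where it is least certain. The coefficients in this sketch are hypothetical:

```python
import numpy as np

def rating_curve(stage_m, a=5.0, h0=0.2, b=1.8):
    """Hypothetical power-law rating curve Q = a * (h - h0)**b, in m3/s.
    The values of a, h0 and b are illustrative only, not from any real gauge."""
    return a * np.maximum(np.asarray(stage_m, dtype=float) - h0, 0.0) ** b

print(rating_curve([0.5, 1.0, 1.5]))   # stages within the gauged range
print(rating_curve([4.0, 6.0]))        # flood stages: extrapolated, hence most uncertain
```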
Uncertainty analysis provides the tools with which to handle this uncertainty and incorporate
it into the design flood estimates. To enable the use of uncertainty analysis tools, it is first
important to distinguish two broad types of uncertainty:
• Aleatory (or inherent) Uncertainty - refers to uncertainty that arises through natural
randomness or natural variability that we observe in nature; and
• Epistemic (or knowledge based) Uncertainty - refers to uncertainty that arises from
incomplete knowledge of the system being modelled, such as limited data or imperfect
understanding of catchment processes; this type of uncertainty can, in principle, be reduced.
These definitions are consistent with the broad definitions provided by Ang and Tang (2007)
in the wider context of general engineering and, in the specific context of flood risk, by
Pappenberger and Beven (2006). The major difference between the two types of
uncertainty is that epistemic uncertainty can be reduced, through advances in process
understanding or improvement in measurement techniques, while aleatory uncertainty
cannot be reduced, and therefore needs to be characterised. Both types of uncertainty can
be characterised using tools of uncertainty analysis. Ang and Tang (2007) provide a wealth
of examples of the two types of uncertainty in a general engineering context.
In the context of design flood estimation, a simple way to understand the difference
between these two types of uncertainty is to consider a flood frequency
distribution, as shown in Figure 1.2.2, with probability limits on the design flood estimates
over the range of Annual Exceedance Probabilities.
An illustration of epistemic uncertainty is the uncertainty in the estimate of the design flood
for a given Annual Exceedance Probability. In Figure 1.2.2, for example, the design flood for a 1%
Annual Exceedance Probability has an expected flow of 100 m3/s and the 95% probability
limits are 65 and 155 m3/s. This uncertainty in the design flood estimate for a given Annual
Exceedance Probability is primarily epistemic (or knowledge based) uncertainty.
There is an opportunity to reduce this uncertainty: longer flow records would
reduce the uncertainty in the parameters of the flood frequency distribution fitted to
the annual maximum floods. Similarly, for catchment modelling, a better
understanding of the catchment processes, obtained through better data to calibrate and
verify the catchment modelling system, would reduce the uncertainty in the flood
estimates of the catchment model.
Figure 1.2.2. Different Types of Uncertainty, Aleatory and Epistemic, in the Context of Design
Flood Estimation
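The way longer records narrow the probability limits can be illustrated with the same bootstrap idea used in the earlier quantile sketch. The Gumbel parameters and record lengths below are synthetic and will not reproduce the 65 to 155 m3/s limits quoted above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def limits_on_1pct_flood(n_years, n_boot=2000):
    """Approximate 95% probability limits on the 1% AEP flood estimated from a
    synthetic Gumbel record of length n_years, via bootstrap resampling."""
    record = stats.gumbel_r.rvs(loc=50, scale=15, size=n_years, random_state=rng)
    quantiles = []
    for _ in range(n_boot):
        resample = rng.choice(record, size=n_years, replace=True)
        loc, scale = stats.gumbel_r.fit(resample)
        quantiles.append(stats.gumbel_r.isf(0.01, loc, scale))
    return np.percentile(quantiles, [2.5, 97.5])

print("30 year record :", limits_on_1pct_flood(30))    # wider limits
print("100 year record:", limits_on_1pct_flood(100))   # typically much narrower limits
```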
Despite the simplicity of the two illustrations of aleatory and epistemic uncertainty, given in
the flood frequency distribution in Figure 1.2.2, there are occasions where the distinction
between the two different types of uncertainty is not always clear. For example, the
illustration of Figure 1.2.2 implies that, as the level of information increases and the epistemic
uncertainty is reduced, the "true" flood frequency distribution for a given catchment will
emerge. There is a practical limit on the level of information (data and/or process
understanding) available on a given catchment, hence the concept of a single "true" flood
frequency distribution for a given catchment is likely to be unobtainable. Hence the epistemic
uncertainty shown in Figure 1.2.2 will have a component of aleatory uncertainty.
The concepts of aleatory and epistemic uncertainty are similar to concepts of flood likelihood
and uncertainty from risk-based decision-making (Book 1, Chapter 5).
An example of the potential benefits of incorporating uncertainty for more informed decision
making is provided in Figure 1.2.3. Consider two different designs: Design A and Design B.
The practitioner needs to choose the design that reduces the flood magnitude for a given
catchment location. Design A has a higher value for the most likely estimate of the design
flood, but has a lower uncertainty than Design B. The differences in the uncertainty
estimates could arise because Design B is a more complicated design option than Design A
and requires the use of a more complex catchment modelling approach (e.g. a fully distributed
model (Book 5)), and there was a lack of spatial data in the catchment to calibrate the
distributed model, hence parameter estimates had to be based on regional information.
In contrast, Design A was based on a catchment modelling approach that was well calibrated
using high quality data that was readily available in the catchment. If the uncertainty is
ignored then Design B would be the preferred choice of the practitioner, because the most
likely estimate of the flood magnitude is lower than for Design A. If the uncertainty in the flood
magnitude is incorporated, then a practitioner who is risk-averse may prefer to choose Design
A, because the probability of a large magnitude flood with major/catastrophic consequences
is lower than for Design B. This example illustrates how the uncertainty in the design flood
estimates, when combined with the risk attitude (risk-averse, risk-neutral, or risk-seeking) of the
practitioner, provides more information on which to base the design choice.
Figure 1.2.3. Impact of Uncertainty on a Design Flood Estimate for Two Design Cases
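A simple numerical counterpart to the Figure 1.2.3 discussion is sketched below; the normal distributions and the consequence threshold are hypothetical values chosen only to mirror the narrative above, not design values:

```python
from scipy import stats

# Hypothetical design flood estimates at the location of interest (m3/s):
design_a = stats.norm(loc=90, scale=10)   # higher most likely estimate, lower uncertainty
design_b = stats.norm(loc=80, scale=30)   # lower most likely estimate, higher uncertainty

threshold = 130.0   # hypothetical flow above which major/catastrophic consequences occur

print(f"P(flood > {threshold:.0f} m3/s) for Design A: {design_a.sf(threshold):.3%}")
print(f"P(flood > {threshold:.0f} m3/s) for Design B: {design_b.sf(threshold):.3%}")
# Ignoring uncertainty favours Design B (lower most likely flood magnitude), but a
# risk-averse practitioner may prefer Design A because its probability of a very
# large flood is much lower.
```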
From a practical and scientific perspective Pappenberger and Beven (2006) provide an
overview of the common reasons for not undertaking uncertainty analysis for hydrologic and
hydraulic models and argue that these reasons are not tenable. A summary of the
reasons provided by Pappenberger and Beven (2006) and their counter-arguments is
as follows:
Pappenberger and Beven (2006) state that there is a group of practitioners who believe that
their models are (or at least will be in the future) physically correct and thus parameter
calibration or uncertainty analysis should not be necessary (or only minimal) if predictions
are based on a true understanding of the physics of the system simulated. This position is
difficult to justify considering published discussions of the modelling process in respect of
the sources and impacts of uncertainties (Beven, 1989; Beven, 2006; Oreskes et al.,
1994). It is argued that this group of practitioners have too much faith in the model
representation of physical laws or empirical equations. An alternative is a group of
Learning about how water flows through the landscape, and about the best model to
represent this water flow, requires the use of a hypothesis testing framework. In real
applications, this hypothesis testing framework would evaluate different competing
hypotheses (i.e. models) against the observations, and should explicitly consider the
potential sources of uncertainty in applications to real systems to enable the results to be
stated in a probabilistic rather than a deterministic manner. This would enable evaluation
of whether differences in model performance can be reliably identified given the
uncertainty in the predictions and observations.
Pappenberger and Beven (2006) cite several scientific studies that suggest practitioners
actually want to get a feeling for the range of uncertainty and the risk of possible
outcomes. Furthermore, policy-makers regularly make decisions under severe
uncertainty. If uncertainty is not communicated and there is a misunderstanding of the
certainty of modelling results, this can lead to a loss of credibility and trust in the model and
the modelling process.
However, it is acknowledged that there are a wide range of different perceptions of "risk"
and "uncertainty" and that effort is required on the part of both practitioners and policy-
makers to work together to achieve a common understanding of uncertainty.
There are two supporting arguments to this reason: (1) decisions are binary; and (2)
Uncertainty bounds are too wide to be useful in decision making. Pappenberger and
Beven (2006) conclude there is no question that, for many environmental systems, a
rigorous estimate of uncertainty leads to wide ranges of predictions. There are certainly
cases in which the predictive uncertainty for outcomes of different scenarios is
significantly larger than the differences between the expected values of those scenarios.
This leads to the perception that decisions are difficult to make. To counter these
arguments, Pappenberger and Beven (2006) present numerous examples from the
literature on decision support systems and decision analysis which provide a range of
methods for decision making under uncertainty based on assessments of the risk and
costs of possible outcomes. Examples of decisions under uncertainty for Flood Frequency
Analysis are illustrated by Wood and Rodriguez-Iturbe (1975) and more recently by Botto
et al. (2014). The key outcome from Botto et al. (2014) was that incorporating uncertainty
in estimating the design floods (by minimising the total expected costs) leads to
substantially higher estimates of the design flood compared to standard approaches where
uncertainty is ignored. This suggests that incorporating uncertainty leads to reduced expected
costs and highlights the benefits of incorporating uncertainty.
Pappenberger and Beven (2006) identify that in the application of uncertainty analysis
methods, certain decisions must be made, some of which involve an element of
subjectivity, such as the choice of probability distributions for data errors, prior
Pappenberger and Beven (2006) note this is a common attitude amongst practitioners
and is a consequence of the need to spend more time and money on assessing the
different potential sources of uncertainty in any particular application, coupled with a lack
of clear guidance about which methods might be useful in different circumstances.
Pappenberger and Beven (2006) note that, in general, uncertainty analysis is not too
difficult to perform and provide a list of relevant software that is available. Since the
Pappenberger and Beven (2006) review, research publications on uncertainty analysis
in hydrologic modelling have increased substantially, with many new tools/techniques and
reviews available (for example the recent review by Uusitalo et al. (2015)). These tools
will be reviewed to provide guidance for practitioners on which are applicable in different
situations in the context of design flood estimation. The continued increases in
computational power have reduced the computational costs of uncertainty analysis, which
reduces the difficulty in undertaking uncertainty analysis.
In summary, Pappenberger and Beven (2006) conclude that in the past many modelling and
decision making processes have ignored uncertainty analysis and it could be argued that
under many circumstances it simply would not have mattered to the eventual outcome.
However, they note that the arguments for uncertainty analysis are compelling because:
1. It makes the practitioner think about the processes involved and the decisions made
based on model results;
3. It allows a more fundamental retrospective analysis and allows new or revised decisions
to be based on the full understanding of the problem and not only a partial snapshot; and
4. Decision makers and the public have the right to know all limitations in order to make up
their own minds and lobby for their individual causes.
the two common techniques used for design flood estimation; the Flood Frequency Analysis
and catchment modelling approaches to design flood estimation. The primary drivers of each
of the sources of uncertainty will then also be discussed.
The various sources of uncertainty that are relevant to design flood estimation are outlined
as follows:
• Predictive Uncertainty
Predictive uncertainty represents the total uncertainty in the predictions of interest, typically
the estimates of the design flood. It is comprised of the various sources of uncertainty that
are outlined below, including data uncertainty, parametric uncertainty, structural uncertainty,
regionalisation uncertainty (if relevant) and deep uncertainty (if relevant). This total predictive
uncertainty is what is used as input to the decision making framework, to provide
reliable predictions. The magnitude of the total predictive uncertainty and the relative
contribution of the various sources of uncertainty is of obvious interest. The magnitude
provides an indication of the total uncertainty of the predictions, while the relative
contribution highlights which sources of uncertainty are the key contributors and which can
be reduced.
• Data Uncertainty
Data uncertainty is a key source of predictive uncertainty. The more uncertain the data used
to inform the methods used to estimate the peak flows, the more uncertainty in the
predictions of the peak flows. The definition of “data” is a challenging one in the context of
design flood estimation since, in each step of the modelling process, the data used as an input
may be based on the output of a prior modelling process, rather than actual measurements.
Data uncertainty is dependent on the quality and number of measurements undertaken to
inform that data.
• Parametric Uncertainty
Design flood estimates rely on mathematical models to predict design floods. The
parameters of these models are estimated using time series of uncertain data of finite length.
These limitations induce uncertainty in the estimates of the parameters, called parametric
uncertainty. This parametric uncertainty would occur even if the mathematical model were exact.
The magnitude of this parametric uncertainty decreases as the length of the time series of data
increases, and increases as the uncertainty of the data increases. When time series are
short and/or uncertainty in the data is high, parametric uncertainty can contribute
significantly to total predictive uncertainty.
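A minimal sketch of this effect, under the assumption of a log-Normal flood population with invented parameters, is given below: synthetic records of different lengths are repeatedly sampled and refitted, and the spread of the resulting 1% AEP estimates narrows as the record lengthens.

```python
import numpy as np

# Assumed "true" log-Normal flood population (illustrative values only)
rng = np.random.default_rng(42)
true_mu, true_sigma = np.log(300.0), 0.6     # log-space mean and standard deviation (flows in m3/s)
z99 = 2.3263                                 # standard normal quantile for 99% non-exceedance (1% AEP)

for n_years in (20, 50, 100):
    estimates = []
    for _ in range(2000):
        sample = rng.lognormal(true_mu, true_sigma, size=n_years)   # synthetic annual maxima record
        mu_hat = np.log(sample).mean()                              # refit the log-Normal each time
        sigma_hat = np.log(sample).std(ddof=1)
        estimates.append(np.exp(mu_hat + z99 * sigma_hat))          # fitted 1% AEP quantile
    lo, med, hi = np.percentile(estimates, [5, 50, 95])
    print(f"{n_years:>3} years of record: 1% AEP estimate 5-95% range "
          f"{lo:.0f}-{hi:.0f} m3/s (median {med:.0f})")
```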
• Structural Uncertainty
Structural uncertainty refers to the uncertainty in the mathematical model used to provide the
predictions of the peak flows. It is a consequence of the simplifying assumptions made in
approximating the actual environmental system with a mathematical hypothesis (Renard et
al., 2010). The structural error of a hydrologic model therefore depends on the model formulation.
• Regionalisation Uncertainty
Regionalisation uncertainty arises when information is transferred from data rich locations to the site of
interest, for example to estimate the flood frequency distribution, the parameters of the runoff-routing model,
the loss model or the design rainfall used in the catchment modelling approach. It is a function of the
predictive uncertainty of the original application of the model at the data rich location (which
is dependent on the structural, parametric and data uncertainty at that data rich site) and of
the regionalisation model used to transfer information from one site to another. Given the
large number of sources of uncertainty that feed into regionalisation uncertainty, it can induce
significant predictive uncertainty when there is very limited at-site data.
• Deep Uncertainty
Deep uncertainty refers to the sources of uncertainty that impact on the robustness of a design
but are difficult to assign a priori probability measures to. It acknowledges that practitioners
and decision makers may not be able to enumerate all sources of uncertainty in a system
nor their associated probabilities (Herman et al., 2014). It is related to the emerging field of
robust decision making, where it is assumed that future states of the world are deeply
uncertain and, instead of assigning probabilities, the aim is to identify robust strategies which
perform well across the range of plausible future states. In the context of design flood
estimation, examples of deep uncertainty include the effects of climate change,
because the different scenarios used for future greenhouse gas emissions cannot be
assigned probabilities, and future land use changes within a
catchment, because they depend on a variety of political, social and economic factors to which
probabilities are difficult to reliably assign. This source of uncertainty requires a different
approach to the other sources, where scenario analysis is used to stress-test the system and
identify thresholds where significant failures occur. This approach has seen recent
application in analysing water resources systems for long-term drought planning; however,
its application in flood design is limited. Given this is still a burgeoning area requiring significant
research, approaches to treat this source of uncertainty are not considered further in
Australian Rainfall and Runoff.
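A minimal sketch of the scenario (stress-test) style of analysis mentioned above is given below, assuming a notional levee capacity, a current-climate flood estimate and a rainfall-to-flood elasticity; design rainfall is scaled across a range of hypothetical climate uplift factors to find the threshold at which the capacity is first exceeded. All of the values are invented placeholders.

```python
import numpy as np

# Hypothetical system and sensitivity (placeholders, not design values)
levee_capacity = 1800.0        # m3/s, notional capacity of the levee
base_peak_flow = 1400.0        # m3/s, current-climate 1% AEP flood estimate
rainfall_elasticity = 1.6      # assumed % change in peak flow per % change in rainfall

uplifts = np.arange(0.0, 0.31, 0.05)                      # 0% to 30% rainfall increase scenarios
peaks = base_peak_flow * (1.0 + rainfall_elasticity * uplifts)

for uplift, peak in zip(uplifts, peaks):
    status = "EXCEEDS capacity" if peak > levee_capacity else "ok"
    print(f"rainfall uplift {uplift:4.0%}: peak ~{peak:6.0f} m3/s  {status}")

exceeding = uplifts[peaks > levee_capacity]
if exceeding.size:
    print(f"Failure threshold first reached at about {exceeding[0]:.0%} rainfall uplift")
```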
To illustrate where uncertainty arises in each design flood estimation technique, the approach adopted here is to:
1. Identify the information required for each step of the methods; and
2. Identify the potential sources of uncertainty in the information required for each of the
steps.
Uncertainty is related to the level of information (i.e. the availability of at-site data, its length and
quality). For the purposes of this illustration, two different scenarios of available information
will be considered: (a) using at-site data; and (b) no at-site data available, using regional
information only. In practice, the level of information will commonly be somewhere in
between these two scenarios; nonetheless, they provide convenient use
cases to illustrate the identification of the sources of uncertainty.
The relative contribution of each of these sources of uncertainty to the total predictive
uncertainty is catchment specific and depends on a range of factors (outlined below). Hence,
to evaluate and determine the dominant source of uncertainty in a particular catchment
requires a rigorous uncertainty analysis. The following description therefore focuses on
describing the various sources of uncertainty for each of the steps in both Flood Frequency
Analysis and catchment modelling, and identifies the factors that impact on the magnitude
of each particular source. Any particular combination of available information means that
one source could dominate the others. Hence, in the following descriptions, each uncertainty
source will not be described as low or high; rather, the description will identify what increases
or decreases the magnitude of that source of uncertainty.
Data Uncertainty
When using at-site streamflow data to estimate the Flood Frequency Distribution, the
uncertainty in this streamflow data is a source of uncertainty. The factors that
affect the magnitude of this source of uncertainty are primarily the quality of the rating
curve used to estimate the streamflow, the number of gaugings (and their quality), the
degree of extrapolation of the rating, and the stability of the rating curve, among others (Le
Coz et al., 2013).
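A minimal sketch of how rating curve uncertainty can be propagated into a flood quantile is given below: each value in a synthetic annual maxima record is perturbed by a multiplicative error whose spread grows with flood size, mimicking rating extrapolation, and the fitted 1% AEP quantile is recomputed many times. The record and the error model are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
gauged_maxima = rng.lognormal(np.log(250.0), 0.5, size=40)   # synthetic 40-year annual maxima record
z99 = 2.3263                                                 # 1% AEP standard normal quantile

def fit_1pct_quantile(series):
    """Fit a log-Normal by moments in log space and return the 1% AEP quantile."""
    logs = np.log(series)
    return np.exp(logs.mean() + z99 * logs.std(ddof=1))

estimates = []
for _ in range(2000):
    # Assumed relative rating error: ~5% for small floods, up to ~30% for the largest
    rel_error = 0.05 + 0.25 * (gauged_maxima / gauged_maxima.max())
    perturbed = gauged_maxima * rng.lognormal(0.0, rel_error)   # multiplicative rating error
    estimates.append(fit_1pct_quantile(perturbed))

lo, hi = np.percentile(estimates, [5, 95])
print(f"1% AEP quantile ignoring rating error : {fit_1pct_quantile(gauged_maxima):.0f} m3/s")
print(f"1% AEP quantile with rating error     : 5-95% range {lo:.0f}-{hi:.0f} m3/s")
```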
Parametric Uncertainty
As parameters of the Flood Frequency Distribution are estimated based on limited time
series of data, this induces uncertainty in the parameters. This parametric uncertainty
is determined by the length of data (uncertainty increases as the length decreases) and
the quality of the data (parametric uncertainty increases as data uncertainty increases).
Structural Uncertainty
The source of structural uncertainty is the assumed form of the flood frequency
distribution probability model, i.e. log-Normal, Log Pearson III etc. When calibrating to
at-site data, this source of uncertainty can be checked by comparing the fitted distribution
against the observed data to determine the quality of the fit.
When there is no at-site data, regional information is used to inform the
parameters and the choice of the probability model used for the flood frequency
distribution. For this case, data uncertainty is not a source of uncertainty;
however, the parametric uncertainty is higher than in case (a), because no at-site data is
available, and the structural uncertainty is also higher than in case (a), because no at-site
data is available to evaluate whether the chosen probability model for the flood frequency
distribution is appropriate.
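The effect of structural uncertainty under extrapolation can be illustrated by fitting two plausible probability models to the same record and comparing their 1% AEP quantiles, as in the sketch below. The synthetic record and the choice of log-Normal and GEV models (fitted with scipy) are assumptions for illustration, not a recommendation of either model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
maxima = rng.lognormal(np.log(200.0), 0.55, size=35)   # synthetic 35-year annual maxima record
aep = 0.01

# Candidate model 1: log-Normal (fitted by moments in log space)
mu, sigma = np.log(maxima).mean(), np.log(maxima).std(ddof=1)
q_lognormal = np.exp(stats.norm.ppf(1 - aep, loc=mu, scale=sigma))

# Candidate model 2: GEV (fitted by maximum likelihood)
shape, loc, scale = stats.genextreme.fit(maxima)
q_gev = stats.genextreme.ppf(1 - aep, shape, loc=loc, scale=scale)

# Both models can fit the observed range similarly well; the divergence of the
# extrapolated quantiles is a simple expression of structural uncertainty.
print(f"1% AEP quantile, log-Normal fit: {q_lognormal:.0f} m3/s")
print(f"1% AEP quantile, GEV fit       : {q_gev:.0f} m3/s")
```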
Regionalisation Uncertainty
When using regional information there is also regionalisation uncertainty, because the
parameters of the flood frequency distribution have been transferred from another catchment. All
the sources of uncertainty that contribute to regionalisation uncertainty, as
described previously, will be relevant to this source of uncertainty.
In Step 2 of predicting design floods using Flood Frequency Analysis, the data,
parametric and structural uncertainty sources identified in Step 1 will be present. An
additional contributor to the structural uncertainty when predicting design floods with
Annual Exceedance Probabilities beyond the range of the streamflow data (e.g. 1 in 100 Annual
Exceedance Probability based on 30 years of streamflow data) is the assumption that the
chosen probability model will provide a reliable estimate of design floods under
extrapolation to the 1 in 100 or 1 in 200 Annual Exceedance Probability flood. This
additional source of structural uncertainty will be present irrespective of case (a) or case
(b) levels of information. A longer time series of at-site streamflow data, and hence a
lower degree of extrapolation, will decrease, but not eliminate, the magnitude of this
source of uncertainty.
The parameters of the runoff-routing model and the loss model are usually calibrated
jointly using flood events in a given catchment. These are distinct components of the
catchment modelling process; however, as their sources of uncertainty are similar, they
are discussed together.
Data Uncertainty
Runoff-routing models (e.g. RORB) and loss models required in the catchment
modelling approach are typically calibrated to at-site flood event data. In this calibration
step, the data uncertainty comprises the uncertainty in the streamflow data (discussed
previously) and the additional uncertainty in the rainfall data, which, as discussed
previously, increases as the rainfall gauge density within the catchment decreases.
Parametric Uncertainty
The runoff-routing model and loss model have parameters estimated through calibration to
a limited number of flood events. This source of parametric uncertainty decreases
as the number of calibration events increases and as the consistency of the parameter estimates
between events increases. If the parameter estimates vary significantly between
events, this will increase the parametric uncertainty.
Structural Uncertainty
The simplified representation of catchment processes in the runoff-routing and loss models
is a source of structural uncertainty. As the fit to the data used for calibration improves, this source of
uncertainty will decrease, but it will not be eliminated. Increasing the complexity of the runoff-
routing model, e.g. moving from a lumped to a spatially distributed model, may
potentially decrease the structural uncertainty; however, with a spatially distributed
model the challenge becomes estimating the parameters over a spatial grid. Hence, if
there is a lack of spatially distributed streamflow and rainfall data to calibrate the model, there
is potentially a shift from structural uncertainty to parametric uncertainty, which may
result in no reduction in the total predictive uncertainty.
Similar to Flood Frequency Analysis, when there is no at-site data, regional
information is used to inform the parameter estimates and the choice of runoff-routing and
loss model. For this case, data uncertainty is not a source of uncertainty;
however, the parametric uncertainty is higher than in case (a), because no at-site data is
available, and the structural uncertainty is also higher than in case (a), because no at-site
data is available to evaluate whether the runoff-routing model or loss model is appropriate.
Regionalisation Uncertainty
When using regional information there is also regionalisation uncertainty, because the
parameters of the runoff-routing model and loss model have been transferred from
another catchment. All the sources of uncertainty that contribute to regionalisation
uncertainty, as described previously, will be relevant to this source of uncertainty, but
they will apply to both the loss model and the runoff-routing model. In comparison to the
regionalisation of flood frequency distributions, which is relatively well advanced, the
regionalisation of runoff-routing models and loss models is still relatively unreliable, and
hence the regionalisation uncertainty of runoff-routing and loss models is likely to be far
larger than that of flood frequency distributions.
In the majority of cases practitioners will use the design rainfall estimates provided by the
Bureau of Meteorology, rather than undertake an Intensity Frequency Duration analysis of
the observed rainfall data within a catchment; hence only the case when regional
information is available is considered in this description. There are many similarities
to the sources of uncertainty in Flood Frequency Analysis, except that the goal is to estimate
extreme rainfall events rather than flow events.
The sources of data uncertainty are the rainfall gauge density and the length of rainfall records,
which are highly variable across different parts of Australia, with far lower gauge
density and shorter records for sub-daily rainfall data than for daily data. This can induce
significant data uncertainty in the design rainfall estimates. Similar to Flood Frequency
Analysis, a probability model is used to estimate the extreme rainfall events (e.g. 1 in 100
AEP) based on the limited rainfall data available. This probability model has parametric
uncertainty, which increases as the length and quality of the rainfall data decrease.
There is structural uncertainty in the choice of the probability model for extreme rainfall,
and this increases when the probability model is used to extrapolate from shorter
rainfall time series to extreme events. This is particularly problematic for sub-daily rainfall,
because records are typically shorter than for daily rainfall data. There is regionalisation
uncertainty because the design rainfall estimates are regionalised to areas with limited
gauged data.
The design rainfall for an event is then disaggregated into a time series using temporal
patterns; these have their own sources of data, parametric, structural and regionalisation
uncertainty, because they are estimated based on rainfall data from outside the
catchment of interest. If spatial patterns are used to distribute design rainfall spatially
across a catchment, they will have similar sources of uncertainty.
Considering the high spatial and temporal variability of rainfall processes, these uncertainties
in design rainfall are unlikely to be small.
When a catchment modelling approach is used to predict design floods, the data,
parametric, structural and regionalisation uncertainty identified in Steps 1 and 2 will be
present. There are two sources of additional uncertainty: parametric uncertainty and
structural uncertainty. These sources of uncertainty arise because the runoff-routing and
loss models in Step 1 are calibrated on runoff events and are then extrapolated to larger
design flow events, e.g. the 1 in 100 AEP. The source of uncertainty is whether the
parameters and model structure, based on calibration to (inevitably) smaller flood events,
can be applied to the larger design flow events.
2.8.6. Summary
This overview of the uncertainty in design flood estimation has identified the two
different types of uncertainty in the context of design flood estimation: aleatory uncertainty
(due to natural variability) and epistemic uncertainty (due to knowledge uncertainty). It then
outlined the motivation for undertaking uncertainty analysis, which is to provide more
informed and transparent information on the uncertainty in the design flood estimates to
enable practitioners and decision makers to make better judgements on the appropriate
design. The major sources of uncertainty in the context of design flood estimation were then
outlined, and include data uncertainty (uncertainty in measurements), parametric uncertainty of the
models used, structural uncertainty in the models' mathematical representation of the
physical processes, regionalisation uncertainty when information is moved from data rich to
data poor catchments, and the total predictive uncertainty, which is composed of the
individual sources of uncertainty. To raise awareness, the sources of
uncertainty in the different techniques used for design flood estimation were identified. The
conclusion was that, comparing Flood Frequency Analysis and catchment modelling, the
catchment modelling technique has a larger number of components and hence a larger number
of sources of uncertainty than Flood Frequency Analysis, and this will likely lead to a
higher predictive uncertainty. However, the magnitude of the total predictive uncertainty is
catchment specific, depending on the availability of data and knowledge of the processes that
drive design flood events.
2.9. References
Abbs, D. and Rafter, T. (2008), The Effect of Climate Change on Extreme: Rainfall Events in
the Westernport Region, CSIRO.
Ang, A. and Tang, W. (2007), Probability Concepts in Engineering, 2nd edition ed. Wiley,
United States of America.
Bates, B., and Westra, S. (2013), Climate Change Research Plan - Summary. Report for
Institution of Engineers Australia, Australian Rainfall and Runoff Guideline: Project 1.
Bates, B., Evans, J., Green, J., Griesser, A., Jakob, D., Lau, R., Lehmann, E., Leonard, M.,
Phatak, A., Rafter, T., Seed, A., Westra, S. and Zheng, F. (2015), Development of Intensity-
Frequency-Duration Information across Australia - Climate Change Research Plan Project.
Report for Institution of Engineers Australia, Australian Rainfall and Runoff Guideline: Project
1. 61p.
Bates, B. C., Z. W. Kundzewicz, S. Wu, and J. P. Palutikof (2008), Climate Change and
Water. Technical Paper of the Intergovernmental Panel on Climate Change, Rep., 210 pp,
IPCC Secretariat, Geneva.
Beven, K. (2006), A manifesto for the equifinality thesis, Journal of Hydrology, 320: 18-36.
Botto, A., Ganora, D., Laio, F. and Claps, P. (2014), Uncertainty compliant design flood
estimation, Water Resour Res, 50(5), 4242-4253. 10.1002/2013WR014981
CSIRO and Australian Bureau of Meteorology (2007), Climate Change in Australia, CSIRO
and Bureau of Meteorology Technical Report, p: 140. www.climatechangeinaustralia.gov.au
Church, J.A., White, N.J., Hunter, J.R. and McInnes, K.L. (2012), Sea level. In A Marine
Climate Change Impacts and Adaptation Report Card for Australia 2012 (Eds. E.S.
Poloczanska, A.J. Hobday and A.J. Richardson). Available at: http://
www.oceanclimatechange.org.au ISBN: 978-0-643-10928-5.
Fowler, H.J. and Ekstrom, M. (2009), Multi-model ensemble estimates of climate change
impacts on UK seasonal precipitation extremes, International Journal of Climatology, 29(3),
385-416.
Groisman, P.Y, Karl T.R., Easterling D.R., Knight R.W., Jamason P.F., Hennessy K.J.,
Suppiah R., Page C.M., Wibig J., Fortuniak K., Razuvaev V.N., Douglas A., Forland E. and
Zhai, P.M. (1999), Changes in the probability of heavy precipitation: Important indicators of
climatic change,Climatic Change, 42: 243-283.
Herman, J. D., Zeff, H.B., Reed, P. M. and Characklis, G. W. (2014), Beyond optimality:
Multistakeholder robustness tradeoffs for regional water portfolio planning under deep
uncertainty, Water Resour Res, 50(10), 7692-7713, 10.1002/2014WR015338
Hunter, J. (2007), Estimating sea-level extremes in a world of uncertain sea-level rise, 5th
Flood Management Conference, Warrnambool, Australia, [Accessed 12 October. 2007].
Jones, M.R., (2012), Characterising and modelling time-varying rainfall extremes and their
climatic drivers. PhD Thesis, Newcastle University.
Langbein, W.B. (1949), Annual floods and the partial-duration flood series, Transactions,
American Geophysical Union, 30(6), 879-881.
Le Coz, J., Renard, B., Bonnifait, L., Branger, F. and Le Boursicaud, R. (2013). Uncertainty
Analysis of Stage-Discharge Relations using the BaRatin Bayesian Framework. 35th IAHR
World Congress, 08/09/2013 - 13/09/2013, Chengdu, China.
Nicholls, N. and Alexander, L. (2007), Has the climate become more variable or extreme?
Progress 1992-2006, Progress in Physical Geography, 31: 77-87.
Pappenberger, F. and Beven, K. J. (2006), Ignorance is bliss: Or seven reasons not to use
uncertainty analysis, Water Resour Res, 42(5), 8, W05302 10.1029/2005wr004820
Pilgrim, DH (ed) (1987) Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers, Australia, Barton, ACT, 1987.
Renard, B., Kavetski, D., Kuczera, G., Thyer, M. and Franks, S.W. (2010), Understanding
predictive uncertainty in hydrologic modeling: The challenge of identifying input and
structural errors, Water Resour. Res., 46(5), W05521, 10.1029/2009wr008328.
Trenberth KE (2011), Changes in precipitation with climate change. Clim Res 47: 123-138.
Uusitalo, L., Lehikoinen, A., Helle, I. and Myrberg, K. (2015), An overview of methods to
evaluate uncertainty of deterministic models in decision support, Environmental Modelling &
Software, 63: 24-31, http://dx.doi.org/10.1016/j.envsoft.2014.09.017.
Westra, S., L.V. Alexander, F.W. Zwiers (2013), Global increasing trends in annual maximum
daily precipitation. Journal of Climate, in press, doi:10.1175/JCLI-D-12-00502.1.
White, C.J., Grose, M.R, Corney, S.P., Bennett, J.C., Holz, G.K., Sanabria, L.A., McInnes,
K.L., Cechet, R.P., Gaynor, S.M. and Bindoff, N.L. (2010), Climate Futures for Tasmania:
extreme events technical report, Antarctic Climate and Ecosystems Cooperative Research
Centre, Hobart, Tasmania.
Wood, E.F. and Rodriguez-Iturbe, I. (1975), Bayesian inference and decision making for
extreme hydrologic events, Water Resour Res, 11(4), 533-542, 10.1029/WR011i004p00533.
Chapter 3. Approaches to Flood
Estimation
Rory Nathan, James Ball
3.1. Introduction
Design flood estimation is a focus for many engineering hydrologists. In many situations,
advice is required on flood magnitudes for the design of culverts and bridges for roads and
railways, the design of urban drainage systems, the design of flood mitigation levees and
other flood mitigation structures, design of dam spillways, and many other situations. The
flood characteristic of most importance depends on the nature of the problem under
consideration, but it is often necessary to estimate peak flow, peak level, flood volume, and
flood rise. The analysis might be focused on a single location (such as a bridge waterway or
levee protecting a township) or it may be necessary to consider the performance of the
whole catchment as a system, as required in urban drainage design.
Design objectives are most commonly specified using risk-based criteria, and thus the focus
of this guidance is on the use of methods that provide estimates of flood characteristics for a
specified probability of exceedance (referred to as flood quantiles, see Book 1, Chapter 2,
Section 2).
The general nature of the estimation problem is illustrated in Figure 1.3.1. This figure shows
the annual maxima floods (blue circular symbols) from 75 years of available gauged records.
These flood maxima have been ranked from largest to smallest and are plotted against an
estimate of their sample exceedance probability (as described in Book 3). Such information
can be used directly to identify the underlying probability model of flood behaviour at the site
at which the data was collected. The flood peaks are usually considered to be independent
random variables, and it is often assumed that each flood is a random realisation of a single
probability model. The gauged flood peaks shown in Figure 1.3.1 do appear to be from a
homogeneous sample (i.e. a single probability model), but in many practical problems the
relationship between rainfall and flood may change over time, and it may be necessary to
either censor the data or identify appropriate exogenous factors to condition the fit of the
adopted probability model.
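As a worked illustration of this plotting of gauged maxima, the sketch below ranks a synthetic 75-year annual maxima series, assigns sample exceedance probabilities with the Cunnane plotting position (one common choice), and compares the plotted values with a fitted log-Normal curve. The record and the fitted model are assumptions used only to illustrate the mechanics.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
maxima = np.sort(rng.lognormal(np.log(400.0), 0.5, size=75))[::-1]   # 75 synthetic annual maxima, largest first

n = maxima.size
ranks = np.arange(1, n + 1)
aep = (ranks - 0.4) / (n + 0.2)       # Cunnane plotting position (alpha = 0.4), one common choice

# Fitted log-Normal quantile curve for comparison with the plotted maxima
mu, sigma = np.log(maxima).mean(), np.log(maxima).std(ddof=1)
fitted = np.exp(norm.ppf(1 - aep, loc=mu, scale=sigma))

for q_obs, q_fit, p in zip(maxima[:5], fitted[:5], aep[:5]):
    print(f"sample AEP {p:6.3f}: observed {q_obs:7.0f} m3/s, fitted {q_fit:7.0f} m3/s")
```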
The best estimate of the relationship between flood magnitude and Annual Exceedance
Probability (AEP) (Book 1, Chapter 2, Section 2) obtained by fitting a probability model is
shown by the solid red curve in Figure 1.3.1. The gauged data represent a finite sample of a
given size, and thus any estimate of flood risk using a fitted probability model is subject to
uncertainty, as illustrated by the increasingly divergent dashed red curves in Figure 1.3.1
(referred to as confidence limits). The computation of such confidence limits usually only
reflects the limits of the available sample, or perhaps the increasing uncertainty involved in
the extrapolation of the relationship between recorded stage and estimated flood peak. The
computed confidence limits are also conditioned on the assumed underlying probability
model. However, it needs to be recognised that these factors only represent the
uncertainties most easily characterised; other factors, such as the influence of a non-
stationary climate, changing land-use during the period of record, and the changing nature of
flood response with event magnitude, confound attempts to identify the most appropriate
probability model. Accordingly, the true uncertainty around such estimates will be larger than
that based solely on consideration of the size of the available sample. Of course, data are
rarely available at the location of design interest, and additional uncertainty is involved in the
scaling and/or transposition of flood risk estimates to the required site.
Figure 1.3.1. Illustration of Stochastic Influence of Hydrologic Factors on Flood Peaks and
the Uncertainty in Flood Risk Estimates Associated with Observed Flood Data
One of the great advantages of fitting a probability flood model to observed data is that the
approach avoids the problem of considering the complex joint probabilities involved in flood
generation processes. Floods are the result of the interaction between many random
variables associated with natural and anthropogenic factors; natural factors include
interactions between the characteristics of the rainfall event, antecedent conditions, and
other stochastic factors such as tide levels and debris flows; anthropogenic factors might
include the influence of dam and weir operations, urbanisation, retarding basins, flood
mitigation works, and land-management practices.
Figure 1.3.1 also illustrates the influence of natural variability on flood generation processes,
and is based on the stochastic simulation of flood processes using 10 000 years of rainfall
data under the assumption of a stationary climate. The stochastic flood maxima were
obtained by varying key factors that influence the production of flood runoff, namely rainfall
depth, initial and continuing losses, and the spatial and temporal patterns of catchment
rainfalls. The flood peaks in Figure 1.3.1 are plotted against the AEP of the causative rainfall,
and the scatter of the stochastic maxima illustrates the natural variability inherent in the
production of flood runoff. While these maxima have been derived from mathematical
modelling of event rainfall bursts, an indication of this variability can be seen in the
relationship between observed rainfalls and runoff in gauged catchments (though of course
with real-world data we do not have 10 000 years of observations).
The scatter of stochastic flood maxima resulting from different combinations of flood
producing factors illustrates the inherent difficulty in removing bias from “simple design
event” methods. Such methods use a flood model to transform probabilistic bursts of rainfall
(the design rainfalls as presented in Book 2) to corresponding estimates of floods. For
example it is seen from Figure 1.3.1 that the flood peaks resulting from 1% AEP rainfalls
range in magnitude between around 500 m3/s and 2000 m3/s; it is also seen that the rainfall
that might generate a flood with a 1000 m3/s peak might vary between a 20% and 0.1%
AEP. Traditional practice has been to adopt fixed values of losses and rainfall patterns for
use with design rainfalls to derive a single flood that is assumed to have the same AEP as its
causative rainfall (probability neutrality). If chosen carefully it is possible to select a set of
values that yields an unbiased estimate of the design flood for a particular catchment, but
without taking steps to explicitly cater for the joint probabilities involved, there is a
considerable margin for error (Kuczera et al., 2006; Weinmann et al., 2002).
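The potential for bias in such fixed-input approaches can be illustrated with a toy joint probability calculation, sketched below: the 1% AEP flood obtained by jointly sampling rainfall and losses is compared with the flood obtained by pairing the 1% AEP rainfall with a fixed median loss. The rainfall and loss distributions and the runoff scaling are invented placeholders, not design values.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

rain = rng.gumbel(loc=60.0, scale=20.0, size=n)    # event rainfall burst depth (mm), illustrative
loss = rng.gamma(shape=4.0, scale=5.0, size=n)     # initial loss (mm), sampled independently here
runoff_factor = 12.0                               # toy conversion of rainfall excess (mm) to peak flow (m3/s)

peaks = runoff_factor * np.clip(rain - loss, 0.0, None)

q_joint = np.percentile(peaks, 99)                 # 1% AEP peak from joint sampling of rainfall and loss
rain_1pct = np.percentile(rain, 99)                # 1% AEP rainfall burst
q_fixed = runoff_factor * max(rain_1pct - np.median(loss), 0.0)   # 1% AEP rainfall with fixed median loss

print(f"1% AEP peak, joint Monte Carlo sampling : {q_joint:.0f} m3/s")
print(f"1% AEP rainfall with fixed median loss  : {q_fixed:.0f} m3/s")
```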
Accordingly, a key difference between this and earlier versions of ARR is the focus on how
best to achieve “probability neutrality” between rainfall inputs and flood outputs when using
rainfall-based techniques. A number of more computationally intensive procedures are
introduced (such as ensemble event, Monte Carlo event, and continuous simulation
approaches) to help ensure that the method used to transform rainfalls into design floods is
undertaken in a fashion that minimises bias in the resulting exceedance probabilities. An
overview of these concepts is provided in Book 1, Chapter 3, Section 3, and more detailed
description of the procedures is provided in Book 4.
The methods discussed here are divided into two broad classes of procedures based on:
i. the direct analysis of observed flood and related data (Book 1, Chapter 3, Section 2); and
ii. the use of simulation models to transform rainfall into flood maxima (Book 1, Chapter 3,
Section 3).
All methods involve the use of some kind of statistical model (or transfer function) to
extrapolate information in space or time. Each method also has its strengths and limitations
and they vary in their suitability to different types of data and design contexts, and this is
discussed in Book 1, Chapter 3, Section 4.
Combining information on flood exceedances across a region can improve the fit of the probability model at a single site with
a short period of record.
One drawback of frequency analysis is that it can only provide quantile estimates at sites
where data is available. Accordingly, a range of procedures have been developed to
estimate flood risk at sites with little or no data (Book 1, Chapter 3, Section 2). These
procedures generally involve the use of regression models to estimate the parameters of
probability models (or the flood quantiles) using physical and meteorological characteristics,
although simpler scaling functions can sometimes be used for local analyses.
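A minimal sketch of such a regression-based procedure is given below: the 10% AEP flood at a handful of hypothetical gauged catchments is regressed in log space against catchment area and a design rainfall intensity, and the fitted relationship is then applied at an ungauged site. The data and choice of predictors are invented for illustration and do not represent the RFFE procedure itself.

```python
import numpy as np

# Hypothetical gauged catchments: area (km2), design rainfall intensity (mm/h), 10% AEP flood (m3/s)
area      = np.array([120.0, 340.0, 55.0, 800.0, 210.0, 95.0])
intensity = np.array([28.0, 22.0, 35.0, 18.0, 25.0, 31.0])
q10       = np.array([85.0, 160.0, 52.0, 270.0, 120.0, 75.0])

# Fit log(Q10) = b0 + b1*log(area) + b2*log(intensity) by ordinary least squares
X = np.column_stack([np.ones_like(area), np.log(area), np.log(intensity)])
coeffs, *_ = np.linalg.lstsq(X, np.log(q10), rcond=None)

# Apply the fitted relationship at a hypothetical ungauged site
area_u, intensity_u = 150.0, 27.0
q10_ungauged = np.exp(coeffs @ np.array([1.0, np.log(area_u), np.log(intensity_u)]))
print(f"Predicted 10% AEP flood at the ungauged site: {q10_ungauged:.0f} m3/s")
```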
Table 1.3.1. Summary of Common Procedures used to Directly Analyse Flood Data
Frequency Analysis of Frequent Floods
Inputs: Peak-over-Threshold series
Analysis: Selected probability model is fitted to flood maxima (e.g. exponential distribution fitted by L-moments)
Outputs: Flood quantiles for AEPs > 10% at a gauged site
ARR Guidance: Book 3, Chapter 2, Section 4 and Book 3, Chapter 2, Section 7

Frequency Analysis of Rare Floods
Inputs: Annual Maxima Series at single site of interest
Analysis: Selected probability model is fitted to flood maxima (e.g. Log Pearson III/GEV distributions fitted by L-moments)
Outputs: Flood quantiles for AEPs < 10% at a gauged site
ARR Guidance: Book 3, Chapter 2, Section 4 and Book 3, Chapter 2, Section 6

At-Site/Regional Flood Frequency Analysis
Inputs: Gauged flood maxima at multiple sites with similar flood behaviour
Analysis: Information from multiple catchments is used to improve fit of probability model (e.g. regional L-moments or Bayesian inference)
Outputs: Improved flood quantiles at multiple sites of interest
ARR Guidance: Book 3, Chapter 2, Section 6 (Bayesian Calibration)

Regional Flood Frequency Estimation
Inputs: Catchment characteristics and flood quantiles (or parameters) derived from frequency analyses
Analysis: Regression on model parameters or flood quantiles (e.g. RFFE method), or local scaling functions based on catchment characteristics
Outputs: Flood quantiles at ungauged sites
ARR Guidance: Book 3, Chapter 3
Flood Frequency Analyses can be broadly divided into three types of applications
(Table 1.3.1), namely:
• At-site - the parameters of the probability distributions are fitted to annual maxima series
to derive estimates of flood risk rarer than 10% AEP (or to peaks above a given threshold
for more common floods) solely using information at the site of interest.
• At-site/regional - the information used to fit the model parameters is obtained from the site
of interest as well as from other sites considered to exhibit similar flood behaviour.
• Regional - the information used to fit the model parameters is obtained from a group of
sites considered to exhibit similar flood behaviour, where, as described in the following
section, regression-based procedures may be used to estimate the model parameters (or
probability quantiles) at the ungauged sites of interest.
Flood frequency methods are particularly attractive as they avoid the need to consider the
complex processes and joint probabilities involved in the transformation of rainfall into flood.
However, the utility of these methods is heavily dependent on both the length of available
record and its representativeness to the catchment and climatic conditions of interest, as
they are based on the assumption of stationary data series. Details on what distributions
should be used, and how to select the sample of maxima and fit the distribution, are
provided in Book 3.
Book 3 provides details of the application of the latter approach to data sets for different
Australian regions in which the three parameters of the probability model are estimated from
catchment characteristics using a Bayesian regression approach (Rahman et al., 2014). The
developed procedure provides a quick means to estimate the magnitude of peak flows
between the 50% to 1% AEPs, with the additional attraction that uncertainty bounds are
provided. The regression equations presented in Book 3 were developed using parameters
obtained from at-site/regional flood frequency analyses, and thus represent a rigorous
example of Regional Flood Frequency Estimation based on parameter regression.
The reliability of the procedure presented in Book 3 depends on the relevance of the data to the problem at
hand, and on the extent to which the assumptions of the fitted model have been satisfied.
Table 1.3.2 summarises the different characteristics of the event-based and continuous
simulation approaches. The three broad approaches to event-based simulation all use the
same hydrologic model to convert design rainfall inputs into hydrograph outputs, the main
difference is in the level of sophistication used to minimise bias in the probability neutrality of
the transformation. Continuous simulation approaches utilise model structures which
generally differ markedly from those used in event-based models.
Event-based approaches are based on the transformation of rainfall depths of given duration
and AEP (“design rainfalls”) into flood hydrographs by routing rainfall excess (obtained by
applying a loss model to rainfall depths) through the catchment storage. Such models can
include the allowance of additional pre- and post-burst rainfalls to represent complete storm
events, and can separately consider baseflow contribution from prior rainfall events to
represent total hydrographs. The defining feature of such models is that they are focused on
the simulation of an individual flood event and that antecedent conditions need to be
specified in some explicit fashion. Simple Design Event methods are applied in a
deterministic fashion, where key inputs are fixed at values that minimise the bias in the
transformation of rainfall into runoff. Alternatively, stochastic techniques can be used to
explicitly resolve the joint probabilities of key hydrologic interactions; ensemble techniques
provide simple (and approximate) means of minimising the bias associated with a single
hydrologic variable, whereas Monte Carlo techniques represent a more rigorous solution that
can be expanded to consider interactions from a range of natural and anthropogenic factors.
It should be noted that the guidance provided in ARR only focuses on the use of stochastic
techniques to cater for (random) variability of key inputs, and its use to characterise
epistemic uncertainty is assumed to be the domain of specialist statistical hydrologists.
Event-based simulation models typically comprise two components:
i. a runoff production model - to convert the storm rainfall input at any point in the catchment
into rainfall excess (or runoff) at that location; and
ii. a hydrograph formation model - to simulate the conversion of rainfall excess into a flood
hydrograph at the point of interest.
The AEP of the derived flood is assumed to be the same as the input rainfall. This
assumption is made on the basis that the hydrologic factors that control runoff production are
set to be probability neutral. In practice this means that factors related to the temporal and
spatial distribution of rainfall, antecedent conditions and losses, are set to “typical” values
(from the central tendency of their distributions) that are associated with the input rainfall.
Factors related to formation of the hydrograph are generally assumed to be invariant with
rainfall. Design events for different rainfall durations are simulated, and the one producing
the highest peak flow (corresponding to the critical rainfall duration) is adopted as producing
the design flood for the selected AEP (flood quantile).
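The critical duration search can be sketched as a simple loop over burst durations, as below, where each design rainfall depth is converted to a peak flow with a toy loss and attenuation calculation and the duration giving the highest peak is adopted. All depths, losses and scaling factors are invented placeholders, not design values.

```python
# Illustrative 1% AEP burst depths (mm) by duration and a toy loss/routing model
design_depth_mm = {1: 45, 2: 62, 3: 74, 6: 95, 12: 118, 24: 140}
initial_loss_mm, continuing_loss_mmph = 20.0, 2.5

def peak_flow(duration_h, depth_mm):
    """Toy transformation: rainfall excess is attenuated more for longer bursts."""
    excess = max(depth_mm - initial_loss_mm - continuing_loss_mmph * duration_h, 0.0)
    attenuation = 1.0 / (1.0 + 0.15 * duration_h)
    return 25.0 * excess * attenuation              # arbitrary scaling to m3/s

peaks = {d: peak_flow(d, depth) for d, depth in design_depth_mm.items()}
critical = max(peaks, key=peaks.get)

for d, q in peaks.items():
    print(f"{d:>2} h burst: peak {q:6.0f} m3/s")
print(f"Adopted design flood: {peaks[critical]:.0f} m3/s (critical duration {critical} h)")
```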
The basis of the Monte Carlo event method is a recognition that flood maxima can result
from a variety of combinations of flood producing factors, rather than from a single
combination as is assumed with the design event approach. For example, the same peak
flood could result from a large, front-loaded storm on a dry basin, or a moderate, more
uniformly distributed storm on a saturated basin. Such approaches attempt to mimic the joint
variability of the hydrologic factors of most importance, thereby providing a more realistic
representation of the flood generation processes. The method is easily adapted to focus on
only those aspects that are most relevant to the problem. To this end, it is possible to adopt
single fixed values for factors that have only a small influence on runoff production, and full
distributions (or data ensembles) for other more important inputs, such as losses, and
temporal patterns, or any influential factor (such as initial reservoir level) that may impact on
the outcome. The approach involves undertaking numerous simulations where the stochastic
factors are sampled in accordance with the variation observed in nature and any
dependencies between the different factors. In the most general Monte Carlo simulation
approach for design flood estimation, rainfall events of different durations are sampled
stochastically from their distribution (Weinmann et al., 2002). Alternatively, the simulations
can be undertaken for specific storm durations (applying the critical rainfall duration concept)
and the exceedance probability of the desired flood characteristic may be computed using
the Total Probability Theorem (Nathan et al., 2002). The latter approach is simpler and more
aligned to available design information, and is more easily implemented by those familiar
with the traditional design event approach.
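The Total Probability Theorem step of the latter approach can be sketched as a probability-weighted sum over rainfall AEP bands, as below; in practice the conditional exceedance frequencies would be obtained by counting exceedances among the Monte Carlo runs in each band. The band limits and conditional frequencies shown are illustrative placeholders.

```python
target_peak = 950.0   # m3/s, flood magnitude whose AEP is sought

# Rainfall AEP bands (upper, lower), each simulated separately with its own Monte Carlo runs
bands = [(0.5, 0.1), (0.1, 0.02), (0.02, 0.005), (0.005, 0.001)]

# Conditional probability that the simulated flood exceeds target_peak within each band
# (in practice obtained by counting exceedances among the runs in that band)
cond_exceed = [0.00, 0.08, 0.45, 0.90]

aep = 0.0
for (upper, lower), p_cond in zip(bands, cond_exceed):
    band_prob = upper - lower            # probability mass of the causative rainfall falling in the band
    aep += p_cond * band_prob            # Total Probability Theorem: conditional x band probability

print(f"Estimated AEP of a {target_peak:.0f} m3/s flood: {aep:.4f} (about 1 in {1/aep:.0f})")
```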
The simple design event approach gives a single set of design hydrographs that can be
used for subsequent modelling steps, such as input to a hydraulic model to determine flood
levels for a given exceedance probability. With the Ensemble and Monte Carlo event
methods an ensemble of hydrographs is produced and it is often not practical to consider all
these hydrographs in subsequent simulation steps. With both the ensemble and Monte Carlo
approaches a representative hydrograph can be simply scaled to match the probability
neutral estimate of the peak flood; the representative hydrograph needs to capture the
typical volume and timing characteristics for the selected duration and severity of the event,
though some of the advantages of ensemble and Monte Carlo event methods are lost if an
ensemble of events is not used through all the key modelling steps.
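Scaling a representative hydrograph to the probability neutral peak is a simple proportional adjustment, sketched below with invented ordinates and target peak. Note that scaling in this way preserves the hydrograph shape but changes the volume in the same proportion, so the representative hydrograph still needs to capture typical volume and timing characteristics, as noted above.

```python
import numpy as np

# Representative hydrograph ordinates (m3/s) and the probability neutral target peak (placeholders)
representative = np.array([10, 40, 120, 310, 520, 430, 260, 140, 70, 30], dtype=float)
target_peak = 780.0

scaled = representative * (target_peak / representative.max())   # preserve shape, match the target peak
print("Scaled peak:", round(scaled.max()), "m3/s")
print("Scaled ordinates:", np.round(scaled).astype(int))
```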
“Hybrid” approaches have the potential to capitalise on the advantage of both event-based
and continuous simulation approaches. Typically, hybrid approaches use statistical
information on rainfall events in combination with continuous simulation and event-based
models. With these approaches, long term recorded (or stochastic) climate sequences can
be used in combination with a continuous simulation model to generate a time series of
catchment soil moisture and streamflows. This information is used to specify antecedent
conditions for an event-based model, which is then used in combination with statistical
information on rainfall events to generate extreme flood hydrographs. SEFM
(MGS Engineering Consultants, 2009) and SCHADEX (Paquet et al., 2013) are examples of
the hybrid approach. In both of these models a continuous hydrological simulation model is
used to generate the possible hydrological states of the catchment, and floods are simulated
on an event basis. While there are a number of conceptual advantages to these methods,
significant development would be required for their implementation for routine design
purposes.
However, there are some practical disadvantages with the technique. The available peak
flood records may not be representative of the conditions relevant to the problem of interest:
changing land-use, urbanisation, upstream regulation, and non-stationary climate are all
factors that may confound efforts to characterise flood risk. The length of available record
may also limit the utility of the flood estimates for the rarer quantiles of interest. Also, peak
flow records are obtained from the conversion of stage data and there may be considerable
uncertainty about the reliability of the rating curve when extrapolated to the largest recorded
events. There is also uncertainty associated with the choice of probability model which is not
reflected in the width of derived confidence limits: the true probability distribution is unknown
and it may be that different models may fit the observed data equally well, yet diverge
markedly when used to estimate flood quantiles beyond the period of record.
Perhaps the most obvious limitation of Flood Frequency Analysis is that it relies upon the
availability of recorded flood data. This is a particular limitation in urban drainage design as
there are so few gauged records of any utility in developed catchments. But the availability of
representative records is also often a limitation in rural catchments, either because of
changed upstream conditions or because the site of interest may be remote from the closest
gauging station.
For this reason, considerable effort has been expended on the development of a regional
flood model that can be used to estimate flood quantiles in ungauged catchments (Book 3,
Chapter 3). The prime advantage of this technique is that it provides estimates of flood risk
(with uncertainty) using readily available information at ungauged sites; the estimates can
also be combined with at-site analyses to help improve the accuracy of the estimated flood
exceedance probabilities. The prime disadvantage of the technique is that the estimates are
only applicable to the range of catchment characteristics used in development of the model,
and this largely excludes urbanised catchments and those influenced by upstream
impoundments (or other source of major modification).
The main advantages and limitations of flood data based procedures are summarised in
Table 1.3.3. In addition to the points made above, specific mention is made of the
applicability of Peak-over-Threshold analysis to events more frequent than 10% AEP, and
the use of Annual Maxima Series for the estimation of rarer events. Also included in this
table is reference to the use of large scale empirical techniques. While these techniques
have the advantage of providing an indication of the upper limiting bounds on the magnitude
of floods using national and global data sets (Nathan et al., 1994; Herschy, 2003), it is
difficult to assign exceedance probabilities to such events and thus such procedures are
better seen as a complement, and not an alternative, to traditional regional flood frequency
techniques (Castellarin, 2007).
Table 1.3.3 (continued)
Limitations (continued): Representativeness of the gauges used

Large scale empirical
Advantages: Estimates readily obtained once relevant data sets have been sourced; generally a useful indicator of the upper bound of flood behaviour
Limitations: Enveloped characteristics may not be relevant to site of interest; not suited to inferring probabilities of exceedance
Comments: Useful as a sanity check on results obtained from other procedures; regional nature of information allows for application to ungauged sites
However, these significant advantages are offset by the need to transform rainfalls into
floods using some kind of design event transfer function or simulation model. Common
examples of the former include the Rational Method and Curve Number method of the US
Soil Conservation Service; while such methods provide an attractive means of simplifying
the complexity involved in generation of flood peaks, their use in this edition has been
replaced by the more defensible implementation of the Regional Flood Model (Book 3,
Chapter 3). The focus of this guidance is thus on the use of event-based and continuous
simulation approaches. While these models provide a conceptually more attractive means to
derive flood hydrographs arising from storm rainfall events, they present the very real
potential for introducing probability (AEP) bias in the transformation. That is, the methods are
well suited to the simulation of flood hydrographs, but great care is required when assigning
exceedance probabilities to the resulting flood characteristic.
The continuous simulation approach has the major advantage that it implicitly allows for the
correlations between the flood producing factors over different time scales. This can be a
great advantage in some systems (such as a cascade of storages or complex urban
environments) where the volume of flood runoff is the key determinant of flood risk.
However, its major drawback for flood estimation is that considerable modelling effort is
required to reproduce the flood characteristics of interest; the structure of continuous
simulation models is geared towards reproduction of the complete streamflow regime, and
not on the reproduction of annual maxima. This has implications for model structure, as well
as for how the model is parameterised and calibrated to suit the different flood conditions of
interest. With continuous simulation, the vast majority of the information used to inform
model parameterisation is not relevant to flood events other than to ensure that the right
antecedent conditions prevail before onset of the storm. Under extreme conditions, many
state variables inherent to the model structure might be bounded, and the process
descriptions relevant to such states may be poorly formulated and yield outcomes that are
not consistent with physical reasoning; while this is the case for flood event models, the
more complex structure generally used with continuous simulation models may confound
attempts to detect the occurrence of such behaviour. In addition, if the length of historic (sub-
daily) rainfalls is not long enough to allow estimation of the exceedance probabilities of
interest, it will be necessary to use stochastic rainfall generation techniques (or some down-
scaling technique) to produce synthetic sequences of sufficient length. Lastly, given the
interdependence between model parameters and the difficulty of parameter identification, it
can be difficult to transpose such models to ungauged catchments.
Flood Frequency Analyses are most relevant to the estimation of peak flows for Very
Frequent to Rare floods. Flood Frequency Analysis methods can also be applied to other
flood characteristics (e.g. flood volume over given duration) but this involves additional
assumptions.
The inclusion of regional information extends the estimation of flood risks beyond 1% AEP and can greatly increase the confidence of
estimates obtained using information at a single site.
Figure 1.3.2. Illustration of Relative Efficacy of Different Approaches for the Estimation of
Design Floods
The RFFE model (Book 3, Chapter 2, Section 6) (Rahman et al., 2014) provides estimates
of peak flows for Frequent to Rare floods for sites where there is no streamflow data. While
its primary purpose is for the estimation of flood quantiles, the resulting estimates can also
be used to develop scaling functions to support the transposition of results obtained from
rainfall-based procedures to ungauged sites. This is the same concept as the simple quantile
regression approach discussed above, but as it is based on a more rigorous statistical
procedure it is more suited to transposition of results where factors other than merely area
are important. The RFFE method is quick to apply and provides a formal assessment of
uncertainty, and thus is well suited to provide independent estimates for comparison with
other approaches.
Figure 1.3.2 also illustrates the areas of design application most suited to rainfall-based
procedures. These are applicable over a wider range of AEPs than techniques based
directly on the analysis of flow data as it is easier to extrapolate rainfall behaviour across
space and time than it is for flow data. But while these methods can capitalise on our ability
to extrapolate rainfall data to rarer AEPs and infill spatial gaps in observations more readily
than flows, their use introduces the need to model the transformation of rainfalls into floods.
Continuous simulation procedures are well suited to the analysis of complex systems which
are dependent on the sequencing of flood volumes as the method implicitly accounts for the
joint probabilities involved. Application of these methods requires more specialist skill than
event-based procedures; for example, it is important that the probabilistic behaviour of the
input rainfall series relevant to the catchment (either historic or synthetic) is consistent with
design rainfall information provided in Book 2, and that the model structure yields flood
hydrographs that are consistent with available evidence. Transposition of model parameters
to ungauged sites presents significant technical difficulties which would require specialist
expertise to resolve. Given these challenges it is presently recommended that the main
benefit of continuous simulation approaches is for the extension of flow records at gauged
sites with short periods of record, where system performance is critically dependent on the
sequencing of flow volumes; if flow data are not available, then it may be appropriate to
consider their application to small scale urban environments where runoff processes can be
inferred from an analysis of effective impervious areas. Its position in Figure 1.3.2 indicates
the degree of accuracy of results that can be expected from this method relative to at-site
frequency analysis.
By comparison with continuous simulation models, event-based models are far more
parsimonious and more easily transposed to ungauged catchments; it is easier to fit the
fewer model parameters involved to observed floods, and their structure has been tailored
specifically to represent flood behaviour. However, while such models are easily calibrated
and their parameterisation is generally commensurate with the nature of available data, their
use generally involves the simulation of floods beyond the observed record. As such, it is
necessary to make assumptions about the changing nature of non-linearity of flood response
with flood magnitude and trust that the model structure and adopted process descriptions
are applicable over the range of floods being simulated. These assumptions introduce major
uncertainties into the flood estimates, and this uncertainty increases markedly with the
degree of extrapolation involved. This issue is discussed in greater detail in Book 8.
The event-based methods considered in these guidelines generally involve a similar suite of
storage-routing methods (Book 5). There are some conceptual differences in the way that
these models are formulated, but in general these differences are minor compared to the
constraints imposed by the available data. Australian practice has generally not favoured the
use of unitgraph-based methods combined with node-link routing models (Feldman, 2000);
in principle such models are equally defensible as storage-routing methods, and the
strongest reason to prefer the latter is the desire for consistency when used to estimate
Extreme floods that are well beyond the observed record, and also for the local experience
with regionalisation of model parameters.
Perhaps the greatest choice to be made with event-based models is the adopted simulation
environment (as discussed in Book 4). For systems that are sensitive to differences in
temporal patterns there is little justification to use simple event methods: the additional
computational burden imposed by ensemble event models is modest, and the resulting
estimates are much more likely to satisfy the assumption of probability neutrality. However,
this additional effort may not be warranted in those urban systems which are dominated by
hydraulic controls, and in such cases the most appropriate modelling approach is likely to be
a hydraulic modelling system with flow inputs provided in a deterministic manner. Monte
Carlo event schemes provide a rigorous solution to the joint probabilities involved, and the
solution scheme ensures expected probability quantiles that are probability neutral, at least
for the given set of ensemble inputs and distributions used to characterise hydrologic
variability in the key selected inputs. For those catchments or systems where flood outputs
are strongly dependent on the joint likelihood of multiple factors, it is necessary to adopt a
Monte Carlo event approach.
The greatest uncertainties in terms of both flood magnitude and exceedance probabilities
are associated with the estimation of Extreme floods beyond 1 in 2000 AEP. There is very
little data to support probabilistic estimates of floods in this range, and it is prudent to
compare such estimates with empirical analysis of maxima based on national (Nathan et
al., 1994) or even global (Herschy, 2003) data sets.
It should be noted that the procedures based directly on the analysis of flood data can
readily provide an assessment of uncertainty. Additional uncertainty is introduced when
transposing flood information to locations away from the gauging site used in the analysis,
and the Regional Flood Frequency Estimation Method (RFFE) is the only method where this
is provided in a form easily accessed by practitioners. The Monte Carlo event approach
provides an appropriate framework to consider uncertainty in a formal fashion, though this
will only provide indicative uncertainties: the greater the degree of extrapolation the greater
the influence of uncertainty due to model structure and this is a factor that is not easily
characterised. The uncertainty bounds shown in the top panel of Figure 1.3.2 are clearly
notional and merely reflect the fact that uncertainty of the estimates increase markedly with
event magnitude. It must be accepted that when the above procedures are applied to
locations not included in their calibration that the associated uncertainties will be perhaps up
to an order of magnitude greater.
Lastly, it needs to be recognised that the ranges of applicability of the different methods
illustrated in Figure 1.3.2 are somewhat notional, and that there is considerable overlap in
their ranges of applicability. It is thus strongly advisable to apply more than one method to
any given design situation, where adoption of a final “best estimate” is ideally achieved by
weighting estimates obtained from different methods by their uncertainty. Estimates of
uncertainty for flood frequency analyses and regional flood estimates are provided in Book 3,
and methods for use with rainfall-based techniques are provided in Book 4, with examples
showing how uncertainty propagates through to the design outcome being provided in Book
7. In practice, the information required to assign relative uncertainties to different methods is
either limited or difficult to obtain, and careful judgment will be required to derive a single
best estimate with associated confidence intervals.
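One simple way of weighting estimates by their uncertainty is inverse-variance weighting of the log-transformed quantiles, sketched below. The three estimates and their standard errors are invented placeholders; in practice, as noted above, the relative uncertainties of the different methods may be difficult to quantify.

```python
import numpy as np

# 1% AEP peak estimates (m3/s) and assumed standard errors of log(Q) for each method (placeholders)
estimates = {
    "At-site Flood Frequency Analysis": (820.0, 0.25),
    "RFFE (regional) estimate":         (950.0, 0.40),
    "Monte Carlo event model":          (760.0, 0.30),
}

log_q   = np.array([np.log(q) for q, _ in estimates.values()])
weights = np.array([1.0 / se**2 for _, se in estimates.values()])   # inverse-variance weights

best_log = np.sum(weights * log_q) / np.sum(weights)
best_se  = np.sqrt(1.0 / np.sum(weights))

print(f"Weighted best estimate : {np.exp(best_log):.0f} m3/s")
print(f"Approx. 68% interval   : {np.exp(best_log - best_se):.0f}-{np.exp(best_log + best_se):.0f} m3/s")
```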
3.5. References
Blazkova, S. and Beven, K. (2004), Flood frequency estimation by continuous simulation of
subcatchment rainfalls and discharges with the aim of improving dam safety assessment in a
large basin in the Czech Republic. Journal of Hydrology, 292(1-4), 153-172. doi:10.1016/
j.jhydrol.2003.12.025
Boughton, W. and Droop, O. (2003), Continuous simulation for design flood estimation - a
review, Environmental Modelling and Software, 18: 309-318.
Cameron, D., Beven, K. and Naden, P. (2000), Flood frequency estimation by continuous
simulation under climate change (with uncertainty), Hydrology and Earth System Sciences,
4(3), 393-405.
Castellarin, A. (2007), Probabilistic envelope curves for design flood estimation at ungauged
sites. Water Resources Research, 43(4), pp.1-12. doi:10.1029/2005WR004384
Chiew, F.H.S. (2010), Lumped Conceptual Rainfall-Runoff Models and Simple Water
Balance Methods: Overview and Applications in Ungauged and Data Limited Regions.
Geography Compass, 4(3), 206-225. doi:10.1111/j.1749-8198.2009.00318.x.
Cordery, I. and Pilgrim, D.H. (2000), The State of the Art of Flood Prediction. In: Parker DJ
(ed.) Floods. Volume II. Routledge, London. pp: 185-197.
Feldman, A.D. (2000), Hydrologic Modeling System HEC-HMS Technical Reference Manual.
CDP-74B, U.S. Army Corps of Engineers.
Haan, C. and Schulze, R. (1987), Return Period Flow Prediction with Uncertain Parameters.
American Society of Agricultural Engineers, 30(3), 665-669.
Herschy, R. (2003), World catalogue of maximum observed floods. IAHS Publ No. 284, 285
pp.
Kuczera, G., Lambert, M.F., Heneker, T., Jennings, S., Frost, A. and Coombes, P. (2006),
Joint probability and design storms at the crossroads, Australian Journal of Water
Resources, 10(2), 5-21.
Ling, F., Pokhrel, P., Cohen, W., Peterson, J., Blundy, S. and Robinson, K. (2015), Australian
Rainfall and Runoff Project 12 - Selection of Approach and Project 8 - Use of Continuous
Simulation for Design Flow Determination, Stage 3 Report.
MGS Engineering Consultants (2009), General Stochastic Event Flood Model (SEFM),
Technical Support Model. Manual prepared for the United States Department of Interior,
Bureau of Reclamation Flood Hydrology Group.
Nathan, R.J., Weinmann, P.E. and Hill, P.I. (2002), Use of a Monte-Carlo framework to
characterise hydrologic risk. Proc., ANCOLD conference on dams, Adelaide.
Nathan, R.J., Weinmann P.E. and Gato S. (1994), A quick method for estimation of the
probable maximum flood in southeast Australia. International Hydrology and Water
Resources Symposium: Water Down Under, November, Adelaide, I.E. Aust. Natl. Conf. Publ.
No.94, pp: 229-234.
Paquet, E., Garavaglia, F., Gailhard, J. and Garçon, R. (2013), The SCHADEX method: a
semi-continuous rainfall-runoff simulation for extreme flood estimation, J. Hydrol., 495:
23-37.
Rahman, A., Haddad, K., Haque, M., Kuczera, G. and Weinmann, E. (2014), Project 5
Regional Flood Methods: Stage 3, report prepared for Australian Rainfall and Runoff
Revision Process.
Sih, K., Hill, P. and Nathan, R. J. (2008), Evaluation of simple approaches to incorporating
variability in design temporal patterns. Proc Water Down Under Hydrology and Water
Resources Symposium, pp: 1049-1059.
Smithers, J.C. (2012), Methods for design flood estimation in South Africa, Water SA, 38(4),
633-646.
WMAwater (2015), Australian Rainfall and Runoff Revision Project 3: Temporal Patterns of
Rainfall, Part 3 - Preliminary Testing of Temporal Pattern Ensembles, Stage 3 Report,
October 2015.
Weinmann, P.E., Rahman, A., Hoang, T.M.T., Laurenson, E.M. and Nathan, R.J. (2002),
Monte Carlo simulation of flood frequency curves from rainfall - the way ahead, Australian
Journal of Water Resources, IEAust, 6(1), 71-80.
Chapter 4. Data
James Ball, William Weeks, Grantley Smith, Fiona Ling, Monique
Retallick, Janice Green
4.1. Introduction
Data, in a range of types, is essential for all water resources investigations, especially the
topics involving design flood estimation covered by Australian Rainfall and Runoff. This data
is needed to understand the processes and to ensure that models are accurate and reflect
the real world issues being analysed.
While standard hydrologic data includes rainfall, water levels and streamflow, a range of
other data is also useful or even essential for flood investigations. This chapter provides
some background on the types of data needed, and specific issues related to each of these.
It also needs to be pointed out that most of the procedures and guidelines presented in
Australian Rainfall and Runoff could not have been developed without historical data, and
the reliability of the methods presented often depends on the extent of the data used in their
development.
4.2. Background
Because of variability in water resources data (especially in Australia), long historical records
are important to ensure that this variability is well sampled. Long records help to ensure that
extremes of both wet and dry periods have been sampled.
However, having long term records means that trends in the data may be important. Trends
may be natural or human-induced and may be difficult to detect because of variability and
the infrequent occurrence of rare events. Trends can result from human-induced climate
change, land use changes, or poorly understood long term climate cycles (e.g. related to the
Inter-decadal Pacific Oscillation and other large scale phenomena). Careful analysis is
needed to ensure that the long historical records are considered in the context of long and
short term natural variability and trends.
There are many organisations that collect and maintain data that is useful for flood
estimation. Some of these organisations are major authorities that can be clearly identified
and have well organised data in accessible formats. However, there is also a considerable
amount of data that is harder to find and often valuable information can be found in
unexpected locations. This chapter provides information on the types of data that may be
useful, sources where this data can be found and the accuracy that can be expected. A
useful discussion on the value of hydrologic data, specifically streamflow data, is included in
the paper by Cordery et al. (2006).
Routine data collection programmes are important, but it is often valuable to expend some
effort in finding and verifying other data for particular projects. It is also important to note that
data useful for these projects may be anecdotal rather than formal and often valuable
information can be gathered by simply holding discussions with local residents or other
stakeholders. Many projects have a consultation programme which can uncover useful
information.
As well, specific formal data collection programmes are often needed for particularly large
projects, where the scale of the project justifies expenditure on data collection. For example,
this type of programme is often carried out as part of environmental impact studies for major
projects during the approval process.
Australia is the driest inhabited continent and has a more variable climate than other
continents. As a result water resources in Australia are often scarce and are therefore critical
to the nation’s prosperity. At the other end of the scale, large floods often cause devastating
damage to property and endanger lives. While present generations are benefiting from the
data collection activities of our predecessors, it is our responsibility to ensure that future
generations are not disadvantaged by the changes we are implementing now. Data
collection is about reducing the risks and increasing the benefits that current and future
generations receive from the expenditure of the limited funds available for water
management. Data underpins the management of a wide range of water issues, including:
• Groundwater and dry-land salinity assessment and management (e.g. throughout the
Murray Darling Basin and much of WA, the Great Artesian Basin and inland sub-artesian
aquifers);
• River water quality (e.g. blue-green algae outbreak in the Darling River, habitat protection);
• Water supply for urban and rural communities (e.g. water restrictions and new dams);
• Irrigation for agriculture (e.g. the cap on extractions from the Murray Darling Basin);
• Water trading – agreed volumes and timing must be reconcilable and measured
accurately, and compliance with licence entitlements must be verifiable; and
Considering the specific concerns of Australian Rainfall and Runoff, inadequate data or the
lack of data leads to uncertainty in the results of the analysis and will tend to require, for
example, additional freeboard allowance to compensate for that uncertainty. While there
are available procedures that are region specific and can be implemented on ungauged
catchments, there will be more uncertainty in these applications and therefore an increased
risk in the flood estimation application. Practitioners need to utilise as much local
information as possible, even if it is anecdotal and limited, to reduce this risk.
4.4. Stationarity
Detection of changes in river discharge and the magnitude of flood peaks, whether the
change is abrupt or gradual, is of considerable importance and is fundamental for the
planning of future water resources and flood protection (Kundzewicz, 2004). Generally,
flood analysis and planning design rules, including data collection programmes, are based
on the assumption of stationary hydrological data sets. If the stationarity assumption is
shown to be invalid, for example through global climate change, then the existing
procedures for designing water-related infrastructure will need revision. This has been
recognised in the US with increased emphasis on maintaining stations with long data
records (National Research Council, 2004). Long data sets and ongoing analysis are
essential for the accurate design of systems that perform adequately for their design
probability and are neither over-designed, resulting in higher costs, nor under-designed,
resulting in large damage bills, loss of life and perhaps the ultimate failure of structures with
consequent damage to communities.
A range of human activities, including man-made structures such as dams, reservoirs and
levees, can change the natural flow regime. Land cover and land use changes, including
deforestation and urbanisation, control many facets of the rainfall-runoff process, increasing
peak flows and increasing the amount of runoff. Water conveyance in rivers is altered by
river regulation measures (such as channel straightening and shortening, construction of
embankments, and construction of weirs and locks) or by the rehabilitation of rivers with
increased stabilisation using trees and logs to provide a better environment for native
species. Abstractions from river systems can cause them to run dry and further change the
natural channel system, while the stage heights of larger floods can also be affected by the
considerable amounts of debris carried in rivers.
Hydrologic data series have generally been considered to be stationary series, i.e. there are
no long-term shifts in the statistical parameters of the time series. However, it is recognised
that with the “greenhouse effect” analyses might need to take into account non-stationary
effects when performing hydrologic designs. There is therefore a demonstrated need to
continue data collection to avoid potentially large errors in the design of hydraulic structures
and in water resources management due to inadequate streamflow data (Wain et al., 1992).
All flood investigations need to consider the potential for non-stationarity in any data applied
to the project, and make appropriate adjustments as required.
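As a screening aid, a non-parametric trend test such as the Mann-Kendall test is often applied to annual flood peaks before more detailed investigation; this is one common option rather than a prescribed procedure. The sketch below is a minimal illustration using the normal approximation, without corrections for ties or serial correlation, and the sample values are hypothetical.

import math

def mann_kendall(series):
    # Mann-Kendall S statistic and approximate Z score for a series of annual values.
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)          # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance, ignoring ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# |z| greater than about 1.96 suggests a trend at roughly the 5% significance level
s, z = mann_kendall([120, 85, 210, 95, 160, 300, 180, 240, 270, 310])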
4.5. Hydroinformatics
4.5.1. Introduction to Hydroinformatics
An important component for prediction of design flood characteristics is the consideration of
the data available for the purpose of predicting both the magnitude and the probability of a
flood characteristic. Since the publication of the previous edition of Australian Rainfall and Runoff
(Pilgrim, 1987), the increasing computational power available has seen changes in
availability and perceptions of data. These changing perceptions resulted in development of
hydroinformatics as a conceptual framework for various techniques and approaches to deal
with information about water in an electronic format.
Though Abbott (1991) first proposed the term "Hydroinformatics" as a generic term
describing the utilisation of information and data about water, the most encompassing and
concise definition was presented by Meynett and van Zuylen (1994) who stated:
With this definition, it is clear that the term 'hydroinformatics' covers a wide range of subject
areas beyond the classical hydrological and hydraulic sciences involved in design flood
prediction and management. The definition also includes data and information from the
political, social, economic and legal spheres relevant to design flood events. Although the
scope of hydroinformatics is extensive, only those aspects relevant to the prediction of
design flood characteristics will be discussed in this chapter.
Generally, for a system concerned with the prediction of design flood quantiles, the following
components are expected:
• Databases for the storage, retrieval and display of spatial, temporal and statistical data;
• Models for prediction of design flood quantiles using the information contained in the
relevant databases;
• Models for generation of data about catchment response to storm bursts or complete
events; and
This guide does not discuss all the aspects of a hydroinformatics system, rather the purpose
of this guide is to introduce the concept of a hydroinformatic system for design flood
estimation in sufficient detail for design flood analysts. Further information on development in
and application of hydroinformatics systems can be found in Vathananukij and
Malaikrisanachalee (2008), Malleron et al. (2011), Hersh (2012), Popescu et al. (2012), and
Moya Quiroga et al. (2013).
Data collection programmes fall into two broad categories:
• Routine
• Project specific
Routine data collection includes the standard and widely available data, such as streamflow
or rainfall data collected on a routine basis by major government agencies such as the
Bureau of Meteorology. This data is collected to provide a long term understanding of
Australia’s climatic conditions for a representative selection of sites throughout the country.
These stations are the basis of many flood estimation procedures and projects, though the
data is also appropriate for many other applications.
Project specific data is collected especially for a particular project, and may include
observations for major floods in the project area and other specific information to assist in a
particular project.
Major water authorities, such as the Bureau of Meteorology and state water agencies are
immediately obvious sources, and these organisations are expected to hold most of the data
from formal data collection programmes. These agencies generally have well designed
websites, where data can be reviewed and downloaded, and almost all data required for
flood investigations can usually be downloaded from these agency websites for no charge.
In addition to these major authorities, other potential sources of flood data include:
• Local authorities. Councils are responsible for the planning process and flooding is an
important constraint on their planning. Councils therefore usually hold historical data on
flood levels as well as other observations.
• Transport agencies. The major state government road and rail agencies, as well as
privatised road and rail organisations usually hold extensive data on flooding as it affects
their infrastructure.
• Other government agencies. Activities of other agencies, such as those responsible for
primary industries, the environment or mining, are impacted by flooding and they will often
hold flood or other meteorological data relevant to flooding.
• Farmers and graziers. The weather is critical to agricultural industries and many farmers
operate a raingauge and can at least provide data for major storm events; they may also
hold records of floods as they have affected, for example, their irrigation operations.
• Others. There may be specific stakeholders who could supply flood data for a particular
flood investigation, and these can vary depending on the actual project and location.
In general therefore, it is important to search widely for flood data during projects. Even
anecdotal information will usually be of value in setting model parameters and improving
local understanding of flood conditions.
Significant events.
If a major event occurs, it is important for the Bureau of Meteorology, Council or other
appropriate agency to collect as much relevant information as possible soon after the
event and publish this, even if only in an internal report. Because major events occur
rarely and unexpectedly, it is often difficult to mobilise the necessary resources in time. As
well, it may not always be obvious that this data will be useful, so there may not be an
immediate interest in the data collection. The Bureau of Meteorology produces reports
following major events and these reports usually contain information that is very helpful for
particular projects.
Historical events.
Where especially significant events have occurred in the past, there are often historical
records of the event. These records may be in reports by relevant government agencies,
but there may also be useful information in newspaper reports, historical societies or
museums, or information can be gathered from long term residents. Particularly
significant events, such as the Clermont storm of 1916 and the Brisbane floods of 1893,
have good published data, but data on other events may be more difficult to locate.
The accuracy of this type of data may be extremely variable and careful review and checking
is essential. The requirements for checking are difficult to specify, but the checks should
involve review of the consistency between individual data points and a general check of
"reasonableness".
Usually this type of data is of variable quality, but with careful collection and checking it is
almost always very valuable in the implementation of projects.
As well as "numerical data", other less formal data can be collected for historical events.
These can include photos or videos taken during the flood or eye witness descriptions and
accounts. While this type of information may not be directly applicable for model calibration,
it is invaluable in many applications to ensure that the model is representing the general flow
conditions and distribution.
This type of data is often sourced from local residents during consultation programmes,
when the flood specialist specifically searches for it.
The terms data, information and knowledge require some clarification because the concepts
they describe frequently overlap.
The main difference between these terms is in the level of abstraction being considered.
Data is the lowest level of abstraction, information is the next level, and knowledge is the
highest level of the three; however, data on its own carries no meaning. For data to
become information, it must be interpreted and should take on a meaning. For example, the
height of Mt. Everest is considered generally as 'data', but a book on the geological
characteristics of Mt. Everest may be considered as 'information', while a report containing
practical information on the best way to reach the peak of Mt. Everest may be considered as
'knowledge'. This distinction between the terms is consistent with Beynon-Davies (2002) who
uses the concept of a sign to distinguish between data and information; data are symbols
while information occurs when symbols are used to refer to something.
In the following discussion of types of data, the lowest level of abstraction is used, namely
data has a value but no meaning.
4.7.1.2. Deterministic
Deterministic data can be defined as data that has a unique value in spatial and temporal
dimensions. There are many examples of deterministic data used in prediction of design
flood quantiles; examples include a Digital Elevation Model (DEM) of the catchment, the
surface roughness parameter, and the continuing loss rate parameter.
4.7.1.2.1. Probabilistic
In contrast to deterministic data, probabilistic data does not have a unique value, rather it
has a range of values described by a statistical relationship. Each time a data value is
sought from probabilistic data, the data value will be different. An example of a probabilistic
parameter would be the continuing loss rate for use in a Monte Carlo simulation; in this
example, the continuing loss rate will be sampled from a distribution of potential continuing
loss rates, with each sample likely to differ from previous samples.
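The distinction can be illustrated in a few lines of code: a deterministic continuing loss is a single fixed value, whereas in a Monte Carlo framework a fresh value is drawn from a distribution for each simulated event. The lognormal form and parameter values below are assumptions for illustration only, not recommended values.

import math
import random

CONTINUING_LOSS = 2.5   # mm/h: a single fixed (deterministic) value

def sample_continuing_loss(median=2.5, log_sd=0.4):
    # Draw a continuing loss rate (mm/h) from an assumed lognormal distribution,
    # as might be done once per event in a Monte Carlo simulation.
    return random.lognormvariate(math.log(median), log_sd)

# Each call returns a different value sampled from the assumed distribution
losses = [sample_continuing_loss() for _ in range(5)]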
4.7.1.3. Spatial
Data can be unique to a point or cover a spatial extent. Additionally, the data may vary in the
spatial dimensions but be invariant with time. There are many examples of spatial data types
in design flood prediction, including:
• Soil types;
4.7.1.4. Temporal
Temporal data, in contrast to spatial data, consists of data that varies in the time dimension
but, usually, has a fixed location. There are many examples of temporal data types in design
flood prediction, including:
4.7.1.5. Meta-data
For optimal utility, it is imperative that potential users are able to trace data stored in an
electronic database back to its initial source. Questions such as “How were the values
obtained?”, “What is the reliability of the values?” and “What editing of the data has
occurred?” are vital in assisting the user to interpret the data in a manner appropriate for
resolution of the issue under consideration. The development of hydroinformatic systems
offers the possibility of facilitating access to meta-data and enhancing the utility of data. The
inclusion of meta-data, therefore, is an essential aspect of the suitable use of data in a
hydroinformatic system.
A further example of the utility of meta-data for interpretation of data is obtained from
consideration of the data necessary for floodplain management. It is possible that a review of
the meta-data contained within the hydroinformatic system used for flood management of
the catchment may result in the finding that all data within the hydroinformatic system was
derived from application of catchment modelling systems and that complementary monitored
data was not available. The interpretation of the stored data, therefore, will be different from
what would have been the case if the meta-data were not available and it had been
assumed that the data were from catchment monitoring.
It is worth noting that prior to the widespread availability of computerised data (i.e.
hydroinformatic systems), the analyst preparing the recommendation for freeboard probably
would have been aware of the data sources (i.e. the meta-data) and, therefore, would have
incorporated this knowledge into the interpretation and ultimate recommendation.
Consequently, the inclusion of meta-data in the hydroinformatic system does not generate
new knowledge in itself but merely incorporates knowledge currently available only in a non-
electronic form.
Conceptually, the data cycle is analogous to the hydrologic cycle where the passage of data
through the data cycle can be considered analogous to the passage of water through the
hydrologic cycle. Also, similar to the hydrologic cycle, the data cycle can be considered in a
systems format with the data flowing between different components.
If the data necessary for prediction of design flood characteristics is considered in this
manner, then it is apparent that the components of the data cycle are relevant to the design
flood problem. Hence, the concepts associated with hydroinformatics and the data cycle are
relevant to the prediction of design flood quantiles.
One of the conceptual views of components forming the data cycle is shown in Figure 1.4.1.
The conceptual components shown in the figure are Generation, Editing, Storage, Analysis
and Presentation. Also, as indicated, there is a circularity in the flow of information, which
arises from the analysis of data resulting in the generation of new data that needs to be
edited and stored in a manner similar to previous data. Considered in this manner, an
individual component within a hydroinformatic system is both a supplier and a user of data.
Within each of these conceptual components of the data cycle, the pertinent aspects are:
• Generation - In this component the data is sourced; this can be restated as this is the
component where the data is created or collected. Furthermore, it is the component where
data is recognised as relevant for prediction of the design flood characteristics.
For example, in a catchment monitoring program aimed at collecting discharge data, the
necessary steps, as discussed by Chow et al. (1988), would be:
Further details on data creation through monitoring programs are presented in Book 1,
Chapter 4, Section 12.
In addition to the data generation through technical approaches, there are other methods
of data generation, where data can be collected through social surveys, census surveys,
and historical reviews for example. This data, similar to the technical data, will require
management using the concepts of hydroinformatics and the data cycle.
For monitored data, items of primary relevance include issues regarding how the data was
observed, the reliability of monitoring (for example, the sensitivity of the sensing
equipment, the robustness of the rating table for monitored discharges, the detection limit
of contaminants for water quality constituents), and editing changes to the data and the
philosophy behind these changes. These meta-data are of great importance for catchment
monitoring since at least some of the monitored data will be inaccurate; in other words,
some of the monitored data will contain undiscovered errors.
The step involving data editing is a vital part of the data cycle and should be completed
before the data is inserted into the relevant database and subsequently used by others not
involved in its generation. The inclusion of the meta-data and its availability to future users is
becoming increasingly important as the availability of digital data increases and data users
become more remote from the generation of the data.
• Storage - The storage of the data is performed in this component of the data cycle. In
general, the storage of data will comprise the insertion of the data into a digital database.
It is important to note that the manner of data storage should ensure that its retrieval is
both practical and feasible. If retrieval is not easy, the stored data adds little to the
information available for design flood estimation.
There are many different forms of data stored in a database, which in turn influences the
design of the data storage. Commonly, spatial databases are referred to as Geographic
Information Systems (GIS), while databases used to store temporal data can be referred to
as Time-Series Managers (TSMs). These computerised storage facilities have
superseded, in general, the previous techniques based on data storage through maps and
charts. There are a large number of alternative GISs and TSMs that can be used for data
storage. It is not the purpose of Australian Rainfall and Runoff to recommend a particular
GIS or TSM but rather to note their use for storing data relevant to design flood estimation.
Since 2008, the Bureau of Meteorology (BoM) has been responsible for delivering water
data throughout Australia. As part of this role, the Bureau of Meteorology has been
collecting water data from more than 200 organisations across the nation and is using this
data to report on water availability, condition and use in a nationally consistent way. To
facilitate this role, the Bureau of Meteorology is building the Australian Water Resources
Information System (AWRIS) as a secure repository for water data and as a means to
deliver high quality water data to all Australians.
The aim of AWRIS is to allow the Bureau of Meteorology to process and publish water
data in new and powerful ways. The Bureau of Meteorology will be able to merge historical
water data records with current observations to suit a variety of user needs. By spatially
enabling this data, it will be possible to query and report the data in many different ways.
Data stored in AWRIS will be delivered to the web, to mobile devices and various
hydrologic forecasting systems to be operated by the Bureau of Meteorology.
• Analysis - It is common that analysis of data will be required. The analysis techniques form
this component of the data cycle. The steps involved in this component can be
summarised as data retrieval, and data usage. It is worth noting that the data obtained
from the analysis could be considered as part of the data generation component. Hence
there is some similarity between the generation and analysis components.
There are many alternative analysis techniques. For design flood estimation purposes, the
most commonly used analysis techniques would be statistical modelling (for example,
Flood Frequency Analysis) and catchment response modelling using a catchment
modelling system. Both of these techniques result in the generation of additional data that,
in turn, requires editing and storage.
• Presentation - The final conceptual component is the presentation component. Within this
step, the stored or analysed data is presented in a manner that is understandable to
relevant stakeholders. The technical level of the presented data would not be constant for
all presentations but, rather, would vary with the technical expertise of the audience. The
important point about the presentation of data is that it is presented in a manner that is
clear and precise for the audience.
The data types that are needed are as follows, with discussion of each in the following
sections.
• Rainfall;
• Water levels;
• Streamflow;
• Catchment data, including topography, survey, digital terrain, land use and planning data;
and
• Other hydrologic data, including tidal information, meteorological, sediment movement and
deposition and water quality.
This data needs to be collected, reviewed for completeness and accuracy and then archived
and disseminated to practitioners as required. Discussion on specific details is included
below.
The Bureau of Meteorology is the primary agency responsible for collection of rainfall data in
Australia, but there are many other agencies which have significant records of rainfall data.
The other agencies include local authorities and water agencies, but some organisations
have particular local data programmes that may be useful in specific projects. As well,
rainfall data can be collected from various sources for major historical floods, as discussed
further below.
In many major flood events, it is often valuable to look for unofficial rain gauges where data
has been collected by members of the public. In rural areas, most property owners have rain
gauges and they are also not unusual in towns and cities.
Data endorsed by the Bureau can be regarded as accurate, but some checks for
consistency and the reasonableness of the data should also be carried out. In particular,
tests for missing (accumulated) data need to be considered but it is also possible that gauge
overflows mean that the larger events are not well measured. Data from other agencies may
be also of a high standard, but these agencies sometimes have poorer quality data. More
careful checks are needed on this data. Data collected at unofficial rain gauges operated by
members of the public may be sometimes of very poor quality, with poor exposure for
example, and records from these sources must be checked very carefully. However the
value of this data means that it is often worth further analysis to ensure that useful data is
not discarded. Data from unofficial gauges is especially important for major events where it
can be used to supplement information from official gauges.
Most publicly available rainfall data should be available through the Bureau of Meteorology’s
AWRIS database.
• Pluviometer records.
Normally data endorsed by the Bureau of Meteorology can be relied upon. However, users
should check the data for consistency and logic, before application. In particular, tests for
missing or accumulated data need to be considered, along with assessing the potential for
gauge overflows.
In a manually read rain gauge, rainfall is collected and measured in an inner calibrated
cylinder. Any excess precipitation is captured in the outer metal cylinder. Most manually
read gauges are used for daily observations.
Daily rainfall is nominally measured each day at 9:00 am local time. At most rainfall sites,
observations are taken by volunteers who send in a monthly record of daily precipitation at
the end of each month. A subset of observers at strategic locations, as well as automatic
weather stations, send observations electronically to the Bureau of Meteorology each day.
Very few stations have a complete unbroken record of climate information. Missed
observations may be due to observer illness or equipment failure. If, for some reason, an
observation is unable to be made, the next observation is recorded as an accumulation,
since the rainfall has been accumulating in the rain gauge since the last reading.
Advantages of the TBRG are claimed to include unattended, automatic operation, and the
ability to record the rate at which the rain is falling. Operation of a TBRG is based on the
generation of an electronic pulse when the water volume collected in the bucket results in
bucket tipping. While the usual volume of water collected is equivalent to a depth of 0.2 mm,
some early TBRGs required a depth of 0.5 mm before the bucket would tip. When analysing
data from TBRGs, users should check the bucket size to ensure the validity of the analysis;
this information should be available from the meta-data attached to the recorded data.
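The conversion from tip pulses to rainfall is simply the number of tips in each interval multiplied by the bucket size, which is why the bucket size held in the meta-data matters. A minimal sketch, assuming the tip times are available in seconds from the start of the record:

def tips_to_depths(tip_times_s, bucket_mm=0.2, interval_s=300):
    # Aggregate tipping-bucket pulses into rainfall depths (mm) per interval.
    # bucket_mm: depth per tip (commonly 0.2 mm; some early gauges used 0.5 mm).
    if not tip_times_s:
        return []
    n_intervals = int(max(tip_times_s) // interval_s) + 1
    depths = [0.0] * n_intervals
    for t in tip_times_s:
        depths[int(t // interval_s)] += bucket_mm
    return depths

# Five-minute depths from hypothetical tip times; assuming the wrong bucket size
# scales every depth by the ratio of the assumed to the actual bucket size.
five_minute_depths = tips_to_depths([12, 45, 130, 310, 320], bucket_mm=0.2)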
Traditionally rainfall is measured to the nearest 0.2 mm (prior to 1974, records were in
Imperial units and measurements were to the nearest 1 point, approximately 0.25 mm).
However, in recent years some observations have been reported to 0.1 mm. Hence, users
should check the meta-data attached to the data records to determine the measurement
accuracy rather than rely on the accuracy inferred from the database records.
The Bureau of Meteorology undertakes a number of quality control processes to detect
errors in the rainfall data forwarded by the many volunteer and professional readers. This
data checking includes:
• Inconsistent observations (for example, high rainfall combined with clear skies); and
While the Bureau of Meteorology undertakes these checks it is recommended that individual
users ensure that their rainfall data is suitable for purpose. This may entail undertaking
additional quality control processes.
• Accumulated records. Rainfall data, especially from daily read gauges, may have missing
days of record. In some cases, these missing days are simply not recorded, while on other
occasions the total for a number of days is accumulated. This occurs because the rainfall
is collected in the raingauge and several days of rainfall are recorded on a single day at
the end of the accumulated period. These records need to be reviewed in conjunction with
records from neighbouring gauges and adjustments made as necessary. Accumulated
records may give an excessively high daily total for the day on which the records are
accumulated.
• Missing data. In some cases, for both daily read and continuous gauges, there may be
missing periods of record. In this case, the record should be reviewed carefully in
conjunction with records from neighbouring catchments and appropriate adjustments
made.
• Gauge quality. Rain gauges operated by the Bureau of Meteorology are expected to meet
the Bureau of Meteorology’s standards; however other gauges, especially privately
operated gauges which may be used to supplement rainfall records for major events, may
not meet the Bureau of Meteorology’s stringent standards. Where privately operated
gauges appear inconsistent with nearby stations, the siting of the gauge needs
consideration and it may be necessary to remove the gauge from the analysis.
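A simple automated screen can help identify the issues listed above before the manual review against neighbouring gauges. The sketch below assumes the record is held in a pandas Series indexed by date; the accumulation threshold is purely illustrative and would need local judgement.

import pandas as pd

def screen_daily_rainfall(series, accumulation_factor=3.0):
    # Flag gaps and possible accumulated totals in a daily rainfall record.
    # series: pandas Series of daily rainfall (mm) indexed by date.
    full_index = pd.date_range(series.index.min(), series.index.max(), freq="D")
    series = series.reindex(full_index)
    missing = series[series.isna()].index

    # A large total immediately after a gap is a candidate accumulated record
    wet_days = series[series > 0]
    typical_wet_day = wet_days.median() if len(wet_days) else 0.0
    suspects = []
    for date, value in series.items():
        if pd.isna(value) or typical_wet_day == 0.0:
            continue
        if (date - pd.Timedelta(days=1)) in missing and \
                value > accumulation_factor * typical_wet_day:
            suspects.append(date)
    return list(missing), suspects

Both flagged sets would then be checked against neighbouring gauges, as described above.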
The most common observation type stored in ADAM is daily rainfall. Dating back to the
mid-1800s, these total more than 200 million records from a network of over 16 000
locations. Other types of weather data that are stored in ADAM include air temperature,
humidity, wind velocity, sunshine, cloud cover, soil temperatures, upper atmospheric wind
and temperature, and observed weather phenomena (for example, thunder, frost and dust).
To support this large database, the ADAM system contains supporting database tables and
software tools required to enter, retrieve and quality control data efficiently. A set of detailed
rules and procedures ensure consistent treatment of information.
However, for many stations the following meta-data are also available:
• Photos for each of the four main compass points showing siting, clearance and proximity
to trees, buildings and other factors likely to influence measurement of rainfall;
• Record of dates of site visits, maintenance undertaken, problems identified and resolution
adopted.
Firstly, extensive statistical analysis of rainfall data has been carried out to prepare the
Intensity Frequency Duration (IFD) input applied in many assessments. Details of the rainfall
data analysed for the IFD development are discussed in Book 2, Chapter 3.
Secondly, rainfall data is applied in the analysis of historical events for flood studies, and in
this application recorded rainfall data is required for these historical events.
The Bureau of Meteorology and specialist agencies in mountainous and southern regions
collect this data, and use it for specific studies and supply it on request.
A major source of water level data is the formal streamflow gauging stations operated by the
major water authorities. This data is usually converted into streamflow data, as discussed
further below, but there are some locations where only water level data is collected or
published.
Water authorities publish data and it is usually freely available. The published data usually
has an indication of the accuracy and completeness, but some checking is needed.
In addition, there are many stations, especially those operated by the Bureau of Meteorology
and local authorities, mainly for flood forecasting and warning, where stream flows are not
calculated and water levels only are available. The larger agencies usually make this data
available, sometimes on line.
Water level data may be in the form of continuous records monitored by an automatic
recorder or as manually read records. Most of the earlier records were manually read at staff
gauges, but many of these are now replaced by automatic water level recorders. Manually
read records were usually recorded once a day with supplementary more frequent readings
during flood events. Because of the rapid response of many streams, the manually read
records may not provide the peak levels and may even totally miss short duration flood
events. Manually read records are usually of better quality for large, slowly responding
streams and this data can be used with confidence, but records for smaller catchments may
be significantly in
error. More often than not, manually read records show smaller flood peaks and lower
discharges than automatic recorders. However practically all records from before the 1960s
are manually read, so this data forms the only available information for long term stations.
Therefore data from manually read stations is usually the only available record and has to be
used, but careful consideration is needed to make sure the records are interpreted correctly.
In addition to the formal water level records, informal records can also be obtained usually
following a major flood event. These records are usually obtained by a Council or other
stakeholder who sends surveyors on-site soon after an event to survey flood marks to
indicate the maximum water levels reached. This data provides an indication of the variation
of water levels across the floodplain and an indication of the flow patterns. The quality of this
data may sometimes be questionable, and the records need to be carefully checked. These
checks can include checking for consistency and reasonableness as well as a review of the
reliability of the agency or person who has collected the data. When this type of data is
collected, it is important that the records include careful descriptions of the circumstances of
the collection and an indication of the expected accuracy.
Common concerns with this data are the level of observed debris marks, whether the water
levels have been collected at the peak level of the flood, and the source of the water level,
for example local drainage or backwater. Therefore while very useful data can be
obtained, it must be carefully reviewed otherwise the data may lead to incorrect conclusions
in the resulting analysis.
However water levels are often only used as the source of streamflow or discharge data, as
discussed below, and while water levels are useful in many applications, streamflow data is
usually of far greater value for many water resources studies.
clear (refer to Book 1, Chapter 4, Section 12). These records are very important as, if intact,
they will show the complete hydrograph (i.e. the rise, peak and fall of the flood). Historical
records for continuous water level recorders are typically stored by government agencies
and care is needed with the datum of these records, as it may not be to AHD (Australian
Height Datum) and some conversion may be required. In large events these recorders can
fail, and the data needs to be inspected for 'flat' areas, which may indicate failure of the
gauge, or for sudden steep rises, which may indicate a landslip. Typically, each stage record
has an accuracy code assigned and these should be noted before use.
Debris marks can be inaccurate for a number of reasons. They can be influenced by
dynamic hydraulic effects such as waves, eddies, pressure surges, bores or transient
effects, which may not be accounted for in a hydraulic model. For example, if the debris
mark is located within a region of fast flowing floodwater it is possible that the floodwater has
pushed the debris up against an obstacle, lodging it at a higher level than the surrounding
flood level. More common though is the fact that debris often lodges at a level lower than the
peak flood level. The reason for this is that for debris to be deposited it needs to have
somewhere to lodge and this elevation is not always at the peak flood level. For example,
the classic place for debris lodgement is a barbwire fence with horizontal strands of wire. If
the flood level almost reached the top strand of barbwire, debris will not lodge in the top
strand but rather on the second from top strand, which may be about 0.3 m lower than the
peak flood level. It is recommended that the surveyor be asked to record as much
information as possible about the mark itself (e.g. debris on barbwire fence spacing 0.35 m)
so the modeller is able to consider reasons for discrepancies in the calibration process, if
they arise.
As with water level data, the major water authorities have well established systems for
storage and dissemination of their streamflow data. This data is usually available from these
agencies, often on line, and almost always free of charge. The data dissemination systems
are well organised and data can be supplied accurately and quickly.
Different agencies around Australia maintain appropriate databases of their records. These
systems include considerable detail on the type, accuracy and reliability of the gauged data
including rating curve accuracy, periods of missing record, number of gaugings and variation
in rating curves. In many cases, the system includes photos and maps of the station. This is
valuable information to ensure that the data is used most effectively. The
water agencies are state based and there are differences between the states. Their on-line
documentation should be consulted to ensure that the quality and limitations of the record
are understood.
There are few other agencies that collect streamflow data, because of the difficulties of
calculating flows from water levels. Where there are other sources of this data, the record is
usually limited to a research project of limited duration and locality, and the data is
sometimes difficult to obtain. This data is also usually only for a short period of record, but
sometimes it may include an important event. However, it is also possible for practitioners to
calculate discharge records from water level data using individually developed rating curves
prepared from hydraulic models or other theoretical methods.
There are many checks needed when analysing streamflow data. The principal check is on
the accuracy and completeness of the stage-discharge relationship. This can be checked by
assessment of the number of discharge measurements that have been taken and the
maximum discharge (as compared to the maximum recorded water level). As well the
variability in the stage-discharge curve indicates that the relationship has changed over time
and therefore may be less reliable for particular events. The stage-discharge relationship
may be poor for the lower flows because of regular changes in low flow controls. As well it
may also be poor at higher flows because of the lack of discharge measurements at higher
flows. There are difficulties in extrapolation of the relationships, where there is a change in
conditions, for example where the river overtops the banks.
Different gauges in the same catchment can be compared to test for consistency and the
water balance and there is a range of other checks that can be carried out. Having more
than one gauge in a catchment though is not particularly common.
Poor quality streamflow data may mean poor quality model calibration, so a high standard
for checks of data is important. However it is noted that it is very difficult to check the
accuracy of the discharge records for a station, and poor quality data may be accepted.
Streamflow records are the basic data source used in developing reliable surface water
resources because the records provide data on the availability of streamflow and its
variability in time and space. The records, therefore, are used in planning and design of
surface water related projects, and are also used in management or operation of projects
after construction of the projects is complete.
In addition, streamflow records are used for calibrating catchment modelling systems that,
for example, are used for predicting flood behaviour and for predicting hazards arising from
design flood events. Records of flood events obtained at gauging stations also serve as one
of the basic data sources for design flood estimation, necessary for designing bridges,
culverts, dams and flood control reservoirs, floodplain delineation and flood warning
systems; and in development of methods applicable to locations where data is not available.
It is essential to have valid records for a full range of streamflows. The streamflow records
referred to above primarily are continuous records of discharge at stream-gauging stations; a
gauging station being a site instrumented and operated so that a continuous record of stage
and discharge can be obtained. A network of continuous recording gauging stations,
however, often is augmented by auxiliary networks of partial record stations to fill a particular
need for streamflow data at a relatively low cost. For example, an auxiliary network of sites,
instrumented and operated to provide only instantaneous peak level data, is often
established.
4.12.2.1. Introduction
Gauging stations are installed where the need for streamflow records at a site has been
recognised. The gauging station comprises instruments for measuring the river stage, at a
location selected to take advantage of the best locally available conditions for discharge
measurement and for developing a stable stage-discharge relationship, sometimes referred
to as a rating curve. While there are instruments that simultaneously
monitor river stage and discharge, the more common instrumentation requires the use of a
stage-discharge relationship to convert the monitored river stage into an equivalent river
discharge rate. Artificial controls such as low weirs or flumes are constructed at some
stations to stabilise the stage-discharge relationships in the low discharge range. These
control structures are calibrated by stage and discharge measurements in the field.
Selection of the gauging station site and the development of the stage-discharge relationship
are important components in the management of a gauging station and hence the
discussion herein will focus on these aspects of management of a gauging station. While
there are many other aspects important in management of a gauging station, these two
aspects have the most significant impact on prediction of design flood characteristics.
It is rare to find an ideal site for a gauging station and it is more common that the limitations
of a site must be considered with respect to the desired data from the site. There are
numerous guides on gauging station site selection, aimed at ensuring the data is reliable;
the criteria suggested by ISO (2013) are often adopted. When applying
data recorded at a stream gauging station, the practitioner must review details of the station
carefully and make an assessment of the quality and limitations of the record.
New technology, especially in the field of electronics and computer based management of
field data, has led to a number of innovations in sensing, recording, and transmitting gauge
height data. In the past most gauging stations used floats in stilling wells as the primary
method of sensing gauge height and these are still in common use today. However, the
current trend is toward the use of submersible or non-submersible pressure transducers
which do not require a stilling well. Additionally, electronic data recorders and various
transmission systems are being used more extensively.
Details of the instruments and associated structures for a stream gauging station are
outlined by ISO (2013), Rantz (1982), and World Meteorological Organisation (2010) and
hence are not repeated herein. Nonetheless, users of stream discharge data for design flood
estimation should ensure that they are conversant with the instruments and structures
employed for the collection of streamflow data.
The two attributes of a satisfactory control are stability and sensitivity. If the control is stable
the stage-discharge relationship will be stable. If the control is subject to change, the stage-
discharge relationship will be subject to change and frequent discharge measurements will
be required for the continual re-calibration of the stage-discharge relationship. This
increases the operating cost of the gauging station and also increases the uncertainty of
streamflow records extracted from the database. Additional data about controls can be found
in Herschy (1995).
The traditional way in which a stage discharge relationship is derived for a particular gauging
station is the measurement of discharge at convenient times. Traditionally, this measurement
is undertaken with a current meter measuring the discharge velocity at enough points over
the river cross-section so that the discharge rate can be obtained for that particular stage. By
taking such measurements for a number of different stages and corresponding discharges
over a period of time, a number of points can be plotted on a stage-discharge diagram, and
a curve drawn through those points, giving what is hoped to be a unique relationship
between stage and discharge, the stage discharge relationship, as shown in Figure 1.4.4.
This rating curve is used in a manner whereby the routinely measured stages are converted
to discharges by assuming that the corresponding discharge can be obtained from the curve.
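The gauged points are commonly summarised by fitting a simple power-law of the form Q = a(h - h0)^b, although this particular form is a convention rather than a requirement of these guidelines. A minimal sketch, with hypothetical gaugings:

import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    # Power-law stage-discharge relationship Q = a * (h - h0)**b
    return a * np.clip(h - h0, 1e-6, None) ** b

# Hypothetical gaugings: stage (m) and measured discharge (m3/s)
stage = np.array([0.45, 0.80, 1.20, 1.75, 2.40, 3.10])
discharge = np.array([2.1, 9.5, 28.0, 71.0, 160.0, 310.0])

params, _ = curve_fit(rating, stage, discharge, p0=[20.0, 0.2, 2.0], maxfev=10000)
a, h0, b = params

# Converting a routinely measured stage of 2.0 m to a discharge estimate
q_estimate = rating(2.0, a, h0, b)

In practice the fit is often done separately for low and high stage ranges, and any use of the curve above the highest gauging is an extrapolation, as discussed further below.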
There are a number of factors which might cause the rating curve not to give the actual
discharge, some of which will vary with time. Fenton (2001) quotes (Boyer, 1964) as
describing a list of factors affecting the rating curve, or what he called a shifting control.
These include:
• The channel and hydraulic control changing as a result of modification due to dredging,
bridge construction, or vegetation growth;
• Sediment transport - where the bed is in motion, which can have an effect over a single
flood event, because the effective bed roughness can change during the event. As a flood
increases, any bed forms present will tend to become larger and increase the effective
roughness, so that friction is greater after the flood peak than before, so that the
corresponding discharge for a given stage height will be less after the peak. This will also
contribute to a flood event showing a looped curve on a stage discharge diagram as
shown on Figure 1.4.5. Both Simons and Richardson (1962) and Fenton and Keller (2001)
have examined this phenomenon and presented approaches for dealing with this issue;
• Unsteadiness - in general the discharge will change rapidly during a flood, and the slope
of the water surface will be different from that for a constant stage, depending on whether
the discharge is increasing or decreasing. The effect of this is for the trajectory of a flood
event to appear as a loop on a stage-discharge diagram as shown in Figure 1.4.5;
• Variable channel storage - where the stream overflows onto floodplains during high
discharges, giving rise to different slopes and to unsteadiness effects; and
In addition to these generic problems associated with the use of rating curves, there are
several problems associated with the use of rating curves for prediction of a design flood
characteristic. These include:
• The assumption of a unique relationship between stage and discharge, in general, is not
justified;
• Discharge is rarely measured during a flood, and the quality of data at the high discharge
end of the curve typically is quite poor because there are usually few velocity
measurements at high flow. As a result estimation of the peak discharge of a flood event
usually involves extrapolation of the stage-discharge relationship beyond the recorded
data points;
• The relationship is usually a line of best fit through the data points defining the stage-
discharge relationship. The approach recommended for estimation of this line of best fit in
many guidance documents (for example, World Meteorological Organisation (2010)) is a
visual fit. This approach provides minimal data on the uncertainty of the relationship and
the reliability of any extrapolation of the relationship. This limits the estimation of the
propagation of the uncertainty in the flood characteristic prediction approach; and
• It has to describe a range of variation from no discharge through small but typical
discharges to very large extreme flood events.
As highlighted in the previous discussion, the unsteadiness of the discharge during a flood
event (i.e. the variation of discharge with time) and its influence on a discharge estimate is
ignored in the traditional application of a rating curve. In a flood event the slope of the water
surface for a given stage will be different from that for the same stage during steady flow
conditions; this difference will depend on whether the discharge is increasing or decreasing.
As the flood increases, the surface slope in the river is greater than the slope for steady flow
at the same stage, and hence, according to conventional hydraulic theory more water is
flowing down the river than the rating curve would suggest. The effect of this is shown in
Figure 1.4.5. When the water level is falling, the slope and, hence, the discharge inferred is
less. The effects might be important - the peak discharge could be significantly
underestimated during highly dynamic floods, and also since the maximum discharge and
maximum stage do not coincide, the arrival time of the peak discharge could be in error and
may influence flood warning predictions. Finally, the use of a discharge hydrograph derived
inaccurately by using a single-valued rating relationship may distort estimates for resistance
coefficients during calibration of an unsteady flow model.
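A long-standing first-order adjustment for this hysteresis (often attributed to Jones and described in standard hydrometry texts; it is not a procedure prescribed here) scales the steady-rating discharge by a factor involving the rate of rise of stage, the channel slope and the kinematic flood wave speed. A minimal sketch, with all input values assumed:

import math

def unsteady_discharge(q_rating, dh_dt, bed_slope, wave_speed):
    # Approximate unsteady-flow correction to a steady rating-curve discharge:
    # Q = Q_rating * sqrt(1 + dh/dt / (S0 * c))
    # dh_dt      : rate of change of stage (m/s), positive on the rising limb
    # bed_slope  : slope associated with the steady rating (m/m)
    # wave_speed : kinematic flood wave celerity (m/s)
    factor = 1.0 + dh_dt / (bed_slope * wave_speed)
    return q_rating * math.sqrt(max(factor, 0.0))

# Rising limb example: stage rising 0.3 m/hour on a flat, slowly responding stream
q_rising = unsteady_discharge(q_rating=450.0, dh_dt=0.3 / 3600,
                              bed_slope=0.0004, wave_speed=1.5)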
A stage-discharge relationship can be considered as comprising two zones:
• An interpolation zone, where the relationship is within the range of the stage
measurements used to develop the relationship; and
• An extrapolation zone, where the relationship is not defined by the gaugings taken to
develop the relationship.
While it is preferable that all stage measurements are within the interpolation zone, the
nature of the data needed for design flood estimation, and for flood prediction in general,
means that measurements will often fall in the extrapolation zone, and the reliability of this
data will require consideration of the extrapolation methodology. The need for extrapolation
is shown in Figure 1.4.7, where the discharges for the Annual Maxima Series extracted for
the stream gauge are plotted as a function of the rating ratio (the rating ratio is the ratio of
the recorded discharge to the
highest gauging used to develop the stage-discharge relationship). All points in the Annual
Maxima Series where the rating ratio is greater than 1 require use of the extrapolation zone
of the stage-discharge relationship.
As shown in Figure 1.4.7, a number of the values in the Annual Maxima Series are in the
extrapolation zone. The accuracy of the values in the extrapolation zone has the potential to
influence fitting of the statistical model to the Annual Maxima Series, thereby influencing the
predicted design flood quantiles. Fitting a statistical model to data points where the higher
values are subject to estimation errors is discussed in Book 3, Chapter 2.
There are a number of alternative techniques for development of the extrapolation zone of the stage-discharge relationship, with logarithmic extrapolation often being recommended. However, this approach may not be applicable because, in many cases, the extrapolation extends from a confined channel onto a floodplain.
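A common form of logarithmic extrapolation fits a relationship of the form Q = a(h - h0)^b to the upper gaugings, which plots as a straight line in log-log space, and extends that line beyond the highest gauging. The minimal sketch below assumes a known cease-to-flow stage h0 and illustrative gaugings; as noted above, such an extrapolation can be misleading once flow breaks out of the confined channel onto the floodplain.

import numpy as np

# Illustrative gaugings (stage in m, discharge in m^3/s) - assumed values.
h0 = 0.4                                     # assumed cease-to-flow stage (m)
stage = np.array([0.8, 1.2, 1.9, 2.6, 3.4])
discharge = np.array([12.0, 45.0, 140.0, 310.0, 620.0])

# Fit log10(Q) = log10(a) + b*log10(h - h0) by least squares.
b, log_a = np.polyfit(np.log10(stage - h0), np.log10(discharge), 1)
a = 10 ** log_a

def q_extrapolated(h):
    return a * (h - h0) ** b

print(f"Fitted rating: Q = {a:.1f} * (h - {h0})^{b:.2f}")
print(f"Extrapolated discharge at h = 5.0 m: {q_extrapolated(5.0):.0f} m^3/s")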
An alternative approach is the use of a hydraulic model to develop the extrapolation zone of
the stage-discharge relationship. Similar to the application of a logarithmic technique, the
suitability of this approach needs to be confirmed prior to its application. Of particular
concern is the modelling of the energy losses associated with flow in the channel and
adjacent floodplains where it is necessary to assume that the parameter values obtained
during calibration are suitable for the larger discharges being simulated in the extrapolation
zone of the stage-discharge relationship.
The important point in this discussion, however, is a recognition that the values of the data
extracted from a discharge record for fitting of a statistical model will contain values where
the conversion of the recorded level to an equivalent discharge occurred through
extrapolation of the stage-discharge relationship. Consideration of this in the fitting of a
statistical model to the Annual Maxima Series is discussed later in Book 3, Chapter 2.
Uncertainty and accuracy are terms that are sometimes used interchangeably even though
they have two very distinct meanings. Accuracy (or error) refers to the agreement, or
disagreement, between the measurement of stream discharge and the true or correct value
of the discharge at the time of measurement. Since we can never know the true value of the
discharge, we can never know the exact amount of error in the discharge measurement.
• Measurement - these are the uncertainties associated with taking the measurements. The
primary component in this category is instrument accuracy. The accuracy of both the
velocity meter and the level recorder need to be considered.
• Methodology - these are the uncertainties associated with the analysis of the recorded
measurements to enable the development of a point on the stage-discharge relationship.
In this category, features such as the assumption of linear variation in the cross-section
between the bathymetric points, the assumption of a logarithmic vertical velocity profile,
wind effects, changing stage during measurement, etc need to be considered.
• For rod suspension, the standard error ranges from 2% for an even, firm, smooth and
stable streambed, to 10% for a mobile, shifting sand, or dunes streambed;
• For cable suspension, the standard error ranges from 2% to 15% for an unstable
streambed, high velocity, and vertical angles; and
• For acoustic depth measurements, the standard error ranges from 2% for a stable
streambed to 10% for a mobile, shifting sand, and dunes streambed.
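Where the standard errors of the individual components of a gauging (level recording, depth measurement, velocity measurement, and so on) can be treated as independent, they are often combined as the root of the sum of squares to give an overall standard error for the discharge measurement. The component values in the sketch below are assumptions for illustration only.

from math import sqrt

# Assumed component standard errors for a single gauging, in percent.
components = {
    "level recorder": 1.0,
    "depth (cable suspension)": 5.0,
    "velocity meter": 2.0,
    "limited verticals / velocity sampling": 4.0,
}

overall = sqrt(sum(se ** 2 for se in components.values()))
print(f"Combined standard error of the gauging: about {overall:.1f}%")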
different sources and with a range of accuracies. Generally, it is advisable that practitioners
seek the most suitable data in each instance and assess the required accuracy of that data
in respect of the desired accuracy of the outputs.
• Topographic and infrastructure data including structures within the floodplain including
culverts, bridges, and pipe networks;
• Soil data.
4.13.3.1. General
Topographic data is an important component of any design flood investigation. Proper
scoping of topographic and infrastructure data collection can have a significant impact on the
cost effective delivery of flood investigations. The scope of the required topographic and
infrastructure data is driven by the nature of flood behaviour for a given area. The desired
elements of topographic and infrastructure data include:
• Catchment extent;
• Catchment slope;
• Drainage topology (i.e. the drainage flow paths and network of channels);
• Channel cross-sections;
• Waterway structures (weirs, levees, regulators, dams, culverts and bridges etc);
There are a number of alternative approaches to obtaining the necessary topographic data
including:
• Field survey;
While it is possible to obtain existing field survey data from other sources, it is important to
assess its suitability for the intended purpose. In other words, it is necessary to obtain both
the data and the meta-data, which includes items related to date of capture, accuracy, etc.
4.13.3.2.1. Cross-Sections
A survey of cross-sections is required only when the design flood estimation requires
application of a catchment modelling system to generate data that is not available from a
catchment monitoring program. Hence, the scope of a cross-section survey depends on the
type of the catchment modelling being used. Where catchment modelling is focussed on
hydrologic simulation (using, for example, a conceptual rainfall-runoff model), the cross-
section data required is minimal. However, where the catchment modelling requires
hydraulic simulation (using, for example, a one dimensional network model), the cross-
section data required will be more extensive.
The important characteristics of the cross-section data include the lateral extent and
longitudinal spacing of cross-sections. The lateral extent of the cross-section must be
sufficient to include key in-bank elements and extend to above the highest flood level, which
often extends onto the floodplain and outside the stream channel. When surveying in-bank
cross-sections, good field notes and/or photos describing the nature of the channel are vital
for proper interpretation post collection. There are numerous references such as Stewardson
and Howes (2002) that describe flood study cross-section survey requirements in detail. The overriding principle is that the cross-section data should be adequate to estimate the shape and slope of the channel so that suitable estimates of the flow conveyance capacity of the channel can be calculated.
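As a simple illustration of how surveyed cross-section data feed into estimates of channel conveyance, the minimal sketch below computes the flow area, wetted perimeter and Manning conveyance K = (1/n) A R^(2/3) for a single surveyed section at a given water level; the offsets, bed elevations, roughness and water surface slope are assumed values, and the wetted perimeter treatment is deliberately approximate.

import numpy as np

# Assumed surveyed cross-section: offset (m) and bed elevation (m AHD).
offset = np.array([0.0, 5.0, 8.0, 12.0, 15.0, 20.0])
bed = np.array([3.0, 1.5, 0.5, 0.6, 1.8, 3.2])
n_manning = 0.035          # assumed channel roughness
water_level = 2.5          # assumed water level (m AHD)

def conveyance(offset, bed, wl, n):
    depth = np.clip(wl - bed, 0.0, None)
    dx = np.diff(offset)
    area = np.sum(0.5 * (depth[:-1] + depth[1:]) * dx)        # approximate flow area
    wet = (depth[:-1] > 0) | (depth[1:] > 0)
    dz = np.diff(np.minimum(bed, wl))
    wetted_perimeter = np.sum(np.sqrt(dx ** 2 + dz ** 2) * wet)  # approximate wetted perimeter
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2.0 / 3.0) / n           # Manning conveyance

K = conveyance(offset, bed, water_level, n_manning)
slope = 0.001                                                   # assumed water surface slope
print(f"Conveyance K = {K:.0f}; Q = K * sqrt(S) = {K * slope ** 0.5:.1f} m^3/s")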
The influence of in-bank features on flood discharge behaviour tends to reduce as the
magnitude of the flood discharges increases. For example, bank-full capacity of a river
channel may represent 100% of a 0.2 or 0.5 EY discharge but less than 10% of a 1% AEP
discharge.
Structures in the waterway and on the floodplain may have a significant influence on flood
behaviour. Structures requiring survey include:
• Levees;
• Fences; and
Often these structures constrict and obstruct flood discharges thereby influencing the design
flood estimation. The effects on flood behaviour may be intentional (such as a weir) or
unintentional (such as blockage at a culvert). Where the effect on flood discharge is
unintentional, it is worth noting that the effect may be stochastic and hence the likelihood of
the effect needs to be considered by a suitable joint probability technique. Irrespective of
whether the influence is intentional or unintentional, these structures will have an influence
on the estimation of the design flood characteristic.
Hydraulic structures generally act to control the discharge behaviour in accordance with a
particular management strategy. While most management strategies are concerned with frequent discharges, there are also control structures designed to operate under flood conditions as part of an operational flood management strategy. Hence, the impact of these structures
will vary with the management strategy and flood magnitude.
Similar to field survey of cross-sections, field practicalities such as vegetation, access and
water depth and flowrate may influence the location and details surveyed for a given
structure.
The techniques used for collection of field survey data (typically by surveyors) are discussed
in this section. Details of three techniques are presented; namely Traditional Ground Survey,
Real Time Kinematic (RTK) Global Positioning Systems (GPS)/ Differential GPS, and
Photogrammetry. Typical accuracies for each of these techniques are provided in
Table 1.4.1. The techniques to measure topography and other survey features generally fall
into two main categories:
• Direct measurement - where the survey technique involves a ground-based instrument measuring features by physical contact and relating the measurement directly to known ground control such as a State Survey Mark (SSM); or
• Remote sensing - where features are measured without physical contact with the object,
and generally refers to measurement by an airborne or satellite mounted instrument.
Traditional ground survey (that is, survey collected by traditional or total station ground
survey techniques) is the most accurate survey technique with vertical and horizontal
accuracies as shown in Table 1.4.1. However, this technique is manual and labour intensive
and is therefore best suited to small and/or difficult areas for other techniques, to supplement
data obtained from other techniques, and to validate data obtained from other techniques.
In the context of a design flood estimation, traditional ground survey methods are often used
for:
• Areas that are impenetrable from the air (for example, satellites and aeroplanes have difficulty in sensing the ground in areas like the banks of channels and/or heavily vegetated areas);
• Areas that are critical for the data being sought from the catchment modelling system
(for example, critical hydraulic controls such as levees and weirs) where topographic
accuracy is important.
RTK GPS has the advantage that it can collect a reasonable amount of data at higher
accuracy than remote sensed data and much faster than by traditional means. The
disadvantage is that if the vehicle in which the RTK GPS is mounted is unable to access an
area, the system may have to be dismantled to gain access by some other means or
measurement of data is not possible in the inaccessible area. RTK GPS methods also rely
on the instrument having line of sight access to an array of satellites. This can be a limitation
to the technique in areas underneath a tree canopy.
Before embarking on any aerial data capture, it is worth liaising with other agencies that may hold topographic data, to check whether existing data already covers the desired area.
At present, the two principal techniques used in aerial survey for obtaining topographic data
are:
• Photogrammetry; and
• Airborne Laser Scanning (ALS), commonly referred to as LiDAR.
It should be noted that neither photogrammetry nor ALS can penetrate water surfaces. Only
the water surface level at the time of capture can be measured. If the bathymetry under the
water surface is relevant in the context of the numerical modelling, bathymetric data must be
collected and incorporated separately. Similarly, neither of the two methods can penetrate
dense vegetation (such as trees, sugar cane and mangroves) to produce ground elevations.
Hence, ground survey may be necessary to fill gaps in the topographic data under heavy
vegetation.
It is important in a DEM to ensure key linear features such as levees, embankments and
other infrastructure are adequately represented. These features can be incorporated into the
topographic description using field data as breaklines.
The ANZLIC Committee on Surveying and Mapping have developed guidelines on the
acquisition of LiDAR (ANZLIC, 2008) in terms of accuracy, data formats and meta-data.
They have also developed the National Elevation Data Framework.
Further discussion of these two aerial survey techniques is provided in the following
sections.
4.13.3.4.1. Photogrammetry
Photogrammetry is a measurement technique where the three dimensional (x,y,z)
coordinates of an object are determined by measurements made from a stereo image
consisting of two (or more) photographs; usually, these photographs are taken from different
passes of an aerial photography flight. In this technique, the common points are identified on
each image. A line of sight (or ray) can be built from the camera location to the point on the
object. It is the intersection of these rays (triangulation) that determines the relative 3D position of the point; known control points can then be used to give these relative positions absolute values. More sophisticated algorithms can exploit other information about the scene that is known a priori.
The accuracy of the photogrammetric data is a function of flying height, scale of the
photography and the number and density of control points. Typically, the accuracy requested
when scoping photogrammetric data collection for flood study purposes ranges from +/- 0.1
m to +/- 0.3 m. Note that the developed design flood profile cannot have an accuracy better than that of the catchment data used to estimate it.
As the technique is based upon the comparison of photographic images, shading and
obscuring of the ground surface by vegetation can reduce coverage in specific areas.
However, as photogrammetric analysis can utilise manual inspection of the stereo pair of photographs, the photogrammetrist is sometimes able to pick out occasional points on the ground surface visible through tree or crop cover. In this way, photogrammetry is sometimes able to provide some reliable points in vegetated areas.
Shown in Figure 1.4.9 is a region over which photogrammetric data coverage and ALS data
coverage will be demonstrated. Shown in Figure 1.4.10 is a typical sample of the raw data
obtained from a photogrammetric technique while shown in Figure 1.4.11 is the raw data in
finer detail for the area indicated in Figure 1.4.10. These figures demonstrate the following
specifically in relation to photogrammetry:
• The measured points are well spaced, but not always on a grid. Some manual
manipulation has occurred in locating these points when necessary.
• Breaklines (lines along which the measured elevations form vertices) are evident along the tops and bottoms of banks. These are most likely to have been manually derived by viewing the stereo pair of photographs.
Photogrammetry is also often used to develop contours of the land surface directly as
polylines with an attributed elevation; these contours are then used to create the desired
elevation data set.
ALS produces a dense cloud of points (See Figure 1.4.12). These points can be classified as
ground or non-ground points. While ALS requires little ground control in acquisition, ground
control is important for quality control of the ALS measurements. For example, while it may
be easy to scan inaccessible or sensitive areas without ground survey, the accuracy and
reliability of the collected data may be low; in other words, ground control is important for
ensuring the accuracy and reliability of the ALS measurements.
The vertical and horizontal accuracy of ground surface level measurement by ALS is a
function of the laser specification, flying height, ground control and the surface coverage.
Hard road surfaces, for example, normally are able to be measured accurately, but other
surface types (for example, swamps or heavily vegetated areas) are not easy to measure
and hence the measurements must be treated with caution. It is useful to note that many
quoted accuracy values for ALS data are in reference to the data accuracy for clear hard
ground. For clear, hard ground (that is ground with no surface coverage), the nominal
accuracy for technology commonly applied in Australia is:
• Horizontal accuracy:
• 1/3000 x altitude at which the aeroplane is flown; for a flying height of 1000 m, the
horizontal accuracy is about +/-0.33 m; and
• Elevation accuracy:
The width of the land terrain sampled per pass, commonly referred to as the swath, varies with the flying height. While typical values are given in Table 1.4.2, users are advised to obtain the meta-data regarding their ALS to ensure suitability for purpose.
A disadvantage of the ALS data capture method (compared to, for example, low-level
photogrammetry) is the absence of breaklines in the data to define distinct, continuous
topographic features and significant changes in grade. While the horizontal density of points
usually is quite high (average point spacing of 1 to 2 metres depending on flying height and
sampling frequency), features such as narrow banks/levees or channels will only be resolved
if the data are sampled on a very small grid (less than 1 to 2 m grid). This can result in large
and unwieldy terrain files.
There are a number of approaches that can be taken in relation to the treatment of
breaklines in ALS data sets:
1. Sample the entire survey area at a fine resolution - say on a regularly spaced 1 m Digital Elevation Model (DEM) grid - manually identify salient topographic features and hand-enter these features into the models;
2. Use local knowledge, GIS, aerial photos, satellite imagery or historic plans to identify
locations of important features and hand-digitise over the fine resolution DEM in a manner
similar to the first approach (approach 1) and then drape values from the DEM to develop
3D breakline strings;
3. Use observations as in approach 2 to determine locations of key features and then use
field survey to develop 3D breakline strings; and
4. Use auto-processing/filtering algorithms to extract breaklines from the raw ALS data.
Experienced users favour a combination of approaches (2), (3) and (4). While
approach (4) nominally provides the widest coverage and extracts the most information from
the ALS data, the processes cannot be considered reliable as no method has been
developed for testing the validity of the breaklines produced.
Although requiring greater manual input, approaches (2) and (3) are considered the better approaches for providing reliable estimates of the surface level at critical locations within the floodplain. This arises from the manual checking that occurs during the
progress of approaches (2) and (3). Furthermore, long-sections from the ALS can be
checked for consistency and a sub-sample tested against field measurements as a
validation process.
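A minimal sketch of the draping step in approach (2) is given below: a hand-digitised 2D breakline (for example, a levee crest identified from aerial photography) is converted to a 3D breakline string by sampling a gridded DEM at each vertex. The DEM, grid origin and polyline coordinates are assumptions, and nearest-cell sampling is used only for brevity; bilinear interpolation and denser vertex spacing would normally be preferable.

import numpy as np

# Assumed 1 m DEM: elevations on a regular grid with its origin at (x0, y0).
x0, y0, cell = 0.0, 0.0, 1.0
dem = np.random.default_rng(1).uniform(10.0, 12.0, size=(200, 200))  # placeholder surface

# Hand-digitised 2D breakline vertices (m), e.g. a levee crest - assumed coordinates.
breakline_xy = [(20.0, 15.0), (60.0, 40.0), (110.0, 42.0), (150.0, 80.0)]

def drape(xy_points, dem, x0, y0, cell):
    # Return 3D breakline vertices by sampling the nearest DEM cell at each vertex.
    string_3d = []
    for x, y in xy_points:
        col = int(round((x - x0) / cell))
        row = int(round((y - y0) / cell))
        string_3d.append((x, y, float(dem[row, col])))
    return string_3d

for x, y, z in drape(breakline_xy, dem, x0, y0, cell):
    print(f"({x:7.1f}, {y:7.1f}, {z:6.2f})")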
Capturing ALS data results in surface level estimates from various ground coverages
including bare earth, vegetation, and buildings. Thorough processing of this raw data is
required to ensure a true representation of the ground surface is obtained.
Raw ALS data files, which contain all returns, can be very large, leading to difficulties with data storage and the manipulation of this data. As a result, a common approach is to use 'thinned'
ALS data; this is data that has been processed to remove data points providing limited
additional definition of the terrain surface. Shown in Figure 1.4.13 is an example of the
processed ALS data set to produce the thinned ALS data set. Illustrated in this figure are the
following aspects:
• The measured spot heights are provided as a grid; in other words, each spot height is
representative of an area defined by the grid dimensions. This grid has been created by "thinning" the raw ALS point cloud data set (a minimal sketch of this step is given after this list). While the dimensions of the grid are defined by the user, it is useful to note that the smaller the grid, the larger and perhaps more unwieldy the data set, but the higher the apparent definition of the surface topography. Conversely, the larger the grid, the smaller the data set, but the lower the apparent definition. It is also worth noting that the definition of the surface topography only needs to be adequate to provide the necessary information; any additional definition will not provide additional flood data or improve the accuracy of the predicted flood data.
• There are no breaklines in the model of the surface topography. Breaklines can be created
according to the approaches discussed previously but typically are not determined solely
from the ALS data.
• Areas where there are no measurements and hence no points available to build the model
of the surface topography. In these areas, the ground surface was not visible due to, for
example, heavy vegetation or surface water. These areas can be removed during the
processing as "non-ground" points. When using a processed ALS data set, there will be
situations where a non-ground elevation point (for example, a point that has hit the canopy
of trees) has not been removed during processing. As a result, the spot elevation remains
in the data set and would be treated as a surface elevation. In some situations, errors of this type are obvious and can be removed manually. In other situations, they may not be
obvious and hence will form part of the DEM (Digital Elevation Model) used in the
analysis. Users need to be cognisant of errors of this type in ALS data and the consequent
significant impacts on the predictions obtained from the catchment modelling system.
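A minimal sketch of the grid "thinning" described in the first point of the list above is shown below: ground-classified ALS returns are binned onto a user-defined grid and each cell takes the mean elevation of the returns that fall within it, with empty cells (for example, water or dense vegetation) left undefined. The point cloud and grid dimension are assumed for illustration.

import numpy as np

rng = np.random.default_rng(0)
# Assumed ground-classified ALS returns: x (m), y (m), z (m AHD).
points = np.column_stack([rng.uniform(0, 100, 50_000),
                          rng.uniform(0, 100, 50_000),
                          rng.uniform(20, 25, 50_000)])

grid = 2.0   # user-defined grid dimension (m); smaller grids give larger data sets

cols = (points[:, 0] // grid).astype(int)
rows = (points[:, 1] // grid).astype(int)
ncols, nrows = cols.max() + 1, rows.max() + 1

# Mean elevation per cell; cells with no returns remain NaN.
sum_z = np.zeros((nrows, ncols))
count = np.zeros((nrows, ncols))
np.add.at(sum_z, (rows, cols), points[:, 2])
np.add.at(count, (rows, cols), 1)
thinned = np.where(count > 0, sum_z / np.maximum(count, 1), np.nan)

print(f"{points.shape[0]} returns thinned to a {nrows} x {ncols} grid "
      f"({int(np.isnan(thinned).sum())} empty cells)")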
An important point to note is how the accuracy of the collected data is checked and confirmed. The stated accuracy typically does not apply to the DTM derived from the survey data but rather to the individual elevation points sampled during the data generation program. This is an important distinction, and users of a DTM derived from aerial survey data need to be aware of it.
There are a number of methods that may be used to check the accuracy of the aerial survey.
1. POINT v DTM SURFACE: Independent field surveys of selected quality check points can
be compared to the DTM. However, this surface has been interpolated from the aerial
elevation data and, unless the accuracy stated by the aerial surveyor referred to the DEM,
the correlation in accuracy values is not guaranteed. But, this method can provide a
preliminary indication of accuracy, particularly if the independent field survey points with
known accuracy are already available from another source;
2. POINT v POINT: Independent field surveys of selected quality check points can be
compared to the individual aerial data elevation points. The selected check points must
exactly match the coordinates of the aerial data points to ensure that a valid comparison
is being made. To do this, the aerial survey must first be received in order to select the
points at which comparison will be made, which may slow the data collection phase.
Alternatively, early liaison with the aerial surveyor may allow the location of a number of
points to be known before data provision, which may save time; and
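Whichever comparison is adopted, the check points are commonly summarised by the mean error and the root mean square error (RMSE) of the elevation differences; a minimal sketch is given below, with the field and aerial survey elevations assumed for illustration.

import numpy as np

# Assumed check data: elevations from independent field survey and the matching
# aerial survey (or DTM) elevations at the same locations, in metres.
field = np.array([10.12, 11.03, 9.87, 12.40, 10.55, 11.76])
aerial = np.array([10.20, 10.95, 9.95, 12.52, 10.49, 11.90])

diff = aerial - field
print(f"Mean error : {diff.mean():+.3f} m")
print(f"RMSE       : {np.sqrt((diff ** 2).mean()):.3f} m")
print(f"Max |error|: {np.abs(diff).max():.3f} m")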
If the water body is shallow or small, then a traditional surface survey technique may be
suitable. For deeper, larger water bodies, a specialised bathymetric survey may be required.
Instruments such as echo sounders, side scan sonar systems and acoustic doppler profilers
may be used for this purpose.
In most cases, the bathymetric survey will need to be merged with ground surface data.
While bathymetric surveys have been conducted for most major rivers, if the data is not
recent and the riverbed is subject to change then the data should be checked for suitability
prior to its use.
A consequence of this is that the location of features on an aerial photograph will have a degree of uncertainty. For example, if a rectified aerial photograph is used to locate a flood mark, the
attributed location will be subject to a tolerance. In an area of high flood gradient this can
result in differences between observed and simulated flood levels that do not accurately
represent the true differences.
Aerial photographs, while not providing quantitative data directly, can provide additional
information about flowpaths, flow obstacles and floodplain vegetation that is not always
immediately evident or accessible on a site inspection. Additionally, aerial photos can be a
useful guide when defining parameters for floodplain characteristics (for example, roughness
coefficients) and can be used to develop a spatial map of the floodplain parameter.
Another example is the use of aerial photography in urban areas to define building outlines or fence lines where these are to be included within the hydraulic model; aerial photography can also be a reasonable source of information for assessing the total imperviousness of a catchment (see Book 5, Chapter 4).
Conditions at each of the relevant historical points in time must be established and used in
the model development; this is discussed in more detail in Book 7. Changes to conditions
that may affect flood behaviour include dam construction, initial dam storage levels, dredging
or siltation of river channels and particularly of river mouths, construction of levees and other
associated flood mitigation works, road construction including the raising or duplication of
roads, the realignment of road embankments, the construction of new culverts and/or
bridges, upgraded drainage networks both in rural and urban environments, developments
on the floodplain, the construction of new weirs or the removal of old weirs, different crop
types or stage in the growing season, and others that have not been mentioned here.
Depending upon the length of time since the occurrence of the calibration event, records of these changes may only be available anecdotally.
The availability of data for historical events needs to be considered when the event is used
as part of the design flood estimation. For example, there may be anecdotal or even good
formal measurement evidence of a record flood that occurred 100 years ago but details of
this flood event may not be adequate for its use as a calibration event for validation of a
catchment modelling system. On the other hand, the data may be adequate for it to be
included as a high discharge censored event in a flood frequency analysis (Book 3, Chapter
2).
Land use data is used in hydrology models to determine suitable parameters to calculate
runoff and is also used in hydraulic models to assist in the determination of Manning's n
values.
Land use data is normally supplied as a map or a GIS layer. When obtained from local
authorities, the data is usually supplied on request for no charge.
Information on land use can be used in the hydrologic model to determine percentage
impervious or in hydraulic models to inform hydraulic roughness. Land use information may
be sourced from:
• Local or State Government Authorities in spatial layers of future development zonings; and
• Vegetation maps;
Care needs to be taken with vegetation maps as, in general, the maps are based on limited sampling, with the results of that sampling inferred to be representative of a larger area.
Additionally, the individual species within an area designated as one vegetation type may
vary.
• Geofabric Surface Catchments - Catchment boundaries derived from the 9 second Digital
Elevation Model;
It is worth noting that the catchment area is a function of the scale at which it was estimated and therefore is likely to have inaccuracies at a fine scale. The current data is based on a 9
Second DEM and GA GEODATA TOPO 250K Series 1 (GEODATA 1) and GEODATA TOPO
250 K Series 3 (GEODATA 3).
Subsequent versions of the Geofabric will have upgrades to data and include (Bureau of
Meteorology, 2015):
• Digital Elevation Model (DEM) derived streams and catchment boundaries based on a 1
second resolution DEM.
Soil property data is available spatially for the whole country from the Atlas of Australian
Soils (McKenzie et al., 2000); this information is available in a GIS format from the Australian
Soil Resource Information System website. These maps are broad scale (typically 1:250000
- 1:500000) and were completed between 1960 and 1968. State-based maps are also available. Care should be taken when using soil maps as variations in soil over short distances
occur frequently and cannot be resolved by the reconnaissance style mapping used in their
development (McKenzie et al., 2000).
Property databases form the basis of most flood damage assessments. These databases
typically require a description of the property attributes and features on a property by
property basis. Typical information required for each residential property includes:
• Street address;
• Building construction type (e.g. brick veneer, timber, slab on ground, on piers etc);
• Building age;
• House size.
Commercial and industrial properties require similar information, but also require information
on the type of business undertaken at the site as this can have a significant bearing on the
value of flood damages from business to business.
Ideally, this data is collected via field survey. However, it can be a costly process
depending upon the number of properties for which data are required. Alternatively, there
may sometimes be records available from the local authority, other government agency or
the census. For broad assessments, property data may be estimated. A panel of people with
relevant skills should review the method of estimation for soundness. As an example,
property data may be estimated from aerial photography or from a general understanding of
local conditions.
A number of national data sets are also available from Geoscience Australia such as:
• Water Observations from Space - historical surface water observations derived from satellite imagery for all of Australia for the period 1998 to 2012; and
AMG - Australian Map Grid. Care should be taken when translating from one projection to
another; of particular concern is the use of the correct local conversion as these conversions
are not the same across the country.
Historical tidal data for particular events can be useful for model calibration while long-term records can be used for statistical analysis for design flood estimation purposes (refer to
Book 6, Chapter 5). With increased sea levels induced by global warming, long-term records
of tides and sea levels need to be checked for stationarity; refer to Book 1, Chapter 6 for a
discussion of climate change data.
Tidal data is collected regularly by relevant government agencies concerned with the coastal environment or coastal engineering. This data is published in handbooks or on websites. In
addition, there are research or other short-term projects carried out in coastal areas, which
may include data on tides. However, these projects are generally localised and of short
duration.
The Bureau of Meteorology is the principal agency that collects this data; however, there are
records held by water agencies and agricultural departments as well as small localised
records held by different organisations. Regional maps of key meteorological data,
especially pan evaporation and evapotranspiration are published by the Bureau of
Meteorology, and this regional information is often adequate for many requirements.
Meteorological data is usually available free of charge from the Bureau of Meteorology and
major agencies. The records from other organisations may be difficult to locate, and there may then be contractual difficulties in obtaining and using the data.
Data collection on sediment movement is particularly difficult and there is only limited
available data. Most routine data collection programmes are carried out by water and
environmental agencies, but these are usually somewhat limited. There are some small
specialised programmes carried out for specific projects, often as part of an environmental
impact study. However these programmes are generally limited in scope and also limited in
the duration for the data collection programme.
This data is normally available only directly from the agency or organisation that collects it; it may be difficult to establish whether such data exists and, once located, the data may be difficult to obtain.
Once data is located, it is then often difficult to access and use because of differences in the
methods adopted for collection, analysis and processing. There are also differences in the
treatment of bed and suspended loads and measurements of turbidity, all of which are used
at times to measure sediment movement.
Sediment deposition may be monitored by owners of affected assets, but the data is difficult
to apply in investigations.
Therefore, the application of sediment movement and deposition data is difficult and requires considerable skill to interpret and apply. Where this is an important aspect of a project, effort should be made to find and use the data.
As with sediment, data collection on this topic is particularly difficult and there is only limited
available data. Most data collection programmes are carried out by water and environmental
agencies, and some of this data (especially the routine parameters, such as salinity) is
available in formal data archiving systems. In addition to these programmes, there are some
small specialised programmes carried out for specific projects, often as part of an
environmental impact study.
Some types of water quality data, especially salinity and nutrients are available as historical
records that can be used to calibrate models and assess changes in conditions with time,
but much of the data is short term and variable. Considerable skill and expertise is needed to
apply this data to project requirements.
Observed data is often used to investigate whether there are any trends apparent in
historical flood data. The data used to project the impacts of future climate on flooding is
generally sourced from climate modelling. “Any useful technique for the assessment of future
risk should combine our knowledge of the present, our best estimate of how the world will
change, and the uncertainty in both” (Hunter, 2007). To study the impact of climate change, a
plausible and consistent description of a possible future climate is required. The construction
of such climate change scenarios relies mainly on results from model projections, although
some information from past climates can be used (IPCC, 2001). Refer to Book 1, Chapter 6
for more information on climate change and flood estimates.
One of the most fundamental issues in detecting trends in precipitation or discharge data
due to climate change is separating the influences of climate variability from long-term
climate change in a relatively short record. Due to the normal range of climate variability,
there is limited information available to establish the probability of a flood event currently,
even without consideration of climate change (White et al., 2010). Local and regional
changes in precipitation due to climate change are greatly affected by patterns of
atmospheric circulation. Patterns of change in precipitation associated with ENSO can
dominate the global patterns of variations, particularly in the tropics and over much of the
mid-latitudes (Trenberth, 2011). There is little observed data available to investigate
relationships between hemispheric scale modes of the atmosphere (such as ENSO) and
climate change.
The observed data available to directly estimate the impacts of climate change on
antecedent conditions includes seasonal rainfalls, evapotranspiration, and soil moisture.
Detecting changes in these parameters can be undertaken with a trend analysis, however
long records with appropriate spatial coverage are required for this task. As for detecting
changes in extreme precipitation or discharge events, the difficulty is in separating the
impacts of climate change from natural variability or the influence of changes in catchment
conditions, in the relatively short records available. The coverage of directly measured
evapotranspiration data is relatively sparse across Australia and is very limited globally
(Bates et al., 2008). Evapotranspiration data available from global analysis data is sensitive
to the type of analysis and the uncertainty in the data makes it unsuitable for trend analysis
(Bates et al., 2008). Direct measurements of soil moisture are available for only a few
regions and are often very short in duration (Bates et al., 2008).
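Where a sufficiently long record is available, a simple non-parametric check for a monotonic trend, such as the Mann-Kendall test, is often applied. The sketch below computes the Mann-Kendall S statistic and its normal approximation for an assumed annual series; it ignores ties and serial correlation, both of which a full analysis would need to address.

import numpy as np
from math import sqrt, erfc

def mann_kendall(series):
    # Mann-Kendall S statistic and two-sided p-value (normal approximation, no tie correction).
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = erfc(abs(z) / sqrt(2))          # two-sided p-value
    return s, z, p

# Assumed annual series (for example, seasonal rainfall totals) with an imposed trend.
rng = np.random.default_rng(42)
series = rng.gamma(shape=2.0, scale=50.0, size=40) + np.linspace(0, 20, 40)

s, z, p = mann_kendall(series)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")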
Changes in storm surge events can be investigated using data from tide gauge records. Tide
gauge data can be used to evaluate the Annual Exceedance Probabilities of extreme sea
levels, however a reliable analysis of the risk of extreme events or trends in the data is
limited by the short duration of records collected at many gauges. Church et al. (2006) found
only two gauges of sufficient length for use in this type of analysis in Australia, and only
nineteen records with lengths of 40 years or greater (and some of these were intermittent).
The limited number of tide gauges means that there is no data available for large stretches
of coastline, which inhibits the assessment of this hazard even under present climate
conditions, let alone future conditions due to climate change (McInnes et al, 2007).
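Where a gauge record is long enough, the annual maximum sea levels are commonly fitted with an extreme value distribution to estimate levels for given AEPs. The minimal sketch below fits a Generalised Extreme Value distribution to a synthetic series of annual maxima; a real analysis would also need to consider record length and completeness, datum changes, trends and tide-surge interaction.

import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum sea levels (m above gauge datum) - assumed values.
rng = np.random.default_rng(7)
annual_max_sl = 1.2 + 0.15 * rng.gumbel(size=45)

shape, loc, scale = genextreme.fit(annual_max_sl)

for aep in (0.10, 0.02, 0.01):              # 10%, 2% and 1% AEP
    level = genextreme.ppf(1.0 - aep, shape, loc=loc, scale=scale)
    print(f"{aep:.0%} AEP sea level: {level:.2f} m")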
Whilst observed historical data can be used to investigate trends, Global Climate Models
(GCMs) are most often used to generate data to investigate the impacts of climate change
into the future on a global or continental scale.
In order to address the uncertainty in future greenhouse gas emissions, a range of plausible
futures are often run with the GCMs. IPCC (2007) developed a range of emissions scenarios
(SRES emissions scenarios) that are commonly used with GCMs. The SRES emissions
scenarios are divided into six families based on different likely emissions considering future
technological and societal changes (Corney et al., 2010). The current GCM results for
Australia can be accessed through the Climate Change in Australia website (http://
www.climatechangeinaustralia.gov.au/en/) and the Climate Futures web tool (http://
www.climatechangeinaustralia.gov.au/en/climate-projections/climate-futures-tool/
introduction-climate-futures/).
Figure 1.4.14. Global Greenhouse Gas Emissions Scenarios for the 21st Century (from
IPCC, 2007)
The fact that climate model fundamentals are based on established physical laws, such as
conservation of mass, energy and momentum, along with a wealth of observations gives
some confidence in the ability of models to represent the global climate. The models have
shown a good ability to simulate important aspects of the current climate, and reproduce
features of past climates and climate changes (IPCC, 2007).
However, uncertainties remain in the science of climate change, in setting initial conditions for the models, and in the
representation of the global climate by the models (Hunter, 2007). There are also
deficiencies in the simulation of tropical precipitation, the El Niño- Southern Oscillation and
the Madden-Julian Oscillation. Most of these errors are due to the fact that many important
small-scale processes cannot be represented explicitly in GCMs, and so must be included in
approximate form as they interact with larger-scale features. This is partly due to limitations
in computing power, but also results from limitations in scientific understanding or in the
availability of detailed observations of some physical processes (IPCC, 2007).
The outputs from GCMs give information at a global scale with a limited resolution, so
cannot provide a detailed picture of climate variables at the regional scale required to
investigate the factors influencing flood events (Corney et al., 2010). Some form of
downscaling is required to investigate the impacts of future climate on specific variables at a
local scale, in particular for precipitation and climate extremes. To address the decrease in
confidence in the changes projected by global models at smaller scales, other techniques,
including the use of regional climate models and downscaling methods, have been
specifically developed for the study of local-scale climate change (IPCC, 2007). Three
methods are commonly used to scale outputs from GCMs: temperature scaling, statistical
downscaling, and dynamical downscaling (Westra, 2011). Combinations of these methods
are also used.
• Temperature Scaling
Temperature scaling has been used to give downscaled estimates of precipitation from GCMs.
Extreme precipitation is directly related to the water holding capacity of the atmosphere. A
warming climate leads to an increase in the water holding capacity of the air, which causes
an increase in the atmospheric water vapour that supplies storms, resulting in more intense
precipitation (Trenberth, 2011). The Clausius-Clapeyron relationship gives an increase in water holding capacity of approximately 7% per degree Celsius of warming (Trenberth, 2011). This relationship has been found to hold for some sub-daily rainfalls; however, daily extreme rainfalls have been found to increase at a lower rate (Lenderink and van Meijgaard, 2008). The relationship also appears to hold only up to a threshold temperature (Hardwick-
Jones et al., 2010). The simple scaling of rainfall with temperature does not reflect all the
processes that produce rainfall events. The true scaling relationship is more complex and is
affected by the extremity and duration of the rainfall event, the atmospheric temperature, and
access to atmospheric moisture (Westra et al., 2013). This results in differing local impacts
of climate change and, in particular, different impacts are seen dependent on the duration of
the rainfall event.
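As a simple illustration of temperature scaling, the sketch below applies an increase of about 7% per degree Celsius of warming to a current design rainfall depth. As noted above, observed scaling for daily extremes is often lower and the relationship appears to hold only up to a threshold temperature, so a uniform scaling of this kind is indicative only; the rainfall depth and warming increments are assumed values.

# Clausius-Clapeyron style scaling: approximately 7% increase per degree Celsius of warming.
cc_rate = 0.07
current_depth = 85.0            # assumed current design rainfall depth (mm)

for warming in (1.0, 2.0, 3.0): # assumed warming scenarios (degrees Celsius)
    scaled = current_depth * (1.0 + cc_rate) ** warming
    print(f"+{warming:.0f} C: {scaled:.0f} mm (from {current_depth:.0f} mm)")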
• Statistical Downscaling
Statistical downscaling uses relationships between large-scale climate variables and local
scale weather to develop estimates at a local scale. In the simulation of extreme rainfalls, a
common approach is to use extreme value distributions to describe precipitation extremes
(Abbs and Rafter, 2009). Another approach is to use a model to simulate precipitation and to
then analyse the extremes (Mehrotra and Sharma, 2010). The advantage of statistical
downscaling is that it is not computationally intense, and can be undertaken relatively quickly
over large areas. The limitations of statistical downscaling include the assumption that the
current observed relationships between large scale climate variables and local climate will
persist in a changed climate regime. Another limitation is that the observational data set
being used for the downscaling should cover the range of projected future climate responses
(Grose et al., 2010).
• Dynamical Downscaling
Dynamical downscaling takes the outputs from a host GCM as inputs to either a limited area
model or stretched grid global climate model. The result is a fine scale dynamical model over
the area of interest, often called a Regional Climate Model (RCM). Because a RCM focuses
on a small area, it can provide more detail over that area than is possible with a GCM alone
(Grose et al., 2010). Dynamical downscaling allows representation of local scale features,
such as orographic effects, land-sea contrast and other land surface characteristics, and
smaller scale physical processes that influence extreme precipitation (Marauan et al., 2010).
By modelling the atmosphere and local environment at a much finer scale than is possible
using a GCM, it is expected that the specific processes that drive regional weather and
climate will be better represented. Bias corrected outputs from dynamically downscaled
models have been shown to be able to be used directly in projections of changes in extreme
precipitation (White et al., 2010). A number of studies have used RCMs to investigate
changes to daily precipitation extremes, however a lack of available sub-daily RCM data has
limited studies on shorter duration events (Hanel and Buishand, 2010). The advantages of
dynamical downscaling are in the ability to represent changes in rainfall spatial and temporal
patterns, as well as impacts of local scale features. The disadvantage of dynamical
downscaling is in computational time. There are assumptions inherent in the structure of
each RCM and ideally a range of RCMs would be used in conjunction with a range of GCMs
to give a more comprehensive description of local climate. The ability to undertake such
studies is inhibited by the computational intensity of the task, and thus studies are generally
limited to use of one or two RCMs with a range of GCMs.
4.17. References
ANZLIC (2008), ICSM Guidelines for Digital Elevation Data Version 1, August 12.
Abbs, D. and Rafter, T. (2009), Impact of Climate Variability and Climate Change on Rainfall
Extremes in Western Sydney and Surrounding Areas: Component 4 - dynamical
downscaling, CSIRO.
Ball, J.E. and Cordery, I. (2000), Information and hydrology, Proc. Hydro 2000 - Hydrology
and Water Resources Symposium, Perth, WA, Australia, pp: 997-1001.
Bates, B. C., Z. W. Kundzewicz, S. Wu, and J. P. Palutikof (2008), Climate Change and
Water. Technical Paper of the Intergovernmental Panel on Climate Change, Rep., 210 pp,
IPCC Secretariat, Geneva.
Chow, V., Maidment, D. and Mays, L. (1988), Applied hydrology. New York: McGraw-Hill.
Church, J.A., Hunter, J.R., McInnes, K.L. and White, N.J. (2006), Sea-level rise around the
Australian coastline and the changing frequency of extreme sea-level events. Aust. Met.
Mag., 55(4), 253-260.
Cordery, I., Weeks, B., Loy, A., Daniell, T., Knee, R., Minchin, S. and Wilson, D. (2006),
'Water Resources Data Collection and Water Accounting', Australian Journal of Water
Resources, 11(2), 257-266.
Corney, S.P., Katzfey, J.J., McGregor, J.L., Grose, M.R., Bennett, J.C., White, C.J., Holz,
G.K., Gaynor, S.M. and Bindoff, N.L. (2010), Climate Futures for Tasmania: climate
modelling technical report, Antarctic Climate & Ecosystems Cooperative Research Centre,
Hobart, Tasmania.
Fenton, J.D. (2001), Rating curves: Part 1 - Correction for surface slope.
Fenton, J.D. (2001), Rating curves: Part 2 - Representation and approximation.
Fenton, J.D. and Keller, R.J. (2001), The Calculation of Streamflow from Measurements of Stage, Technical Report 01/6, Cooperative Research Centre for Catchment
Hydrology, Monash University, Clayton, Vic, Australia.
Grose M.R., Barnes-Keoghan I., Corney S.P., White C.J., Holz G.K., Bennett J.B., Gaynor
S.M. and Bindoff N.L. (2010), Climate Futures for Tasmania: general climate impacts
technical report, Antarctic Climate & EcosystemsCooperative Research Centre, Hobart,
Tasmania.
Hanel, M. and T.A. Buishand (2010), On the value of hourly precipitation extremes in
regional climate model simulations, Journal of Hydrology, 393: 265-273.
Herschy, R.W. (1995), Streamflow Measurement, E & FN Spon, Chapman & Hall, London,
UK.
Hunter, J. (2007), Estimating sea-level extremes in a world of uncertain sea-level rise, 5th
Flood Management Conference, Warrnambool, Australia, [Accessed 12 October. 2007].
IPCC (Intergovernmental Panel on Climate Change) (2001), Climate Change 2001, The
Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the
Intergovernmental Panel on Climate Change [Houghton, J.T.,Y. Ding, D.J. Griggs, M.
Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge
University Press, Cambridge, United Kingdom and New York, NY, USA, 881 p.
Johnson, F., K.Haddad, A.Rahman, and J.Green (2012), Application of Bayesian GLSR to
estimate sub daily rainfall parameters for the IFD Revision Project. Hydrology and Water
Resources Symposium 2012, Sydney. Engineers Australia.
Jones, D.A., Wang, W. and Fawcett, R. (2007), Climate Data for the Australian Water
Availability Project: Final Milestone Report. National Climate Centre, Bureau of Meteorology,
Melbourne.
Jones, M.R., (2012), Characterising and modelling time-varying rainfall extremes and their
climatic drivers. PhD Thesis, Newcastle University.
Lenderink, G. and E. van Meijgaard (2008), Increase in hourly precipitation extremes beyond
expectations from temperature changes, Nature Geoscience, 1, pp: 511-514
Malleron, N., Zaoui, F., Goutal, N., and Morel, T. (2011), On the use of a high-performance
framework for efficient model coupling in hydroinformatics. Environmental Modelling &
Software, 26(12), 1747-1758.
Marauan, D.F., Wetterhall, A.M., Ireson, R.E., Chandler, E.J., Kendon, M., Widmann, S.,
Brienen, H.W., Rust, T., Sauter, M., Theme, V.K.C., Venema, K.P., Chun, C.M., Goodess,
R.G., Jones, C., Onof, M., Vrac, I. and Thiele-Eich (2010), Precipitation downscaling under
climate change: recent developments to bridge the gap between dynamical models and the
end user, Reviews of Geophysics, 48.
McInnes, K.L., Hubbert, J.D, Macadam, I. and O'Grady, J.G. (2007), Assessing the Impact of
Climate Change on Storm Surges in Southern Australia, In Oxley, L. and Kulasiri, D. (eds)
MODSIM 2007 International Congress on Modelling and Simulation. Modelling and
Simulation Society of Australia and New Zealand, ISBN: 978-0-9758400-4-7.
McKenzie, N.J., Jacquie, D.W, Ashton, L.J., and Cresswell, H.P. (2000), Estimation of soil
properties using the atlas of Australian soils, CSRIO Land and Water, Canberra ACT.
Mehrotra, R., and Sharma, A. (2010), Development and Application of a Multisite Rainfall
Stochastic Downscaling Framework for Climate Change Impact Assessment, Water
Resources Research, 46.
Meynett, A.E. and van Zuylen, H.J. (1994), On the concept of hydroinformatics, Proc.
Hydroinformatics '94, Delft, The Netherlands, AA Balkema, pp: 19-24.
Milly, P.C.D., Betancourt, J., Falkenmark, M., Hirsch, R.M., bigniew, W.Z., Kundzewicz, Z.W.,
Lettenmaier, D.P. and Stouffer, R.J. (2008), Stationarity is Dead: Whither Water
Management? Science, 319(5863), 573-574.
Moya Quiroga, V., Mano, A., Asaoka, Y., Kure, S., Udo, K., and Mendoza, J. (2013), Snow
glacier melt estimation in tropical Andean glaciers using artificial neural networks. Hydrology
and Earth System Sciences, 17(4), 1265-1280.
National Research Council (2004), Assessing the National Streamflow Information Program,
National Academy of Sciences, US.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation, Institution of Engineers Australia, Barton, ACT.
Popescu, I., Jonoski, A., and Bhattacharya, B. (2012), Experiences from online and
classroom education in hydroinformatics, Hydrol. Earth Syst. Sci., 16: 3935-3944, doi:
10.5194/hess-16-3935-2012.
Rantz, S.E. (1982), Measurement and computation of stream flow. Volume 1: Measurement
of stage and discharge; Volume 2: Computation of discharge. US Geological Survey water-
supply paper, 2175, 631.
Raupach, M.R., Briggs, P.R., Haverd, V., King, E.A., Paget, M. and Trudinger, C.M. (2009),
Australian Water Availability Project: CSIRO Marine and Atmospheric Research Component:
Final Report for Phase 3, CAWCR Technical Report, No.013, July.
Simons, D.B. and Richardson, E.V. (1962), The effect of bed roughness on depth-discharge
relations in alluvial channels, U.S. Geological Survey Water-Supply Paper 1498-E.
Stewardson, M.J. and Howes, E.A. (2002), The number of channel cross-sections required
for representing longitudinal hydraulic variability of stream reaches. Proceedings of the 27th
Hydrology and Water Resources Symposium. Engineers Australia, pp: 21-25.
Toppe, R. (1987), Terrain models - A tool for natural hazard Mapping, Avalanche Formation,
Movement and Effects, IAHS Publication Number 162: 629-638.
Trenberth KE (2011), Changes in precipitation with climate change. Clim Res 47: 123-138.
Wain, A.T., Atkins, A.S. and McMahon, T.A. (1992), The Value of Benefits of Hydrologic
Information, AWRAC Research Project P87/24, Centre for Applied Hydrology, University of
Melbourne, p: 59
Westra, S., L.V. Alexander, F.W. Zwiers (2013), Global increasing trends in annual maximum
daily precipitation. Journal of Climate, in press, doi:10.1175/JCLI-D-12-00502.1.
White, C.J., Grose, M.R., Corney, S.P., Bennett, J.C., Holz, G.K., Sanabria, L.A., McInnes,
K.L., Cechet, R.P., Gaynor, S.M. and Bindoff, N.L. (2010), Climate Futures for Tasmania:
extreme events technical report, Antarctic Climate and Ecosystems Cooperative Research
Centre, Hobart, Tasmania.
Chapter 5. Risk Based Design
Duncan McLuckie, Rhys Thomson, Leo Drynan, Angela Toniato
5.1. Introduction
Floods can cause significant impacts where they interact with the community and the
supporting natural and built environment. However, flooding also has the potential to be the
most manageable natural disaster, as the likelihood and consequences of the full range of flood events can be understood, enabling risks to be assessed and, where necessary, managed.
Design flood estimation plays a key role in understanding flood behaviour, how this behaviour may change with changes within the floodplain and catchment and in climate, and how these changes can influence decisions on the growth and management of flood risk.
Design flood estimation provides essential information on a range of key factors that need to
be considered in understanding and managing the consequences of flooding. These include:
flood frequency; flow rates, velocities and volumes; flood levels and extents; duration of
inundation. ARR provides essential analytical tools to assist in estimating design floods and
in understanding these factors. Estimates of design floods are an essential element in
understanding flood behaviour and making informed decisions on:
• Managing flood risk through a risk based decision making process (AEMI, 2013). Such
approaches generally provide an understanding of flood behaviour across the full range of
flood events, up to and including extreme events, such as the Probable Maximum Flood
(PMF). They can inform decision making in flood risk management, a broad range of land
use planning activities, emergency management for floods and dam failure, and in
estimating flood insurance premiums;
• Managing flood risk through the use of design standards related to the probability or
frequency of occurrence, rather than the broader assessment and management of risk;
• Managing flood risk in short term projects through a risk based decision making process
considering the life of a project; and
• Understanding and managing the impacts changes within the catchment and floodplain
may have on flood behaviour and risk.
This chapter provides advice and examples on using the analytical tools outlined in ARR to
inform decision making for flood risk management, road and rail design, mining and
agriculture and design of dams. It does not provide details on design standards, risk assessment frameworks, the assessment of impacts on the community or the built or natural environment, nor on the estimation of residual risks. Further information on risk management
approaches, processes and frameworks can be found in AEMI (2013), ANCOLD (2003) and ISO (2009a).
• Book 1, Chapter 5, Section 6 discusses managing flood risk to mining, agriculture and
infrastructure projects;
• Book 1, Chapter 5, Section 7 discusses the management of flood risk in relation to dams;
• Book 1, Chapter 5, Section 8 discusses the management of flood risk using basins;
• Book 1, Chapter 5, Section 10 discusses how flood risk changes over time due to a range
of factors;
Flood risk is not simply the probability of an event occurring. The International Standard on
Risk Management, (ISO, 2009a) defines risk as the effect of uncertainty on objectives. In
addition, ISO Guide 73:2009 Risk Management Vocabulary (ISO, 2009b) notes that
uncertainty is the state, even partial, of deficiency of information related to, understanding or
knowledge of an event, its consequence, or likelihood. An effect is a positive or negative
deviation from the expected outcome. Objectives can have different aspects (financial,
health and safety, environmental) and apply at different levels (local, state, site).
AEMI (2013) and ANCOLD (2003) express risk in terms of combinations of the likelihood of
events (generally measured in terms of Annual Exceedance Probability (AEP)) and the
severity of the consequences of the event (see Figure 1.5.1). Risk is higher the more
frequently an area is exposed to the same consequence or when the same frequency of
event has higher consequences.
AEMI (2013) discusses the consequences of flooding on the community. The consequences
depend upon the vulnerability of the community and the built environment to flooding.
Vulnerability varies with the element at risk (people, property and infrastructure) and between the different cohorts within the elements outlined below. AEMI (2015) advises that this may
be measured in terms of the impacts upon:
• The economy and assets - in terms of reduced economic activity and asset losses;
• The social setting – in terms of consequences to the community as a whole (rather than
individuals) that can lead to the breakdown of community organisations and structures.
This can include the temporary or permanent loss of community facilities or culturally
important objects or events;
• Public administration – in terms of changes to the ability of the governing body for the
community to be able to deliver its core functions;
The consequence to different elements from the same exposure to flooding can be different.
For example, a flood may have major consequences for community assets (such as a water
or sewerage treatment plant) but have only minor or moderate consequences in terms of
potential fatalities and injuries in the population.
The likelihood of exposure to flooding and therefore flood risk varies significantly between
and within floodplains and flood events of different magnitudes. Figure 1.5.2 shows areas
exposed to flooding from events of different AEPs.
Figure 1.5.2. Map Showing Different AEP Flood Extents Including an Extreme Event
Risk analysis involves understanding the varying likelihood of events (that result in a
consequence), and the severity of their consequences. It should also involve an assessment
of confidence, which considers factors such as the divergence of opinion, level of expertise,
uncertainty, quality, quantity and relevance of data. These factors are combined to assign a
relative risk rating for an event through development of a risk matrix or other tools. Risk
analysis may be quantitative or qualitative. In both cases the probability of events affecting communities may be estimated through design flood estimates.
Quantitative analysis is often used where both the probability and the consequences can be
measured. For example, consequences may be estimated in terms of tangible flood damage
to the community for events of different AEPs. Tangible damages are those damages that
are more readily able to be estimated in economic terms and lend themselves to quantitative
assessment, including:
• Direct damages to structures and their contents due to water contact; and
• Indirect damages, such as the clean-up of debris and removal of damaged material, loss of wages, sales and production, costs of alternative accommodation, and opportunity costs due to loss of services.
Qualitative analyses are generally undertaken where consequences are difficult to quantify.
For example, these can include social and environmental impacts and the costs of fatalities
and injuries which are intangible damages that cannot readily be put in economic terms.
Table 1.5.1 provides an example of a qualitative risk matrix.
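To illustrate how likelihood and consequence categories can be combined into a relative rating, the following sketch implements a simple qualitative risk matrix lookup. The categories and the ratings assigned are hypothetical illustrations only; they are not the entries of Table 1.5.1 or of any jurisdiction's adopted matrix.

```python
# Sketch of a qualitative risk matrix lookup. Categories and ratings are
# illustrative only and do not reproduce Table 1.5.1.
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "catastrophic"]

RATINGS = [  # rows: likelihood (rare -> almost certain), columns: consequence
    ["low",    "low",    "medium", "medium",  "high"],
    ["low",    "low",    "medium", "high",    "high"],
    ["low",    "medium", "medium", "high",    "extreme"],
    ["medium", "medium", "high",   "extreme", "extreme"],
    ["medium", "high",   "high",   "extreme", "extreme"],
]

def risk_rating(likelihood: str, consequence: str) -> str:
    """Return the qualitative risk rating for a likelihood/consequence pair."""
    return RATINGS[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]

print(risk_rating("unlikely", "major"))   # -> "high"
```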
Risk analysis can be used to inform decisions on both the acceptability of residual risk and the effective and efficient use of scarce resources to better understand and manage risk.
• Managing changes within the floodplain that may alter flood behaviour;
Managing risk needs to consider the different elements at risk which may require different
management techniques and standards. It also needs to consider the risks to the existing
community and built and natural environment and the additional risk created by introducing
new development and infrastructure into the floodplain.
• Development (including filling) within the floodplain (particularly in flow conveyance and
flood storage areas);
• Development within the catchment, even where it is outside the floodplain; and
These activities may result in significant changes to flood behaviour including changes to
flow paths, peak flow and velocities, flood levels and extents, distribution of flood waters, and
the timing and duration of flooding. These changes can lead to adverse impacts on the existing community and the built and natural environment through changes to flood behaviour and the ability of the community to effectively respond to flood emergencies.
New developments and infrastructure projects generally have constraints placed on them
through government approvals processes relating to negating or minimising adverse impacts
of the project on existing development and the environment.
Broader processes, such as the floodplain specific management process (AEMI, 2013), often examine changes through scenario testing. These may be managed through floodplain, catchment or community based techniques as discussed in Book 1, Chapter 5, Section 5.
Risk based management approaches are generally more complex than the use of design
flood and level of service standards.
Design flood standards are generally aimed at limiting the frequency of exposure to flood risk. For example, the use of minimum floor levels for a building relative to a design flood level aims to reduce the exposure to flooding by excluding flooding from above the floor level of the building in the design flood event. This approach is based upon accepting the consequences that result from the building flooding in events larger than the design flood event.
Design flood standards are typically adopted across an entire floodplain, or government
service area. Generally, there is only limited ability to incorporate location specific issues into
the design flood standards.
When used in isolation, this approach makes the assumption that the residual risks
remaining after development is constructed to standards are acceptable. It also assumes
that:
• the development will not impact upon flood behaviour and therefore have an adverse
impact elsewhere in the community; and
• the impacts of flooding on the building and its occupants, including the associated residual
risks, are acceptable or can be managed by other means.
Where used in isolation this can limit the effectiveness of this approach and may lead to
decisions that leave individuals, communities and the built environment exposed to risks that
may be considered unacceptable to the community.
Design flood standards are also generally based upon existing floodplain, catchment and
climatic conditions, what can be called stationary conditions. However, these conditions can
change over time. Considerations in estimating design floods are discussed in Book 1, Chapter
5, Section 5 to Book 1, Chapter 5, Section 5. Further discussion on the factors that can lead
to non-stationarity and current literature on non-stationary risk assessment is provided in
Book 1, Chapter 5, Section 10.
AUSTROADS, the national association of road transport and traffic authorities in Australia, uses both the SLEP approach and the AEP method depending on the context.
“Freeways and arterial roads – should generally be designed to pass the 50 or 100 year ARI flood without interruption to traffic. However for arterial roads in remote areas, a reduced standard is commonly adopted where traffic densities are low” (Austroads, 1994).
“All bridges are to be designed so that they do not fail catastrophically during a flood that has a 5% chance of being exceeded during the Design Service Life of the structure. Assuming a 100 year Design Service Life, this equates to a flood with an ARI of 2000 years” (Austroads, 1994).
The SLEP approach may be more readily understandable for short term or temporary
structures (refer to Book 1, Chapter 5, Section 6). Similarly, where infrastructure may be
particularly susceptible to damage from overtopping (e.g. a bridge superstructure that is not
designed for inundation or a secondary spillway), then it may be important to understand the
likelihood of being exceeded during the structure’s effective service life (discussed in Book 1,
Chapter 5, Section 9).
• The road will not impact upon flood behaviour and have an adverse impact on the community;
• Impacts of flooding on the road are acceptable or can be managed by other means. For
example, the road is expected to overtop and the design allows for damage minimisation
or replacement in such events;
• Level of service provided is acceptable. For example, loss of access along the road is
expected during a flood event and this is considered in emergency management planning;
and
Similarly to design flood standards, the assumptions of this approach can limit its
effectiveness, particularly where it is used in isolation.
• for management of risks for non-standard and critical infrastructure where a broad design
flood standard may not be appropriate.
• Dam safety risks (ANCOLD, 2003), and design of spillways and outlets;
• Risk management decisions for projects with a relatively short time frame, such as
construction projects as well as temporary infrastructure (such as a coffer dam during
construction);
• Critical infrastructure that may result in significant consequences should they fail or be
inundated. This may include economically important infrastructure (for example
transportation routes such as those between ports and major economic hubs, trunk
communication networks (internet, phone) or key elements of the electricity network) or
emergency response infrastructure (e.g. hospitals, evacuation centres etc).
In undertaking a risk based design process to develop management strategies for non-
standard and critical infrastructure, it is important to understand the stakeholders involved in
the decision making process. In many cases, these stakeholders may have different risk
preferences (risk averse versus risk accepting). For example, a mining company may have a
different risk preference for inundation of a mine, versus the community preferences for flood
impacts downstream of the mine. It will be important to fully understand these different
preferences to assist in informing the appropriate mitigation measures that might be required
as the same likelihood and consequences could lead to a higher risk categorisation where
there is a risk averse rather than a risk accepting preference.
For example, in a particular instance the use of a design flood event as a standard for
development within a community may reduce the frequency of exposure of people and
property to flooding. However, additional management measures may be required to
address residual risks. For example:
• the degree of flood damage to new buildings built to design standards may mean that risks to
property remain high. This may result in consideration of the use of a larger design flood
event as a standard for development or other damage reduction approaches to broadly
reduce flood damages.
• limitations to the ability to effectively warn and evacuate the community during the available warning time may mean that risk to life remains high. Implementation of options to reduce risks to life, such as improved flood warning, emergency management planning and evacuation arrangements, may then be required;
• the effect development has on flood behaviour outside the development area may be
significant. This may result in the need to implement additional management measures, such as allowance in design for areas within the floodplain to continue to provide their essential flood conveyance and storage functions.
Best practice in flood risk management in Australia (AEMI, 2013) works towards the vision:
“Floodplains are strategically managed for the sustainable long-term benefit of the
community and the environment, and to improve community resilience to floods.”
Best practice promotes the consideration and, where necessary, management of flood
impacts to existing and future development to improve community flood resilience using a
broad risk management hierarchy of avoidance, minimisation and mitigation to: reduce the
health, social and financial costs of occupying the floodplain; increase the sustainable
benefits of using the floodplain, and improve or maintain floodplain ecosystems dependent
on flood inundation (AEMI, 2013).
Managing flood risk provides an informed basis for the effective and efficient use of scarce
resources to:
• Manage the growth in flood risk to the community due to the introduction of new
development into the floodplain; and
This enables investment to be focused on understanding and managing flood risk where the
need and benefit is greatest.
Different treatment solutions may be necessary depending upon the element at risk (people,
property and infrastructure) and the location. Treatment options may involve a combination
of flood mitigation, emergency management, flood warning and community awareness,
together with strategic and development scale land-use planning arrangements that consider
the flood situation and hazards.
Different options are also used dependent upon whether the aim is to manage risk to
existing or future development within the community.
For the existing development it is important to understand the current exposure of the
community to the full range of flooding, how the associated risks to different elements within
the community are currently being managed and whether changes would be required to
reduce risks to a more acceptable level. Where treatment options to reduce risks are being
considered the impacts these measures may have on flood behaviour need to be
understood and considered in decision making.
For flood risk to future development it is important to understand how the flood behaviour
varies across the floodplain so that the constraints that this may place on development can
be considered in deciding where to develop (and where not to develop), the types of
development that may be suitable in different areas, and the flood related development
constraints necessary to reduce risks to acceptable levels (in areas suitable for
development).
It is also important to consider how flood behaviour and the associated risk will change over
time due to development in the catchment and due to climate change and its impacts on
both sea level and the intensity of flood producing rainfall events (discussed in Book 1,
Chapter 6). Assessment of the impact of these changes on design flood estimates is discussed in Book
1, Chapter 5, Section 5 to Book 1, Chapter 5, Section 5.
More information on understanding and managing flood risk is available in AEMI (2013).
The use of historic flood event information in isolation without an understanding of the
potential range and severity of flood events at a location and an understanding of how this
may vary within a floodplain can result in poor management decisions – leaving the
community unsustainably exposed to risk.
Knowledge and experience of historic flood events provides a starting point for
understanding flood risk. Modelling historic flood events can assist to:
• Calibrate and validate models against known data and the community’s experience of
flooding;
• Better understand historic flood events by filling in gaps in our knowledge of flood
behaviour and its variation along a watercourse and across the floodplain; and
• Understand the probability of floods of the scale of historic flood events being exceeded in
future.
The consequences of historic flood events can also provide valuable information for
understanding flood risk. An appropriately calibrated and validated model can provide a
sound basis for updating of the model to current conditions in light of changes in the
catchment and floodplain since historic flood events.
• Understanding flood behaviour (flow paths, distribution, velocity, depth, level, timing and
length of inundation) and risk and how this varies across the floodplain, over the duration
of a flood event, and between flood events of different magnitudes;
• Understanding how flood behaviour, hazards and risks may change due to floodplain,
catchment and climate changes;
• Making decisions on the need for risk treatment, comparing and assessing treatment
options, and deciding on which options to implement; and
Design flood estimation for the full range of flood behaviour provides the basis for assessing
the frequency and severity of flood exposure of different parts of the floodplain and the
consequences of flooding to the community, providing a spatial understanding of:
• The variation in the flood functions of flow conveyance and storage within the floodplain
for key events. Areas with these functions are generally areas where change in
topography, vegetation or development can significantly alter flood behaviour which may
lead to detrimental impacts to the existing community;
• The variation in hazard across the floodplain for key flood events (refer to Book 6, Chapter
7). This can delineate where flood behaviour in events is hazardous to people, vehicles
and buildings (AEMI, 2014b); and
• The variation in flood evacuation difficulty from areas within the floodplain (AEMI, 2014c).
Outputs from design flood estimation and flood risk management processes are essential in
informing government and industry through input to information systems. This can improve
the accessibility of information on flood risk so it can be considered in investment and
management decisions by government, industry and the community.
The significance of these impacts can be assessed by altering calibrated and validated
models of existing conditions to allow for changes and assessing the associated impacts.
Management of impacts can lead to modifying the design of the infrastructure in
consideration of its impacts upon the community or examination of ways to offset impacts
upon the community.
The natural flood functions of flow conveyance and flood storage occur in flow conveyance
and flood storage areas. Filling of flow conveyance areas can impact upon upstream flood
levels and can result in redistribution of flows, with the potential for new flow paths being
activated, affecting other areas. Filling of flood storage areas can affect both upstream and
downstream flood levels.
Any decision to modify the flow conveyance and flood storage characteristics of the
floodplain needs careful consideration as the ramifications can be significant. The
significance of these impacts can be assessed by altering calibrated and validated models of
existing conditions to allow for changes and assessing the associated impacts. Management
of impacts can lead to modifying the allowable changes in these areas in consideration of their
impacts upon the community.
Flows at a downstream point in the catchment can also change significantly with upstream development within the catchment, even where flow conveyance and flood storage areas maintain their essential flood functions. These changes can occur due to increases in impervious area and to flow paths being shortened or made more hydraulically efficient. This can reduce losses, leading to a higher proportion of rainfall running off, and reduce the time of concentration of flows within the catchment.
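As a purely illustrative sketch of the effect described above (and not an ARR design procedure), the following compares pre- and post-development peak flows using a rational-method style calculation; the runoff coefficients, rainfall intensities and catchment area are hypothetical.

```python
# Illustrative rational-method style comparison of peak flows before and after
# an increase in impervious area. All values are hypothetical.
def rational_peak_flow(c_runoff, intensity_mm_per_h, area_ha):
    """Q = C i A, returned in m3/s (i in mm/h, A in hectares)."""
    return c_runoff * intensity_mm_per_h * area_ha / 360.0

area_ha = 120.0
pre  = rational_peak_flow(0.35, 45.0, area_ha)   # longer time of concentration -> lower intensity
post = rational_peak_flow(0.65, 60.0, area_ha)   # more impervious, shorter tc -> higher intensity
print(f"Pre-development peak ~ {pre:.1f} m3/s, post-development peak ~ {post:.1f} m3/s")
```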
Assessment of the potential cumulative impacts of such broad changes is best undertaken in
community rather than development scale flood investigations. It can provide the basis for
understanding the relative significance of this change and where considered significant,
assessing options to offset these impacts on a catchment basis. For example impacts on
peak flood flows may, in some cases, be able to be managed using centralised or strategic
scale basins or distributed (development site related) treatment measures.
Sea level rise will directly alter the ocean conditions that can influence flood behaviour in
coastal waterways. Any rise in ocean conditions will directly translate to an increase in any
relevant ocean boundary condition that is used in flood studies and will influence both the
scale and balance of interaction of oceanic inundation with catchment flooding. Book 6,
Chapter 5 provides information on the potential for coincidence of oceanic inundation and
catchment or river flooding. Other guidance, such as OEH (2015) for NSW, may also be
relevant and need consideration in particular jurisdictions.
The significance of these impacts on flood behaviour can be assessed by altering calibrated and validated models of existing conditions to allow for changes in flood producing rainfall events and the coincidence of oceanic inundation. Understanding these impacts can lead to
an understanding of where they occur and whether management may be necessary.
Management may involve strategies to allow for changes upfront or strategies that allow for
adaptation over time.
Climate change may also have influence on the effective service life of infrastructure. This is
discussed in Book 1, Chapter 5, Section 9.
Land use planning systems often use a flood standard as a basis for many flood related
controls and decisions. These systems may also consider changes in climate and the
influence development within the catchment will have on flood behaviour. These
considerations are often made in addition to existing conditions to provide an understanding
of how changes may impact upon flood behaviour and the existing community.
Systems may also require consideration of larger or extreme flood events to examine
whether additional development constraints are necessary to deal with residual risks to the
new development, particularly risk to life.
As new development on the floodplain can impact upon flood behaviour and the flood risk
faced by the existing community, land use planning systems generally require these impacts
to be assessed and managed.
Developing on the floodplain places the new development and its occupants at risk from
flooding. These issues need to be considered in setting strategic directions for the
community and determining development constraints.
Design flood estimation provides essential information for understanding constraints (see
Book 1, Chapter 5, Section 5) that need consideration in setting strategic land use planning
directions for the community, including:
• Information to inform decisions on where to (and where not to) develop and the limits on
what type of development to place in different areas of the floodplain. For example,
development within a flow conveyance area may have significant impacts upon flood
behaviour or cause significant damage to structures. Development in this area should be
restricted to enable the flow conveyance area to perform its natural flood function. A
further example is an area with evacuation issues that is classified as flooded, isolated and submerged (AEMI, 2014b), which would not necessarily be an appropriate area for a development whose occupants may be vulnerable in emergency response and therefore difficult to evacuate, e.g. aged care facilities or a hospital;
• The assessment of the cumulative impacts of development within the catchment and
floodplain on flood flows and behaviour. For example, the assessment of the cumulative
impacts of development of the catchment can enable the examination of catchment scale
solutions to offset the impacts of development on flood flows in an efficient manner. Such
solutions may include a single basin or a series of basins whose interaction is considered (see Book
1, Chapter 5, Section 8); and
For individual development proposals design flood estimation can advise on the:
• Limits on the scale of development to limit impacts on the existing community; and
• Development conditions necessary to manage residual risk to the new development and
its occupants and any impacts of the development upon the existing community.
Site specific studies do not generally provide advice for setting strategic land use direction of
the community.
Figure 1.5.3 shows the area affected by the design flood event used to set design standards
for developments in the area. It does not provide any breakdown of the floodplain to highlight
the varying flood function within this area, the varying degrees of hazard, or the differences
in emergency response classification within the floodplain. Nor does it provide information on
more frequent or more extreme floods. It therefore provides limited information for effective
management.
Using the information from Figure 1.5.3 alone, Locations A, B, C and D appear to be
exposed to the same degree of risk. The availability of this limited information would, most
likely, result in the same development restrictions being applied to each location.
However, Figure 1.5.4 to Figure 1.5.6 show the same floodplain but with information on how
flood hazard, flood function and the flood emergency response classification varies across
the floodplain. This information shows the flood situations impacting upon Locations A to D
to be different and may require different management treatments.
• Location A is easy to evacuate, lies outside areas with a flood function, and flood conditions there are not hazardous to buildings;
• Location B is in a flow conveyance area and development may impact upon flood
behaviour and the flood risk of others in the community;
• Location C is isolated and completely inundated by larger floods and has a more difficult
flood evacuation situation; and
If these additional risk factors are considered in setting development constraints, flood risk can be managed more effectively at each location, reducing the residual risks that remain. Mapping the constraints from Figure 1.5.4 to Figure 1.5.6 can provide more clarity on where to apply different development controls, as shown in Figure 1.5.7.
Figure 1.5.6. Map of Flood Extents and Flood Emergency Response Classification
The additional information provided in Figure 1.5.4 to Figure 1.5.7 identifies additional risk
factors in different locations without which:
• The need to consider these additional risk factors in decision making may not be evident;
and
• Where it has otherwise been recognised that these additional risk factors may need to be
considered in decision making it is likely that any associated constraints would be applied
broadly across the floodplain. This would require studies for individual developments to
determine and address these risk factors.
The additional information provided in Figure 1.5.4 to Figure 1.5.7 has the added benefit of providing improved clarity for development conditions by enabling these to more effectively inform land use planning systems. McLuckie et al. (2016) discusses the extension of this work as part of the development of best practice guidance on flood information to support land use planning being developed by the National Flood Risk Advisory Group and expected to be released in the second half of 2016.
Design flood estimation can provide the understanding of flood behaviour, and the drivers for
this behaviour, across a range of flood events to support management of the flood risk. It can
be combined with other information to:
• Assess the consequences of flooding on the community and the natural and built
environment. One quantitative measure of consequences is the estimation of flood
damages. This section provides an example of the use of flood damage estimation in flood
risk management;
• Assess the impacts of floods on community infrastructure such as electricity, water supply,
the sewerage system, medical facilities and emergency management infrastructure
(evacuation routes and centres), and provide information to consider in recovery planning
for the community; and
• Examine the effectiveness of treatment options to reduce this risk where warranted.
In this example, for events more frequent than a 10% AEP event the consequences to people and the community are insignificant and therefore risks are low. However, the consequences for property are minor and therefore risk is medium. Consequences to people are major for floods rarer than the 10% AEP flood event. Consequences to property rise to major in unlikely, rare, very rare and extremely rare floods. Impacts upon the community and its supporting infrastructure are moderate for events rarer than the 10% AEP event and do not reach major levels.
Figure 1.5.8 shows the risks to people and property for events between a 10% and 0.01%
AEP event. The risk to the community is low except in floods between a 10% and 0.01%
AEP event, where it is medium.
A range of methods are used to derive damages for individual properties. These include:
• Rapid assessment techniques - which rely on flood extents to determine the number of
properties of different development types (residential, commercial and industrial) affected
and apply a fixed damage per property.
• Techniques based upon the use of stage damage curves - for different development types
and in some cases styles and sizes of buildings (an example for residential buildings
derived from DECC (2007) is provided in Figure 1.5.9). This technique provides for
variation in damage to structures and yards and their contents with depth above ground
level and structure floor level. Approaches such as this allow for the use of representative buildings within the floodplain, whereas other approaches may require the style and size of houses to be determined and individual buildings to be considered in more detail.
Figure 1.5.9. Indicative Stage Damage Curve for some Residential House Types
Note: There are many different stage damage curve relationships for residential
development. Different damage relationships also exist for different types of development
(e.g. commercial and industrial). These may be determined based upon the use of historical
damage, insured loss information or based upon building component damage at different
depths. However, no definitive set of curves exists and work in this area continues to evolve.
Commercial and industrial damages can be very complex given the changing nature of the
occupation of individual sites. The damage to the structure of the building will not generally
change significantly with use but the contents damage can vary significantly. For example,
the same light industrial storage area could house aluminium cans for recycling or computer
components for assembly and therefore the damages to contents due to flooding would vary
greatly.
Assessment based upon stage damage curves requires information on flood extents to
determine which properties are affected and flood levels. This information can be used with
location, ground level and structure floor level information (determined using survey or
approximation methods) to estimate damages at an individual site. These can then be
aggregated to estimate damages to a community or area.
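The following sketch illustrates, under assumed values, how a stage damage relationship can be applied at individual properties and the results aggregated. The depth and damage points are hypothetical and are not taken from Figure 1.5.9 or DECC (2007).

```python
import numpy as np

# Hypothetical stage-damage curve: depth above floor level (m) vs damage ($).
# Values are illustrative only; they do not reproduce Figure 1.5.9.
depth_pts  = np.array([-0.5, 0.0, 0.5, 1.0, 2.0])      # depth of flooding above floor (m)
damage_pts = np.array([0.0, 10e3, 40e3, 65e3, 90e3])   # damage per property ($)

def property_damage(flood_level, floor_level):
    """Interpolate damage from the depth of flooding above the floor level."""
    depth = flood_level - floor_level
    return float(np.interp(depth, depth_pts, damage_pts))

# Aggregate over a set of (flood level, floor level) pairs for affected properties.
properties = [(10.4, 10.1), (10.4, 10.6), (10.4, 9.8)]
total = sum(property_damage(level, floor) for level, floor in properties)
print(f"Aggregated damage ~ ${total/1e3:.0f}k")
```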
Figure 1.5.10 provides an example of an aggregated flood damage curve across the full
range of flood events for a community. This provides a quantitative understanding of the
impacts of flooding upon the community and the built environment. It can also provide a
baseline for considering the benefits of management options or infrastructure projects. Each
point on the flood damage curve has a probability of exceedance in any given year.
Examining this curve shows that there is no damage in a 20% AEP event with damage in the
0.5% AEP event being approximately $20 Million.
To use this information for flood risk management, particularly when examining the benefits
of management measures, this information needs to be translated into an Annual Average
Damage (AAD). This is achieved by determining the area under the curve. Book 1, Chapter
5, Section 12 provides an example of calculation of Annual Average Damages based upon
the flood damage curve in Figure 1.5.10.
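A minimal sketch of this calculation is shown below: the Annual Average Damage is approximated as the area under the damage versus exceedance probability curve using trapezoidal integration. The damage figures are hypothetical and are not the values behind Figure 1.5.10.

```python
import numpy as np

# Hypothetical damage-probability pairs (AEP as a fraction, damage in $M);
# illustrative only, not the data behind Figure 1.5.10.
aep    = np.array([0.20, 0.10, 0.05, 0.02, 0.01, 0.005, 0.0001])
damage = np.array([0.0,  2.0,  6.0,  12.0, 16.0, 20.0,  35.0])

# AAD is the area under the damage vs exceedance-probability curve
# (trapezoidal integration, ordered from low to high probability).
order = np.argsort(aep)
aad = np.trapz(damage[order], aep[order])
print(f"Annual Average Damage ~ ${aad:.2f}M per year")
```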
Figure 1.5.10. Example of Flood Damage Curve for a Range of AEP Flood Events
Comparing this information to the existing flood situation can indicate changes in flood
behaviour, and in combination with other information, changes in the consequences of
flooding on the community. Flood extents, flood function, flood hazard and emergency
response classification and damages may alter for specific areas and different design flood
events. Where changes in behaviour are significant there are likely to be areas where flood
impacts are reduced and other areas where they may be increased. These changes in
consequence can be used to assess the benefits and costs to the community and the
limitations of the treatment option.
Section 9.4 and Table 9.3 of AEMI (2013) outline some of the issues that should be
considered when selecting and comparing treatment options. The benefits and costs of
treatment options may be assessed individually as well as in combination with complementary
measures as it is rare for a single treatment option used in isolation to effectively manage
flood risk to a community. For example, a levee may be built to reduce flood damage in a
town in combination with a flood warning system to provide additional warning and upgraded
evacuation routes to improve community safety during floods.
One quantitative way of determining the financial efficiency of the project involves
understanding the benefits in reduction in flood damages and comparing this to the costs of
achieving and maintaining this benefit. A reduction in flood damages can be assessed by
determining the reduction in Annual Average Damages and exposure of the community to
flooding with treatment options in place. For example the use of minimum floor levels based
upon the 1% AEP flood for new development, or a levee designed to exclude a 1% AEP
flood from an existing flood affected area will reduce flood damages for events up to but not
exceeding the design flood event (in this case 1% AEP event). However, the consequences
of floods rarer than the design floods may not change significantly and there may still be
substantial impacts upon the community.
Annual Average Damages calculated across the full range of flood events provides a sound
basis for understanding the financial benefits and limitations of the project so this can be
considered in decision making and enables the calculation of Annual Average Benefits.
Figure 1.5.11 provides an example of the estimation of the financial benefits of a treatment
option. It shows the damage curve for the same flood situation as shown in Figure 1.5.10 but
both without any treatment and with a treatment option in place.
In this example the treatment option is a levee. The aim of the levee is to reduce flood
damages and the frequency of community exposure to flooding and associated risks for
events up to the design flood event, in this case the 1% AEP event. Whilst there are some benefits for rarer floods, these diminish quickly. In a 0.2% AEP
event the damages with and without the treatment options would be the same.
Figure 1.5.11. Example of Flood Damage With and Without Treatment Options for a Range
of AEP Flood Events
The reduction in Annual Average Damages (AAD), or the Annual Average Benefit (AAB), can
be used to determine the net present value of the benefits. An example is provided in Book
1, Chapter 5, Section 12.
This can then be compared with the Net Present Value (NPV) of life cycle costs of the
treatment options to determine the Benefit Cost Ratio (BCR), which provides a measure of
the financial efficiency of the project. Book 1, Chapter 5, Section 12 provides an example of
estimation of Net Present Value of life cycle costs.
Book 1, Chapter 5, Section 12 provides an example of estimation of the Benefit Cost Ratio.
Lifecycle costs and lifecycle benefits for individual years are shown in Figure 1.5.12.
Figure 1.5.13 shows the same figures adjusted to current day dollars assuming a 7% discount rate.
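The following sketch shows one way the present value of benefits and costs, and the resulting Benefit Cost Ratio, might be computed assuming a constant annual benefit equal to the AAD reduction and a 7% discount rate. All dollar values, the 50 year evaluation period and the cost assumptions are hypothetical.

```python
# Sketch: present value of annual benefits and costs and the Benefit Cost Ratio.
# Dollar values, evaluation period and cost structure are hypothetical.
def present_value(cash_flows, rate=0.07):
    """Discount a list of annual cash flows (year 1, 2, ...) to today's dollars."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

years = 50
annual_benefit = 1.2e6          # assumed reduction in AAD with the levee in place ($/yr)
capital_cost = 10.0e6           # assumed year 1 construction cost ($)
annual_maintenance = 0.1e6      # assumed ongoing cost ($/yr)

npv_benefits = present_value([annual_benefit] * years)
npv_costs = present_value([capital_cost + annual_maintenance] + [annual_maintenance] * (years - 1))
bcr = npv_benefits / npv_costs
print(f"NPV benefits ${npv_benefits/1e6:.1f}M, NPV costs ${npv_costs/1e6:.1f}M, BCR {bcr:.2f}")
```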
The Benefit Cost Ratio calculated can be used in conjunction with consideration of other
benefits, such as reduction in risk to life, reduction in the impacts upon community function
and infrastructure and with similar information for other treatment options (including those
providing protection for different AEP events) to inform decisions on managing risk.
Figure 1.5.13. Example of Lifecycle Benefits and Costs Adjusted to Today's $ Using a 7% Discount Rate
Figure 1.5.14 re-examines the risks identified in Figure 1.5.8 to highlight how these have
changed through the implementation of treatment options to provide protection for the 1%
AEP event. This example shows a reduction in risk to property from a maximum of high to a maximum of medium in events rarer than the 1% AEP event, and to low in events more frequent than the design event.
However, risk to people is still high due to the impacts of events rarer than the 1% AEP
event. This may warrant additional treatment options being considered which, depending
upon why this risk remains high, may include flood warning systems, improved emergency
management planning and improvements to evacuation routes.
Note: there is no change to the risks in extreme events. The risk to property has been
reduced in rare events due to the reduction in damages as a result of risk management
measures.
• May impact upon flood behaviour with detrimental impact to others in the community.
In many cases, a design flood standard may not be available or appropriate, and a risk
management approach as described in Book 1, Chapter 5, Section 4 may need to be
undertaken. A general overview of some of the issues to be considered is included in Book
1, Chapter 5, Section 6 to Book 1, Chapter 5, Section 6.
Some of these projects, or related infrastructure construction activities, can be considered short term projects due to their short term exposure to risk. Assessment of the risks
associated with short term projects is discussed in Book 1, Chapter 5, Section 6.
In the same way that short term projects need special consideration, potential changes over
the effective service life may need to be considered specifically for longer term infrastructure.
Effective service life is discussed in Book 1, Chapter 5, Section 9, while potential
implications of changes over the life of the project are discussed in Book 1, Chapter 5,
Section 10.
5.6.1. Mines
Mines developed in the floodplain may require levees or similar flood mitigation measures.
These levees need to be designed to an appropriate level of flood immunity and may also
have an impact on flood levels outside the levee. The risk and potential damage caused by
flood inundation both inside and outside the mine needs to be analysed, with a similar
process to that used for community development discussed in Book 1, Chapter 5, Section 5.
• Risks associated with inundation of the mine and its operations. The risk for mining may
be from flooding of the mine pit, emplacements, infrastructure, machinery or underground
workings, which may disrupt production, damage equipment and result in a risk to life; and
The key difference between the two elements is that there will generally be two distinct
groups of stakeholders in the risk assessment. These different stakeholders may have
different risk profiles and this will need to be considered as a part of the assessment. Risks
associated with the inundation of the mine will typically be associated with the mining
company, which may also incorporate workers’ unions and insurance companies. Risks
associated with the community may be associated with government, the local community,
community interest and environmental groups.
As a result of these different stakeholders, the risks and associated risk profiles may need to
be considered separately. There are also unlikely to be specific design flood standards associated with inundation of the mine, as the design may be driven more by acceptable closure
periods etc. Therefore, a full risk assessment (Book 1, Chapter 5, Section 4) may be
required to derive appropriate management measures.
5.6.2. Agriculture
Flooding in agricultural regions can affect crops, livestock and infrastructure. Crops may be damaged either directly by floodwaters or by extended periods of inundation (where crops may withstand short term but not longer term inundation). Livestock may be lost if unable to be relocated to areas outside flood limits.
In order to determine appropriate risk mitigation measures for agriculture, the specific
implications for livestock and crops need to be considered, and the risk assessment will
need to incorporate these factors. In addition, agricultural infrastructure such as irrigation
pipes, fences, buildings and machinery may be damaged and these can have significant
value. Book 1, Chapter 5, Section 4 provides guidance on determining appropriate mitigation
measures incorporating some of these different factors.
In some areas of high value agriculture, the farm land may be protected by levees (or other
infrastructure) and these raise similar issues to levees built for other flood mitigation purposes. The appropriate protection level of this infrastructure would be based on a risk
assessment considering the above factors, as well as potential impacts to the community
upstream and downstream.
The key stakeholder groups for undertaking a risk assessment may include:
• The farming operation(s) who will be directly impacted by the flooding; and
It also needs to consider the intended function of a road during a flood event, particularly
where it has an important role in community evacuation or recovery plans. Design of
embankments associated with these structures requires analysis of the road or rail level as
well as the sizes, locations and types of waterway openings. This is to ensure an acceptable
level of flood immunity, duration of closure and damage from floods as well as an acceptable
impact on upstream flood levels. Discussion of flood assessment and flood risk for road and
rail projects can be found in the Austroads Guide to Road Design – Part 5 (Drainage)1.
• Construction projects. For example, a coffer dam protecting an excavated area for a
period of 6 months;
• A planned festival or community event in the floodplain, which occurs over a 2 day period;
and
• Short term mining operations. For example, a levee to protect a portion of a quarry for a
period of 3 months.
With short term projects, it is particularly important to understand the likelihood component of
the risk assessment as well as the effective service life (refer Book 1, Chapter 5, Section 9).
Flood design standards are typically developed for long term projects and are based on an
assumed effective service life that is generally many years. For example, a residential house
might have an effective service life of 50 years. Therefore, when a 1% AEP design flood
standard is adopted for the floor level, for example, that is equivalent to an approximate 39%
chance that the floor level will be exceeded during the effective service life (using a SLEP
terminology, as identified in Book 1, Chapter 5, Section 4).
However, if a 1% AEP flood design level is adopted for a coffer dam for excavation for an effective service life of 6 months (i.e. a 6 month construction period), then the chance that it will be exceeded will be roughly 0.5% during its service life. Therefore, assuming that the consequences remain the same, the risk profile is significantly more conservative. If a 39% chance of exceedance during its effective service life was assumed to be more appropriate, then that would be equivalent to somewhere between a 50% and a 100% AEP event.
1http://www.austroads.com.au/road-construction/road-design/resources/guide-to-road-design
Therefore, a SLEP approach to flood design standards and flood levels can be more readily
understandable for short term infrastructure.
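The probabilities quoted above can be reproduced with the standard relationship between AEP and the chance of exceedance over a service life, assuming independent annual exceedances; the sketch below is illustrative only.

```python
# Sketch: probability of at least one exceedance during the service life,
# assuming independent annual exceedances at a constant AEP.
def slep(aep: float, service_life_years: float) -> float:
    """Service Life Exceedance Probability for a given AEP and life (years)."""
    return 1.0 - (1.0 - aep) ** service_life_years

print(f"{slep(0.01, 50):.0%}")   # ~ 39% for a 1% AEP standard over a 50 year life
print(f"{slep(0.01, 0.5):.2%}")  # ~ 0.50% for the same standard over a 6 month life
```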
However, a full risk assessment will typically be required to understand all the likelihoods
and consequences and therefore the risks. For example, the risk to life for a coffer dam may
be significant where sufficient warning is not available.
The dams industry has used risk assessment over the past two decades as a valuable
means to establish upgrade priorities and justify the urgency of completing dam safety
actions in a transparent and rational manner. However, the national ANCOLD guidelines only
support risk assessment as an enhancement to traditional standards-based solutions for
important and conclusive decision making. The guidelines are currently being revised, and it
is likely that there will be an increased focus on the use of risk-based criteria for final
decision making in accordance with changing practice (for example, NSW Dam Safety
Committee, 2006).
One of the key differences between dam safety management and floodplain management is
the probability domain of interest: the scale and nature of life safety risks posed by dams are
generally considerably greater than encountered in floodplain management. The tolerable
risks associated with these potential consequences are three to four orders of magnitude
rarer than those associated with natural floods. The concept of annualising a high
consequence risk based on a 10⁻⁶ loading condition is mathematically straightforward, but
such analyses are not easily combined with more common risks and have little practical
utility.
Accordingly, risk assessment for dam safety is focussed on reducing the risks to life (and
property) to as low a level as reasonably practicable. It is unusual for dam safety decisions
to be governed by the need to balance the costs of upgrading works against damages
avoided, and more typically such decisions are dominated by life-safety considerations.
The most relevant guidance on dam safety in this document is provided in Book 8. This
provides guidance on the procedures most relevant to the extreme flood risks of interest. It
also includes procedures relevant to estimation of the Probable Maximum Flood, which
represents the upper limiting magnitude of flood and is relevant to standards-based decision
making.
Detention basins are commonly used to offset the impacts of upstream development on downstream flood risk. A basin may be
used in isolation or as part of a series of basins within a catchment to reduce peak design
flood flows and risks for the design event(s) at key downstream locations. The design
performance requirement is therefore generally either:
• To reduce peak flows for a certain design event(s) to a certain maximum amount. For
example, for a basin designed to offset the impacts of upstream new development this
may be the pre-development peak downstream flow. For a basin designed to reduce
downstream flood impacts this may be to reduce peak basin discharges to a level that
reduces flood impacts on the downstream community to an agreed level; and
• To maximise the potential benefit of a basin at the location on downstream flood behaviour
to reduce impacts on the community.
An effectively designed basin has to balance restriction of outlet capacity with having
available storage capacity near the peak of a flood event. This enables the peak of flood
flows to be stored and the stored volume discharged later in the event, as illustrated in the
example in Figure 1.5.15.
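The storage and release behaviour described above can be illustrated with a very simple level pool routing calculation. The sketch below routes a triangular inflow hydrograph through a basin with a depth dependent outlet; the basin geometry, outlet rating and hydrograph are hypothetical and the method is a simplification of the routing approaches used in practice.

```python
import numpy as np

# Minimal level-pool routing sketch. Basin area, outlet rating and the inflow
# hydrograph are hypothetical and for illustration only.
dt = 60.0                                                    # time step (s)
t = np.arange(0.0, 12 * 3600.0, dt)                          # 12 hour window
inflow = np.interp(t, [0.0, 1.5 * 3600.0, 6 * 3600.0, 12 * 3600.0],
                      [0.0, 40.0, 0.0, 0.0])                 # triangular inflow (m3/s)

area = 50_000.0                                              # assumed basin plan area (m2)
c_outlet = 5.0                                               # assumed outlet rating coefficient

storage = 0.0
outflow = np.zeros_like(t)
for i in range(1, len(t)):
    depth = storage / area
    outflow[i] = c_outlet * max(depth, 0.0) ** 0.5           # depth-dependent outlet discharge
    storage += (inflow[i] - outflow[i]) * dt                 # continuity: dS = (I - O) dt

print(f"Peak inflow {inflow.max():.1f} m3/s, peak outflow {outflow.max():.1f} m3/s")
print(f"Outflow peak lags inflow peak by {(t[outflow.argmax()] - t[inflow.argmax()]) / 3600:.1f} h")
```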
This can significantly alter the critical storm duration with the peak flood flow entering a basin
likely to be derived from a shorter duration storm than the peak discharge flow from the
basin. Storm pattern can also have a significant impact on basin operation and the storage
volume available when the peak of flood flows arrives. For example, if the peak arrives later
in the storm the available storage volume may be lower and therefore the basin may have
less impact on downstream peak flows.
In addition, critical storm durations are also likely to vary with the AEP of the flood being
modelled. It is not unusual for the peak basin discharge in a more frequent flood than the
design event to occur in a longer duration storm event as storm volumes are lower and the
basin storage will have more impact upon peak discharge. However, for events larger than
the design event the opposite is true. There is likely to be less storage volume available at
the storm peak so it will have less influence on peak basin discharge. Therefore the critical
downstream discharge from rarer events than the design event will likely occur from shorter
duration storms. For extreme events this is likely to be closer to the critical storm duration at
the location without the basin.
Therefore, with a basin in place the peak downstream flood flow is sensitive to the storm duration, the storm temporal pattern and the routing of flows through the basin. As such, basin design can be particularly sensitive to both the storm temporal pattern and the critical storm duration.
Robust design approaches for basins that test and consider a wide range of storm durations
and a range of potential variations of storm pattern for each of these storm durations are
recommended for the full range of flood events. Variations in storm pattern should include
testing of early, centrally and late weighted storm patterns for the same time duration to test
whether the basin can meet the required design criteria with this variation.
Other key points to consider in modelling and designing detention basins include:
• Considering the impacts of events larger than the design event - The basin will generally
be designed to reduce flood flows in a particular design event. However, in the majority of
cases it is unlikely to have significant impacts on peak flows in extreme events. This can
mean that there is a significantly larger difference between the extreme and design event
flows entering and discharging the basin. Figure 1.5.16 provides an example. This
situation is likely to result in a faster rate of rise of downstream flood levels for events
larger than the design event (as these events result in high level spillway operation) than
would have occurred without the basin. It is essential that this difference is understood and
considered in basin and high level outlet design to manage the flood risk downstream of
the basin. Residual risk downstream of the basin, including any limitations in emergency
response and associated planning, needs to be considered and may require additional
management measures including flood warning, community awareness and flood related development controls;
• Upstream impacts of basins - The construction of a basin can also have significant
upstream impacts on flood behaviour and these are an important consideration in the
design of a basin. This should be examined for the full range of flood events to ensure that
any impacts on upstream flood risk and the management of this risk (including emergency
management planning) are understood;
• Detention basins act as dams during flood events - Therefore, basin design needs to
consider dam safety aspects as discussed in Book 1, Chapter 5, Section 7 and Book 8;
and
• The use of multiple basins in a catchment - Where multiple basins are designed to provide
more strategic benefits, i.e. away from the downstream boundary of their individual locations, they should be designed on a catchment wide basis to ensure their interaction
does not result in adverse impacts upon flood behaviour. Use of multiple basins in a
catchment without consideration of interaction has the potential to result in adverse
impacts on flood behaviour.
Figure 1.5.16. Example of Difference in Impacts of a Basin on Flood Flows in a Design Event
Compared to an Extreme Event
• Economic Service Life - The total period to the time when the asset, whilst physically able
to provide a service, ceases to be the lowest cost option to satisfy the service requirement;
• Design Service Life (DSL) - The total period an asset has been designed to remain in use;
or
• Effective Service Life (ESL) - The total period an asset remains in use, regardless of its
Design Service Life.
Currently most guidelines are based around evaluating design service life. However, the
difference between the ESL and DSL can be significant and should be recognised in risk
assessment of a proposal (Figure 1.5.17).
ESL can be enhanced by factors which increase life such as maintenance, or diminished
due to factors that reduce life such as significant weather events.
Figure 1.5.17. Design Service Life versus Effective Service Life (derived from United States Environment Protection Agency, 2007)
It is considered that a number of factors can influence how long infrastructure remains in use relative to its design service life, including:
• Magnitude of infrastructure – very large infrastructure projects are more likely to remain in
place longer than smaller projects (e.g. bridges, dams), due to the difficulties in replacing
them;
• Integrated development – where the project forms part of a broader piece of infrastructure
or on-going service there may be economic incentives to continue utilisation of the project,
particularly where a change to one component would require a change to others (e.g. road
alignment – a road may be reconstructed/ rehabilitated over time, but due to other
constraints, will not be able to be changed in terms of elevation or geometry).
The rate of degradation of construction material (e.g. metal, plastic pipes) will also vary with
circumstance (e.g. saline vs non-saline conditions). Maintenance and rehabilitation of
infrastructure may also seek to extend a project’s expected service life (for example, a lining
installed in a stormwater pipe).
Table 1.5.2 summarises some of the typical life expectancies (and range in life expectancies)
for various infrastructure types. Table 1.5.2 shows that the range within and between infrastructure types is high. Within Australia, data for long-lived assets is limited as the majority of
the infrastructure has not yet reached its effective service life.
For projects which have a relatively short design / effective service life (e.g. less than five
years), it may be reasonable to assume the risk profiles faced are static for the duration of
the project as the likelihood or magnitude of changes to any risks may be negligible in
comparison to the overall risk level. In contrast, longer life-span projects will be exposed to a
higher level of non-stationary risk which could be considered in design.
“In many cases, flood studies reflect current conditions at best, and more likely past conditions since the studies often rely on old data … flood risk criteria used to site and design a project should rely on conditions the location is likely to experience during the project’s lifetime, not past or current conditions.” (Floodplain Regulations Committee, 2010).
• Seasonality - seasonality can alter the statistical likelihood of flood events occurring at different times of the year. For instance, some catchments are prone to flooding associated with summer climatic conditions;
• Climatic Variability - Various weather patterns such as El Niño influence the likelihood of
flooding. El Niño events have a life-cycle during which the impacts vary, both in terms of
spatial extent and timing;
• Climate Change - Fundamental changes in the climate will alter the likelihood of flooding.
This is discussed in Book 1, Chapter 5, Section 5 and Book 1, Chapter 6; and
• Changes within the catchment and floodplain that alter flood flows and flowpaths. This is
discussed in Book 1, Chapter 5, Section 5.
• Land-use change – change to a more vulnerable land use or to the degree of exposure of development to flooding.
• Changes to the community exposed to risk through long term or seasonal changes in the
total population or its demographics. For example increases in population at holiday
destinations and during festivals. A change in community demographics to include a
higher proportion of people more vulnerable in emergency response can lead to increased
consequences due to a flood.
• Level of flood awareness / education in the community. The higher the level of community awareness of flooding, the more resilient the community will be to flood risk, due both to their understanding of the need to respond, and how to respond, to a flood, and to the knowledge that they may need to take measures, such as having flood insurance, to address some of their residual risk to flooding.
• Altered risk profiles – Individuals’ tolerance for risk varies with their past and recent experience; community attitudes to flood risk can vary significantly before and after a flood event; or
• Risk discounting – The value of a risk realised at a future time is typically considered of
less significance than a risk realised at a current time. The rate at which this is applied
may vary over time; typically the discount rate used reflects the economic discount rate in
financial systems.
5.10.4. Literature
There is a range of literature that discusses how changes over time (non-stationarity) can be incorporated in risk assessments. Much of the work to date has occurred in the academic space (such as Rootzen and Katz (2013), Åström et al. (2013) and Salas and Obeysekera (2013)). This work indicates that there is potential for non-stationary models to be incorporated into design considerations and that the scale of catchment change may be of sufficient magnitude in some catchments to warrant consideration in design criteria. However, the costs of developing such models and assessments are likely to be prohibitive and unnecessary for the majority of flood-affected infrastructure, so the use of traditional static risk profiles remains the more appropriate form of assessment.
For example, a method for incorporating design flood standards and design life into risk
assessments is presented in Rootzen and Katz (2013). The paper proposes two methods to
quantify risk for engineering design in a changing climate:
• The Design Life Level aims to achieve a desired probability of exceedance (or risk of
failure) during the Design Service Life. This method is a SLEP approach (Book 1, Chapter
5, Section 4); or
• The Minimax Design Life Level is closely related, and complementary, but instead focuses
on the maximal yearly probability of exceedance during the Design Service Life. This
method is an AEP based approach.
The Design Life Level uses a Generalised Extreme Value (GEV) cumulative distribution
function (cdf) to represent the extremes in year t; with the location and scale parameters
increasing with t (the shape parameter is held constant), the exceedance probability changes
through time. The example likens the increase in the location parameter to a possible increase
in water level, and the increase in the scale parameter to an increase in climate variability.
Another parameter is also introduced, the Expected Waiting Time (EWT) - the amount of time
until a particular level u is exceeded.
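The following is a minimal sketch (in Python) of the Design Life Level idea under the assumption of a GEV distribution with linearly drifting location and scale parameters; all parameter values are illustrative and are not drawn from ARR or from Rootzen and Katz (2013).

import math

def gev_cdf(x, loc, scale, shape):
    # GEV cumulative distribution function, F(x) = P(annual maximum <= x).
    if abs(shape) < 1e-9:                       # Gumbel limit as shape -> 0
        return math.exp(-math.exp(-(x - loc) / scale))
    t = 1.0 + shape * (x - loc) / scale
    if t <= 0.0:
        return 0.0 if shape > 0 else 1.0        # x lies outside the distribution support
    return math.exp(-t ** (-1.0 / shape))

def design_life_exceedance(u, years, loc0, scale0, shape, loc_trend=0.0, scale_trend=0.0):
    # Probability that level u is exceeded at least once during the Design Service
    # Life, with location and scale drifting linearly and shape held constant.
    p_no_exceedance = 1.0
    for t in range(years):
        p_no_exceedance *= gev_cdf(u, loc0 + loc_trend * t, scale0 + scale_trend * t, shape)
    return 1.0 - p_no_exceedance

# Illustrative comparison of a stationary and a non-stationary 50 year design life
print(design_life_exceedance(u=6.0, years=50, loc0=4.0, scale0=0.5, shape=0.1))
print(design_life_exceedance(u=6.0, years=50, loc0=4.0, scale0=0.5, shape=0.1, loc_trend=0.02))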
Under this approach, if what is considered an acceptable level of risk is constant, it may be
desirable to design mitigation measures (e.g. Design Flood Standards) such that the
likelihood of a given consequence is constant in time. Figure 1.5.18 shows that if risk is
increasing through time, then to keep the standard of risk protection constant, it would be
necessary to continuously raise a defence. Clearly for many projects it is not possible to
continually increase the capacity of flood protection measures, therefore if the mitigation
measure is of fixed capacity the standard of risk protection varies with time (Figure 1.5.18).
Figure 1.5.18. Flood Risk Plot Versus Constant Risk Plot (derived from Rootzen et al 2012)
Similarly, Salas and Obeysekera (2013) present a framework for addressing non-stationarity
in risk assessment. The non-stationarity is considered in terms of increasing frequency of
events, decreasing events and random shifting events, with standard return period and risk
parameters. In the case of increasing extreme events, the exceedance probability of floods
affecting structures also varies through time, ie. p1, p2, p3, …, pt. The sequence of pt will also
be increasing (Figure 1.5.19):
Figure 1.5.19. Schematic of a Design Flood with Exceeding (Pt) and Non-Exceeding (qt = 1-
Pt) Probabilities Varying with Time (Salas & Obeysekera, 2014)
This means that if the probability of the first flood exceeding the Design Flood Standard at
time x = 1 is p1, then the probability that it first occurs at time x = 2 is (1 − p1)p2. In general,
the probability that the first flood exceeding the Design Flood Standard will occur at time x is
given by:

f(x) = P(X = x) = (1 − p1)(1 − p2)(1 − p3) … (1 − px-1)px (1.5.1)
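As a minimal sketch (in Python, not from ARR) of Equation (1.5.1), the year-by-year exceedance probabilities can be converted into the distribution of the year in which the Design Flood Standard is first exceeded; the 2% per year growth rate used below is purely illustrative.

def first_exceedance_pmf(p):
    # p: exceedance probability of the Design Flood Standard in each year (p1, p2, ...).
    # Returns f(x) for x = 1..len(p): survive years 1..x-1, then exceed in year x.
    pmf, survive = [], 1.0
    for p_t in p:
        pmf.append(survive * p_t)
        survive *= (1.0 - p_t)
    return pmf

# Illustrative: a 1% AEP standard whose exceedance probability grows by 2% per year.
p = [0.01 * 1.02 ** t for t in range(50)]
f = first_exceedance_pmf(p)
print(sum(f))   # probability of at least one exceedance within the 50 year period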
However, there is broad guidance, such as AEMI (2013) that highlights the importance of
considering how the floodplain and catchment will change over time by encouraging both the
understanding of cumulative impacts of new development and also considering the influence
of a changing climate. These are generally undertaken separately to identify the sensitivity of
changes due to these different changing factors. It is rare that all non-stationarity factors are
considered together.
Once the effective service life has been determined, an assessment of the potential
likelihood of changes to risk (likelihood or consequence factors) should be undertaken. A
discussion on climate change, and whether this is important for consideration, is provided in
Book 1, Chapter 6.
• If the effective service life is less than 20 years a stationary risk assessment should be
undertaken;
• If the effective service life is greater than 20 years but less than 50 years it is
recommended that a non-stationary risk assessment be considered, except in areas in
which the likelihood of change in local and regional land uses is minimal; and
• If the effective service life is greater than 50 years then it is recommended that a non-
stationary risk assessment be undertaken.
The above is general guidance, and does not take into account project specific issues. For
both short-term and long-term infrastructure it is recommended that an initial review be
undertaken that evaluates whether or not changes in likelihood and/or consequence (as
listed above) are likely to occur over the Effective Service Life of the project and considers
whether such changes would impair the project’s ability to perform its intended function.
Rather than adopt the more complex models identified in Book 1, Chapter 5, Section 5, it is
recommended to adopt a simple “time slice” approach. Primarily there are four approaches
able to be adopted:
• Undertake risk assessment at the point in time of highest overall risk (T(max)):
Typically, this may be at the end of the project’s ESL. By applying the risk assessment at
T(max), and determining appropriate design criteria for this point, the proponent will
effectively design its infrastructure to be acceptable at all points of its ESL. This is
considered to be the most conservative approach and will lead to relative over-engineering
of infrastructure at some points of its life, particularly early in its life.
• Undertake risk assessment based on the existing environment (T(0)): This approach
accepts that, as non-stationary components impact flood behaviour, risks will rise.
This approach assumes that this growth in risk will be acceptable and therefore it will lead
to under-engineering relative to current minimum design requirements as risks rise. This is
considered the least conservative approach and the most likely to result in higher long
term risks.
• Undertake risk assessment based on the existing environment (T(0)a) and commit
to managing residual risk as it arises: This approach will require periodic
reassessment of risks associated with the project at agreed points in time. This approach
may lead to under-engineering towards the end of the re-evaluation period and is
considered the second least conservative approach.
• Undertake risk assessment at a representative point in the project's ESL (T(x)) and
commit to managing the residual risk: This approach will likely lead to over-engineering in
the initial (pre – T(x)) period, after which it will require periodic reassessment of risks
associated with the project at agreed points in time.
Each of these four options revolves around the choice between conservatively over-engineering
to ensure risk levels are satisfied, and programs of continuous upgrades in
which changes in risk may be responded to through adaptation in design.
In general the T(max) approach may be identified as the preferred approach where:
In contrast, the T(0)a and T(x) approaches are more likely to be favourable where:
There may be thresholds or tipping points at which the frequency of flooding or the
consequences (e.g. loss of life, damage to residential property) of that flooding or the
associated risks are considered unacceptable to the community. If these can be identified
they can provide a basis for considering the limit of tolerability (LoT). With knowledge of the
anticipated rate of change in the likelihood or consequence of flood events over time, it may
be possible to approximate the time at which the LoT will be exceeded for any one
consequence. Based upon current understanding of any such limits, beyond this point,
flooding in association with a given project may be considered to generate unacceptable risks.
This is essentially a trigger-based concept. For example, residential properties may be
designed to a 1% AEP level, which is acceptable to the local authority. However, they are
willing to tolerate a 2% AEP level if unavoidable. Currently, a residential development is built
to a 1% AEP level. However, due to changes in the catchment (increase in impervious areas
in the catchment) and climate change, this is expected to reduce to a 2% AEP level after 30
years. This represents a trigger point at which time a mitigation measure may need to be
implemented.
Such points may be utilised as the minimum points at which risks, and the appetite for risk,
are reassessed (T(0)a). Where current planning allows for adaptation, these can be
considered the projected timeframe within which this adaptation may be necessary, based upon
current knowledge and projections (T(x), Figure 1.5.20).
Figure 1.5.20. Change in Realised Risk Through Adopting a T(x) Design Approach.
From a practical perspective, consideration of the potential need for and likely methods for
mitigating future growth in risk as part of the original decision can enable this work to be
incorporated into upfront decisions (DECC, 2007). For example, land can be set aside or
easements established to enable construction and maintenance of future mitigation
measures. If this does not occur the mitigation measure may not be able to be implemented
when required. Consideration of the costs of future mitigation in current decisions may in
some cases influence the original decision on protection levels.
It is also noted that any option based around the future upgrade of infrastructure poses
potential legal and commercial risks to both proponents and approval authorities. Given the
extent of the timeframe over which the infrastructure may be in place, the responsibility (and
cost) of re-evaluation and upgrade in the future may pass between parties, and there is
the potential that the decision to upgrade at that time is not viable. In such circumstances
the project may be decommissioned (these costs should be considered in any supporting
economic analysis).
A simple approach for incorporating the changes over time is to take two or more time slices.
For example, the flood inundation damages are calculated in year 0 and at the end of the
Effective Service Life. Then the damages are assumed to change linearly between these two
points. If the change is expected to be non-linear, more time slices may be required.
Adopting the methodology in Book 1, Chapter 5, Section 5, the key change is that the
Annual Average Damage will progressively change over the Effective Service Life of the
project. This can then be included into the economic assessment as described in Book 1,
Chapter 5, Section 5.
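As a minimal sketch (in Python, with purely illustrative numbers) of this time slice approach, the Annual Average Damage can be interpolated linearly between the two slices and discounted year by year:

def pv_of_changing_aad(aad_start, aad_end, esl_years, discount_rate):
    # Present value of damages when AAD changes linearly from aad_start (year 0)
    # to aad_end (end of the Effective Service Life).
    pv = 0.0
    for t in range(1, esl_years + 1):
        aad_t = aad_start + (aad_end - aad_start) * t / esl_years   # linear interpolation
        pv += aad_t / (1.0 + discount_rate) ** t
    return pv

# Illustrative values only
print(pv_of_changing_aad(aad_start=600_000, aad_end=900_000, esl_years=40, discount_rate=0.07))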
Analysis Period
The inclusion of non-stationarity results in benefits and costs that vary through time. The
important implication of this is that an economic outcome can be dependent on when the
economic analysis period commences. For example, if we pay to construct a levee now to
protect against climate change, then the cost is incurred in the present, but no real benefit
may be realised for some time, for example twenty years. When the time value of money is
considered in this scenario, it may be worth not investing in the levee until twenty years have
passed.
Discount Rate
A challenge of non-stationary factors like climate change is that impacts which are
experienced further into the future are diluted by standard discount rates. There is significant
research that has been undertaken on appropriate discount rates for very long time periods.
This is based on the argument that intergenerational equity should be considered, and that
future generations should not be unfairly weighted compared with existing generations
(Stern, 2006). This becomes particularly important for projects which are expected to have a
relatively long effective service life, and therefore the benefits will span across multiple
generations. Some recent studies, such as the Stern Review (Stern, 2006) have adopted
very low discount rates (in the case of the Stern Review, 1.4%).
Some economists argue for zero discount rates when applied to environmental impacts.
Quiggin (1993) goes further and suggests that the use of zero discount rates is in agreement
with sustainability, where
“The interests of future generations should be given equal weight with our own in making
decisions affecting the long term future”
“those changes that would enhance or degrade the human life support capacity of the
ecosystem…. would not be discounted at all”
Philbert (2003) reviewed a number of approaches, and discussed that the discount rate may
be derived not from the consumer perspective, but on a more general society perspective,
based on “interpersonal comparisons”. There are inherent difficulties with this analysis.
Philbert (2003) argues that the current generation will incur costs for the benefit of future
generations, but there will come a point at which the perceived gain for the next generation is
not seen as worth the cost to the present generation. Therefore, a zero discount rate should
not apply, as it would imply “very high investment by the present generation”. Philbert (2003)
suggests a discount rate that decreases over time, approaching the “lowest reasonably
foreseeable rate” of economic growth. Philbert (2003) also suggests potentially increasing
the value of environmental assets, as these are progressively consumed moving into the
future.
Most of the discussion in relation to varying of discount rates focuses on climate change
and environmental policies. These policies, such as emission trading schemes, can result in
costs now, in order to off-set potential significant costs in the future and impacts that may not
be possible to be rectified by future generations.
The challenge with varying the discount rate is that it results in difficulties in comparing
across projects. For example, if one project adopts a discount rate of 7% and another a
discount rate of 4%, then it is difficult to directly compare the BCR results of such studies
without thoroughly understanding the underlying assumptions.
For the purposes of the majority of water related infrastructure, and to ensure consistency
across projects, it is recommended to adopt typical discount rates. Treasury guidance may
recommend discount rates to provide consistency across competing projects. However, the
above should be kept in mind for policies and planning that target impacts well into the
future.
Alternatively, a more robust economic analysis approach would be the use of Real Options
Analysis (ROA). ROA is a recognised approach to address the uncertainties of future
conditions in flood risk management by accounting for flexibility in investment decisions
(World Bank, 2010; Short et al., 2012; Park et al., 2014; HM Treasury and DEFRA, 2009).
The standard CBA approach is a relatively coarse mechanism in which costs and benefits
assessed are considered as a whole and do not allow for discernment of the manner in
which they accrue (e.g. changes in the rate at which benefits are received may not justify
development of all project components as part of initial project construction). Neither does
the analysis recognise that estimates of cost, benefit and risk into the future are inherently
uncertain. As such, deferring decisions on infrastructure investment until a later date when
more information is available may be the preferred approach. ROA allows the value of
deferring investment decisions to be assessed.
In ROA, “options” represent predefined choices over a project’s Effective Service Life that
strategically or operationally affects the course of the project. For example, a project may be
to build a levee. If we build the levee in such a way that it is possible for it to be upgraded at
a later date, a real option may be to increase levee height by 0.5m. The analysis defines
decision points at which these choices are made (e.g. T(x)). The decision points may be
fixed in time, or variable and triggered by internal or external events (e.g.
occurrence of a 1% AEP event). Based on the Black-Scholes model utilised in financial
options analysis, ROA utilises estimates of volatility (ie. the likelihood of a particular flood
event or level of damage occurring in one year) to evaluate expected values/damages that
are likely to be incurred over the Effective Service Life. The establishment of appropriate
volatility measures is critical to ROA, and not all systems will have readily discernible
volatilities.
For example, a flood levee may be required and it is known that currently it needs to be built
to a 20% AEP in order to maintain an acceptable level of risk. However, it is also forecast
that due to climate change, over 50 years, the magnitude of the 20% AEP event will be
equivalent to the magnitude of the current 10% AEP event and that levee would need to be
increased by one metre in height to provide the same level of risk. Assume also that the
volatility is such that there is a 50% chance that the cost of flooding would increase by 5%
per year and a 50% chance that the cost of flooding would increase by 1% per year. Utilising
a numerical method for the pricing of options (e.g. the Binomial Method, Black-Scholes
Model), and based on this volatility year on year, it would be possible to estimate the value of
implementing an option in a given year (ie. the value of waiting to make a decision on
investing until more information is available). For example, if it turns out by year 10 that
there have been 10 consecutive 5% increases, then the benefits of increasing the height of
the levee at that point will be significantly greater than if there had been 10 consecutive 1%
increases per year.
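A minimal sketch (in Python) of the damage growth uncertainty described above is given below; it simply enumerates the binomial outcomes after 10 years (it is not an implementation of the Binomial option pricing method or the Black-Scholes model), using the probabilities and growth rates assumed in the example.

from math import comb

def growth_factor_distribution(years=10, p_high=0.5, high=1.05, low=1.01):
    # Returns a list of (probability, cumulative growth factor) pairs, where k is
    # the number of years in which flood costs grew at the higher rate.
    dist = []
    for k in range(years + 1):
        prob = comb(years, k) * p_high ** k * (1 - p_high) ** (years - k)
        dist.append((prob, high ** k * low ** (years - k)))
    return dist

for prob, factor in growth_factor_distribution():
    print(f"P = {prob:.3f}, flood cost growth factor after 10 years = {factor:.2f}")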
Shand, T. D., Cox, R. J., Blacka, M. J. and Smith, G. P. (2011). Australian Rainfall and
Runoff Revision Project 10: Appropriate Safety Criteria for Vehicles - Literature Review.
Australian Rainfall and Runoff Revision Project 10. Stage 2 Report. Prepared by the Water
Research Laboratory. P10/S2/020. February 2011.
Opper, S. (2000), Emergency Management Planning for the Hawkesbury Nepean Valley, 40th
Floodplain Management Authorities Conference, Parramatta, 9-12 May 2000.
NFRAG (2008) Flood Risk Management in Australia. The Australian Journal of Emergency
Management. Vol 23, No. 4, November 2008.
McLuckie, D., Babister M., (2015). Improving national best practice in flood risk
management, Floodplain Management Association National Conference, Brisbane, May
2015.
McLuckie, D., Babister M., Smith G., Thomson R., (2014). Updating National best practice
guidance in flood risk management, Floodplain Management Association National
Conference, Deniliquin, May 2014.
McLuckie, D., (2013). A guide to best practice in flood risk management in Australia,
Floodplain Management Association National Conference, Tweed Heads, May 2013.
McLuckie, D., (2012). Best Practice in Flood Risk Management in Australia. Presentation to
Engineers Australia Queensland Water Panel, Brisbane, May 2012.
Babister, M., Retallick, M., 2013. Benchmarking Best Practice in Floodplain Management,
Floodplain Management Association National Conference, Tweed Heads, May 2013
Babister, M., McLuckie, D., Retallick, M., Testoni, I. (2014). Comparing Modelling
Approaches to End User needs, Floodplain Management Association Conference,
Deniliquin, May 2014
5.12. Examples
5.12.1. Calculation of Average Annual Damages for a
Community
AAD = ((d1 + d2)/2)(p2 − p1) + ((d2 + d3)/2)(p3 − p2) + … + ((dn-1 + dn)/2)(pn − pn-1)
Where:
AAD = Annual Average Damages in $
d = damage in $
p = probability
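As a minimal sketch (in Python) of this trapezoidal calculation, the formula can be applied to any set of points on the probability-damage curve; the values used in the demonstration are illustrative only and are not those of the worked example that follows.

def annual_average_damage(points):
    # points: list of (probability, damage) pairs ordered from rare to frequent.
    aad = 0.0
    for (p1, d1), (p2, d2) in zip(points, points[1:]):
        aad += (d1 + d2) / 2.0 * (p2 - p1)     # trapezoidal area between adjacent points
    return aad

# Illustrative curve: (AEP expressed as a probability, damage in $)
curve = [(0.01, 500_000), (0.05, 100_000), (0.2, 0)]
print(annual_average_damage(curve))            # 300000*0.04 + 50000*0.15 = 19500.0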
Example:
Determine the damage under the curve for Figure 1.5.3. The table below provides the
points on the curve.
AEP Damage Point on curve
0.5% $20,000,000 3
1.0% $17,000,000 4
2.0% $9,000,000 5
5.0% $3,000,000 6
10% $1,000,000 7
20% $0 8
AAD = ((28,000,000 + 22,000,000)/2 × (0.002 − 0.0001))
+ ((22,000,000 + 20,000,000)/2 × (0.005 − 0.002))
+ ((20,000,000 + 17,000,000)/2 × (0.01 − 0.005))
+ ((17,000,000 + 9,000,000)/2 × (0.02 − 0.01))
+ ((9,000,000 + 3,000,000)/2 × (0.05 − 0.02))
+ ((3,000,000 + 1,000,000)/2 × (0.1 − 0.05))
+ ((1,000,000 + 0)/2 × (0.2 − 0.1))

AAD = 49,750 + 63,000 + 92,500 + 130,000 + 180,000 + 100,000 + 50,000 = $665,250
AAB = AAD(base case) − AAD(treated case)
Where:
AAB = annual average benefits in $
Example:
Determine the Average Annual Benefits
Step 1
Determine Annual Average Damages for the base case – undertaken in Book 1, Chapter
5, Section 12
Step 2
Determine Annual Average Damages for the treated case. See table below.
Where:
AAB = Annual Average Benefits in $ - calculation see Book 1, Chapter 5, Section 12
dr = discount rate
Note:
A range of discount rates may be used to give a range of NPVs which can in turn be
used to determine a range of Benefit Cost Ratios (see Book 1, Chapter 5, Section 12) to
test how financial benefits may vary with different financial situations.
Example:
Calculating Net Present Value of benefits
Step 1
Determine the Average Annual Benefits – undertaken in Book 1, Chapter 5, Section 12
Step 2
Assess Net Present Value of benefits. See table below.
NPVBenefits = $6,099,257 for a discount rate (dr) of 7%. This could vary from $4,473,916
for a dr of 10% to $9,055,194 for a dr of 4%.
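As a minimal sketch (in Python) of this discounting step: the published figures are consistent with Annual Average Benefits of about $457,500 discounted over a 40 year period, but both of those values are inferences used here for illustration rather than stated inputs.

def npv_of_annual_amount(annual_amount, discount_rate, years):
    # Present value of a constant annual amount received at the end of each year.
    return sum(annual_amount / (1.0 + discount_rate) ** t for t in range(1, years + 1))

for dr in (0.04, 0.07, 0.10):
    print(dr, round(npv_of_annual_amount(457_500, dr, 40)))
# prints approximately 9,055,194 (4%), 6,099,257 (7%) and 4,473,916 (10%)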
Where:
Capital cost is all relevant upfront costs.
dr = discount rate
Note:
A range of discount rates may be used to give a range of NPVs and in turn a range of
Benefit Cost Ratios (see Book 1, Chapter 5, Section 12) to test how financial benefits
may vary with changing financial situation.
Example:
Calculating Net Present Value of lifecycle costs
Step 1
Determine capital costs and Annual operation and Maintenance Costs
Step 2
Calculate Net Present Value of lifecycle costs. See table below.
Example:
Capital cost is $4,000,000 and Annual operation and Maintenance Costs is $100,000.
NPVcosts = $5,333,171 for a discount rate (dr) of 7%. This could vary from $4,977,905 for
a dr of 10% to $5,979,277 for a dr of 4%.
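As a minimal sketch (in Python): the published values are consistent with a 40 year analysis period (an inference made here for illustration), with the capital cost incurred up front and the operation and maintenance costs discounted annually.

def npv_lifecycle_costs(capital, annual_om, discount_rate, years=40):
    # Capital cost now, plus the discounted stream of annual operation and maintenance costs.
    pv_om = sum(annual_om / (1.0 + discount_rate) ** t for t in range(1, years + 1))
    return capital + pv_om

for dr in (0.04, 0.07, 0.10):
    print(dr, round(npv_lifecycle_costs(4_000_000, 100_000, dr)))
# prints approximately 5,979,277 (4%), 5,333,171 (7%) and 4,977,905 (10%)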
Where:
BCR=Benefit Cost Ratio
Note:
A range of discount rates may be used to give a range of NPVs which can in turn be
used to determine a range of Benefit Cost Ratios (see below) to test how financial
benefits may vary with different financial situations.
Example:
Calculating BCR
Step 1
Calculate Net Present Value of Benefits = NPVbenefits
Step 2
Calculate Net Present Value of Costs = NPVcosts
Step 3
Calculate BCR. BCR = 1.14 for dr 7%, BCR = 1.51 for dr 4%, BCR = 0.9 for dr 10%.
Discount Rate dr (%) NPV benefits ($) NPV costs ($) BCR
4 9,055,194 5,979,277 1.51
7 6,099,257 5,333,171 1.14
10 4,473,916 4,977,905 0.90
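As a minimal sketch (in Python) of the final step, the Benefit Cost Ratio is simply the ratio of the two present values tabulated above, evaluated for each candidate discount rate.

results = {
    0.04: (9_055_194, 5_979_277),
    0.07: (6_099_257, 5_333_171),
    0.10: (4_473_916, 4_977_905),
}
for dr, (npv_benefits, npv_costs) in results.items():
    print(f"dr = {dr:.0%}: BCR = {npv_benefits / npv_costs:.2f}")
# dr = 4%: BCR = 1.51, dr = 7%: BCR = 1.14, dr = 10%: BCR = 0.90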
5.13. References
AEMI (Australian Emergency Management Institute) (2013), Australian Emergency
Management Handbook 7: Managing the Floodplain Best Practice in Flood Risk
Management in Australia AEMI, Canberra.
AEMI (Australian Emergency Management Institute) (2014c), Guideline for using the
national generic brief for flood investigations to develop project-specific specifications, AEMI,
Canberra.
Condon, L.C., Gangopadhyay, S. and Pruitt, T. (2014), Climate change and non-stationary
flood risk for the Upper Truckee River Basin. Hydrological Earth System Sciences
Discussions, 11: 5077-5114.
DECC (NSW Department of Environment and Climate Change), (2007), Floodplain Risk
Management Guideline, Residential Flood Damage, Sydney.
Farber, S. and Bradley, D. (1996), Ecological Economics. USDA Forest Service, available at:
http://www.fs.fed.us.
HM Treasury and DEFRA (2009), Accounting for the effects of climate change:
Supplementary Green Book Guidance.
McLuckie, D., Toniato A., Askew E., Babister M. and Retallick M. (2016), Supporting Land
Use Planning by providing improved information from the Floodplain Management Process,
Floodplain Management Association Conference, Nowra, May 2016.
Ng, M. and Vogel, R.M. (2010), Multivariate non-stationary stochastic streamflow models for
two urban watersheds. Proc., Environmental and Water Resources Institute World Congress,
ASCE, Reston, VA, pp: 2550-2561.
Park, T., Kim, C. and Kim, H. (2014), Valuation of drainage infrastructure improvement under
climate change using real options. Water Resource Management, 28: 445-457.
Rootzen, H. and Katz, R. (2013), Design Life Level: Quantifying risk in a changing climate.
Water Resources Research, 49: 5964-5972.
Salas, J. and Obeysekera, J. (2013), Revisiting the concepts of return period and risk for
nonstationary hydrologic extreme events. Journal of Hydrologic Engineering, 19(3), 554-568.
Short, M., Peirson, W., Peters, G. and Cox, R. (2012), Managing adaptation of urban water
systems in a changing climate. Water Resources Management, 26(7), 1953-1981
Stern, N. (2006), The Economics of Climate Change - The Stern Review, Cabinet Office -
HM Treasury, Cambridge University Press. Available at: http://
webarchive.nationalarchives.gov.uk/+/http://www.hm-treasury.gov.uk/sternreview_index.htm.
Thomson, R., Knee, P., Telford, T., Virivolomo, M. and Drynan, L. (2011), Climate Change
Adaptation & Economics - A Case Study in the Solomon Islands, Coast and Ports
Conference, Perth, Australia.
Thomson, R., Knee, P., Telford, T., Virivolomo, M. and Drynan, L. (2012), Incorporating
Climate Change into Economic Analysis - A Case Study of Emergency Flood Repairs in the
Solomon Islands, Practical Responses to Climate Change, Canberra, Australia.
Vogel, R.M., Yaindl, C. and Walter, M. (2011), Nonstationarity: Flood magnification and
recurrence reduction factors in the United States, J. Am. Water Resour. Assoc., 47: 464-474.
Woodward, M., Gouldby, B., Kapelan, Z., Khu, S. and Townend, I. (2011), Incorporating real
options into flood risk management decision making. 14th annual international real options
conference.
Åström, H., Friis Hansen, P., Garré, L. and Arnbjerg-Nielsen, K. (2013), An influence
diagram for urban flood risk assessment through pluvial flood hazards under non-stationary
conditions. NOVATECH.
Chapter 6. Climate Change
Considerations
Bryson Bates, Duncan McLuckie, Seth Westra, Fiona Johnson, Janice
Green, Jo Mummery, Deborah Abbs
Chapter Status Final
Date last updated 14/5/2019
6.1. Introduction
There is now widespread acceptance that human activities are contributing to observed
climate change. Human induced climate change has the potential to alter the prevalence
and severity of rainfall extremes, storm surge and floods. Recognition of the risks associated
with climate change is required for better planning for new infrastructure and mitigating the
potential damage to existing infrastructure.
There are five aspects of design flood estimation that are likely to be impacted by climate
change:
• and compound extremes (e.g. riverine flooding combined with storm surge inundation).
The magnitudes of the impacts on any of these areas have not been subjected to
comprehensive study either nationally or internationally.
The climate change projections released in 2015 show simulated increases in the magnitude
of the wettest annual daily total and the 5% AEP wettest daily total across Australia (CSIRO
and Bureau of Meteorology, 2015). The projections do not include information about
potential changes in IFD relationships and rainfall temporal patterns. Quite restricted sets of
climate change projections for the Greater Sydney and southeast Queensland regions
indicate that IFD relationships are sensitive to climate change (Bates et al., 2015). For
example, the projections for Greater Sydney suggest that the 1% AEP for the 24 hour
duration IFD will increase by up to 20% by 2050. To the west, in some parts of the Blue
Mountains and beyond, decreases in the 1% AEP 24 hour duration IFD are possible. These
projections are not definitive, however, since only one high-end greenhouse gas emissions
scenario and a small set of climate model results were considered. For present climatic
conditions, the analysis of Wasko and Sharma (2015) indicates that temporal patterns of
rainfall within storm bursts in Australia are impacted by temperature variations regardless of
the climatic region and season. For the Greater Sydney region, Zheng et al. (2015) found
differing trends in annual maximum rainfall for different durations.
This chapter provides practitioners, designers and decision makers with an approach to
address the risks from climate change in projects and decisions that involve estimation of
design flood characteristics while further research is undertaken to reduce key uncertainties.
It draws on the most recent climate science, particularly the release of the IPCC Fifth
Assessment Report (IPCC, 2013) as well as the new climate change projections for Australia
(CSIRO and Bureau of Meteorology, 2015). As such, the chapter is focused on potential
changes in rainfall intensity (or equivalent depth) given the paucity of climate change
projections for other factors that influence flood risk.
For consistency with design rainfall IFD estimates for the current climate (Book 2), the
chapter is intended to be applied to the key system design event (ie. the design standard for
the structure or infrastructure). It is applicable for current-day rainfall intensities within the
range of probability of one Exceedance per Year and 50% to 1% AEP. The approach
described with this chapter considers regional risks from climate change, the effective
service life (ie. the total period during which an asset remains in use) or the planning horizon
of the decision (ie. the length of time that a plan looks into the future), the social acceptability
and other consequences of failure, and the cost of retrofits. If climate change is found to be a
significant issue for the infrastructure of interest through a screening analysis, a more
detailed analysis is needed to draw on the best available knowledge of the likely future
climate and to allow for changes in the intensity of heavy rainfall events over time.
As the science of climate change is continually changing, it is anticipated that the chapter will
be replaced gradually as new and detailed research findings are released. The latest
published sources should always be sought for use in future assessments and decision
making. The chapter does not replace the need for informed judgement of likely risks, or the
need for detailed local analysis (for example through the use of additional climate and
hydrological modelling) where the facilities under consideration are important and the risks
potentially large.
1http://www.climatechangeinaustralia.gov.au/en/climate-projections/climate-futures-tool/introduction-climate-futures/
Climate Futures subdivides the projected changes in two climate variables (e.g. temperature
and rainfall) from the full suite of GCMs into several classes, e.g. warmer-wetter, hotter-drier,
much hotter-much drier. The changes are relative to a 20-year (1986-2005) baseline. The
resultant classification provides a visual display of the spread and clustering of the projected
changes. This provides model consensus information for each class and assists the
selection of the classes that are of most importance for impact assessment.
Generally, there is more confidence in GCM simulations of temperature than for rainfall.
Thus the chapter provides an adjustment factor for IFD curves informed by temperature
projections alone. These temperature projections are then combined with current
understanding of changes to extreme rainfall event intensities based on research in Australia
and overseas. This research includes observation-based assessments, physical arguments
on the water holding capacity of a warmer atmosphere and high resolution dynamical
downscaling experiments. Using these multiple lines of evidence the expected change in
heavy rainfalls is between 2% and 15% per °C of warming.
Given the uncertainty in rainfall projections and their considerable regional variability,
an increase in rainfall (intensity or depth) of 5% per °C of local warming is
recommended.
Given the uncertainty in rainfall projections and their considerable regional variability, an
increase in rainfall (intensity or depth) of 5% per °C of local warming is recommended. The
proposed rate of increase has been tempered because: there is no guarantee that the same
scaling will apply across all of the frequencies and durations typically considered in flood
design, and there are other factors that have the potential to affect future rainfall intensities
(or depths) over land. These include changes in regional atmospheric circulation, synoptic
systems and soil wetness.
For consistency with the design rainfall IFD estimates in Book 2, the chapter is intended to
be applied to the design event (ie. the design standard for the structure or infrastructure). It
is applicable for rainfall intensities under the current climatic regime within the range of
probability of one exceedance per year and 50% to 1% AEP. Other mechanisms that affect
the magnitude of flooding, such as tailwater levels and oceanic processes (e.g. wind, waves
and tides) are not considered. Where there is an additional risk of coastal flooding from sea
level rise refer to Engineers Australia (2012) for guidance.
A six-step process is used to incorporate climate change risks into decisions involving the
estimation of design flood characteristics. The process uses a decision tree approach that
enables the practitioner to define the nature of the information needed for a particular
problem and to reach an appropriate course of action (see Figure 1.6.3 to Figure 1.6.5 or
Figure 1.6.2).
Figure 1.6.2. Decision Tree for Incorporating Climate Change in Flood Design
If the effective service life or planning horizon is relatively short (less than 20 years from
2015) climate change will have negligible impact on IFD characteristics over that period of
time. Thus the projected hazard will be similar to the present, and the design process should
be based on the IFD and temporal patterns described in Book 2. Otherwise, proceed to Step
2.
Figure 1.6.3. Decision Tree for Incorporating Climate Change in Flood Design – Part 1 of 3
The impact of the possible failure of the facility (e.g. asset, process or management strategy)
will have direct and indirect consequences, and should be assessed in terms of primary risk
outcomes such as issues of cost, safety, social acceptability and environmental impact.
Some categorisation of facilities may be useful when determining the consequences of
failure. For example, there can be substantial consequences if assets related to the delivery
of essential services fail or are significantly impacted.
• Low consequence - some probability that asset performance will be impacted but the
delivery of services will be only partially or temporarily compromised, or alternative
sources of services (e.g. availability of different power sources) are readily available.
Where the consequences of failure and the costs of retrofitting are considered to be low, the
project or decision should proceed in accordance with the original design specifications.
Otherwise, proceed to Step 4.
Figure 1.6.4. Decision Tree for Incorporating Climate Change in Flood Design – Part 2 of 3
The outputs from this step include a good understanding of the extent to which the risks of
climate change may exceed the coping capacity of the facility to perform its intended
function. If the incremental impact and consequences are low (e.g. increases in flood levels
are slight) then the exposure risk to climate change is low, and design rainfall should be
determined using Book 2. Otherwise, proceed to Step 5.
In reaching Step 5, the minimum basis for design should be the low greenhouse gas and
aerosol concentration pathway RCP4.5 and the maximum GCM consensus case indicated
by the Climate Futures web tool for the NRM cluster of interest (Book 1, Chapter 6, Section
5). The choice of RCP4.5 is recommended because RCP2.6 requires ambitious global
emissions reductions. The maximum consensus case is a reasonable choice since it is not
unduly affected by outlying GCM results. Where the additional expense can be justified on
socioeconomic and environmental grounds, the maximum consensus case for the high
concentration pathway RCP8.5 should also be considered.
In reaching Step 5, the minimum basis for design should be the low greenhouse gas
and aerosol concentration pathway RCP4.5 and the maximum GCM consensus case
indicated by the Climate Futures web tool for the NRM cluster of interest (Book 1,
Chapter 6, Section 5). Where the additional expense can be justified on socioeconomic
and environmental grounds, the maximum consensus case for the high concentration
pathway RCP8.5 should also be considered.
For a given NRM cluster, service life or planning horizon, RCP and class interval of projected
increase in annual mean surface temperature, a projected rainfall intensity or equivalent
depth (Ip) can be obtained from:
Ip = IARR × 1.05^Tm (1.6.1)
where IARR is the design rainfall intensity (or depth) for current climate conditions (Book 2),
1.05 is the assumed scaling of rainfall per degree of warming, based on the approximately
exponential relationship between temperature and humidity, and Tm is the temperature
increase at the midpoint (or median) of the selected class interval.
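A minimal sketch (in Python) of applying Equation (1.6.1) is given below; the input values in the demonstration are illustrative assumptions, not prescribed ARR values.

def climate_adjusted_rainfall(i_arr, t_mid):
    # i_arr: current-climate design rainfall intensity or depth (from Book 2).
    # t_mid: midpoint (or median) of the selected projected warming class interval, in deg C.
    return i_arr * 1.05 ** t_mid

# Illustrative: a 100 mm design rainfall depth with 2.25 deg C of projected local warming
# (the midpoint of a 1.5 to 3 deg C class interval) scales to about 112 mm.
print(climate_adjusted_rainfall(100.0, 2.25))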
Figure 1.6.5. Decision Tree for Incorporating Climate Change in Flood Design – Part 3 of 3
Taking all of the above into account, if the cost of the modified design is low relative to the
associated benefits in reduction of residual risk (ie. the level of risk remaining after climate
change has been factored into the design or planning process), adopt the changed design.
Otherwise, proceed to Step 6.
results change temperature bins. To address this problem the analysis was carried out with
the actual predicted temperature increase for each GCM case.
Problem Setting:
• Catchment of interest is located in the East Coast NRM cluster (Figure 1).
• Application of Step 4 in the six-step process outlined above indicates that consideration of
climate change projections is warranted.
Practitioner's Assumptions:
• Maximum GCM consensus cases for RCPs 4.5 and 8.5 are appropriate choices for the
design setting.
Climate Futures:
• Two or possibly three class intervals for projected temperature increases could be
considered for impact assessment: ‘warmer’, ‘hotter’ and ‘much hotter’ (refer to
Table 1.6.3).
• Only one maximum model consensus case for both RCPs 4.5 and 8.5 (22 of 40 GCMs
and 39 of 42 GCMs, respectively). This is the ‘hotter’ class interval (1.5 to 3 °C).
Calculations:
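A sketch of the implied calculation, assuming the midpoint of the ‘hotter’ class interval (1.5 to 3 °C), ie. Tm = 2.25 °C, in Equation (1.6.1):

Ip = IARR × 1.05^2.25 = 1.12 × IARR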
Notes:
• The above calculation indicates a 12% projected increase in rainfall intensity (or
equivalent depth).
• Had RCP 4.5 and the maximum consensus case been selected as the basis for design,
the model consensus for the ‘warmer’ class interval is not much smaller than that for the
‘hotter’ interval (18 versus 22 of 40 GCMs).
• If the model consensus for the ‘warmer’ class interval is deemed to be effectively tied with
that for the ‘hotter’ class interval, consideration could be given to the use of the midpoint of
the wider interval 0.5 to 3 ºC. Following the procedure outlined above leads to a rainfall
intensity scaling factor of 1.05^1.75 ≈ 1.09. The design and economic implications of the
lower scaling factor would need to be considered.
6.7. References
Babister, M., Trim, A., Testoni, I. and Retallick, M. (2016), The Australian Rainfall and Runoff
Datahub, 37th Hydrology and Water Resources Symposium, Queenstown, NZ.
Bates, B., Evans, J., Green, J., Griesser, A., Jakob, D., Lau, R., Lehmann, E., Leonard, M.,
Phatak, A., Rafter, T., Seed, A., Westra, S. and Zheng, F. (2015), Development of Intensity-
Frequency-Duration Information across Australia - Climate Change Research Plan Project.
Report for Institution of Engineers Australia, Australian Rainfall and Runoff Guideline: Project
1. 61p.
CSIRO and Bureau of Meteorology (2015), Climate Change in Australia, Projections for
Australia's NRM Regions. Technical Report, CSIRO and Bureau of Meteorology, Australia.
Retrieved from www.climatechangeinaustralia.gov.au/en [http://
www.climatechangeinaustralia.gov.au/en].
Engineers Australia (2012), Guidelines for Responding to the Effects of Climate Change in
Coastal and Ocean Engineering. The National Committee on Coastal and Ocean
Engineering, 3rd edition, revised 2013.
IPCC (Intergovernmental Panel on Climate Change) (2013), Climate Change 2013: The
Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of
the Intergovernmental Panel on Climate Change. [Stocker, T.F., and others (Eds.)].
Cambridge University Press, Cambridge, UK and New York, NY, USA.
Wasko, C. and Sharma, A. (2015), Steeper temporal distribution of rain intensity at higher
temperatures within Australian storms, Nature Geoscience, 8(7), 527-529.
Zheng, F., Westra, S. and Leonard, M. (2015), Opposing local precipitation extremes. Nature
Climate Change, 5(5), 389-390, doi: 10.1038/nclimate2579. Available at: http://dx.doi.org/
10.1038/nclimate2579.
BOOK 2
Rainfall Estimation
Table of Contents
1. Introduction ........................................................................................................................ 1
1.1. Scope and Intent .................................................................................................... 1
1.2. Application of these Guidelines .............................................................................. 1
1.3. Climate Change ...................................................................................................... 1
1.4. Terminology ............................................................................................................ 2
1.5. References ............................................................................................................. 2
2. Rainfall Models .................................................................................................................. 3
2.1. Introduction ............................................................................................................. 3
2.2. Space-Time Representation of Rainfall Events ...................................................... 4
2.3. Orographic Enhancement and Rain Shadow Effects on Space-Time Patterns ...... 7
2.4. Conceptualisation of Design Rainfall Events .......................................................... 8
2.4.1. Event Definitions .......................................................................................... 8
2.4.2. Rainfall Event Duration ................................................................................ 9
2.4.3. Event Rainfall Depth (or Average Intensity) ................................................. 9
2.4.4. Temporal Patterns of Rainfall ....................................................................... 9
2.4.5. Spatial Patterns of Rainfall .......................................................................... 9
2.5. Spatial and Temporal Resolution of Design Rainfall Models .................................. 9
2.6. Applications Where Flood Estimates are Required at Multiple Locations ............ 11
2.7. Climate Change Impacts ...................................................................................... 11
2.8. References ........................................................................................................... 12
3. Design Rainfall ................................................................................................................ 13
3.1. Introduction ........................................................................................................... 13
3.2. Design Rainfall Concepts ..................................................................................... 13
3.3. Climate Change Impacts ...................................................................................... 14
3.4. Frequent and Infrequent Design Rainfalls ............................................................ 15
3.4.1. Overview .................................................................................................... 15
3.4.2. Rainfall Database ...................................................................................... 17
3.4.3. Extraction of Extreme Value Series ........................................................... 28
3.4.4. Regionalisation .......................................................................................... 33
3.4.5. Gridding ..................................................................................................... 35
3.4.6. Outputs ...................................................................................................... 37
3.5. Very Frequent Design Rainfalls ............................................................................ 37
3.5.1. Overview .................................................................................................... 37
3.5.2. Rainfall Database ...................................................................................... 38
3.5.3. Extraction of Extreme Value Series ........................................................... 39
3.5.4. Ratio Method ............................................................................................. 41
3.5.5. Gridding ..................................................................................................... 41
3.5.6. Outputs ...................................................................................................... 42
3.6. Rare Design Rainfalls ........................................................................................... 42
3.6.1. Overview .................................................................................................... 42
3.6.2. Rainfall Database ...................................................................................... 43
3.6.3. Extraction of Extreme Value Series ........................................................... 44
3.6.4. Regionalisation .......................................................................................... 45
3.6.5. Gridding ..................................................................................................... 45
3.6.6. Outputs ...................................................................................................... 45
3.7. Probable Maximum Precipitation Estimates ......................................................... 46
3.7.1. Overview .................................................................................................... 46
3.7.2. Estimation of PMPs ................................................................................... 46
3.7.3. Generalised Methods for Probable Maximum Precipitation Estimation ..... 46
3.7.4. Generalised Method of Probable Maximum Precipitation Estimation ........ 48
List of Figures
2.2.1. Conceptual Diagram of Space-Time Pattern of Rainfall .............................................. 5
2.2.2. Conceptual Diagram of the Spatial Pattern and Temporal Pattern Temporal and
Spatial Averages Derived from the Space-Time Rainfall Field ...................................... 6
2.2.3. Conceptual Diagram Showing the Temporal Pattern over a Catchment and the
Spatial Pattern Derived over Model Subareas of the Catchment .................................. 7
2.3.1. Classes of Design Rainfalls ....................................................................................... 14
2.3.2. Frequent and Infrequent (Intensity Frequency Duration) Design Rainfall Method .... 16
2.3.3. Daily Read Rainfall Stations and Period of Record ................................................... 20
2.3.4. Continuous Rainfall Stations and Period of Record .................................................. 21
2.3.5. Daily Read Rainfall Stations Used for ARR 1987 and ARR 2016 Intensity
Frequency Duration Data ...................................................................................... 22
2.3.6. Continuous Rainfall Stations Used for ARR 1987 and ARR
2016 Intensity Frequency Duration Data ............................................................... 23
2.3.7. Length of Available Daily Read Rainfall Data ............................................................ 24
2.3.8. Length of Available Continuous Rainfall Data ........................................................... 24
2.3.9. Number of Long-term Daily Read Stations Used for ARR 1987 and ARR 2016
Intensity Frequency Duration Data ....................................................................... 25
2.3.10. Length of Record of Continuous Rainfall Stations Used for ARR 1987 and ARR
2016 Intensity Frequency Duration Data ............................................................... 26
2.3.11. Analysis Areas Adopted for the BGLSR .................................................................. 32
2.3.12. Daily Read Rainfall Stations and Continuous Rainfall Stations Used for Very
Frequent Design Rainfalls ......................................................................................... 39
2.3.13. Procedure to Derive Very Frequent Design Rainfall Depth Grids From Ratios ....... 42
2.3.14. Daily Read Rainfall Stations with 60 or More Years of Record ............................... 44
2.3.15. Generalised Probable Maximum Precipitation Method Zones ................................ 47
2.3.16. Design Rainfall Point Location Map Preview ........................................................... 51
2.3.17. IFD Outputs ............................................................................................................. 52
2.3.18. Very frequent Design Rainfall Outputs .................................................................... 53
2.3.19. Rare design rainfall outputs ..................................................................................... 54
2.3.20. Design Rainfall Output Shown as Table .................................................................. 55
2.3.21. Design Rainfall Output Shown as Chart .................................................................. 56
2.4.1. Area Reduction Factors Regions for Durations 24 to 168 Hours ............................. 65
2.5.1. Typical Storm Components ....................................................................................... 71
2.5.2. Two Different Storm Events with Similar Intensity Frequency Duration Characteristics
(Sydney Observatory Hill) – Two Hyetographs plus Burst Probability Graph ............ 73
2.5.3. Ten 2 hr Dimensionless Mass Curves ....................................................................... 74
2.5.4. Decay Curve of Ten Dimensionless Patterns and AVM Patterns .............................. 75
2.5.5. Pluviograph Stations Record Lengths ....................................................................... 77
2.5.6. Pluviograph Stations used Throughout South-Eastern Australia with Record Lengths 78
2.7.6. Generation of Daily Rainfall Sequences using the Regionalised Modified Markov
Model Approach .................................................................................................. 152
2.7.7. Disaggregated Rectangular Intensity Pulse Model (extracted from
Heneker et al. (2001)) ......................................................................................... 155
2.7.8. Schematic of Non-dimensional Random Walk used in DRIP disaggregate pulses . 156
2.7.9. Heneker et al. (2001) Model Fitted to Monthly Inter-event Time Data for
Melbourne in January ............................................................................................ 157
2.7.10. Heneker et al. (2001) Model Fitted to Monthly Storm Duration Data for
Melbourne in May .................................................................................................. 158
2.7.11. State-based Method of Fragments Algorithm used in the Regionalised Method of
Fragments Sub-daily Rainfall Generation Procedure ............................................ 159
2.7.12. Sydney Airport and nearby pluviograph stations .................................................. 163
2.7.13. Main Steps Involved in the Adjustment of Raw Continuous Rainfall Sequences to
Preserve the Intensity Frequency Duration relationships ...................................... 165
2.7.14. Annual Rainfall Simulations for Alice Springs using 100 Replicates ..................... 167
2.7.15. Intensity Frequency relationship for 24 hour Duration. ......................................... 167
2.7.16. 6 minute (left column) and 6 hour (right column) Annual Maximum Rainfall against
Exceedance Probability for Alice Springs. ............................................................ 169
2.7.17. Intensity Duration Frequency Relationships for Target and Simulated Rainfall before
and after Bias Correction at Alice Springs ............................................................. 170
List of Tables
2.3.1. Classes of Design Rainfalls ....................................................................................... 14
2.3.2. Frequent and Infrequent (Intensity Frequency Duration) Design Rainfall
Method .................................................................................................................... 17
2.3.3. Rainfall Reporting Methods ....................................................................................... 18
2.3.4. Restricted to Unrestricted Conversion Factors ......................................................... 29
2.3.5. Intensity Frequency Duration Outputs ....................................................................... 37
2.3.6. Very Frequent Design Rainfall Method ...................................................................... 38
2.3.7. Very Frequent Design Rainfall Outputs ..................................................................... 42
2.3.8. Rare Design Rainfall Method .................................................................................... 43
2.3.9. Rare Design Rainfall Outputs .................................................................................... 45
2.4.1. ARF Procedure for Catchments Less than 30 000 km2 and
Durations up to and Including 7 Days ....................................................................... 62
2.4.2. ARF Equation (2.4.2) Coefficients by Region for Durations 24 to 168
hours Inclusive .......................................................................................................... 65
2.5.1. Number of Pluviographs by Decade .......................................................................... 78
2.5.2. Regions- Number of Gauges and
Events ....................................................................................................................... 80
2.5.3. Burst Loading by Region and
Duration ..................................................................................................................... 82
2.5.4. Regional Temporal Pattern Bins ................................................................................ 86
2.5.5. Temporal Pattern Durations ....................................................................................... 86
2.5.6. Temporal Pattern Selection Criteria ........................................................................... 87
2.5.7. Areal Rainfall Temporal Patterns - Catchment Areas and Durations ........................ 88
2.5.8. Minimum Number of Pluviographs Required for Event
Selection for Each Catchment Area .......................................................................... 89
2.5.9. Areal Temporal Pattern Sets for Ranges of Catchment Areas .................................. 93
2.5.10. Alternate Regions Used for Data ............................................................................. 95
2.5.11. Flows for the 1% Annual Exceedance Probability for Ten Burst Events .................. 99
2.6.1. Calculation of Weighted Average of Point Rainfall Depths for the 1% AEP 24 hour
Design Rainfall Event for the Stanley River at Woodford ...................................... 120
2.6.2. Stanley River Catchment to Woodford: Calculation of Catchment Average Design
Rainfall Depths (bottom panel) from Weighted Average of Point Rainfall Depths (top panel)
and Areal Reduction Factors (middle panel) ........................................................... 121
2.6.3. Stanley River Catchment to Somerset Dam: Calculation of Catchment Average Design
Rainfall Depths (bottom panel) from Weighted Average of Point Rainfall Depths (top panel)
and Areal Reduction Factors (middle panel) ........................................................... 122
2.6.4. Calculation of Design Spatial Pattern for Stanley River at Woodford ...................... 123
2.6.5. RORB Model Scenarios Run for Worked Example on Stanley River Catchment to Somerset
Dam ......................................................................................................................... 127
2.7.1. Alternative Methods for Stochastic Generation of Daily Rainfall ............................. 141
2.7.2. Number of States used for Different Rainfall Stations in the Transition
Probability Model (Srikanthan et al., 2003) ......................................................... 145
2.7.3. State Boundaries for Rainfall Amounts in the Transition Probability
Model ................................................................................................................... 146
2.7.4. Daily Scale Attributes used to Define Similarity between Locations ....................... 149
2.7.5. Commonly used Sub-daily Rainfall Generation Models .......................................... 153
2.7.6. Sub-daily Attributes used to Define Similarity between Locations .......................... 160
2.7.7. Logistic Regression Coefficients for the Regionalised Method of Fragments
Sub-daily Generation Model ................................................................................ 161
2.7.8. Statistical Assessment of Daily Rainfall from RMMM for Alice Springs using 100
Replicates 67 years Long .................................................................................... 166
2.7.9. Performance of extremes and representation of zeroes (for 6 minute time-steps)
from the sub-daily rainfall generation using RMOF for at-site generation using
observed sub-daily data (option 1), at-site disaggregation using observed daily data
(option 2), and the purely regionalised case (option 3). ...................................... 168
List of Equations
2.4.1. Short duration ARF Equation .................................................................................... 64
Chapter 1. Introduction
Mark Babister, Monique Retallick
Chapter Status Final
Date last updated 14/5/2019
Despite the advances in flood estimation, many design inputs are assumed to be much
simpler than real or observed events. The more complex methods continue to make
simplifying assumptions, including the use of storm bursts instead of complete storms and
spatially uniform temporal patterns. For these reasons, actual rainfall events tend to show
considerably more variability than design events and often have different probabilities at
different locations. This book describes how the different rainfall inputs can be derived and
how they can be used.
Book 2, Chapter 2 provides an introduction to rainfall models. Book 2, Chapter 3 details the
development of the design rainfalls (Intensity Frequency Duration data) by the Bureau of
Meteorology. Book 2, Chapter 4 and Book 2, Chapter 5 discuss the spatial and temporal
distributions of rainfall respectively. Book 2, Chapter 7 covers the development of continuous
rainfall time series for use in continuous simulation models.
The IFDs presented in this chapter can be adjusted for future climates using the method
outlined in Book 1, Chapter 6, which recommends an approach based on temperature
scaling using temperature projections from the CSIRO future climates tool. Scaling based on
temperature is recommended because climate models are much more reliable at producing
temperature estimates than at reproducing individual storm events.
The impact of climate change on storm frequency, mechanism, and spatial and temporal
behaviour is less well understood. Work by Abbs and Rafter (2009) suggests that increases are
likely to be more pronounced in areas with strong orographic enhancement. There is
insufficient evidence to confirm whether this result is applicable to other parts of Australia.
Work by Wasko and Sharma (2015), analysing historical storms, found that, regardless of the
climate region or season, temperature increases are associated with temporal patterns becoming
less uniform, with the largest fractions increasing in rainfall intensity and the lower fractions
decreasing.
1.4. Terminology
The terminology for frequency descriptors described in Figure 1.2.1 applies to all chapters of
this book other than Book 2, Chapter 3 (Design Rainfall).
1.5. References
Abbs, D. and Rafter, T. (2009), Impact of Climate Variability and Climate Change on Rainfall
Extremes in Western Sydney and Surrounding Areas: Component 4 - dynamical
downscaling, CSIRO.
CSIRO and Australian Bureau of Meteorology (2007), Climate Change in Australia, CSIRO
and Bureau of Meteorology Technical Report, p: 140. www.climatechangeinaustralia.gov.au
Wasko, C. and Sharma, A. (2015), Steeper temporal distribution of rain intensity at higher
temperatures within Australian storms, Nature Geoscience, 8(7), 527-529.
Westra, S., Evans, J., Mehrotra, R., Sharma, A. (2013), A conditional disaggregation
algorithm for generating fine time-scale rainfall data in a warmer climate, Journal of
Hydrology, 479: 86-99
Chapter 2. Rainfall Models
James Ball, Phillip Jordan, Alan Seed, Rory Nathan, Michael Leonard,
Erwin Weinmann
2.1. Introduction
The philosophical basis for use of a catchment modelling approach is the generation of data
that would have been recorded if a gauge were present at the location(s) of interest for the
catchment condition(s) of interest. For reliable and robust predictions of design flood
estimates with this philosophical basis, there is a need to ensure that rainfall characteristics
as one of the major influencing factors are considered appropriately.
There are many features of rainfall to consider when developing a rainfall model for design
flood prediction; exploration of these features can be undertaken using historical storm
events as a basis. In using this approach, there is a need to acknowledge that consideration
of historical events is an analysis problem and not a design problem. Nonetheless, insights
into the characteristics of rainfall events for design purposes can be obtained from this
review.
Rainfall exhibits both spatial and temporal variability at all spatial and temporal scales that
are of interest in flood hydrology. High resolution recording instruments have identified
temporal variability in rainfall from time scales of less than one minute to several days
(Marani, 2005). Similarly, observations of rainfall from high resolution weather radar and
satellites have demonstrated spatial variability in rainfall at spatial resolutions from 1 km to
more than 500 km (Lovejoy and Schertzer, 2006).
While it is important to be aware of this large degree of variability, for design flood estimation
based on catchment modelling it is only necessary to reflect rainfall variability at space and
time scales that are influential in the formation of flood events. The main focus is generally
on individual storms or bursts of intense rainfall within storms that cover the catchment
extent. However, it needs to be recognised that, depending on the design problem (e.g. flood
level determination in a system with very large storage and small outflow capacity), the
relevant ‘event’ to be considered may consist of rainfall sequences that include not just one
storm but extend over several months or even years.
Rainfall models are designed to capture in a simplified fashion those aspects of the spatial
and temporal variability of rainfall that are relevant to specific applications. A broad distinction
between different rainfall models can be made on the basis of their scope. Commonly rainfall
models consider only the temporal dimension by neglecting the spatial dimension. Inclusion
of the spatial dimension together with the temporal dimension results in an alternative form
of a rainfall model. This leads to the following categorisation of rainfall models:
• Models that concentrate on significant rainfall events (storms or intense bursts within
storms) at a point or with a typical spatial pattern that have the potential to produce floods;
• Models that attempt to simulate rainfall behaviour over an extended period at a point,
producing essentially a complete (continuous) rainfall time series incorporating flood
producing bursts of rainfall, low intensity bursts of rainfall and the dry periods between
bursts of rainfall (Book 2, Chapter 7); and
• Models that attempt to replicate rainfall in both the spatial and temporal dimensions.
Currently, models in this category are being researched and are not in general usage.
There are, however, many problems where rainfall models of this form may be applicable.
Rainfall models that concentrate on the flood producing bursts of rainfall have the inherent
advantage of conciseness (from a flood perspective, only the interesting bursts of rainfall are
considered). Hence, there is great potential to consider interactions of rainfall with other
influential flood producing factors but they also need to allow for the impact of varying initial
conditions.
Continuous rainfall models (Book 2, Chapter 7) have the inherent advantage of allowing the
initial catchment conditions (e.g. soil moisture status and initial reservoir content) at the
onset of a storm event to be simulated directly. However, the need to model the rainfall
characteristics of both storm events (intense rainfall) and inter-event periods (no rainfall to
low intensity rainfall) adds significant complexity to continuous rainfall models. The greater
range of events these models cover tends to be achieved at the cost of reduced ability to
represent rarer, higher intensity rainfall events. Additionally, very long sequences of rainfall
observations are required to properly sample rarer events. These issues make continuous
rainfall models more suitable for simulation of frequent events.
Rainfall data are mostly obtained from individual gauges (daily read gauges or pluviographs)
and only provide data on point rainfalls. However, for catchment simulation the interest is in
rainfall characteristics over the whole catchment. Rainfall models thus are needed to allow
extrapolation of rainfall characteristics from the point scale to the catchment scale. In
extrapolating rainfall characteristics from a point to a catchment or subcatchment, there is a
need to ensure that the extrapolation does not introduce bias into the predictions. This
applies to both continuous rainfall models and event rainfall models.
If the space-time pattern of rainfall is considered as a field defined in three dimensions, then
the temporal and spatial patterns of rainfall that have conventionally been used in hydrology
can be considered as convenient statistical means of summarising that field. The temporal
pattern of rainfall over a catchment area is derived by taking an average in space (over one
or more grid elements) of the rainfall depth (or mean intensity) over each time increment of
the storm. The spatial pattern of rainfall for an event is defined by taking an average in time
(over one or more time periods) of the rainfall depth (or mean intensity) over each grid cell of
the catchment. Derivation of spatial and temporal patterns is demonstrated with the
conceptual diagram in Figure 2.2.2. Commonly, the spatial pattern is defined by averaging
over each subarea to be used in a model of the catchment or study area as shown in
Figure 2.2.3. The application of some catchment modelling systems (for example, rainfall-on-
grid models commonly used to simulate floods in urban areas), however, requires grid-based
spatial patterns of rainfall. In these situations, each grid element can be considered as a
subarea or subcatchment.
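The averaging described above can be illustrated with a short sketch. The array dimensions, the synthetic rainfall values and the normalisation conventions below are assumptions made for the example only, not part of the Guideline:

```python
import numpy as np

# Illustrative space-time rainfall field: 6 time increments over a 4 x 5 grid (mm per increment).
rng = np.random.default_rng(42)
field = rng.gamma(shape=2.0, scale=1.5, size=(6, 4, 5))

# Temporal pattern: average over space for each time increment, expressed here
# as the fraction of the event total falling in each increment.
catchment_hyetograph = field.mean(axis=(1, 2))            # mm per increment
temporal_pattern = catchment_hyetograph / catchment_hyetograph.sum()

# Spatial pattern: accumulate over time for each grid cell, expressed here as
# the ratio of each cell's event total to the catchment average total.
cell_totals = field.sum(axis=0)                            # mm per cell
spatial_pattern = cell_totals / cell_totals.mean()

print(temporal_pattern.round(3))    # fractions summing to 1.0
print(spatial_pattern.round(2))     # ratios averaging to 1.0 over the grid
```

Averaging the gridded spatial pattern over model subareas, as in Figure 2.2.3, would simply replace the per-cell ratios with per-subarea means.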
The space-time pattern of rainfall varies in a random manner between events and within
events influenced by spatial and temporal correlation structures that are an inherent
observed property of rainfall. The random space-time variability may make it difficult to
specify typical or representative spatial patterns for some catchments. Umakhanthan and
Ball (2005) in a study of the Upper Parramatta River Catchment in NSW showed the
variation in the temporal and spatial correlation between storm events on that catchment.
Figure 2.2.2. Conceptual Diagram of the Spatial Pattern and Temporal Pattern as Temporal and
Spatial Averages Derived from the Space-Time Rainfall Field
Figure 2.2.3. Conceptual Diagram Showing the Temporal Pattern over a Catchment and the
Spatial Pattern Derived over Model Subareas of the Catchment
Upon ascent, the air that is being lifted will expand and cool. This adiabatic cooling of a
rising moist air parcel may lower its temperature to its dew point, thus allowing for
condensation of the water vapour contained within it, and hence the formation of a cloud.
Rainfall can be generated from the cloud through a number of physical processes (Gray and
Seed, 2000). The cloud liquid droplets grow through collisions with other droplets to the size
where they fall as rain. Rain drops from clouds at high altitude may fall through the clouds
near the surface that have formed because of the uplift due to topography and grow as a
result of collisions with the cloud droplets. Air may also become unstable as it is lifted over
higher areas of terrain and convective storms may be triggered by this instability. These
influences combine to typically produce a greater incidence of rainfall on the upwind side of
hills and mountains and also typically larger rainfall intensities on the upwind side than would
otherwise occur in flat terrain.
The space-time pattern will vary between every individual rainfall event that occurs in a
catchment. In catchments that are subject to orographic influences, there will commonly be
similarity in the space-time pattern of rainfall between many of the different events that are
observed over the catchment. This will typically be the case for catchments that are subject
to flood producing rainfall events that have similar hydrometeorological influences. For
example, the spatial patterns of rainfall for different events may often demonstrate similar
ratios of total rainfall depth in the higher elevations of the catchment to total rainfall depth at
lower elevations.
The spatial patterns of rainfall in catchments that are influenced by orographic effects
represent a systematic bias away from a completely uniform spatial pattern. The influence of
this systematic bias in spatial pattern of rainfall should be explicitly considered in design
flood estimation. Other hydrometeorological influences, such as the distance from a
significant moisture source like the ocean may also give rise to systematic bias in the spatial
pattern of rainfall.
For ease of modelling, storm events can be conceptualised and represented by four main
event characteristics that are analysed and modelled separately:
• Storm (or burst) duration;
• Total rainfall depth (or average intensity) over the event duration, at a point or over a
catchment;
• Temporal distribution (or pattern) of rainfall at a point or over the catchment during the
event; and
• Spatial distribution (or pattern) of rainfall over the catchment during the event.
These rainfall event characteristics are discussed in the following sections of Book 2,
Chapter 2.
Two different types of rainfall events are relevant for design flood estimation: complete storm
events and internal bursts of intense rainfall. While complete storm events are the
theoretically more appropriate form of event for flood simulation, the internal rainfall bursts of
given duration, regardless of where they occur within a storm event, lend themselves more
readily to statistical analysis. The Intensity Frequency Duration (IFD) data covered in Book
2, Chapter 3 are thus for rainfall burst events.
The design rainfall data provided in ARR covers the range of rainfall burst durations from 1
minute to 7 days.
Rainfall events that differ in their space-time patterns may produce very different hydrographs at the outlet of the
catchment. Both the runoff generation and runoff-routing processes in catchments are
typically non-linear, so a space-time pattern that exhibits more variability will normally
generate a higher volume of runoff and larger peak flow at the catchment outlet than a
space-time pattern that is more uniform.
• the presence of reservoirs and lakes, for which all rainfall on the water surface is
converted to runoff;
• the presence of dams, weirs, drains and other flow regulating structures;
• the arrangement of the drainage network of the catchment, the dependency of alternative
flow paths on event magnitude, and differences in contributing area with length of network;
• significant variations in antecedent climatic conditions across the catchment prior to the
events; and
The required resolution of rainfall models to adequately reflect the variability of rainfall in
historical rainfall events has been investigated by Umakhanthan and Ball (2005) for the
Upper Parramatta River catchment. Umakhanthan and Ball (2005) categorised the
variability of recorded storm events in the spatial and temporal domains and confirmed that
the degree of spatial and temporal resolution of rainfall inputs to flood estimation models can
have a significant impact on resulting flood estimates. A range of other studies have come to
similar conclusions but have found it difficult to give more than qualitative guidance on the
required degree of spatial and temporal resolution of rainfall for different modelling
applications. The conclusions can be summarised in qualitative terms as:
• “Spatial rainfall patterns are understood to be a dominant source of variability for very
large catchments and for urban catchments but for other hydrological contexts, results
vary. Much of this knowledge is either site specific or is expressed qualitatively” (Woods
and Sivapalan, 1999).
• Where short response times are involved in urban catchments, inadequate representation
of temporal variability of rainfall can lead to significant underestimation of design flood
peaks (Ball, 1994). More generally, the importance of temporal variability of rainfall in flood
modelling depends on the degree of ‘filtering’ of shorter term rainfall peaks through
catchment routing processes (ie. the amount of storage in the catchment system) and the
interaction of flood contributions from different parts of a catchment system.
Sensitivity analyses can be applied to determine for a specific application the influence of the
adopted spatial and temporal resolution of design rainfalls on flood estimates and their
uncertainty bounds.
• Deriving average values of the point design rainfalls for the total catchment upstream of
each location;
• Conversion of average point design rainfall values to areal estimates by multiplying by the
ARF applicable to the total catchment area upstream of each location; and
• Adoption of space-time patterns of rainfall relevant to the total catchment area upstream of
each location.
It is commonly found that design flood estimates are required at one or more locations in a
catchment where flow gauges are not located. If so, it will be necessary to use the above
procedure to derive design rainfalls for the catchment upstream of each gauge location so
that the rainfall-based design flood estimates can be verified against estimates derived from
Flood Frequency Analysis at each flow gauge. Different sets of design rainfall intensities,
ARF and space-time patterns should be calculated for each of the catchments draining
to the other locations of interest, which are not at flow gauges.
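The arithmetic implied by these steps can be sketched as follows; the point depths, weights and ARF value are placeholders rather than values from the design rainfall or ARF chapters:

```python
# Hypothetical point design rainfall depths (mm) at locations covering the catchment,
# with area weights (e.g. Thiessen or grid-cell weights) that sum to 1.0.
point_depths = [98.0, 105.0, 112.0, 121.0]
weights = [0.30, 0.25, 0.25, 0.20]

# Step 1: area-weighted average of the point design rainfalls for the catchment.
average_point_depth = sum(w * d for w, d in zip(weights, point_depths))

# Step 2: convert the point estimate to an areal estimate using the ARF for the
# total catchment area and the duration of interest.
arf = 0.92  # placeholder areal reduction factor
catchment_average_depth = average_point_depth * arf

print(f"Weighted average point depth: {average_point_depth:.1f} mm")
print(f"Catchment average design rainfall: {catchment_average_depth:.1f} mm")
```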
The impact of climate change on storm frequency, mechanism, spatial and temporal
behaviour is less understood.
Work by Abbs and Rafter (2009) suggests that increases are likely to be more pronounced in
areas with strong orographic enhancement. There is insufficient evidence to confirm whether
this result is applicable to other parts of Australia. Work by Wasko and Sharma (2015)
analysing historical storms found that, regardless of the climate region or season,
temperature increases are associated with patterns becoming less uniform, with the largest
fractions increasing in rainfall intensity and the lower fraction decreasing.
The implications of these expected climate change impacts on the different design rainfall
inputs to catchment modelling are discussed further in the relevant sub-sections of the
following chapters.
2.8. References
Abbs, D. and Rafter, T. (2009), Impact of Climate Variability and Climate Change on Rainfall
Extremes in Western Sydney and Surrounding Areas: Component 4 - dynamical
downscaling, CSIRO.
Ball, J.E. (1994), 'The influence of storm temporal patterns on catchment response', Journal
of Hydrology, 158(3-4), 285-303.
CSIRO and Australian Bureau of Meteorology (2007), Climate Change in Australia, CSIRO
and Bureau of Meteorology Technical Report, p: 140. www.climatechangeinaustralia.gov.au
Gray, W. and Seed, A.W. (2000), The characterisation of orographic rainfall, Meteorological
Applications, 7: 105-119.
Hoang, T.M.T., Rahman, A., Weinmann, P.E., Laurenson, E.M. and Nathan, R.J. (1999),
Joint Probability Description of Design Rainfalls, 25th Hydrology and Water Resources
Symposium, Brisbane, 1999.
Leonard, M., Lambert, M.F., Metcalfe, A.V. and Cowpertwait, P.S.P. (2008), A space-time
Neyman-Scott rainfall model with defined storm extent. Water Resources Research, 44(9),
1-10, http://doi.org/10.1029/2007WR006110
Lovejoy, S. and Schertzer, D. (2006), Multifractals, cloud radiances and rain, Journal of
Hydrology, 322: 59-88.
Marani, M. (2005), Non-power-law-scale properties of rainfall in space and time, Water
Resources Research, 41, W08413, doi:10.1029/2004WR003822.
Seed, A.W., Srikanthan, R., and Menabde, M. (2002), Stochastic space-time rainfall for
designing urban drainage systems. Proc. International Conference on Urban Hydrology for
the 21st Century, pp: 109-123, Kuala Lumpur.
Umakhanthan, K. and Ball, J.E. (2005), Rainfall models for catchment simulation. Australian
Journal of Water Resources, 9(1), 55-67.
Wasko, C. and Sharma, A. (2015), Steeper temporal distribution of rain intensity at higher
temperatures within Australian storms, Nature Geoscience, 8(7), 527-529.
Westra, S., Evans, J., Mehrotra, R., Sharma, A. (2013), A conditional disaggregation
algorithm for generating fine time-scale rainfall data in a warmer climate, Journal of
Hydrology, 479: 86-99
Chapter 3. Design Rainfall
Janice Green, Fiona Johnson, Catherine Beesley, Cynthia The
Chapter Status Final
Date last updated 14/5/2019
3.1. Introduction
Obtaining an estimated rainfall depth for a specified probability is an essential component of
the design of infrastructure including gutters, roofs, culverts, stormwater drains, flood
mitigation levees, retarding basins and dams.
If sufficient rainfall records are available, at-site frequency analysis can be undertaken to
estimate the rainfall depth corresponding to the specified design probability in some cases.
However, limitations associated with the spatial and temporal distribution of recorded rainfall
data necessitate the estimation of design rainfalls for most projects.
The purpose of this chapter is to outline the processes used to derive temporally and
spatially consistent design rainfalls for Australia by the Bureau of Meteorology. The classes
of design rainfall values for which estimates have been developed are described in Book 2,
Chapter 3, Section 2. The practitioner is advised that this chapter uses different frequency
descriptors (Table 2.3.1) to describe events from those used in the rest of this Guideline
(which uses the terminology of Figure 1.2.1).
Book 1, Chapter 6 summarises the current recommendations on how climate change should
be incorporated into design rainfalls for those situations where the design life of the structure
means that it could be affected by climate change.
Book 2, Chapter 3, Section 4 summarises the steps involved in deriving the frequent and
infrequent design rainfalls (also known as the Intensity Frequency Duration (IFD) design
rainfalls) for Australia. Book 2, Chapter 3, Section 5 and Book 2, Chapter 3, Section 6
describe how the very frequent and rare design rainfalls were estimated. The methods
adopted are only briefly outlined in these sections, with additional references provided to
facilitate access to further technical information for interested readers. More detail on each of
the methods is provided in Bureau of Meteorology (2016).
In Book 2, Chapter 3, Section 7 a summary of the methods adopted for the estimation of
Probable Maximum Precipitation is provided. Book 2, Chapter 3, Section 8 provides
information on the uncertainties associated with the design rainfalls and Book 2, Chapter 3,
Section 9 explains how to access estimates of each of the design rainfall classes.
There are five broad classes of design rainfalls that are currently used for design purposes,
generally categorised by frequency of occurrence. These are summarised below and
presented graphically in Figure 2.3.1. However, it should be noted that there is some overlap
between the classes. Different methods and data sets are required to estimate design
rainfalls for the different classes and these are discussed in the following sections. The
practitioner is advised that this chapter uses different frequency descriptors (Table 2.3.1)
to describe events from those used in the rest of this Guideline (which uses the terminology
of Figure 1.2.1).
As part of the ARR revision projects a summary of the scientific understanding of how
projected changes in the climate may alter the behaviour of factors that influence the
estimation of the design floods was undertaken. Climate change research undertaken as
part of the ARR revision projects has led to an interim recommendation to factor the design
rainfalls based on temperature scaling using temperature projections from the CSIRO future
climates tool. Advice on how to adjust design rainfalls for climate change is detailed in Book
1, Chapter 6.
Figure 2.3.2. Frequent and Infrequent (Intensity Frequency Duration) Design Rainfall Method
Table 2.3.2. Frequent and Infrequent (Intensity Frequency Duration) Design Rainfall Method
Step – Method/Data
Number of rainfall stations – Daily read: 8074 gauges; Continuous: 2280 gauges
Period of record – All available records up to 2012
Length of record used in analyses – Daily read: ≥ 30 years; Continuous: > 8 years
Source of data – Organisations collecting rainfall data across Australia
Series of extreme values – Annual Maximum Series (AMS)
Frequency analysis – Generalised Extreme Value (GEV) distribution fitted using L-moments
Extension of sub-daily rainfall statistics to daily read stations – Bayesian Generalised Least Squares Regression (BGLSR)
Regionalisation – Region of Influence (ROI)
Gridding – Regionalised at-site distribution parameters gridded using ANUSPLIN
Daily read rainfall gauges are read at 9:00 am each day and a total rainfall depth for the
previous 24 hours is reported (refer to Book 1, Chapter 4, Section 9).
Continuous rainfall stations measure rainfall depth at much finer time intervals. In Australia
there have been two main types of continuous rainfall stations as discussed below.
Dines Tilting Syphon Pluviographs (DINES) record rainfall on a paper chart which is then
digitised manually. Due to the limitations in the digitisation process, the minimum interval
at which rainfall data could be accurately provided was 5 or 6 minutes.
Since the 1990s the majority of Dines pluviographs have been replaced by Tipping Bucket
Raingauges (TBRG) which typically have a 0.2 mm bucket capacity. Each time the bucket
is filled the gauge tips creating an electrical impulse which is logged. Rainfall data from
TBRGs can be accurately provided for intervals of less than one minute (refer to Book 1,
Chapter 4, Section 9).
However, because of the relatively sparse spatial distribution of the RADAR network across
Australia and the short period of record, data from RADAR were not able to be used in the
estimation of the design rainfalls.
3.4.2.1.5. Meta-data
Meta-data provides essential information about the rainfall station such as the location of the
rainfall station, the type of instrumentation and data collection method. It therefore provides
context for the rainfall data collected at a station and an indication of its quality. At a
minimum, meta-data relating to location in terms of latitude and longitude were collated for
each rainfall station. However, any additional meta-data that were available including
elevation, details on siting and clearance and photographs were also collated.
Rainfall data collected by the Bureau of Meteorology are stored in the Australian Data
Archive for Meteorology (ADAM) which contains approximately 20 000 daily read rainfall
stations (both open and closed) starting in 1800; and nearly 1500 continuous rainfall stations
– using both DINES and TBRG instrumentation.
Under the terms of the Water Regulations 2008, water information (including rainfall data)
collected by organisations across Australia is required to be provided to the Bureau of
Meteorology. The rainfall data collected by organisations including local and state
government water agencies, hydropower generators and urban water utilities are stored in
the Australian Water Resources Information System (AWRIS) together with other water
information. At present, AWRIS contains:
Of particular importance to design rainfall estimation are the dense networks of continuous
rainfall stations operated by urban water utilities which provide data in areas of steep rainfall
gradients and urban areas.
The location and period of record of the daily read rainfall stations operated by the Bureau of
Meteorology are shown in Figure 2.3.3.
Figure 2.3.3 shows that the spatial coverage of the daily read rainfall stations across Australia is
reasonably good, especially over the eastern states and around the coast. Gaps in the
spatial coverage of the daily read rainfall station network occur in the eastern half of Western
Australia; the western and north eastern parts of South Australia; and the parts of the
Northern Territory that are removed from the road and rail networks.
Figure 2.3.5. Daily Read Rainfall Stations Used for ARR 1987 and ARR 2016 Intensity
Frequency Duration Data
The increase in daily read rainfall stations is due to the increased number of stations that
met the minimum period of record criterion.
Figure 2.3.6 shows that the inclusion of data from continuous rainfall stations operated by other
organisations has resulted in a significant increase in the spatial coverage of these data. In
particular, the spatial coverage along the east coast of Australia; the west coast of Tasmania;
and large areas in Western Australia has been improved.
Figure 2.3.6. Continuous Rainfall Stations Used for ARR 1987 and ARR 2016 Intensity
Frequency Duration Data
In Figure 2.3.7 the distribution of record lengths for daily read rainfall stations is shown. It
can be seen that, although there are a reasonable number of long term stations,
approximately half of the daily read rainfall stations have less than 10 years of record.
While there are a small number of continuous rainfall stations with more than 70 years of
record, the majority of stations have less than 40 years of record and a high proportion have
less than 10 years of record. Figure 2.3.8 shows the distribution of available length of record
for the continuous stations.
Figure 2.3.9. Number of Long-term Daily Read Stations Used for ARR 1987 and ARR 2016
Intensity Frequency Duration Data
For the continuous rainfall data, the inclusion of stations operated by other organisations and
the nearly 30 years of additional data resulted in a significant increase in both the length of
record available and the number of rainfall stations that met the minimum record length
criterion (Figure 2.3.10).
Figure 2.3.10. Length of Record of Continuous Rainfall Stations Used for ARR 1987 and
ARR 2016 Intensity Frequency Duration Data
The quality controlling undertaken on both the daily read and continuous rainfall data is
summarised below (refer to Green et al. (2011) for more information). The quality controlled
database prepared for the estimation of the design rainfalls will be archived in AWRIS and
made available from the Bureau of Meteorology’s website via the Water Data Online product.
• time shifts.
• identify gross errors - data inconsistent with neighbouring records but not captured by
either of the above two categories.
Manual correction of gross errors identified during the automated quality controlling
procedures was facilitated through the use of the Bureau of Meteorology’s Quality Monitoring
System. The Bureau of Meteorology’s Quality Monitoring System is a suite of programs that
has functionalities to map the suspect value in relation to nearby stations and to link to
Geographic Information System (GIS) data from other systems including RADAR, Satellite
Imagery and Mean Sea Level Pressure Analysis.
In order to reduce the amount of continuous rainfall data that needed to be quality controlled
to a manageable volume, only a subset of the largest rainfall events was quality controlled.
The subset was created by extracting the number of highest rainfall records equal to three
times the number of years of record at each site for each duration being considered.
Each continuous rainfall value in the data subset that was flagged as being spurious by the
automated quality controlling procedures was subjected to manual quality controlling. The
manual quality controlling of the data was undertaken in order to determine whether the
flagged value was correct or not. The manual quality controlling procedure adopted involved
comparing 9:00 am to 9:00 am continuous rainfalls with daily (also 9:00 am to 9:00 am)
rainfalls at the co-located daily read rainfall station. For continuous rainfall sites with no co-
located daily site, the continuous rainfall record was compared with the daily rainfall record
of the nearest site. The continuous rainfall value was not modified in any way - the
comparison with daily values was made in order to assess whether it was valid or not.
Where it was assessed that the flagged value was definitely incorrect it was excluded from
the analyses, otherwise values were retained in the continuous rainfall database.
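A sketch of this comparison is given below, assuming the continuous record is available as a timestamped pandas series of rainfall depths; the two millimetre tolerance used to flag a value for manual checking is an assumption made for the example:

```python
import numpy as np
import pandas as pd

def nine_am_totals(continuous: pd.Series) -> pd.Series:
    """Aggregate a timestamped rainfall series (mm) to 9:00 am to 9:00 am daily totals."""
    shifted = continuous.copy()
    # Shifting the timestamps back by 9 hours aligns each 9 am to 9 am window with a calendar day.
    shifted.index = shifted.index - pd.Timedelta(hours=9)
    return shifted.resample("D").sum()

def flag_mismatches(continuous: pd.Series, daily_read: pd.Series, tol_mm: float = 2.0) -> pd.DataFrame:
    """Flag dates where the continuous gauge totals disagree with the co-located daily read gauge."""
    merged = pd.concat(
        [nine_am_totals(continuous).rename("continuous"), daily_read.rename("daily_read")], axis=1
    ).dropna()
    return merged[(merged["continuous"] - merged["daily_read"]).abs() > tol_mm]

# Synthetic 6 minute pluviograph record and a matching daily read series with one deliberate error.
idx = pd.date_range("2001-01-01 00:00", "2001-01-10 23:54", freq="6min")
rng = np.random.default_rng(1)
pluvio = pd.Series(rng.exponential(0.02, size=len(idx)), index=idx)
daily_read = nine_am_totals(pluvio)
daily_read.iloc[3] += 5.0

print(flag_mismatches(pluvio, daily_read))
```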
3.4.2.5.3. Meta-data
The meta-data associated with each of the rainfall stations were also checked. For the
Bureau of Meteorology operated rainfall stations, the Bureau of Meteorology’s meta-data
database, SitesDB, includes details of the station’s location in latitude and longitude, and
elevation. For rainfall stations operated by other organisations, meta-data were provided with
the rainfall data and stored in AWRIS. Gross error checks on station locations and elevation
were performed by comparing elevations derived using a Digital Elevation Model (DEM) to
those recorded in the station’s meta-data. Checks of latitude and longitude were also carried
out by plotting the latitudes and longitudes in GIS. Revisions to station locations or
elevations were carried out using Google Earth and information on the station provided in
the Bureau of Meteorology’s station meta-data catalogue.
For the limited number of closed stations for which an elevation was not included in the
meta-data, the station elevation was extracted from the Geoscience Australia 9 second
DEM based on the latitude and longitude.
If there were evidence of non-stationarity in the recorded rainfalls, then possibly only a portion of the observed record
should have been used in deriving the design rainfalls. This is because a key assumption in
the statistical methods adopted for the derivation of the design rainfalls is that the data are
stationary. In order to determine whether the complete period of available rainfall records
could be adopted in estimating the design rainfalls, it was necessary to assess the degree of
non-stationarity present in the historic record at rainfall stations across Australia (Green and
Johnson, 2011).
Two methods were used to establish if there are trends in the Annual Maximum Series of
rainfalls for Australia. The first examined the records at individual stations which were tested
to assess trends in the time series of the annual maximum rainfalls and changes in the
probability distributions fitted to the annual maxima to estimate design rainfall quantiles. The
second method used an area averaged approach to check for regional trends in the number
of exceedances of pre-determined thresholds. The approach was based on that carried out
by Bonnin et al. (2010) to assess trends in large rainfall events in the USA as part of the
revisions by the National Oceanic and Atmospheric Administration to design rainfalls.
It was concluded that although some stations showed strong trends in the annual maximum
time series, particularly for short durations and more frequent events, the magnitude of these
changes was within the expected accuracy of the fitted design rainfall relationships. It was
therefore considered appropriate to assume stationarity and use the complete period of
record at all stations in the estimation of the design rainfalls.
The extreme value series can be defined using the Annual Maximum Series (AMS) or the
Partial Duration Series (PDS) (also known as Peak over Threshold) (more information can
be found in Book 3, Chapter 2). For the frequent and infrequent design rainfalls, the AMS
was used to define the extreme value series because of its lack of ambiguity in defining the
series; its relatively simple application and the problem of bias associated with the PDS for
less frequent AEPs.
It should be noted that in extracting the AMS, the focus was on obtaining the largest rainfall
depth in each year for each of the durations considered. Therefore the extracted depths
comprised both total storm depths and bursts within storms.
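A minimal sketch of extracting an AMS from a daily rainfall record is shown below; the synthetic record, the calendar-year convention and the durations chosen are assumptions made for the example:

```python
import numpy as np
import pandas as pd

def annual_maximum_series(daily_rain: pd.Series, duration_days: int) -> pd.Series:
    """Annual maxima of rolling rainfall totals for a given duration (bursts or whole storms)."""
    rolling_totals = daily_rain.rolling(window=duration_days).sum()
    return rolling_totals.groupby(rolling_totals.index.year).max()

# Synthetic daily record; in practice this would be a quality controlled gauge record.
idx = pd.date_range("1990-01-01", "1999-12-31", freq="D")
rng = np.random.default_rng(7)
daily_rain = pd.Series(rng.gamma(0.3, 8.0, size=len(idx)), index=idx)

for duration in (1, 2, 3):
    ams = annual_maximum_series(daily_rain, duration)
    print(f"{duration} day AMS for the first three years:\n{ams.head(3)}\n")
```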
In order to reduce the uncertainty in the design rainfall estimates, minimum station record
lengths were adopted. The criteria were a minimum of 30 years of record for daily read
stations and more than 8 years of record for continuous stations (Table 2.3.2).
These criteria were selected on the basis of optimising the spatial coverage of the rainfall
stations while ensuring that there were sufficient AMS values at each site to undertake
frequency analysis.
The daily read rainfall data are for the restricted period from 9:00 am to 9:00 am rather than
for the actual duration of the event. As this may not lead to the largest rainfall total, it was
necessary to convert these ‘restricted’ daily read rainfall depths to unrestricted rainfall
depths. In order to do this, ‘restricted’ to unrestricted conversion factors were estimated
using co-located daily read and continuous rainfall gauges at a number of locations around
Australia of differing climatic conditions. The resultant factors are shown in Table 2.3.4.
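One way such a factor could be estimated at a site is sketched below for the 24 hour duration, using a single continuous record to emulate both the continuous and the co-located daily read gauge; defining the factor as the mean ratio of unrestricted to restricted annual maxima is an assumption made for the illustration:

```python
import numpy as np
import pandas as pd

def restricted_to_unrestricted_factor(pluvio: pd.Series) -> float:
    """Mean ratio of unrestricted to restricted annual maximum 24 hour rainfall at one site."""
    # Restricted: fixed 9 am to 9 am daily totals, as reported by a daily read gauge.
    restricted = pluvio.copy()
    restricted.index = restricted.index - pd.Timedelta(hours=9)
    restricted_ams = restricted.resample("D").sum().groupby(lambda t: t.year).max()

    # Unrestricted: sliding 24 hour totals ending at any time step of the continuous record.
    unrestricted = pluvio.rolling("24h").sum()
    unrestricted_ams = unrestricted.groupby(unrestricted.index.year).max()

    return float((unrestricted_ams / restricted_ams).mean())

idx = pd.date_range("1995-01-01", "1999-12-31 23:54", freq="6min")
rng = np.random.default_rng(3)
pluvio = pd.Series(rng.exponential(0.015, size=len(idx)), index=idx)
print(f"Estimated restricted-to-unrestricted factor: {restricted_to_unrestricted_factor(pluvio):.3f}")
```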
While for durations of one day and longer this was a fairly straightforward approach, for sub-
daily durations the scarcity of long-term continuous rainfall records meant that an alternative
approach was needed to supplement the available data. For the IFD revision project, a
Bayesian Generalised Least Squares Regression (BGLSR) approach was adopted, a
summary of which is provided in Book 2, Chapter 3, Section 4; more details can be found in
Johnson et al. (2012a) and Haddad et al. (2015).
The spatial coverage of sub-daily rainfall stations is considerably less than that of the daily
read stations (refer to Figure 2.3.4 and Figure 2.3.5 ). Therefore, a method was needed to
improve the spatial coverage of the sub-daily data. This is most commonly done using
information from the daily read stations with statistics of sub-daily data being inferred from
those of the daily data. Previously adopted techniques for predicting rainfall depths at
durations below 24 hours from those for the 24, 48 and 72 hour durations have included
factoring of the 24 hour IFDs; principal component analysis followed by regression; and
Partial Least Squares Regression. However, a major weakness of these previously adopted
approaches is their inability to account for variation in record lengths from site to site and
inter-station correlation.
The approach adopted for the frequent and infrequent design rainfalls was Bayesian
Generalised Least Squares Regression (BGLSR) as it accounts for possible cross-correlation
and unequal variance between stations by constructing an error co-variance matrix and can
explicitly account for sampling uncertainty and inter-site dependence. Details of the BGLSR
approach can be found in Reis et al. (2005), Madsen et al. (2002) and Madsen et al. (2009).
In Australia, Haddad and Rahman (2012a) used BGLSR to obtain regional relationships to
estimate peak streamflow in ungauged catchments and for pilot studies for the design rainfall
project (Haddad et al., 2009; Haddad et al., 2011; Haddad et al., 2015).
The regression model takes the general form:

\[
y_i = \sum_{j=1}^{k} \beta_j X_{ij} + \delta_i + \varepsilon_i \tag{2.3.1}
\]

where \(X_{ij}\) (j = 1, …, k) are the k predictor variables, \(\beta_j\) are the parameters of the model that
must be estimated, \(\varepsilon\) is the sampling error and \(\delta\) is the model error.
A further advantage of the BGLSR is that the Bayesian formulation allows for the separation
of sampling and statistical modelling errors. This is important because it was found that the
sampling errors dominate the total error in the statistical model. The BGLSR produces
estimates of the standard error in:
• the predicted values at-site used in establishing the regression equations; and
• the predicted values at new sites (that is, sites not used in deriving the regression). In the
application of the BGLSR these are the daily rainfall stations where the predictions of sub-
daily rainfalls statistics are required.
The error variances for the predictions are comprised of the regional model error and the
sampling variance.
The errors in the BGLSR model are assumed to have zero mean and the co-variance
structure described in Equation (2.3.2).
\[
\operatorname{Cov}\left[\varepsilon_i, \varepsilon_j\right] =
\begin{cases}
\sigma_i^{2}, & i = j \\
\rho_{ij}\,\sigma_i\,\sigma_j, & i \neq j
\end{cases}
\qquad
\operatorname{Cov}\left[\delta_i, \delta_j\right] =
\begin{cases}
\sigma_\delta^{2}, & i = j \\
0, & i \neq j
\end{cases}
\tag{2.3.2}
\]

where \(\sigma_i^{2}\) is the sampling error variance at site i, \(\rho_{ij}\) is the correlation coefficient between
sites i and j, and \(\sigma_\delta^{2}\) is the model error variance. For the Bayesian framework introduced by Reis
et al. (2005), the parameters of the model (β) are modelled with a multivariate normal
distribution using a non-informative prior. A quasi-analytic approximation to the Bayesian
formulation of the GLSR has been developed by Reis et al. (2005) to solve for the posterior
distributions of the mean and variance for β.
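The full BGLSR involves a Bayesian treatment of the model and sampling errors; the sketch below shows only the underlying generalised least squares estimator for a model of the form of Equation (2.3.1), with an error covariance structured as in Equation (2.3.2), synthetic data and an assumed (rather than estimated) model error variance:

```python
import numpy as np

def gls_estimate(X, y, sampling_cov, model_error_var):
    """Generalised least squares estimate of beta for y = X @ beta + delta + epsilon."""
    n = len(y)
    # Total error covariance: sampling error covariance plus independent model error.
    lam = sampling_cov + model_error_var * np.eye(n)
    lam_inv = np.linalg.inv(lam)
    beta = np.linalg.solve(X.T @ lam_inv @ X, X.T @ lam_inv @ y)
    beta_cov = np.linalg.inv(X.T @ lam_inv @ X)   # uncertainty in the regression coefficients
    return beta, beta_cov

# Synthetic example: predict a sub-daily rainfall statistic from the 24 hour statistic and elevation.
rng = np.random.default_rng(0)
n_sites = 40
X = np.column_stack([np.ones(n_sites),                   # intercept
                     rng.normal(0.2, 0.05, n_sites),     # e.g. 24 hour L-CV
                     rng.normal(0.0, 1.0, n_sites)])     # e.g. scaled elevation
true_beta = np.array([0.05, 0.8, 0.01])
sampling_var = rng.uniform(0.0005, 0.002, n_sites)       # larger at short-record sites
y = X @ true_beta + rng.normal(0.0, np.sqrt(sampling_var)) + rng.normal(0.0, 0.01, n_sites)

beta_hat, _ = gls_estimate(X, y, np.diag(sampling_var), model_error_var=0.01 ** 2)
print("Estimated coefficients:", beta_hat.round(3))
```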
The rainfall statistics estimated were:
• the mean;
• L-CV; and
• L-Skewness.
These three statistics can then be used to define the parameters of any appropriate
probability distribution which in the case of the design rainfalls had been shown to be the
GEV distribution.
The initial work required to apply the BGLSR was to determine the appropriate predictors
(i.e. X from Equation (2.3.1)) to estimate the three rainfall statistics listed above. A review of
literature and meteorological causative mechanisms selected a number of site and rainfall
characteristics for use as possible predictors as reported in Johnson et al. (2012a). These
predictors were:
• Elevation;
• Slope;
• Aspect;
• Rainfall statistics (mean, L-CV and L-Skewness) for the 24, 48 and 72 hour duration
events.
Haddad and Rahman (2012b) provide extensive details of the cross-validated predictor
selection process for each of the study areas. It was found that the most important predictor
is the 24 hour rainfall statistic. However, the performance of the model was not changed
significantly by including all predictors, so this approach was adopted.
As well as determining the optimum combination of predictor variables, the testing for the
BGLSR needed to determine the number of stations to contribute to each regression
equation. Ideally, the number of stations in each analysis area would be maximised to
improve the accuracy of the regression equations. However the number of stations is limited
to approximately 100 by the requirement for the error co-variance matrices to be invertible.
The delineation of the analysis areas thus needed to balance these two competing
requirements.
It was also important that stations were grouped into analysis areas where the causative
mechanisms for large rainfall events are similar. The rainfall stations were grouped primarily
according to climatic zones by considering the seasonality of rainfall events and mean
annual rainfalls. Australian drainage divisions were also used to guide the division of larger
climatic zones into smaller areas over which the BGLSR calculations are tractable, such as
in the northern tropics where three analysis areas have been adopted (NT, GULF and
NORTH_QLD). The final analysis areas are shown in Figure 2.3.11. The South East Coast
and South Western WA areas are considered Regions of Interest. A 0.2 degree buffer has
been used in assigning stations to each analysis area to provide a smooth transition
between adjacent areas.
For each analysis area, a regression relationship was developed which could be applied to
all stations within the analysis area. Where the density of stations was high, a Region Of
Influence (ROI) approach (Burn, 1990) was adopted such that each station has its own ROI.
This allowed the regression equations to smoothly vary across the data dense analysis
areas. For sparser analysis areas, a clustering, or fixed region, approach was adopted such
that stations were grouped by spatial proximity into analysis areas with rigid boundaries. All
stations in each analysis area were used to derive one regression equation that was then
adopted for the predictions at those stations.
To improve the predictions from the BGLSR it was desirable that the distribution of each
predictor variable was relatively symmetric and preferably approximately normally
distributed. For each analysis area the distribution of the predictor variables from all sites in
the area were examined using histograms and quantile-quantile plots. For predictors that
appeared to be strongly skewed, a range of transformations were trialled to attempt to
reduce the skewness of the variable. The transformations included a natural logarithm,
square root transformation and Box-Cox (i.e. power) transformation. In general the log
transformation and the Box-Cox transformation were successful in reducing the skewness of
the predictors.
After determining the regression coefficients for the analysis areas, these coefficients were
combined with the set of predictors for the daily station locations to produce the estimates of
the sub-daily rainfall statistics. There are no observations of the sub-daily rainfall statistics to
which these predictions at daily sites can be compared. However “sanity” checks on the
values were carried out by comparing the estimates to the 24 hour rainfall statistics and to
the possible range of values for L-CV and L-Skewness (both limited to -1 to 1).
The result of using estimated sub-daily rainfall statistics was that the number of locations
with sub-daily information was increased from approximately 2300 to approximately 9700
when both the daily and continuous rainfall station locations are used. This substantially
increased density of sub-daily rainfall data assisted in the subsequent gridding of the rainfall
quantiles across Australia described in Book 2, Chapter 3, Section 4.
3.4.4. Regionalisation
Regionalisation recognises that for stations with short records, there is considerable
uncertainty when estimating the parameters of probability distributions and short records can
bias estimates of rainfall statistics. To overcome this, it is assumed that information can be
combined from multiple stations to give more accurate estimates of the parameters of the
extreme value probability distributions. One approach that is widely used to reduce the
uncertainty and overcome bias in estimating rainfall quantiles is regional frequency analysis,
also known as regionalisation.
For the design rainfalls, regionalisation was used to estimate the L-CV and L-Skewness with
more confidence. The regionalisation approach adopted is generally called the “index flood
procedure” (Hosking and Wallis, 1997). This approach assumes that sites can be grouped
into homogenous regions, such that all sites in the region have the same probability
distribution, other than a scaling factor. The scaling factor is termed the index flood or in this
case, since the regionalisation is of rainfall data, the “index rainfall”. The index rainfall is the
mean (that is, first L-moment) of the extreme value series data at the station location.
The homogenous regions for the frequency analysis can be defined in a number of ways.
Cluster and partitioning methods divide the set of all stations into a fixed number of
homogenous groups (Hosking and Wallis, 1997) where generally every site is assigned to
one group. Alternatively, a ROI approach (Burn, 1990) can be adopted, such that for each
station an individual homogenous region is defined. Each ROI will contain a potentially
unique set of sites, with each site possibly contributing to multiple ROIs.
For the design rainfalls, the station point estimates were regionalised using a ROI as the
advantage of this approach is that the region sizes can be easily varied according to station
density and the available record lengths. The assumptions of the approach are, firstly, that
the specified probability distribution (GEV in the case of the AMS) is appropriate; that the
region is truly homogenous; and, finally, that sites are independent or that their dependence
is quantified.
In the application of the ROI method, it was first necessary to establish how big the ROIs
should be. The size of the ROI can be defined in two ways; either using the number of
stations included in the region or alternatively by calculating the total number of station-years
in the region as the sum of the record lengths of the individual stations included in the ROI.
Region sizes from 1 to 50 stations and from 50 to 5000 station-years were investigated to
establish the optimum ROIs for estimating rainfall quantiles across Australia using a simple
circular ROI and the Pooled Uncertainty Measure (PUM) (Kjeldsen and Jones, 2009). The
minimum PUM value occurs at the region size that gives the optimum trade-off between bias
and variance. When considering the region
defined using the number of stations it was found that a region of 8 stations performed best.
Given that the average record length for stations used in the analysis was 66 years, a region
of 8 stations will have on average 528 years of data which is consistent with the region size
using the station-year criteria. The findings were generally independent of rainfall event
duration and frequency.
Defining regions in terms of station-years is attractive as this approach can adapt to different
station densities and station record lengths. Given the similar results from both methods, the
station-years definition for the region size was adopted.
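The idea of growing a region of influence until a station-years target is met can be sketched as below; the station list is synthetic, the great-circle distance metric is an assumption for the example, and the 500 station-years target reflects the region size adopted in this section:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def region_of_influence(target, stations, target_station_years=500):
    """Nearest stations to the target whose record lengths sum to the station-years target."""
    distances = [haversine_km(target["lat"], target["lon"], s["lat"], s["lon"]) for s in stations]
    roi, years = [], 0
    for i in np.argsort(distances):
        roi.append(stations[i])
        years += stations[i]["record_years"]
        if years >= target_station_years:
            break
    return roi, years

# Placeholder station list (location and record length only).
rng = np.random.default_rng(11)
stations = [{"id": k,
             "lat": -27.5 + rng.uniform(-2, 2),
             "lon": 152.5 + rng.uniform(-2, 2),
             "record_years": int(rng.integers(10, 120))} for k in range(60)]

roi, years = region_of_influence(stations[0], stations)
print(f"ROI for station 0: {len(roi)} stations, {years} station-years")
```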
After finalising the optimum region size, a number of geographic and non-geographic
similarity measures were investigated as methods to define membership of each ROI. Three
different alternatives for defining the ROIs using geographical similarity were investigated:
• Distance between sites (in kilometres) defined using latitude and longitude;
• Euclidean distance between sites where distance was defined using latitude, longitude
and scaled elevation (Hutchinson, 1998); and
The non-geographic similarity measures investigated included:
• Elevation;
• Aspect;
• Slope;
The results of the trialling showed that the best results were provided using:
• For each station location, a circular ROI was expanded until 500 station-years of record
were achieved. The resultant region was tested for homogeneity using the H measure of
Hosking and Wallis (1997);
• If the region was not homogenous the stations in the regions were checked according to
the discordancy measures of Hosking and Wallis (1997) and the region membership
revised where appropriate;
• The average L-CV for each region was calculated using a weighted average of the L-CV
at all stations in the region, with the weights proportional to the station lengths. This was
repeated for the L-Skewness; and
• The regionalised L-CV and L-Skewness were used to estimate the scale (α) and shape (κ)
parameters of the growth curve (scaled GEV distribution) at each location.
The regions defined for the 24 hour duration rainfall data were used for all daily and sub-
daily durations. More details on the regionalisation can be found in Johnson et al. (2012b).
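A sketch of converting regionalised L-moment ratios into growth curve parameters and growth factors is given below, using Hosking's approximations for obtaining GEV parameters from L-moments; the regionalised L-CV and L-skewness values are placeholders, and the curve is scaled so that its mean (the index rainfall) equals one, consistent with Equation (2.3.3) later in this chapter:

```python
import math

def gev_growth_curve(l_cv: float, l_skew: float):
    """GEV growth curve parameters (location, scale, shape) from regionalised L-CV and L-skewness."""
    # Hosking's approximation for the shape parameter from L-skewness.
    c = 2.0 / (3.0 + l_skew) - math.log(2.0) / math.log(3.0)
    kappa = 7.8590 * c + 2.9554 * c * c
    g = math.gamma(1.0 + kappa)
    # With the mean scaled to 1, the second L-moment equals the L-CV.
    alpha = l_cv * kappa / ((1.0 - 2.0 ** (-kappa)) * g)
    xi = 1.0 - alpha * (1.0 - g) / kappa      # location parameter; cf. Equation (2.3.3)
    return xi, alpha, kappa

def growth_factor(xi, alpha, kappa, aep):
    """Growth curve quantile for a given Annual Exceedance Probability."""
    f = 1.0 - aep                              # non-exceedance (cumulative) probability
    return xi + alpha * (1.0 - (-math.log(f)) ** kappa) / kappa

xi, alpha, kappa = gev_growth_curve(l_cv=0.18, l_skew=0.12)   # placeholder regionalised values
for aep in (0.5, 0.1, 0.01):
    print(f"{aep:.0%} AEP growth factor: {growth_factor(xi, alpha, kappa, aep):.2f} "
          "(multiplied by the index rainfall to give the quantile)")
```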
3.4.5. Gridding
The regionalisation process resulted in estimates of the GEV parameters at all station
locations, which were combined with the mean of the extreme value series at that site to
estimate rainfall quantiles for any required exceedance probability. However, frequent and
infrequent design rainfall estimates are required across Australia, not just at station locations
and therefore the results of the analyses needed to be extended in some way to ungauged
locations.
Qualitative assessments were also conducted by preparing maps which compared the index
rainfall derived from at-site frequency analysis of rainfall records, the length of record
available at each station, and the spatial density of the rainfall gauge network to the gridded
index rainfalls produced by ANUSPLIN for daily durations.
The final IFD grids were produced by the application of ANUSPLIN using a 0.025 degree
DEM resolution and adopting 3570 knots with no transformation of the data. More details on
the gridding approach adopted can be found in The et al. (2012), The et al. (2014) and
Johnson et al. (2015).
\[
\xi = 1 - \frac{\alpha\left[1 - \Gamma(1+\kappa)\right]}{\kappa} \tag{2.3.3}
\]

where \(\xi\) is the location parameter for the regionalised growth curve and \(\Gamma\) represents the
Gamma function.
where \(q(F)\) is the quantile function of the growth curve for the cumulative probability \(F\).
where \(q(F)\) is the quantile function of the scaled growth curve, which is multiplied by the
index rainfall.
Equation (2.3.6) expresses the sub-hourly rainfall intensity in terms of the 60 minute rainfall
intensity, where \(I_d\) is the sub-hourly rainfall intensity for duration \(d\) and \(I_D\) is the 60 minute
rainfall intensity (i.e. duration \(D\) is 60 minutes).
Smoothing was done by applying a sixth order polynomial at each grid point to all the
standard durations from one minute up to seven days.
Inconsistencies with respect to duration (rainfall depths at shorter durations exceeding those
at longer durations) were also found and were addressed.
Inconsistencies were detected by subtracting each grid from a longer duration grid at the
same probability and checking for negative values. Inconsistencies were addressed by
adjusting the longer duration rainfall upwards so that the ratio of shorter duration rainfall to
the longer duration rainfall equals 0.99:

\[
\frac{\text{Rainfall depth at the shorter duration}}{\text{Rainfall depth at the longer duration}} = 0.99
\]
The smoothing procedure was applied first to the original grids and the smoothed grids
adjusted for inconsistencies. The grids were smoothed once again and a final adjustment for
inconsistencies across durations was performed. The final grids were also checked for
inconsistencies across AEP.
Grids of the polynomial coefficients were prepared in order to enable IFDs for any duration to
be determined.
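A numpy sketch of the duration-consistency check and adjustment described above is shown below, using synthetic depth grids rather than the published IFD grids:

```python
import numpy as np

def enforce_duration_consistency(short_grid: np.ndarray, long_grid: np.ndarray) -> np.ndarray:
    """Raise the longer duration grid wherever the shorter duration depth exceeds it.

    Where an inconsistency is found, the longer duration depth is adjusted upwards so that
    the ratio of shorter to longer duration rainfall equals 0.99.
    """
    inconsistent = short_grid > long_grid            # cells where (long - short) is negative
    adjusted = long_grid.copy()
    adjusted[inconsistent] = short_grid[inconsistent] / 0.99
    return adjusted

# Synthetic 1% AEP depth grids (mm) for a shorter and a longer duration.
rng = np.random.default_rng(5)
depth_short = rng.uniform(80, 120, size=(4, 4))
depth_long = depth_short + rng.uniform(-5, 30, size=(4, 4))   # a few cells deliberately inconsistent

depth_long_fixed = enforce_duration_consistency(depth_short, depth_long)
print("Cells adjusted:", int((depth_long_fixed != depth_long).sum()))
```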
3.4.6. Outputs
The method described in Book 2, Chapter 3, Section 4 produced frequent and infrequent
design rainfall estimates across Australia. The design rainfall estimates are provided both as
rainfall depths in millimetres (mm) and rainfall intensities in millimetres per hour (mm/hr) for
the standard durations and standard probabilities described in Table 2.3.5.
Design rainfalls for the three month Average Recurrence Interval (or 4 EY) have not been
previously available, with agencies giving their own advice on the approach for estimating
very frequent design rainfalls. To address this need, estimates for probabilities more frequent
than 1 EY have been derived.
To ensure consistency between the very frequent design rainfalls and the frequent and
infrequent design rainfall, the overall approach adopted for the very frequent design rainfalls
was very similar to that adopted for the frequent and infrequent design rainfall. However,
some modifications to the approach were necessary because of the increased frequency of
occurrence that was being considered. A summary of the method is presented in Table 2.3.6
and in Book 2, Chapter 3, Section 5 to Book 2, Chapter 3, Section 5. Further details can be
found in The et al. (2015).
Number of stations: Continuous – 2722
Period of record: All available records up to 2012
Length of record used in analyses: Daily read > 5 years
Additional stations could be used as the minimum number of years of record was reduced from 30 (for the frequent and infrequent design rainfalls) to five years for the very frequent design rainfalls. A threshold of five effective years was selected for daily and sub-daily sites as this was deemed to be statistically acceptable given the high frequency of the estimated exceedances compared to the previous 1 EY limit. The shorter record length allows greater use of available sites while still ensuring that there is sufficient information available to derive the more frequent probabilities from 12 EY to 2 EY.
Figure 2.3.12. Daily Read Rainfall Stations and Continuous Rainfall Stations Used for Very
Frequent Design Rainfalls
As a PDS approach was being adopted, it was necessary to define the threshold above which all events would be included. It was important to identify the number of values per year required to accurately estimate the more frequent IFDs. Given that the most frequent probability is 12 EY, a minimum of 12 events per year was used to adequately represent the at-site distribution for these higher frequency events.
An assumption of the method is that the events in the PDS are independent. To ensure that the events in the PDS were independent, a method was developed that provided a consistent and meteorologically rigorous approach to defining the independence of rainfall events across Australia. The event independence testing criteria used were based on the Minimum Inter-event Time (MIT) approach (Xuereb and Green, 2012). The analyses suggested that an MIT varying from two to six days with latitude across Australia was appropriate for event durations up to three days, while for durations longer than three days the MIT adopted was zero. For durations of less than one day, the MIT for the one day duration was adopted (Green et al., 2015).
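A simplified sketch of how an independent-event series could be assembled from a daily record using an MIT is shown below for illustration only (the function, its arguments and the event definition are assumptions; the operational analysis worked with duration-specific rainfall totals):

```python
import pandas as pd

def extract_event_series(daily_rain, mit_days, events_per_year=12):
    """Partial Duration Series of independent event totals from a daily rainfall record.

    daily_rain      -- pandas Series of daily rainfall (mm) indexed by date
    mit_days        -- Minimum Inter-event Time in days (two to six days in the ARR analyses)
    events_per_year -- number of events retained per year of record (12 for the MES)
    """
    rain = daily_rain.fillna(0.0)
    wet_days = rain[rain > 0]
    # A new event starts whenever the gap since the previous wet day reaches the MIT
    gap_days = wet_days.index.to_series().diff().dt.days.fillna(mit_days)
    event_id = (gap_days >= mit_days).cumsum()
    event_totals = wet_days.groupby(event_id.values).sum()

    years_of_record = (rain.index[-1] - rain.index[0]).days / 365.25
    n_keep = int(round(events_per_year * years_of_record))
    return event_totals.sort_values(ascending=False).head(n_keep)
```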
As discussed in Book 2, Chapter 3, Section 4 and Green et al. (2012b), testing of the most
appropriate distribution to adopt for both the AMS and the PDS was undertaken as part of
the derivation of the IFDs with results identifying the GEV distribution as the most
appropriate for the AMS and the GPA distribution for the PDS. However, as a monthly
exceedance data series was adopted for the very frequent design rainfalls there is some
added uncertainty; to address this, a comparison was conducted of the GEV and GPA
distributions. Twenty-four geographically distributed test sites with medium to long record
lengths were selected for assessing the relative fit of the distributions to the at-site data. The
test sites indicated that the GPA provides a closer fit to the site data in the majority of cases.
On the basis of this, the very frequent design rainfalls used the GPA distribution fitted to the
PDS for all stations which met the required record length.
Extracting 12 independent events per year of record for the MES introduced the issue of zero values being included in the PDS at some sites. This particularly occurred through the arid areas of central Australia to the west coast, where annual rainfall is highly variable and strong seasonality can occur. These areas have short wet seasons and, on average, can fail to produce 12 independent rain events in every year. However, given the previously defined minimum of 12 events per year, these zero-value events are considered part of the distribution. To manage the occurrence of zero values in the extreme value series, Hosking and Wallis (1997) suggest using a 'mixed distribution', or more correctly a conditional probability adjustment, that assigns a probability to the zero values and a cumulative distribution to the non-zero values, as seen in Equation (2.3.7) (Guttman et al., 1993).
where p is the probability of a zero rainfall value, which is estimated by dividing the number of zeros by the total number of events, and G(x) is the cumulative distribution function of the non-zero rainfall events. Using this approach, if the Non-Exceedance Probability (NEP) of interest is less than p, then the quantile estimate is zero; if the NEP is greater than p, the quantile is estimated from G(x) using the adjusted NEP shown in Equation (2.3.8).
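Equations (2.3.7) and (2.3.8) are not reproduced in this excerpt; the sketch below uses the standard conditional probability form F(x) = p + (1 − p)G(x) and adjusted NEP = (NEP − p)/(1 − p) described by Guttman et al. (1993), with a GPA fitted to the non-zero events (the scipy parameterisation is an assumption):

```python
from scipy.stats import genpareto

def quantile_with_zero_adjustment(nep, p_zero, gpa_shape, gpa_loc, gpa_scale):
    """Rainfall quantile from a mixed distribution with a finite probability of zero values.

    nep    -- Non-Exceedance Probability of interest
    p_zero -- probability of a zero value (number of zeros / total number of events)
    """
    if nep <= p_zero:
        return 0.0  # the quantile falls within the zero part of the distribution
    adjusted_nep = (nep - p_zero) / (1.0 - p_zero)  # assumed form of Equation (2.3.8)
    return genpareto.ppf(adjusted_nep, gpa_shape, loc=gpa_loc, scale=gpa_scale)
```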
For series with a small proportion of zeros, the impact on the distribution and resulting
quantiles was negligible. For records with less than 10% zeros, there is very little difference
and for up to nearly 20% zeros there is less than 10% average difference in the quantile
depths. However, the differences become much more significant when the proportion of
zeros increases.
The ratio method adopted involves estimating at-site quantiles, using the at-site 50% AEP quantile as the reference value for the ratios, and gridding the calculated ratios. The advantage of this approach, and of using the at-site 50% AEP, was that it allows for spatial variability in the ratios. In addition, the ratio was generally a more accurate representation of the X EY to 50% AEP ratio since it was calculated from the same dataset, and it resulted in a smooth spatial pattern. Consistency was also inherent since the ratios always decrease with increasing probability of exceedance. Since the ratios were spatially consistent, the final very frequent design rainfall depths follow the frequent and infrequent 50% AEP depths closely. The depth estimates were calculated by multiplying the gridded ratios by the 50% AEP design rainfall.
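The final step is simple arithmetic; a one-line sketch with illustrative array names:

```python
def very_frequent_depth(ratio_grid, depth_50pct_aep_grid):
    """Very frequent design rainfall depth grid for one EY/duration combination:
    the gridded X EY to 50% AEP ratio multiplied by the 50% AEP depth grid."""
    return ratio_grid * depth_50pct_aep_grid
```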
3.5.5. Gridding
As with the frequent and infrequent design rainfalls the ratios for all durations and EYs were
gridded using the splining software ANUSPLIN (Hutchinson and Xu, 2013). To determine the
most appropriate method to adopt for the gridding of the ratios a range of tests was
undertaken of combinations of variates and different knot sets. The final case adopted was a
spline that incorporated latitude, longitude and elevation using 4000 knots for the daily
dataset and 1000 knots for the sub-daily dataset. The 0.025 degree Digital Elevation Model
of Australia was used to provide the elevation data which were the same as that used in the
derivation of the frequent and infrequent grids (The et al., 2014).
The grids were smoothed to reduce any inconsistencies across durations and to smooth
over discontinuities in the gridded data. A sixth order polynomial was applied to each grid
point for all the standard durations from 1 minute up to 7 days. Grids were also checked for
inconsistencies across EY.
Figure 2.3.13. Procedure to Derive Very Frequent Design Rainfall Depth Grids From Ratios
3.5.6. Outputs
The method described in Book 2, Chapter 3, Section 5 to Book 2, Chapter 3, Section 5
produced very frequent design rainfall estimates across Australia. The very frequent design
rainfall estimates are provided both as rainfall depths in millimetres (mm) and rainfall
intensities in millimetres per hour (mm/hr) for the standard durations and standard
probabilities in Table 2.3.7:
• the design of dams that fall into the Significant and Low Flood Capacity Category where
the Acceptable Flood Capacity is the 1 in 1000 AEP design flood (ANCOLD, 2000);
• the design of bridges, where the ultimate limit state adopted in the Australian bridge
design code is defined as ‘the capability of a bridge to withstand, without collapse, the
design flood associated with a 2000 year return interval’ (Austroads, 1992);
• the incorporation of climate change into IFDs in accordance with Book 1, Chapter 6 which
recommends that if the design probability for a structure is 1% AEP, then the possible
impacts of climate change should be assessed using 0.5% and 0.2% AEP (Bates et al.,
2015); and
• the undertaking of spillway adequacy assessments of existing dams as the Dam Crest
Flood (DCF) of many dams lies between the 1% AEP flood and the Probable Maximum
Flood (as defined by the Probable Maximum Precipitation, PMP). Rare design rainfalls
enable more accurate definition of the design rainfall and flood frequency curves between
the 1% AEP and Probable Maximum Events.
Unlike the derivation of very frequent, frequent and infrequent design rainfalls which are
based on observed rainfall events that lie within the range of probabilities being estimated,
rare design rainfalls are an extrapolation beyond observed events. The longest period for
which daily read rainfall records are available is around 170 years (Figure 2.3.3 and
Figure 2.3.7); however, rare design rainfalls are required for probabilities much rarer than this.
As a consequence it is difficult to validate the resultant rare design rainfalls and therefore the
method adopted needs to be based on a qualitative assessment that the assumptions made
in the method are reasonable and that the adopted approach is consistent with methods
used to derive more frequent design rainfalls where the results can be validated.
The method adopted for deriving the rare design rainfalls was based on the data and
method adopted for the more frequent design rainfalls but places more weight on the largest
observed rainfall events which are of most relevance to rare design rainfalls. The adopted
regional LH-moments approach is summarised in Table 2.3.8 and Book 2, Chapter 3,
Section 6 to Book 2, Chapter 3, Section 6. More detail can be found in Green et al. (2015)
and Bureau of Meteorology (2016).
Figure 2.3.14. Daily Read Rainfall Stations with 60 or More Years of Record
As discussed in Book 2, Chapter 3, Section 4, the GEV distribution was adopted for AMS for
the frequent and infrequent design rainfalls following extensive testing of a range of
candidate distributions. On the basis of these trials and similar results found by Nandakumar
et al. (1997) and Schaefer (1990), the GEV distribution was adopted for the rare design
rainfall analyses.
In keeping with the approach adopted for the more frequent design rainfalls, the statistical
properties of the at-site data were estimated and then translated into the relevant GEV
distribution parameters. However, whereas L-moments were used for the more frequent
design rainfalls, for the rare design rainfalls LH-moments were adopted (Wang, 1997). LH-
moments were adopted as they more accurately fit the upper tail (rarer probabilities) of the
distribution.
3.6.4. Regionalisation
For the rare design rainfalls, the ROI approach adopted for the IFDs was used to reduce the
uncertainty in the estimated LH-moments by regionalising the station point estimates. While
500 station years was found to be an optimum pool size for the IFDs, because the rare
design rainfalls are provided for probabilities up to 1 in 2000 AEP, the ROI needed to be
increased. The tradeoff for gaining improved accuracy from a larger pool of data was that the assumption of homogeneity may not be satisfied. Testing was conducted to find the pool size that reduced uncertainty without introducing significant heterogeneity, with a minimum of 2000 station years adopted. However, where necessary, the number of pooled station years was increased above this number to maximise the available record used, while ensuring homogeneity.
The average LH-CV for each region was calculated using a weighted average of the LH-CV
at all stations in the region, with the weights proportional to the station lengths. This was
repeated for the LH-Skewness.
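A minimal sketch of this record-length weighting (the same calculation applies to LH-Skewness; function and argument names are illustrative):

```python
import numpy as np

def regional_lh_statistic(at_site_values, record_lengths):
    """Record-length weighted regional average of an at-site LH statistic
    (e.g. LH-CV) for all stations in a region of influence."""
    values = np.asarray(at_site_values, dtype=float)
    weights = np.asarray(record_lengths, dtype=float)
    return np.average(values, weights=weights)
```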
3.6.5. Gridding
The Index, regionalised LH-CV and LH-SK values for all durations and AEPs were gridded
using the splining software ANUSPLIN (Hutchinson and Xu, 2013) that was adopted for the
more frequent design rainfalls. To determine the most appropriate method to adopt for the
gridding of the moments a range of tests was undertaken of combinations of different knot
sets. The final case adopted was a spline that incorporated latitude, longitude and elevation
using 3750 knots for the Index (as was adopted for the more frequent design rainfalls) and
2200 knots for the regionalised LH-CV and LH-SK values. The 0.025 degree Digital
Elevation Model (DEM) of Australia was used to provide the elevation data which was the
same as that used in the derivation of the frequent and infrequent grids (The et al., 2014).
In order to provide consistent design rainfall estimates across all durations and probabilities,
a suitable method was required to integrate the rare design rainfalls with the more frequent
design rainfalls. After testing of various ‘anchor’ points, the rare design rainfalls were
anchored to the more frequent design rainfalls at the 5% AEP as it was considered that the
rare design rainfalls provide a better estimate of the upper tail of the distribution down to the
5% AEP.
3.6.6. Outputs
The method described in Book 2, Chapter 3, Section 6 to Book 2, Chapter 3, Section 6
produced rare design rainfall estimates across Australia for the standard durations and
standard probabilities in Table 2.3.9.
The Probable Maximum Precipitation (PMP) is defined as ‘the theoretical greatest depth of
precipitation that is physically possible over a particular catchment’ (World Meteorological
Organisation, 1986). The PMP assumes the simultaneous occurrence in one storm of the maximum amount of moisture and the maximum conversion rate of moisture to precipitation (maximum efficiency).
• In Situ Maximisation Method: During the 1950s to 1970s PMP estimates were based
on the maximisation of the moisture content of storms which had been observed over the
catchment of interest. The limitation of this method was that the differing lengths of rainfall
records and occurrence or non-occurrence of an extreme storm led to inconsistent PMP
estimates for catchments within the same region.
• Storm Transposition Method: During the late 1960s and early 1970s the size of the
extreme storm sample for a specific catchment was increased by the transposition to the
catchment of interest of extreme storms which had been observed over nearby
catchments which had similar hydrometeorological and topographic features. Although this
improved the within-region consistency of PMP estimates, the method was limited, as only
storms from a similar topographic region could be transposed, and the selection of storms
introduced a significant level of subjectivity.
• Generalised Methods: From the mid-1970s generalised methods were introduced into
Australia. Generalised methods make use of all available storm data for a large region by
making adjustments for moisture availability and differing topographic effects. The
generalised methods currently adopted in Australia are described in Book 2, Chapter 3,
Section 7.
• The rainfall totals for the total storm duration were plotted on a topographic map and
isohyets drawn to determine the spatial extent and distribution of each storm.
• To determine the storm temporal distribution, parallelograms were drawn around the storm
centre for standard areas of 100; 500; 1 000; 2 500; 10 000; 40 000 and 60 000 km2. The
average daily rainfall depths within a parallelogram were determined using Thiessen
weights. For each standard area, the percentage of the total storm that fell during each 24
hour period was determined. These daily data were supplemented by pluviograph and 3
hourly synoptic charts.
• The representative dew point temperature for each storm was determined using a number
of sources including the Australian Region Mean Sea Level charts, National Climate
Centre Archives and Observers’ Logbooks.
The effects of storm type were removed from the data set by dividing Australia into the
GSAM and GTSMR regions on the basis of the type of storm that produces the largest
observed rainfall depths. The two regions were further divided into Coastal and Inland Zones
on the basis that different mechanisms produce the largest rainfall depths in each of the
zones (refer Figure 2.3.15).
The removal of the site-specific topographic effects was undertaken using the 72 hour, 50 year ARI 'flat land' rainfall intensity field in order to produce the convergence component of each storm.
Each storm was then maximised by the application of a moisture maximisation factor. The moisture maximisation factor is defined as the ratio of the extreme precipitable water, associated with the extreme dew point temperature at the storm location, to the precipitable water of the storm.
The site-specific moisture content of each storm was removed by transposition to a single
location which for the GSAM was chosen as Brisbane and for GTSMR as Broome. For each
location, representative seasonal extreme 24 hour persisting dew point temperatures were
selected and the moisture content for each storm standardised.
MAF = EPW_catchment / EPW_std (2.3.9)

where MAF is the Moisture Adjustment Factor, EPW_catchment is the Extreme Precipitable Water associated with the catchment extreme dew point temperature and EPW_std is the Extreme Precipitable Water associated with the standard extreme dew point temperature for the appropriate season.
The Topographic Enhancement Factor (TEF) for the catchment PMP is estimated in the
same manner as the topographic component of the storms in the database using the 72
hour 50 year ARI rainfall intensities.
The total PMP for a specific catchment for each of the standard durations is estimated as:
PMP_catchment = MCD_std × MAF_catchment × TEF_catchment (2.3.10)
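A short sketch of Equations (2.3.9) and (2.3.10); the derivation of the standardised maximised convergence depth (MCD_std) is not reproduced in this excerpt, so it is simply passed in:

```python
def moisture_adjustment_factor(epw_catchment, epw_std):
    """Equation (2.3.9): MAF = EPW_catchment / EPW_std."""
    return epw_catchment / epw_std

def catchment_pmp(mcd_std, maf_catchment, tef_catchment):
    """Equation (2.3.10): PMP_catchment = MCD_std x MAF_catchment x TEF_catchment."""
    return mcd_std * maf_catchment * tef_catchment
```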
However, there are uncertainties associated with the design rainfalls which arise from
various sources including:
• errors in the data due to short record length, instrumentation errors, gaps in the data,
unidentified errors in the data;
These uncertainties need to be taken into consideration when the design rainfalls are being
used in conjunction with other design flood inputs. Quantification of the uncertainties
associated with the design rainfalls is described in Bureau of Meteorology (2016) as well as
advice on how to incorporate uncertainty when using the design rainfalls.
3.9. Application
3.9.1. Design Rainfalls
The very frequent, frequent, infrequent, and rare design rainfalls are available via the Bureau
of Meteorology’s website2.
• Decimal degrees;
The location of the entered co-ordinate can be seen by using the map preview option and a
location label can also be entered (see Figure 2.3.16).
2http://www.bom.gov.au/water/designRainfalls/ifd/index.shtml
Determine the design rainfall class for which design rainfalls are required:
• Very frequent
• Rare
• Standard
• Non-standard
Guides for the application of the GSAM and GTSMR methods are available from the Bureau
of Meteorology (Bureau of Meteorology, 2005; Bureau of Meteorology, 2006).
3.10. Acknowledgements
The authors would like to acknowledge the contributions made by the following people
during the IFD Revision Project:
• Dr Louise Minty and Dr Ian Prosser for their direction of the project
• Andrew Frost, Khaled Haddad, Catherine Jolly, Garry Moore, Scott Podger, Ataur
Rahman, Lionel Siriwardena and Karin Xuereb for their contribution during various stages
of the project
• Professor Mike Hutchinson for his ongoing advice on the application of ANUSPLIN
• Erwin Jeremiah, Deacon McKay, Scott Podger and Michael Sugiyanto for their cheerful
quality controlling of the data
• Dr George Kuczera, Dr Nanda Nandakumar; Dr Rory Nathan and Erwin Weinmann for
their sage advice on the rare design rainfalls
• Martin Chan, Damian Chong, Ceredwyn Ealanta, Murray Henderson, Chris Lee, Quentin Leseney, Maria Levtova, Max Monahon, Julian Noye, and William Tall for the webpages.
3.11. References
ANCOLD (Australian National Committee on Large Dams) (2000), Guidelines on Selection
of Acceptable Flood Capacity for Dams. Australian National Committee on Large Dams.
Bates, B.C., McLuckie, D., Westra, S., Johnson, F., Green, J., Mummery, J. and Abbs, D.
(2015), Revision of Australian Rainfall and Runoff - The Interim Climate Change Guideline.
Presented at National Floodplain Management Authorities Conference, Brisbane, Qld, May.
Bonnin, G., Maitaria, K. and Yekta, M. (2010), Trends in Heavy Rainfalls in the Observed
Record in Selected Areas of the U.S. IN PALMER, R. N. (Ed.) World Environmental and
Water Resources Congress 2010: Challenges of Change. Providence, Rhode Island,
American Society of Civil Engineers.
Bureau of Meteorology (2016), Design rainfalls for Australia: data, methods and analyses.
Bureau of Meteorology, Melbourne, VIC.
Burn, D.H. (1990), An appraisal of the 'region of influence' approach to flood frequency
analysis, Hydrological Sciences Journal - Journal des Sciences Hydrologiques, 35(2), 149-165.
Green, J. and Johnson, F.J. (2011), Stationarity Assessment of Australian Rainfall, Internal
Bureau of Meteorology Report.
Green, J.H., Beesley, C., Frost, A., Podger, S. and The, C. (2015), National Estimates of
Rare Design Rainfall. Presented at Hydrology and Water Resources Symposium, Hobart,
Tas, December 2015.
Green, J.H., Johnson, J., McKay, D., Podger, P., Sugiyanto, M. and Siriwardena, L. (2012a),
Quality Controlling Daily Read Rainfall Data for the Intensity-Frequency Duration (IFD)
Revision Rainfall Project. Presented at Hydrology and Water Resources Symposium,
Sydney, NSW, November.
Green, J.H., Xuereb, K., Johnson, J., Moore, G. and The, C. (2012b), The Revised Intensity-
Frequency Duration (IFD) Design Rainfall Estimates for Australia - An Overview. Presented
at Hydrology and Water Resources Symposium, Sydney, NSW, November.
Guttman, N.B., Hosking, J.R.M. and Wallis, J.R. (1993), Regional precipitation quantile
values for the continental U. S. computed from L-moments. Journal of Climate, 6:
2326-2340.
Haddad, K. and Rahman, A. (2012a), Regional flood frequency analysis in eastern Australia:
Bayesian GLS regression-based methods within fixed region and ROI framework: quantile
regression vs parameter regression technique. Journal of Hydrology, 430: 142-161.
Haddad, K., Johnson, F., Rahman, A., Green, J. and Kuczera, G. (2015), Comparing three
methods to form regions for design rainfall statistics: Two case studies in Australia. Journal
of Hydrology, 527: 62-76.
Haddad, K., Pirozzi, J., McPherson, G., Rahman, A. and Kuczera, G. (2009), 'Regional flood
estimation technique for NSW: application of generalised least squares quantile regression
technique', Proc. 32nd Hydrology and Water Resources Symp., Newcastle, pp: 829-840.
Haddad, K., Rahman, A. and Green, J. (2011), Design Rainfall Estimation in Australia: A
Case Study using L moments and Generalized Least Squares Regression, Stochastic
Environmental Research & Risk Assessment, 25(6), 815-825.
Hosking, J.R.M. and Wallis, J.R. (1997), Regional Frequency Analysis: An Approach Based
on L-Moments. Cambridge University Press, Cambridge, UK. p: 224.
Huff, F.A. and Angel, J.R. (1992), Rainfall Frequency Atlas of the Midwest. Illinois State
Water Survey, Champaign, Bulletin 71, 1992.
Hutchinson, M.F. (1998), Interpolation of rainfall data with thin plate smoothing splines - part
2: Analysis of topographic dependence, Journal of Geographic Information and Decision
Analysis, 2(2), 152-167.
Hutchinson, M.F. (2007), ANUSPLIN version 4.37 User Guide, The Australian National
University, Centre for Resources and Environmental Studies, Canberra.
Hutchinson, M.F. and Xu, T. (2013), ANUSPLIN Version 4.4 User Guide, The Australian National University, Fenner School of Environment and Society, Canberra, Australia.
Johnson, F., Hutchinson, M.F., The, C., Beesley, C. and Green, J. (2015), Topographic
relationships for design rainfalls over Australia. Accepted for publication Journal of
Hydrology.
Johnson, F., Xuereb, K., Jeremiah, E. and Green, J. (2012a), Regionalisation of Rainfall
Statistics for the IFD Revision Project. Presented at Hydrology and Water Resources
Symposium, Sydney, NSW, November 2012.
Johnson, F., Haddad, K., Rahman, A., and Green, J. (2012b), Application of Bayesian GLSR
to Estimate Sub Daily Rainfall Parameters for the IFD Revision Project. Presented at
Hydrology and Water Resources Symposium, Sydney, NSW, November 2012.
Kjeldsen, T.R. and Jones, D.A. (2009), A formal statistical model for pooled analysis of
extreme floods, Hydrology Research, 40(5), 465-480.
Madsen, H., Mikkelsen, P.S., Rosbjerg, D. and Harremoes, P. (2002), Regional estimation of
rainfall intensity duration curves using generalised least squares regression of partial
duration series statistics. Water Resources Research, 38(11), 1239.
Madsen, H., Arnbjerg-Neilsen, K. and Mikkelsen, P.S. (2009), Update of regional intensity-
duration-frequency curves in Denmark: Tendency towards increased storm intensities.
Atmospheric Research, 92: 343-349.
Menabde, M., Seed, S. and Pegram, G. (1999), A simple scaling model for extreme rainfall.
Water Resources Research, 35: 335-339
Minty, L.J., Meighen, J. and Kennedy, M.R. (1996), Development of the Generalised
Southeast Australia Method for Estimating Probable Maximum Precipitation, [Online] HRS
Report No. 4, Hydrology Report Series, Bureau of Meteorology, Melbourne, Australia,
August 1996, p: 48. Available at: http://www.bom.gov.au/water/designRainfalls/pmp/
gsam.shtml
Nandakumar, N., Weinmann, P.E., Mein, R.G. and Nathan, R.J. (1997), Estimation of
Extreme Rainfalls for Victoria using the CRCFORGE Method, CRC for Catchment
Hydrology.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation, Institution of Engineers, Australia, Barton, ACT.
Reis Jr., D.S., Stedinger, J.R. and Martins, E.S. (2005), Bayesian GLS regression with
application to LP3 regional skew estimation. Water Resources Research, 41: W10419.
The, C., Johnson, F., Hutchinson, M., and Green, J. (2012), Gridding of Design Rainfall
Parameters for the IFD Revision Project for Australia. Presented at Hydrology and Water
Resources Symposium, Sydney, NSW, November.
The, C., Hutchinson, M., Johnson, F., Beesley, C. and Green, J. (2014), Application of
ANUSPLIN to produce new Intensity-Frequency-Duration (IFD) design rainfalls across
Australia. Presented at Hydrology and Water Resources Symposium, Perth, WA, February
2014.
The, C., Beesley, C., Podger, S., Green, J., Hutchinson, M. and Jolly, C. (2015), Very
frequent design rainfalls - an enhancement to the new IFDs. Accepted for presentation at
Hydrology and Water Resources Symposium, Hobart, TAS, December 2015.
Walland, D.J., Meighen, J., Xuereb, K.C., Beesley, C.A. and Hoang, T.M.T. (2003), Revision
of the Generalised Tropical Storm Method for Estimating Probable Maximum Precipitation
[Online], HRS Report No.8, Hydrology Report Series, Bureau of Meteorology Melbourne,
Australia, available at: http://www.bom.gov.au/water/designRainfalls/hrs8.shtml.
Wang, Q.J. (1997), LH moments for statistical analysis of extreme events. Water Resources
Research, 33(12), 2841-2848.
Wang, Q.J. (1998), Approximate goodness-of-fit tests of fitted generalised extreme value
distributions using LH moments. Water Resources Research, 34(12), 3497-3502.
Xuereb, K. and Green, J. (2012), Defining Independence of Rainfall Events with a Partial
Duration Series Approach. Presented at Hydrology and Water Resources Symposium,
Sydney, NSW, November 2012.
Xuereb, K.C., Moore, G.J., and Taylor, B.F. (2001), Development of the Method of Storm
Transposition and Maximisation for the West Coast of Tasmania. HRS Report No.7,
Hydrology Report Series, Bureau of Meteorology, Melbourne, Australia, January 2001.
Chapter 4. Areal Reduction Factors
Phillip Jordan, Rory Nathan, Scott Podger, Mark Babister, Peter
Stensmyr, Janice Green
Chapter Status: Final
Date last updated: 14/5/2019
4.1. Introduction
Design rainfall information for flood estimation generally is made available in the form of
rainfall Intensity Frequency Duration (IFD) data (Book 2, Chapter 3) that relates to specific
points in a catchment rather than to the whole catchment area. However, most flood
estimates are required for catchments that are sufficiently large that design rainfall intensities
at a point are not representative of the areal average rainfall intensity across the catchment.
The ratio between the design values of areal average rainfall and point rainfall, computed for
the same duration and Annual Exceedance Probability (AEP), is called the Areal Reduction
Factor (ARF). This allows for the fact that larger catchments are less likely than smaller
catchments to experience high intensity storms simultaneously over the whole of the
catchment area.
It should be noted that the ARF provides a correction factor between the catchment rainfall
depth (for a given combination of AEP and duration) and the mean of the point rainfall
depths across a catchment (for the same AEP and duration combination). Applying an ARF is a necessary part of computing design flood estimates from a catchment model if a probability neutral transition between the design rainfall and the design flood characteristics is to be preserved. The ARF merely adjusts the average depth of rainfall across the catchment; it does not account for variability in the spatial and/or space-time patterns of its occurrence over the catchment.
The modified Bell’s method involves defining hypothetical circular catchments in areas with
sufficient data and creating an areal rainfall time series for each catchment by weighting
point rainfall values based on Thiessen polygon areas (or an equivalent weighting method).
The frequency quantiles calculated from the areal rainfall time series are divided by the
weighted point frequency quantiles for the sites within the catchment, yielding ARF estimates for the given catchment area and a range of AEPs. Once ARFs have been
calculated for the required catchment areas, durations and AEPs for as many locations as
possible, they are averaged across these attributes and an equation is fitted to provide a
prediction model for the selected region.
The adopted methodology is described in more detail in Podger et al. (2015a), Podger et al.
(2015b) and Stensmyr et al. (2014).
The design areal rainfall to be applied in a design flood simulation is the average rainfall over
the total catchment area to the point of interest. Consequently, the ARF should be computed
for the total catchment upstream of each location of interest where a design flood estimate is
required. The ARF should not be computed independently for each subarea in a runoff-
routing model of the catchment of interest, as this would result in systematic overestimation
of catchment rainfalls and simulated design flood hydrographs.
The ARF to be applied to design rainfall is a function of the total area of the catchment, the
duration of the design rainfall event and its AEP. The ARF should be computed using the
relevant procedure described in Table 2.4.1.
If the duration of interest is greater than 12 hours, Equation (2.4.2) will be required as part of
the calculation procedure and the coefficients of Equation (2.4.2) vary regionally across
Australia. The applicable ARF region should be selected by referring to Figure 2.4.1. Where
a catchment overlaps the boundary between regions, the ARF should be selected for the region that has the largest overlap with the catchment. The coefficients to be
applied with Equation (2.4.2) should be selected from the appropriate region from
Table 2.4.2.
Table 2.4.1. ARF Procedure for Catchments Less than 30 000 km2 and Durations up to and Including 7 Days

Catchment area ≤ 1 km2 (all durations): ARF = 1

Catchment area between 1 and 10 km2:

• Duration ≤ 12 hours:
1. Compute ARF(10 km2) using Equation (2.4.1) for area = 10 km2 and the selected duration.
2. Interpolate the ARF for the catchment area and selected duration using Equation (2.4.4).

• Duration between 12 and 24 hours:
1. Compute ARF(24 hr, 10 km2) using Equation (2.4.2) for area = 10 km2 and duration = 1440 min.
2. Compute ARF(12 hr, 10 km2) using Equation (2.4.1) for area = 10 km2 and duration = 720 min.
3. Interpolate ARF(10 km2) for the selected duration using Equation (2.4.4).

• Duration ≥ 24 hours (1 day) and ≤ 7 days (168 hours):
1. Compute ARF(10 km2) using Equation (2.4.2) for area = 10 km2 and the selected duration.
2. Interpolate the ARF for the catchment area and selected duration using Equation (2.4.4).
• Equation (2.4.1), Equation (2.4.2) and Equation (2.4.3) require the selected duration to be
provided in minutes.
• There has been limited research on ARF applicable to catchments that are less than 10
km2. The recommended procedure is to adopt an ARF of unity for catchments that are
less than 1 km2, with an interpolation to the empirically derived equations for catchments
that are between 1 and 10 km2 in area (refer to Equation (2.4.3)).
• The ARF equations derived by Podger et al. (2015a), Podger et al. (2015b) and Stensmyr
et al. (2014) were derived for the 50% to 1% AEPs. Although these have been recommended for use over a wider range of AEPs (out to 0.05% AEP), further verification of the validity of this approach is ongoing. As a result, the coefficients of Equation (2.4.2)
(from Table 2.4.2) and/or the regional boundaries (refer to Figure 2.4.1) may be revised.
Equation (2.4.1)
where Area is in km2, Duration is in minutes and AEP is a fraction (between 0.5 and 0.0005).
ARF = min{1, [1 − a(Area^b − c·log10(Duration))·Duration^(−d) + e·Area^f·Duration^g·(0.3 + log10(AEP)) + h·10^(i·Area·Duration/1440)·(0.3 + log10(AEP))]} (2.4.2)
where Area is in km2, Duration is in minutes and AEP is a fraction (between 0.5 and 0.0005).
ARF = ARF_12hour + (ARF_24hour − ARF_12hour) × (Duration − 720)/720 (2.4.3)
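A sketch of Equations (2.4.2) and (2.4.3) as reconstructed above, with coefficients taken from Table 2.4.2 (practitioners should obtain current coefficients from the ARR Data Hub; the example values and catchment are illustrative only):

```python
import math

def arf_long_duration(area_km2, duration_min, aep, c):
    """Equation (2.4.2): ARF for durations of 24 to 168 hours.

    area_km2     -- catchment area in km2
    duration_min -- duration in minutes
    aep          -- AEP as a fraction (between 0.5 and 0.0005)
    c            -- dict of regional coefficients 'a' to 'i' from Table 2.4.2
    """
    arf = (1.0
           - c['a'] * (area_km2 ** c['b'] - c['c'] * math.log10(duration_min))
             * duration_min ** (-c['d'])
           + c['e'] * area_km2 ** c['f'] * duration_min ** c['g'] * (0.3 + math.log10(aep))
           + c['h'] * 10 ** (c['i'] * area_km2 * duration_min / 1440.0) * (0.3 + math.log10(aep)))
    return min(1.0, arf)

def arf_12_to_24_hours(arf_12h, arf_24h, duration_min):
    """Equation (2.4.3): interpolation for durations between 12 and 24 hours."""
    return arf_12h + (arf_24h - arf_12h) * (duration_min - 720.0) / 720.0

# Illustrative example: East Coast North coefficients, 500 km2 catchment, 24 hours, 1% AEP
east_coast_north = {'a': 0.327, 'b': 0.241, 'c': 0.448, 'd': 0.36, 'e': 0.00096,
                    'f': 0.48, 'g': -0.21, 'h': 0.012, 'i': -0.0013}
print(arf_long_duration(500.0, 1440.0, 0.01, east_coast_north))
```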
Figure 2.4.1. Areal Reduction Factor Regions for Durations 24 to 168 Hours
Table 2.4.2. ARF Equation (2.4.2) Coefficients (a to i) by Region for Durations 24 to 168 hours Inclusive¹

East Coast North: a = 0.327, b = 0.241, c = 0.448, d = 0.36, e = 0.00096, f = 0.48, g = -0.21, h = 0.012, i = -0.0013
Semi-arid Inland Queensland: a = 0.159, b = 0.283, c = 0.25, d = 0.308, e = 7.3E-07, f = 1, g = 0.039, h = 0, i = 0
Tasmania: a = 0.0605, b = 0.347, c = 0.2, d = 0.283, e = 0.00076, f = 0.347, g = 0.0877, h = 0.012, i = -0.00033
South-West Western Australia: a = 0.183, b = 0.259, c = 0.271, d = 0.33, e = 3.85E-06, f = 0.41, g = 0.55, h = 0.00817, i = -0.00045
Central New South Wales: a = 0.265, b = 0.241, c = 0.505, d = 0.321, e = 0.00056, f = 0.414, g = -0.021, h = 0.015, i = -0.00033
South-East Coast: a = 0.06, b = 0.361, c = 0, d = 0.317, e = 8.11E-05, f = 0.651, g = 0, h = 0, i = 0
Southern Semi-arid: a = 0.254, b = 0.247, c = 0.403, d = 0.351, e = 0.0013, f = 0.302, g = 0.058, h = 0, i = 0
Southern Temperate: a = 0.158, b = 0.276, c = 0.372, d = 0.315, e = 0.000141, f = 0.41, g = 0.15, h = 0.01, i = -0.0027
Northern Coastal: a = 0.326, b = 0.223, c = 0.442, d = 0.323, e = 0.0013, f = 0.58, g = -0.374, h = 0.013, i = -0.0015
Inland Arid: a = 0.297, b = 0.234, c = 0.449, d = 0.344, e = 0.00142, f = 0.216, g = 0.129, h = 0, i = 0

¹ These values are provided on the ARR Data Hub for the relevant region when queried (Babister et al. (2016), accessible at http://data.arr-software.org/).
Design rainfall depths for catchments larger than 30 000 km2 should be derived from
frequency analysis of catchment average rainfall depths over the specific catchment. The
design rainfall depths from the catchment-specific frequency analysis should be checked by
dividing them by the average of the point rainfall depths from point IFD analysis for the
catchment (Bureau of Meteorology, 2013) to infer the ARF for the catchment for each rainfall
duration and AEP. It would be expected that for a catchment larger than 30 000 km2, the
ARF inferred from this check for each duration and AEP should be less than the ARF
calculated from the regional method Equation (2.4.2) for the corresponding duration and
AEP combination. It would also be expected that the inferred ARF (for a given AEP) should
increase with rainfall duration.
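A minimal sketch of this check for a single duration/AEP combination (names are illustrative):

```python
def inferred_arf(catchment_design_depth, point_design_depths):
    """ARF implied by a catchment-specific frequency analysis: the catchment average
    design depth divided by the mean of the point IFD depths across the catchment,
    for the same duration and AEP.

    catchment_design_depth -- depth from frequency analysis of catchment average rainfall (mm)
    point_design_depths    -- list of point IFD depths across the catchment (mm)
    """
    mean_point_depth = sum(point_design_depths) / len(point_design_depths)
    return catchment_design_depth / mean_point_depth
```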
For catchments larger than 30 000 km2, it becomes increasingly likely that rainfall events
that would give rise to flooding would be concentrated in one part of the catchment. For
catchments larger than 30 000 km2 it is strongly recommended that partial area storms are
explicitly modelled (using Monte Carlo or other joint probability approaches). Explicit
modelling of partial area storms should also be considered for catchments in the range
between 5 000 km2 and 30 000 km2.
Variations in ARF for a given catchment would derive from a combination of the mixture of storm types causing heavy
rainfall within a region, the direction and speed of movement of those storms and the spatial
and temporal characteristics of those storms. Analysis by a hydrometeorologist of the
prevalence of different storm types within different parts of Australia and the advection,
temporal and spatial characteristics of those storms is likely to provide an understanding of
the causes of variations in ARF. Such understanding is difficult to infer directly, on its own,
from the empirically derived ARF equations that are currently recommended for use in
Australia. It is recommended that hydrometeorologists are engaged to investigate the
causes of variations in ARF.
Once the hydrometeorological analysis recommended above has been undertaken, the outcomes of that work may enable further research and improvements in the following specific areas:
• Clarification of how well the ARFs derived using an empirical method such as Bell's method compare with those derived from a suitable theoretical method that may better
account for hydrometeorological understanding of the drivers of variability in ARFs.
• There are some areas within each of the regions where the ARF values determined
empirically for the circular catchments demonstrated a trend toward being larger or smaller
than obtained from the ARF equations fitted to the mean ARF values from all circular
catchments within the region for a given area, duration and AEP. Hydrometeorological
understanding may enable definition of smaller sub-regions, combining of existing regions
(with the existing regions largely defined using state and territory boundaries), or definition
of new regions in order to reduce the uncertainty introduced by this variability.
• Seasonality was found to be a significant driver of ARFs in Western Australia but has not
been investigated for other parts of Australia. Hydrometeorological understanding may
guide the regions where seasonal dependence in ARF would be likely, the start and end
dates of seasons and how transition periods between seasons should be handled.
It is recommended that, after an appropriate study has been undertaken to determine the hydrometeorological causes of variations in ARF, further studies are then scoped and prioritised according to areas where the hydrometeorological causes can be best exploited to reduce residual uncertainty in ARFs.
4.7. References
Babister, M., Trim, A., Testoni, I. and Retallick, M. (2016), The Australian Rainfall and Runoff Datahub, 37th Hydrology and Water Resources Symposium, Queenstown, NZ.
Bell, F.C. (1976), The areal reduction factors in rainfall frequency estimation. Natural
Environmental Research Council (NERC), Report No. 35, Institute of Hydrology, Wallingford,
U.K.
Jordan, P., Weinmann, P.E., Hill, P. and Wiesenfeld, C. (2013), Collation and Review of Areal
Reduction Factors from Applications of the CRC-FORGE Method in Australia, Final Report,
Australian Rainfall and Runoff Revision Project 2: Spatial Patterns of Rainfall, Engineers
Australia, Barton, ACT.
Podger, S., Green, J., Jolly, C., The, C. and Beesley, C. (2015a),Creating long duration areal
reduction factors for the new Intensity-Frequency-Duration (IFD) design rainfalls, Proc.
Engineers Australia Hydrology and Water Resources Symposium, Hobart, Tasmania,
Australia.
Podger, S., Green, J., Stensmyr, P. and Babister, M. (2015b), Combining long and short
duration areal reduction factors, Proc. Engineers Australia Hydrology and Water Resources
Symposium, Hobart, Tasmania, Australia.
Siriwardena, L. and Weinmann, P.E. (1996), Derivation of areal reduction factors for design rainfalls in Victoria, 96(4), 60.
Stensmyr, P., Babister, M. and Retallick, M. (2014), Australian Rainfall and Runoff Revision
Project 2: Spatial Patterns of Rainfall: Stage 2 Report, Short Duration Areal Reduction
Factors, ARR Report Number P2/S2/019, ISBN 978-085825-9614.
Chapter 5. Temporal Patterns
Mark Babister, Monique Retallick, Melanie Loveridge, Isabelle Testoni,
Scott Podger
Chapter Status: Final
Date last updated: 14/5/2019
5.1. Introduction
The majority of hydrograph estimation methods used for flood estimation require a temporal
pattern that describes how rainfall falls over time as a design input. Traditionally a single
burst temporal pattern has been used for each rainfall event duration. The use of a single
pattern has been questioned for some time (Nathan and Weinmann, 1995) as the analysis of
observed rainfall events from even a single pluviograph shows that a wide variety of
temporal patterns is possible.
The importance of temporal patterns has increased as the practice of flood estimation has
evolved from peak flow estimation to full hydrograph estimation. There has been a strong
move toward storage-based mitigation solutions in urban catchments which require realistic
temporal patterns that reproduce total storm volumes as well as the temporal distribution of
rainfall within the event.
This chapter discusses use of temporal patterns for design flood estimation where a fixed
temporal pattern is applied over the entire catchment. Book 2, Chapter 6 discusses the more
complex case of space-time patterns.
• Book 2, Chapter 5, Section 2 – discusses fundamental temporal pattern concepts and how
the concept of using ensembles of temporal pattern developed;
• Book 2, Chapter 5, Section 3 – discusses the storm database that was used to develop
temporal patterns;
Figure 2.5.1 depicts a typical storm pattern and how components of the storm can be characterised. It is important to note that the components can be characterised either by IFD relationships or by catchment response and are highly dependent on the definitions used. The components of a storm include:
• antecedent rainfall - is rainfall that has fallen before the storm event and is not considered
part of the storm but can affect catchment response. This is not considered in this chapter
but is introduced for completeness.
• pre-burst rainfall - is storm rainfall that occurs before the main burst. With the exception of
relatively frequent events, it generally does not have a significant influence on catchment
response but is very important for understanding catchment and storage conditions before
the main rainfall burst.
• the burst - represents the main part of the storm but is very dependent on the definition
used. Bursts have typically been characterised by duration. The burst could be defined as
the critical rainfall burst, the rainfall period within the storm that has the lowest probability,
or the critical response burst that corresponds to the duration which produces the largest
catchment response for a given rainfall Annual Exceedance Probability (AEP).
• post-burst rainfall - is rainfall that occurs after the main burst and is generally only
considered when aspects of hydrograph recession are important. This could be for
drawing down a dam after a flood event or understanding how inundation times affect
flood recovery, road closures or agricultural land.
If the critical response burst is not the same as the critical rainfall burst then the critical
response burst is either:
Rarer shorter duration bursts within a burst are typically called embedded bursts and can
cause problems in modelling as, while the intention may be to assess the catchment
response to a burst of a defined duration and probability, the response to a rarer shorter
duration burst is also being assessed.
The distinction between a burst and a complete storm is important as complete storms are used for calibration and bursts are typically used for design, though this difference is less important for catchments with long duration responses, as the bursts typically represent nearly the entire storm event.
“In nature, a wide range of patterns is possible. Some storms have their period of peak
intensity occur early, while other storms have the peak rainfall intensity occur towards the
end of the storm period and a large number have a tendency for the peak to occur more or
less centrally.”
Figure 2.5.2 depicts two very different storms from Sydney Observatory Hill that have similar IFD characteristics from 15 min to 12 hrs and a 2 hr critical rainfall burst; however, on other criteria they are very different. The first pattern has 182 mm of rainfall before the 2 hr burst, while the second pattern has 16 mm. For the second event the commencement of runoff will be closely aligned with the main burst, while the former event is likely to have significant runoff prior to the main burst. The rainfall burst itself also exhibits significant variability: Figure 2.5.3 presents the temporal patterns (as dimensionless mass curves) of ten 2 hr bursts of similar probability that show the variability described by Pilgrim et al. (1969).
Figure 2.5.2. Two Different Storm Events with Similar Intensity Frequency Duration
Characteristics (Sydney Observatory Hill) – Two Hyetographs plus Burst Probability Graph
Most of the historical research on temporal patterns has assumed that the central tendency
of the pattern is more important than the variability, with the aim of producing a typical,
representative or median pattern. French (1985) describes how a pattern can be considered
a two dimensional quantity with most methods breaking the pattern into two manageable one
dimensional quantities that describe the magnitude of the element and the order of the
elements. This can be described as a rank order vector and a decay curve that describes
how the magnitude decreases between ranks.
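A small sketch of this decomposition for a single burst pattern (illustrative only; this is not the AVM procedure itself):

```python
import numpy as np

def rank_order_and_decay_curve(pattern):
    """Split a temporal pattern into a rank order vector and a decay curve.

    pattern -- rainfall depth (or percentage of burst depth) per time increment
    Returns the rank of each increment (1 = wettest) and the increment magnitudes
    sorted from largest to smallest (the decay curve).
    """
    depths = np.asarray(pattern, dtype=float)
    ranks = depths.argsort()[::-1].argsort() + 1
    decay_curve = np.sort(depths)[::-1]
    return ranks, decay_curve

# Example: a 6-increment burst expressed as percentages of the burst depth
ranks, decay = rank_order_and_decay_curve([5, 15, 40, 20, 12, 8])
# ranks -> [6, 3, 1, 2, 4, 5]; decay -> [40, 20, 15, 12, 8, 5]
```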
Monte Carlo and ensemble modelling techniques try to overcome the problems associated
with this simplification by using an ensemble of temporal patterns. A short history of temporal
patterns helps explain that, while the complexity of temporal patterns has been well
understood for a long time, it has been difficult to produce a simple set of design patterns.
The Average Variability Method (AVM), developed by Pilgrim et al. (1969) and Pilgrim and Cordery (1975), analysed the variability of patterns by separating the analysis of the magnitude of the ranks from the analysis of rank order. This approach is only applicable to
bursts and was applied in ARR 1977 and in a very detailed way in ARR 1987. A variation of
this approach was developed by Hall and Kneen (1973). During the finalisation of ARR 1987 some problems were found with the use of the patterns and extensive testing was carried out, which resulted in some changes to remove some embedded bursts, including some arbitrary rank order changes. It has been assumed that an AVM pattern would preserve probability neutrality. Single burst per duration AVM patterns have been extensively used since ARR 1987 and appear to have performed reasonably well for peak flow estimation.
By their very nature, AVM patterns do preserve the average rank magnitudes. Figure 2.5.4 plots the magnitude or decay curve for the ten dimensionless patterns in Figure 2.5.3, together with the AVM curve derived from these ten events as would be considered in an AVM analysis.
Figure 2.5.4. Decay Curve of Ten Dimensionless Patterns and AVM Patterns
The issues that have arisen have been discussed in Retallick et al. (2009):
• Big changes in pattern and corresponding peak flow occur at some region boundaries;
• The burst only approach has made conversion of observed losses to burst losses difficult;
• Filtering to remove embedded bursts that was recommended in ARR 1987 (Pilgrim, 1987)
has had a mixed uptake and sometimes produces unrealistically long critical durations.
The AVM approach also works best when there is a dominant typical pattern shape. As part of the AVM approach, the average and standard deviation of the rank of each period are calculated; when there is no dominant pattern these averages can be very similar and the resultant AVM pattern need not bear any resemblance to any of the observed patterns.
The complexity of producing a representative or median pattern has led many practitioners
to question the concept and ask whether it is better to specifically account for this variability
by modelling an ensemble of temporal patterns.
The problem with the AVM method and other median or representative patterns is that they assume the variability of actual patterns is much less important than their central tendency. Such an approach does not account for how temporal patterns interact with catchments to produce peak flows and hydrographs. The response can be very catchment-specific, and there is no guarantee that a representative pattern will produce the median response from
an ensemble of patterns that properly captures the variability of observed patterns. These
problems can become more pronounced when changes are made to the catchment
response or storage characteristics. Phillips and Yu (2015) examined the impact of storage
on an ensemble of events for the lot, neighbourhood and subcatchment scale.
It is unclear who first proposed the concept of running an ensemble of temporal patterns to
account for the variability of patterns. Practitioners have a long history of comparing the peak
flow estimates from design patterns to estimates from patterns extracted from real storms
that occurred on the catchment of interest. The development of Monte Carlo methods
(Hoang et al., 1999; Rahman et al., 2002; Weinmann et al., 2000) that stochastically sample
from observed events based on complete storms and embedded bursts of maximum
intensity (“storm cores”) is well documented and has helped highlight the value of using
ensembles. Others, such as Nathan et al. (2002) have used ensembles of storm bursts
based on observed point and areal storms to facilitate their use directly with design IFD
information. The development of the Bureau of Meteorology significant storm database
(Meighen and Kennedy, 1995) enabled practitioners to easily test ensembles of areal
patterns and how they affected other flood characteristics; examples include Webb
McKeown and Associates (2003). While there is a long history of testing ensembles of
temporal patterns, Sih et al. (2008) is the first example of detailed testing of ensemble
patterns as a design input outside a Monte Carlo environment. This long development
history of ensemble simulation can probably be explained by the fact that the concept could
not be confirmed until rigorous Monte Carlo techniques became available for validation.
Parallel to the development of ensemble and Monte Carlo approaches, practitioners have
become concerned with using burst patterns where complete storm volume is important.
Rigby and Bannigan (1996) suggested that the entire burst approach needed to be reviewed and
design storms needed to replace design bursts. For the Wollongong area Rigby and
Bannigan (1996) demonstrated that historically most short duration events were embedded
in longer duration events. They recommended that short duration events could be
embedded in a 24 hour event of the same probability. They particularly cautioned against
using bursts on catchments with significant natural or man-made storages. Phillips et al.
(1994) had found similar problems in the upper Parramatta River and suggested embedded
storms were more realistic, and that basin storages would be underestimated with a burst
approach unless the embedded nature of events was factored into the starting volumes.
Rigby et al. (2003) extended the earlier work to include guidance on using the embedded
design storms. Roso and Rigby (2006) recommended a storm based approach be used
when there are significant storages or diversions present in the catchment. Kuczera et al.
(2003), inspired by Rigby and Bannigan (1996), explored basin performance using a
theoretical catchment at Observatory Hill, Sydney in a continuous simulation approach and
found similar problems with peak flow being underestimated by a similar amount when
storages were present. All of these studies were based on catchments less than 110 km2
that are close to Sydney.
Table 2.5.1 presents the maximum number of stations available in any year for decadal periods. Table 2.5.1 demonstrates that most of the pluviograph record is from 1960 onwards.
Though overall the data comes from a period that starts before 1900, over half of the data is
from the period from 1993 onwards.
As shown in Figure 2.5.5 the highest density of pluviograph stations is typically found along
coastal areas of Australia around key population centres. In Figure 2.5.6, the pluviograph
stations are seen to be clustered around urban areas, such as Sydney, Wollongong,
Melbourne and Canberra. Less data is available in central Australia, with the exception of
the Alice Springs area. Large areas of central Western Australia contain no data.
Figure 2.5.6. Pluviograph Stations used Throughout South-Eastern Australia with Record
Lengths
The pluviograph database contains a significant number of events with long periods of
apparently uniform rainfall. While some of this could be from events with relatively uniform
rainfall, most appears to be the result of disaggregating accumulated rainfall totals uniformly.
Even if rainfall is uniform, a tipping bucket rain gauge recording at 5 minute intervals will only record uniform rainfall over an extended period if the depth in each five minute period is equal to or slightly larger than an integer multiple of the tipping bucket capacity.
most of the periods of uniform rainfall are probably caused by digitisation and resampling at
different time steps. Events with large periods of uniform rainfall were disregarded, while
events were kept if the uniform period was only a small portion of the entire event.
In addition, several other issues with the data quality were found. Some records contained
significant periods of missing data. There were also periods of interpolated data, where
several data points were indicated as interpolated from a later point, presumably at the end
of an event. Events where a significant part of the rainfall was interpolated were excluded
from further analysis. There were also sections where the rainfall was uniform over many
intervals within an event, indicating that the data points were interpolated, even though the
quality control value did not specify that to be the case. Since storms generally do not have
uniform rainfall at the local scale, events that had a significant part of their total rainfall depth
occurring in consecutive identical intervals were also excluded.
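As an illustration of this screening, the sketch below estimates how much of an event's depth falls in runs of consecutive identical non-zero increments; the exclusion threshold used in the actual analysis is not stated here:

```python
import numpy as np

def uniform_run_fraction(increments):
    """Fraction of an event's total depth that occurs in runs of consecutive
    identical non-zero increments (a signature of uniformly disaggregated or
    interpolated data)."""
    x = np.asarray(increments, dtype=float)
    total = x.sum()
    if total == 0:
        return 0.0
    in_run = np.zeros(len(x), dtype=bool)
    same_as_prev = (x[1:] == x[:-1]) & (x[1:] > 0)
    in_run[1:] |= same_as_prev   # later member of each identical pair
    in_run[:-1] |= same_as_prev  # earlier member of each identical pair
    return x[in_run].sum() / total
```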
• Time when 50% of the burst depth occurred (i.e. burst loading);
• Post-burst depth.
For each event the rarest or critical rainfall burst (of any duration and location within the
overall event) was also calculated.
The term ‘burst loading’ refers to the distribution of rainfall within a burst and is a defining
characteristic of a rainfall event. For each event the burst loading was calculated as the
percentage of the time taken for 50% of the burst depth to occur. The burst loading can be
used as a simple measure of when the heaviest part of the burst occurs and can be used to
categorise events as ‘front’, ‘middle’ or ‘back’ loaded. Events were categorised into three
groups, depending on where 50% of the burst rainfall occurs:
This simple categorisation provides a pragmatic means of capturing when the peak loading occurs, as described by Pilgrim et al. (1969), though it is worth recognising that for a double-peaked event, where most of the rainfall falls in the early and late parts of the burst, the loading can somewhat illogically fall into the middle category.
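A minimal sketch of this classification is given below. The 40% and 60% boundaries used to separate front, middle and back loaded bursts are illustrative assumptions only, not necessarily the boundaries adopted in deriving Table 2.5.3.

```python
def burst_loading(increments, front_limit=0.4, back_limit=0.6):
    """Classify a burst as 'front', 'middle' or 'back' loaded from the fraction of
    the burst duration taken for 50% of the burst depth to fall. The 40%/60%
    boundaries are illustrative assumptions only."""
    total = sum(increments)
    cumulative, loading = 0.0, 1.0
    for i, depth in enumerate(increments, start=1):
        cumulative += depth
        if cumulative >= 0.5 * total:
            loading = i / len(increments)        # fraction of burst duration elapsed
            break
    if loading < front_limit:
        return "front"
    if loading > back_limit:
        return "back"
    return "middle"

print(burst_loading([5, 3, 2, 1, 1, 0.5]))       # 'front' for this example
```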
Each region was characterised by its burst loading distribution, which describes the
percentage of front, middle and back loaded events for different durations. The proportion of
front/middle/back loading for each region was determined for durations less than and greater than 6 hours (Table 2.5.3). The proportion was assumed to be constant across all AEPs.
Figure 2.5.9 and Figure 2.5.10 depict, mapped across Australia, the median ratio of pre-burst to burst rainfall and the pre-burst depth (in mm) for the 6 hour duration across a range of probabilities. The full data set of maps is available online at the ARR data repository (http://data.arr-software.org/). Figure 2.5.11 shows the probability distribution of the pre-burst rainfall for each region.
The manner in which pre-burst rainfall is treated depends on the magnitude of the pre-burst
rainfall, how this compares to losses and whether pre-burst runoff is likely and will have a
significant effect on hydrograph volumes. As longer duration bursts tend to capture most of the storm event, pre-burst rainfall is generally only an issue for smaller catchments. If pre-burst rainfall is unlikely to affect the runoff response, it is best treated in a simple manner through losses. For
simple urban cases pre-burst rainfall can be used to condition storage starting conditions.
Where pre-burst is influential for flood response, it can be sampled from its distribution and
applied with a typical pre-burst temporal pattern.
Based on the results of this performance testing, the temporal patterns derived by the regional burst approach are recommended for general use. Temporal patterns for complete storms derived by the ROI approach remain an alternative for small, volume-sensitive systems where pre- and post-burst rainfall is important, though such systems may be better analysed using continuous simulation approaches.
Other major findings from ARR Revision Project 3 - Temporal Patterns of Rainfall
(WMAwater, 2015b) were:
1. Irrespective of the method used to derive the temporal patterns, when using a
representative ensemble of patterns all methods produce relatively similar results; and
2. Frequent patterns should not be used for rarer events; scaling a temporal pattern
introduces more variability and produces higher design estimates.
Duration
Minutes   Hours   Days
15        0.25    0.010
30        0.5     0.021
60        1       0.042
120       2       0.083
180       3       0.125
270       4.5     0.188
360       6       0.250
540       9       0.375
720       12      0.500
1080      18      0.750
1440      24      1
2160      36      1.5
2880      48      2
4320      72      3
5760      96      4
7200      120     5
8640      144     6
10080     168     7
When selecting a small ensemble of temporal patterns it is important to capture the typical
variability of the observed events. A methodology was therefore adopted that samples from
observed events with the intent of generating a representative ensemble in terms of the
variability of actual events, with no obvious bias. Whilst it is difficult to quantify or verify the
achievement of this objective, steps have been undertaken to ensure the samples are
broadly representative, including a visual check of all ensembles.
There are a large number of frequent events from which to select; however, the choice is more limited for rarer events.
For each AEP and duration bin the sampling of an ensemble of 10 patterns used the
preferred criteria in Table 2.5.6. The relaxed criteria were used where less than 10 suitable
patterns could be found using the preferred criteria.
While patterns containing embedded rarer bursts at their recording location were not
selected, events can still have embedded bursts at other locations within the region. The
presence of major embedded bursts may warrant filtering of patterns.
Table 2.5.7. Areal Rainfall Temporal Patterns - Catchment Areas and Durations
Catchment Area (km²): 100, 200, 500, 1000, 2500, 5000, 10 000, 20 000, 40 000
Durations (hours): 12, 18, 24, 36, 48, 72, 96, 120, 144, 168
Figure 2.5.13. Combinations of Aspect Ratio and Rotation for Hypothetical Catchments
To ensure data quality, hypothetical catchments were disregarded if they did not contain the minimum number of pluviograph stations (Table 2.5.8) producing quality data in their vicinity at the time of the event. An additional filter was then applied to remove events that had too much area assigned to a single gauge or erroneous/unrealistic rainfall values.
For each space-time independent areal temporal pattern, a Pearson correlation coefficient was derived between the areal pattern and the corresponding temporal patterns of all pluviograph stations within its vicinity. Patterns that had a very high correlation to a single gauge and no others, or that did not have enough stations with a reasonable correlation to the areal pattern, were removed.
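A simplified version of this correlation filter is sketched below, assuming the areal pattern and station patterns are aligned arrays of equal length; the correlation thresholds and the minimum number of supporting stations are illustrative assumptions rather than the values used in the project.

```python
import numpy as np

def keep_areal_pattern(areal, station_patterns, high=0.95, reasonable=0.6, min_support=2):
    """Illustrative filter: reject an areal pattern that is essentially a copy of a
    single gauge, or that too few gauges correlate with (thresholds are assumptions)."""
    corrs = [np.corrcoef(areal, s)[0, 1] for s in station_patterns]
    near_copies = sum(c >= high for c in corrs)
    supported = sum(c >= reasonable for c in corrs)
    if near_copies == 1 and supported <= 1:
        return False                              # driven almost entirely by one gauge
    return supported >= min_support

areal = np.array([2.0, 5.0, 9.0, 4.0, 1.0])
stations = [np.array([1.8, 5.2, 8.7, 4.1, 0.9]),
            np.array([0.5, 6.0, 7.5, 5.0, 2.0])]
print(keep_areal_pattern(areal, stations))        # True: two supporting gauges
```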
Table 2.5.8. Minimum Number of Pluviographs Required for Event Selection for Each
Catchment Area
Given that events were chosen simply on the basis of their total rainfall depth, many of the longer duration patterns selected actually represented a shorter duration. Using the same procedure implemented in defining event extents in the events database, events whose extents were shorter than the duration of interest were removed.
Like the point temporal patterns, meta-data has been provided for each pattern that allows practitioners to track the location and time of the event. Figure 2.5.14 compares the
cumulative mass curves for 24 hour point and area temporal patterns. This demonstrates
how the spatial averaging produced by areal temporal patterns reduces the variability.
Figure 2.5.15 compares areal temporal patterns to the temporal patterns of the pluviograph
closest to the area’s centroid, further highlighting a reduction in variability with the areal
temporal patterns.
Figure 2.5.14. Comparison of Point Temporal Patterns and Areal Temporal Patterns - East
Coast South Region - 1 Day
Figure 2.5.15. Comparison of Areal Temporal Patterns and the Temporal Pattern of the
Closest Pluviograph for the Same Event
Book 2, Chapter 6 provides advice on the assignment of historical temporal patterns when
modelling observed events for calibration or analysis.
• Events are selected and duration binned on the basis of the critical rainfall duration;
• An event can only be used in one duration so a large storm sample is required;
The process of binning events on the basis of their critical rainfall duration means that events
that produce a catchment response because of rainfall over a certain duration can be placed
in a bin of a very different duration. This means that under a critical duration approach these events are unlikely to influence the design estimate even if they produce the largest
catchment response. This problem would not occur in a Monte Carlo framework that
samples across durations.
Despite these limitations, a complete storm approach has the advantage of producing
realistic rainfall storm events that have the correct burst IFD and storm volume
characteristics. Coombes et al. (2015) showed considerable differences in basin performance on a small urban catchment between burst and complete storm approaches.
Westra et al. (2013) proposed rainfall sequences for future climates could be constructed by
sampling historical rainfall patterns corresponding to warmer days at the same location, or
from locations which have an atmospheric profile more reflective of expected future climate.
Such an approach could conceptually be applied in the selection of design burst patterns but
would require significant testing.
For simulations applying projected climate change, the temporal patterns applied should be those derived for existing climatic conditions, while recognising the additional uncertainty in simulation results.
Point temporal patterns should be used for catchments less than 75 km2. Areal temporal
patterns have been derived for a number of different catchment areas. Table 2.5.9 provides
a guide to applying the areal patterns (a simple selection sketch is given after the table below). Both point and areal temporal pattern sets can be downloaded from the ARR Data Hub (Babister et al. (2016), accessible at http://data.arr-software.org/).
Table 2.5.9. Areal Temporal Pattern Sets for Ranges of Catchment Areas
Range of Target Catchment Areas (km²)    Catchment Area of Designated Areal Temporal Pattern Set (km²)
75 – 140 100
140 – 300 200
300 – 700 500
700 – 1600 1000
1600 – 3500 2500
3500 – 7000 5000
7000 – 14,000 10,000
14,000 – 28,000 20,000
28,000 + 40,000
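The lookup in Table 2.5.9 can be implemented directly, as in the sketch below; the function name and the treatment of areas falling exactly on a bin boundary are assumptions for illustration only.

```python
# Sketch of the Table 2.5.9 lookup: given a catchment area, return the designated
# areal temporal pattern set, or None below 75 km2 where point patterns apply.
AREAL_PATTERN_SETS = [       # (upper limit of target area in km2, pattern set in km2)
    (140, 100), (300, 200), (700, 500), (1600, 1000), (3500, 2500),
    (7000, 5000), (14_000, 10_000), (28_000, 20_000), (float("inf"), 40_000),
]

def designated_pattern_set(area_km2):
    if area_km2 < 75:
        return None                               # use point temporal patterns
    for upper_limit, pattern_set in AREAL_PATTERN_SETS:
        if area_km2 <= upper_limit:
            return pattern_set

print(designated_pattern_set(245))                # 200 km2 set, e.g. Stanley River at Woodford
```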
When selecting a single representative (average) pattern the practitioner needs to look
at the whole catchment response hydrograph and not local inflow hydrographs.
The ensemble of 10 patterns provides a range of plausible answers. The practitioner should consider the benefits of investigating multiple temporal patterns or Monte Carlo approaches for sensitive designs and solutions.
Running an ensemble of ten temporal patterns through a two dimensional model can be time consuming. One option is to double the grid cell size, which reduces the number of cells by a factor of four and typically permits a larger time step, decreasing run time roughly 8 fold.
1. Draw from both bins at the boundary, effectively doubling the ensemble size, which smooths the results; or
2. Replace the frequent probability bin with the intermediate bin to ensure a smooth catchment response with rainfall.
Consideration should be given to filtering out (or excluding) embedded bursts of lower AEP
by re-distributing rainfalls of high intensity to other time increments proportionally to their
magnitude (e.g. Herron et al. (2011)). In some situations results will still need to be
smoothed.
If 10 temporal patterns were not available for a given region, duration and frequency bin, patterns were taken from other similar regions (Table 2.5.10). For 7 day events there were not enough patterns and all regions were pooled. Local knowledge can be used to add events to the ensemble that were not selected in the regional sample.
Region            Alternate Region
Rangelands        Flatlands West, Flatlands East, Flatlands West
Flatlands West    Flatlands East, Rangelands West
Flatlands East    Flatlands West, Flatlands West, Rangelands
• Burst Loading - Classified events as front, middle or back (see Book 2, Chapter 5, Section
3);
• Burst Loading (%) - The percentage of the event that falls into the loading category;
• Burst Depth - Source temporal pattern burst depth. This is the total depth of the original pattern; the patterns themselves are given as percentages rather than original depths;
• AEP (source) - The source temporal pattern AEP at the location at which it was recorded. This AEP is based on the 2016 IFDs and the burst depth of the event;
• Burst Start - The time and date at which the burst began;
• Burst End - The time and date at which the burst ended; and
There will be events that were not picked up in the regional sample; however, local experience may allow them to be added to the ensemble (when doing this, the front, middle and back loading distribution of the region needs to be considered).
Depending on the severity of the embedded burst, filtering of rarer bursts is recommended. Where minor embedded bursts occur in only a few of the 10 patterns in the ensemble, filtering can be neglected. Testing with the 2013 Intensity Frequency Duration data has shown that there are some locations where all observed temporal patterns would have embedded bursts.
5.9.9. Pre-burst
The treatment of pre-burst rainfall depends on the approach taken to model losses and the magnitude of the pre-burst. In most cases practitioners will use just the median pre-burst and median storm initial loss. However, in some cases the practitioner may sample from distributions of pre-burst and initial loss. As pre-burst varies with location, duration and probability, the adopted approach will vary, but it is probably sensible to adopt a consistent approach across the durations and probabilities being assessed. Pre-burst for a specific location can be downloaded from the ARR Data Hub (Babister et al. (2016), accessible at http://data.arr-software.org/).
In locations and for durations that do not have significant pre-burst (Figure 2.5.10), the pre-
burst depth can be ignored when applying a temporal pattern. Therefore the Burst IL (ILb)
can be taken as the Storm IL (ILs). In those locations where the pre-burst is significant a number of approaches are possible (a simple sketch of the first two cases is given after this list):
• Storm IL is greater than Pre-burst – Pre-burst should be subtracted from the Storm IL, with the remainder applied to the burst;
• Storm IL is approximately equal to Pre-burst – In the case where Storm IL and pre-burst are close to equal, the IL is satisfied by the pre-burst and no IL needs to be taken from the burst temporal pattern;
• Pre-burst is greater than storm IL - In the case where pre-burst is larger than the storm IL
there are a number of options:
• Apply a pre-burst temporal pattern after taking out the Storm IL. There is little research
that has investigated pre-burst temporal patterns;
• Test the sensitivity of pre-burst on the resultant flood estimate and determine if it can be
ignored;
• Apply a complete storm approach instead of a burst approach e.g. Coombes et al.
(2015).
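A minimal sketch of the first two cases is given below, assuming a simple design event application in which only the median pre-burst and median storm initial loss are used; the function name and values are illustrative only.

```python
def burst_initial_loss(storm_il_mm, preburst_mm):
    """Sketch of the first two cases above: where the pre-burst depth is less than
    the Storm IL, the remaining loss is applied to the burst; where the pre-burst
    equals or exceeds the Storm IL, no initial loss is taken from the burst pattern.
    The rarer case of applying a pre-burst temporal pattern is not covered here."""
    return max(0.0, storm_il_mm - preburst_mm)

# e.g. a 25 mm storm initial loss with 10 mm of median pre-burst leaves 15 mm of
# initial loss to be applied to the design burst temporal pattern
print(burst_initial_loss(25.0, 10.0))             # 15.0
```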
5.10. Example
The Tennant Creek Catchment is located in the Northern Territory and was used for the ARR
research Projects 3 and 6. The catchment and its location are shown in Figure 2.5.16. It is located in the Rangelands Region for temporal patterns. The temporal pattern data for the region can be extracted at http://data.arr-software.org/. In this particular example a RORB model was set up for the catchment. The model was previously calibrated with an ILb = 0 and a CL = 7 mm/hr. As the catchment area is 72.3 km², it was decided to test durations between 60 minutes and 1440 minutes (1 day).
For this example only the 1% AEP patterns were run through the hydrologic model. 10
patterns from the rare AEP bin and the Rangelands region were run for each of the following
durations:
• 60 minute
• 120 minute
• 180 minute
• 270 minute
• 360 minute
• 540 minute
• 720 minute
• 1080 minute
• 1440 minute
Figure 2.5.17 presents the results as a box plot. The results are also presented in Table 2.5.11; it should be noted that even though the columns are labelled Bursts 1 to 10, they are not the same storms across the durations. The box plot shows clearly that the 180 minute duration is critical. The average peak flow of the 180 minute duration bursts (277.35 m³/s) should then be taken as the 1% AEP flow. If a practitioner wanted to run a pattern through a hydraulic model and it was not practical to run all 10 patterns, the pattern closest to the average (shown in Table 2.5.11) is Burst number 1.
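The selection logic used in this example can be sketched as follows. The peak flows below are made-up values rather than the Table 2.5.11 results; the sketch simply averages the ensemble for each duration, adopts the duration with the highest average as critical, and identifies the burst closest to that average for use in a hydraulic model.

```python
# Illustrative peak flows (m3/s) for three of the durations; not the Table 2.5.11 values.
peaks = {
    120: [231, 250, 244, 239, 260, 228, 247, 255, 236, 242],
    180: [275, 281, 269, 290, 271, 284, 266, 279, 273, 286],
    270: [262, 270, 258, 275, 266, 260, 272, 268, 255, 264],
}

critical = max(peaks, key=lambda d: sum(peaks[d]) / len(peaks[d]))   # duration with highest mean
mean_peak = sum(peaks[critical]) / len(peaks[critical])              # adopted design flow
closest_burst = min(range(len(peaks[critical])),
                    key=lambda i: abs(peaks[critical][i] - mean_peak)) + 1

print(critical, round(mean_peak, 1), closest_burst)                  # 180 277.4 8
```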
Table 2.5.11. Flows for the 1% Annual Exceedance Probability for Ten Burst Events
5.11. References
Askew, A.J. (1975), Use of Risk Premium in Chance-Constrained Dynamic Programming.
Water Resources Research, 11(6): 862-866.
Babister, M., Trim, A., Testoni, I. and Retallick, M. 2016. The Australian Rainfall and Runoff
Datahub, 37th Hydrology and Water Resources Symposium Queenstown NZ
Ball, J.E. (1994), 'The influence of storm temporal patterns on catchment response', Journal
of Hydrology, 158(3-4), 285-303.
CSIRO and Bureau of Meteorology (2015), Climate Change in Australia Information for
Australia's Natural Resource Management Regions: Technical Report, CSIRO and Bureau of
Meteorology, Australia.
Coombes, P., Babister, M. and McAlister, T. (2015), Is the Science and Data underpinning the Rational Method Robust for use in Evolving Urban Catchments?, Proceedings of the 36th Hydrology and Water Resources Symposium, Hobart, 2015.
Cordery, I., Pilgrim D.H. and Rowbottom I.A. (1984), Time patterns of rainfall for estimating
design floods on a frequency basis. Wat. Sci. Tech., 16(8-9), 155-165.
French, R.H. (1985), Open Channel Hydraulics, McGraw-Hill Book Company, New York.
Hall, A.J. and Kneen, T.H. (1973), Design Temporal Patterns of Storm Rainfall in Australia,
Proc. Hydrology Symposium, Inst. Eng. Aust., pp: 77-84
Herron, A., Stephens, D., Nathan, R. and Jayatilaka, L. (2011), Monte Carlo Temporal
Patterns for Melbourne. In: Proc 34th World Congress of the International Association for
Hydro- Environment Research and Engineering: 33rd Hydrology and Water Resources
Symposium and 10th Conference on Hydraulics in Water Engineering. Barton, A.C.T.:
Engineers Australia, pp: 186-193.
Hill, P.H., Graszkiewicz, Z., Taylor, M. and Nathan, R. (2014), Australian Rainfall and Runoff
Revision Project 6: Loss Models for Catchment Simulation, Phase 4, October 2014, Report
No. P6/S3/016B.
Hill, P.I., Graszkiewicz, Z., Loveridge, M., Nathan, R.J. and Scorah, M. (2015), Analysis of
loss values for Australian rural catchments to underpin ARR guidance. Hydrology and Water
Resources Symposium 2015, Hobart, 9-10 December 2015.
Hoang, T.M.T., Rahman, A., Weinmann, P.E., Laurenson, E. and Nathan, R. (1999) Joint
Probability description of design rainfall, Water 99 Joint congress, Brisbane.
IEAust. (1977), Australian Rainfall and Runoff: flood analysis and design. The Institution of
Engineers Australia. Canberra.
Keifer, C.J. and Chu, H.H. (1957), Synthetic storm pattern for drainage design, ASCE
Journal of the Hydraulics Division, 83(4), 1-25.
Kuczera, G., Lambert, M., Heneker, T., Jennings, S., Frost, A. and Coombes, P. (2003) Joint
Probability and design storms at the cross roads, Proceedings of the 28th International
Hydrology and Water Resources Symposium, Wollongong.
Loveridge, M., Babister, M., Retallick, M. and Testoni, I. (2015). Testing the suitability of
rainfall temporal pattern ensembles for design flood estimation, Proceedings of the 36th
Hydrology and Water Resources Symposium Hobart 2015.
Meighen, J. and Kennedy, M.R. (1995), Catalogue of Significant Rainfall Occurrences over
Southeast Australia , HRS Report No. 3, Hydrology Report Series, Bureau of Meteorology,
Melbourne, Australia, October.
Milston, A.K. (1979), The influence of temporal patterns of design rainfall in peak flood
discharge, Thesis (M.Eng.Sci.), University of New South Wales.
Nathan, R.J. and Weinmann, P.E. (1995), The estimation of extreme floods - the need and
scope for revision of our national guidelines. Aus J Water Resources, 1(1), 40-50.
Nathan, R.J., Weinmann, P.E. and Hill, P.I. (2002), Use of a Monte-Carlo framework to
characterise hydrologic risk. Proc., ANCOLD conference on dams, Adelaide.
Nathan, R., Weinmann, E., Hill, P. (2003) Use of Monte Carlo Simulation to Estimate the
Expected Probability of Large to Extreme Floods, Proc. 28th Int. Hydrology and Water Res.
Symp., Wollongong, pp: 1.105-1.112.
Pattison, A. (ed) (1977), Australian Rainfall and Runoff: Flood Analysis and Design, The Institution of Engineers Australia.
Phillips, P. and Yu, S. (2015), How robust are OSD and OSR Systems?, 2015 WSUD &
IECA, Sydney.
Phillips, B.C., Lees, S.J. and Lynch, S.J. (1994), Embedded Design Storms - an Improved
Procedure for Design Flood Level Estimation?. In: Water Down Under 94: Surface Hydrology
and Water Resources Papers; Preprints of Papers. Barton, ACT: Institution of Engineers,
Australia, 235-240. National conference publication (Institution of Engineers, Australia), No.
94/15.
Pilgrim, DH (ed) (1987) Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers, Australia, Barton, ACT, 1987.
Pilgrim, D.H. and Cordery, I. (1975), Rainfall temporal patterns for design floods, Journal of
the Hydraulics Division, 101(1), 81-95.
Pilgrim, D.H., Cordery, I. and French, R. (1969), Temporal patterns of design rainfall for
Sydney, Institution of Engineers, Australia, Civil Eng. Trans., CE11: 9-14.
Podger, S., Babister, M., Brady, P. (2016), Deriving Temporal Patterns for Areal Rainfall
Bursts, Proceedings of the 37th Hydrology and Water Resources Symposium Queenstown
2016.
Rahman, A., Weinmann, P.E., Hoang, T.M.T. and Laurenson, E.M. (2002), Monte Carlo
simulation of flood frequency curves from rainfall. Journal of Hydrology.
Retallick, M., Babister, M., Varga, C., Ball, J. and Askew, A. (2009), Do Filtered patterns
resemble real patterns? Proceedings of the Hydrology and Water Resources Symposium,
Newcastle 2009, Engineers Australia.
Rigby, E. and Bannigan, D. (1996), The Embedded design storm concept - A critical review,
23rd Hydrology and Water Resources Symposium Hobart.
Rigby, E., Boyd, M., Roso, S. and Van Drie, R. (2003), Storms, Storm bursts and flood
estimation- A need for review of the AR&R procedures, Proceedings of the 28th International
Hydrology and Water Resources Symposium, Wollongong.
Roso, S. and Rigby, E. (2006), The impact of embedded design storms on flows within a
catchment, Proceedings of the 30th Hydrology and Water Resources Symposium
Launceston 2006.
Sih, K., Hill, P. and Nathan, R. (2008), Evaluation of Simple Approaches to Incorporating
Variability in Design Temporal Patterns. In: Lambert, Martin (Editor); Daniell, TM (Editor);
Leonard, Michael (Editor). Proceedings of Water Down Under 2008. Modbury, SA: Engineers
Australia; Causal Productions, pp: 1049-1059.
Varga, C., Ball, J., Babister, M. (2009), An Alternative for Developing Temporal Patterns,
Proceedings of the Hydrology and Water Resources Symposium, Newcastle 2009,
Engineers Australia
WMAwater (2015a), Australian Rainfall and Runoff Revision Project 3: Temporal Patterns of
Rainfall, Part 1 - Development of an Events Database, Stage 3 Report, October 2015.
WMAwater (2015b), Australian Rainfall and Runoff Revision Project 3: Temporal Patterns of
Rainfall, Part 3 - Preliminary Testing of Temporal Pattern Ensembles, Stage 3 Report,
October 2015.
WMAwater (2015c), Australian Rainfall and Runoff Revision Project 3: Temporal Patterns of
Rainfall, Part 3 - Preliminary Testing of Temporal Pattern Ensembles, Stage 3 Report,
October 2015.
Wasko, C. and Sharma, A. (2015), Steeper temporal distribution of rain intensity at higher
temperatures within Australian storms, Nature Geoscience, 8(7), 527-529.
Webb McKeown and Associates, (2003), Assessment of the variability of rates of rise for
hydrographs on the Hawkesbury Nepean River, December 2003.
Weinmann, P., Rahman, A., Hoang, T., Laurenson, E., Nathan, R. (2000), Monte Carlo
Simulation of flood Frequency Curves, Proceedings of Hydro 2000, 3rd International
hydrology and water resources symposium, Perth, Inst. Of Engineers Australia, pp: 564-569.
Westra, S., Evans, J., Mehrotra, R., Sharma, A. (2013), A conditional disaggregation
algorithm for generating fine time-scale rainfall data in a warmer climate, Journal of
Hydrology, 479: 86-99
Wood, J.F. and Alvarez, K. (1982), Review of Pindari Dam Design Inflow Flood Estimate,
Proceedings of the workshop on spillway design, Department of National Development and
Energy, AWRC Conference Series No.6.
Chapter 6. Spatial Patterns of Rainfall
Phillip Jordan, Alan Seed, Rory Nathan
Chapter Status Final
Date last updated 14/5/2019
6.1. Introduction
As discussed in Book 2, Chapter 2, the description of rainfall events used in most currently
applied design flood estimation methods is based on a reductionist approach, where the
temporal and spatial variations of rainfall within an event are represented separately by
typical temporal patterns and spatial patterns of event rainfall.
This chapter provides practitioners with recommendations on the derivation and application
of spatial patterns of rainfall for use in design flood estimation using representations of
varying complexity. This includes recommendations for reconstructing the space-time
patterns of rainfall for the observed events used in the calibration of hydrologic catchment
models.
There are a number of items where the authors recognise that the guidance adopted in this
chapter is uncertain and where benefit would be obtained from further research to better
quantify and potentially reduce the impact of those uncertainties on design flood estimation
practice. Section 6 lists and briefly discusses the residual uncertainties relating to various
aspects of the derivation and application of spatial and space-time patterns of rainfall, and
recommends potential areas of future investigation.
Rainfall gauges provide data on the rainfall depths observed at the point location of the
rainfall gauge over different periods of time. Daily reporting rainfall gauges provide rainfall
depths recorded at the gauge over the preceding day period, with the Bureau of
Meteorology’s typical practice being that these gauges report at 9:00 am local time on each
day. Pluviograph or tipping bucket rainfall gauges can provide rainfall depths observed at a
point location for sub-daily temporal resolution. They can provide rainfall depths with
temporal resolutions down to less than one minute.
Rainfall gauges are subject to some observational errors but they typically provide relatively
accurate measurements of the time series of rainfall recorded at a point location. They can
under-record rainfall during periods of high winds, particularly for snow or when rainfall intensities are low. There can be errors associated with estimating rainfall rates from tipping
bucket rainfall gauges over defined periods of time from the recorded times of the bucket
tips. During periods of very high rainfall intensity, tipping bucket rainfall gauges can be
subject to errors induced by the bucket failing to tip or tipping when it is partially full. Rainfall
gauges can also be subject to errors in manual recording of the data or electronic
transmission of data from telemetered rainfall sites.
Remote sensing approaches can provide estimates of rainfall intensity observed on a spatial
grid across a wide observation domain for a given period of time. The two most commonly
available remote sensing approaches for rainfall estimation are ground-based weather radar
and satellite observing systems.
Weather radars measure the reflectivity returned by rain drops, hail stones or snow, which
are converted into a rainfall intensity estimate. There are several different types of errors in
this process that degrade the accuracy of the radar rainfall measurement (Joss and
Waldvogel, 1990; Collier, 1996). The analysis of radar data to derive space-time patterns
requires specialist expertise that lies outside the scope of these guidelines. However, such
analysis could be considered for large or high risk studies which are able to secure the
specialist expertise required. Weather radar approaches typically provide estimates of
rainfall intensities that are more accurate on a relative basis within the space-time field of the
event than in absolute magnitude terms.
The practitioner should make use of the available data to reconstruct the space-time pattern
of rainfall across the catchment or study area for the events that are to be utilised in model
calibration and design flood simulation. The practitioner should assess the suitability of the
rainfall data that is available for the event for reconstructing the space-time pattern, including
rainfall gauges and any data that is available from remote sensing.
The practitioner should consider the events that are to be used for constructing the space-
time patterns of rainfall. Factors that should be considered in selecting events are the:
• Number of sites and locations, relative to the catchment, of daily rainfall gauges;
• Number of sites and locations, relative to the catchment, of continuous rainfall gauges;
• Likely accuracy of quantitative rainfall estimates derived from remotely sensed data;
• Purpose of estimating the space-time rainfall pattern, whether it is for hydrological model
calibration, deriving a space-time pattern or spatial pattern for inclusion in design flood
simulation or both;
• Existence and quality of recorded flood levels, flood extents and gauged flows for the
event, which make it a candidate for model calibration; and
• Estimated AEP of the flood event or the rainfall total for the event over the catchment of
interest, relative to the AEP of the design floods that are to be estimated.
The practitioner may need to make judgements between using the space-time pattern for an older event, which has no remote sensing data and relatively poor coverage of rain gauge data but which produced higher flood levels, and a more recent event, which has remotely sensed data and/or better coverage of rain gauge data but which produced a smaller flood.
The conventional approach applied in flood estimation for approximating the space-time
pattern of rainfall from gauge networks has been:
1. To estimate the spatial pattern of rainfall for the whole rainfall event; and
2. To disaggregate the rainfall accumulation for each part of the spatial domain, often a
model subarea or subcatchment, using the temporal pattern observed at a particular
rainfall gauge.
The conventional approach is a valid method in most situations, but a more sophisticated approach involving construction of different spatial patterns for different increments of the event may be required when the spatial or temporal variability of the rainfall pattern for the event is large (Umakhanthan and Ball, 2005). Considerations for application
of each of the two steps are discussed in Book 2, Chapter 6, Section 2 and Book 2, Chapter
6, Section 2. Potential alternative approaches are discussed in Book 2, Chapter 6, Section 2.
There is no preferred technique for constructing a spatial pattern of rainfall for an event.
Hand drawing of rainfall contours informed by the rainfall totals at the gauges remains a valid
approach that will produce acceptable results for many rainfall events.
Spatial interpolation techniques using a computer usually involve interpolation between the
point observations onto a grid, defined in either a geographic or projected Cartesian
coordinate system. The grid resolution should be sufficiently fine to capture the spatial
variability in the rainfall field at a meaningful scale for the catchment. It is recommended that
the resolution of the grid should be 1 km (for a projected grid) or 0.01° (for a geographic grid)
or finer. There are many potential approaches that have been developed for spatial interpolation (Verworn and Haberlandt (2011) and the references therein), including the following (a simple sketch of one approach is given after this list):
• Construction of Thiessen polygons, which is equivalent to adopting the rainfall depth from
the nearest neighbour rainfall gauge when applied using a grid;
• Weighting of rainfall using the inverse of the square of the distances to the gauges;
• Natural neighbours;
• Variants on Kriging, such as indicator Kriging, regression Kriging and Kriging with external
drift.
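As a concrete illustration of one of the listed approaches, the sketch below applies inverse distance weighting to interpolate event rainfall totals onto a regular grid. The gauge coordinates, depths and grid definition are illustrative, and a production implementation would also deal with coincident points, search radii and covariates.

```python
import numpy as np

def idw_grid(gauge_xy, gauge_depths, grid_x, grid_y, power=2.0):
    """Inverse distance weighted interpolation of event rainfall totals onto a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    weighted, weights = np.zeros_like(gx), np.zeros_like(gx)
    for (x, y), depth in zip(gauge_xy, gauge_depths):
        dist = np.hypot(gx - x, gy - y)
        w = 1.0 / np.maximum(dist, 1e-6) ** power    # avoid division by zero at a gauge
        weighted += w * depth
        weights += w
    return weighted / weights

# 0.5 km grid over a small domain; three gauges (coordinates in km, event totals in mm)
gauges = [(1.0, 2.0), (4.0, 5.0), (7.0, 1.5)]
totals = [320.0, 410.0, 280.0]
grid = idw_grid(gauges, totals, np.arange(0.0, 8.0, 0.5), np.arange(0.0, 6.0, 0.5))
print(grid.shape, grid.max().round(1))               # (12, 16) and a maximum near 410 mm
```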
Some Kriging and spline interpolation algorithms allow for the use of a covariate in the
interpolation algorithm, which may improve the accuracy of the interpolation. Either elevation
or design rainfall intensities for a relevant AEP and duration may provide appropriate
covariates that improve the accuracy of the interpolation, particularly in catchments or study
areas that are subject to appreciable and consistent orographic effects.
Gridded daily rainfall data sets are available from SILO and the Australian Water Availability
Project (Jones et al., 2009). These data sets may be useful for providing spatial patterns of
rainfall events but they should be used with caution as they were not derived with the
intention of being used for design flood estimation (refer to Book 1, Chapter 4, Section 9 and
Book 2, Chapter 7, Section 2).
Regardless of the approach that is used to produce the spatial pattern, the practitioner
should check the spatial pattern produced by mapping it against the point values observed
at rainfall gauges. If the mapping reveals anomalies in the interpolation approach, an
alternative method should be adopted. Where large gaps in rainfall gauge coverage exist
(particularly in mountainous areas) careful review of the simplifying assumptions made in the
interpolation procedure should be undertaken by the practitioner to avoid unrealistic spatial
patterns.
Rainfall totals for each model subarea should be estimated from the spatially interpolated
rainfall field for the event by averaging the rainfall totals at all grid cells that intersect with the
spatial extent of the model subarea. Mathematically, this is represented by:
$$S_s = \frac{\sum_{i,j} A_{s \cap i,j} \, P_{i,j}}{\sum_{i,j} A_{s \cap i,j}} \tag{2.6.1}$$

where $S_s$ is the rainfall depth for the total event applied to model subarea $s$, $A_{s \cap i,j}$ is the area of overlap between the grid cell at coordinate location $(i,j)$ in the interpolated grid and the model subarea $s$, and $P_{i,j}$ is the interpolated rainfall total for the grid cell.
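A minimal numerical sketch of Equation (2.6.1) is given below; the overlap areas and cell totals are illustrative values only.

```python
# Equation (2.6.1) as code: the event rainfall for a subarea is the overlap-area-
# weighted average of the interpolated grid cell totals (illustrative values only).
def subarea_event_rainfall(cells):
    """cells: iterable of (overlap_area_km2, interpolated_cell_total_mm) pairs."""
    total_area = sum(area for area, _ in cells)
    return sum(area * depth for area, depth in cells) / total_area

cells = [(0.25, 310.0), (0.25, 330.0), (0.10, 355.0)]   # partial cells at the subarea edge
print(round(subarea_event_rainfall(cells), 1))          # 325.8 mm
```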
For rainfall events that extend for a period longer than 24 hours, it may be useful to construct
spatial patterns for separate time periods of the event to investigate the temporal evolution of
the event.
The rainfall depth applied to each model subarea for each model time increment is then computed as:

$$R_{s,i} = \frac{S_s \, r_{g,i}}{\sum_{i} r_{g,i}} \tag{2.6.2}$$

where $R_{s,i}$ is the rainfall depth applied to model subarea $s$ for model time increment $i$; $S_s$ is the total event rainfall for model subarea $s$; $r_{g,i}$ is the rainfall depth recorded at gauge $g$ for model time increment $i$; and the summation is formed over all time increments of the event.
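Equation (2.6.2) amounts to scaling the assigned gauge pattern so that it sums to the subarea event total, as in the sketch below (values are illustrative only).

```python
# Equation (2.6.2) as code: scale the assigned gauge pattern so that it sums to the
# subarea event total (illustrative values only).
def disaggregate(subarea_total_mm, gauge_increments_mm):
    gauge_total = sum(gauge_increments_mm)
    return [subarea_total_mm * r / gauge_total for r in gauge_increments_mm]

gauge_pattern = [2.0, 8.0, 14.0, 6.0, 2.0]          # r_{g,i} at the assigned pluviograph
subarea_series = disaggregate(96.0, gauge_pattern)  # R_{s,i}; sums back to 96.0 mm
print([round(v, 1) for v in subarea_series])        # [6.0, 24.0, 42.0, 18.0, 6.0]
```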
Where data is available from more than one recording rainfall gauge, this provides options to
the practitioner on which gauge to select to provide the temporal pattern for each subarea of
the model. An assumption is often made that the most appropriate pattern would be provided
by the recording rainfall gauge that is located closest, by horizontal distance, from the
centroid of the model subarea. Whilst the closest gauge by physical distance makes intuitive
sense, it is not necessarily the case that it must provide the most appropriate pattern for
allocating the temporal pattern of a particular subarea. The practitioner may consult other
information, such as catchment topography, data on wind velocities during the event or
remote sensing data to guide the selection of an alternative to the nearest gauge for
providing the temporal pattern. The practitioner may also use other information on the
meteorology of the event to justify use of an adjusted temporal pattern for disaggregation.
For example, if information was available that a particular rainfall event was moving in a particular direction at an average velocity of 15 km/h and a model subarea was located 30 km downwind of a rainfall gauge, it may be justified to adjust the temporal pattern recorded at this gauge by shifting it two hours later in time, to represent the estimated travel time of the storm from the rainfall gauge to the model subarea.
The time series of rainfall at each recording gauge should be checked before it is used for
disaggregation. The event total rainfall at each recording gauge should be checked, where available, against the event total rainfall at other daily and continuously recording rainfall gauges in the vicinity. A gauge should not be used if significant anomalies are identified in
the recorded data for the site.
An alternative is to spatially interpolate the rainfall separately for each model time increment; the interpolated subarea depths for each increment would then be summed to produce a temporal pattern for each model subarea. If this approach is used, the total rainfall across the event should be calculated for each
subarea and then compared to the rainfall computed from spatially interpolating the event
total rainfall only. Adjustments should be considered to the approach if the totals for any
subarea differ by more than 5%.
The spatial coverage of pluviographs around a catchment may be relatively sparse. This
introduces uncertainty into the estimation of the actual temporal pattern of rainfall for any
given subarea or grid cell of a model. Whilst a reasonable assumption may be that the
temporal pattern for a given model subarea would be defined by the pluviograph that is
nearest in horizontal distance, this may not necessarily produce the most accurate temporal
pattern for the model subarea. As discussed in Book 2, Chapter 6, Section 2, a more
accurate representation of the space-time rainfall pattern over a model subarea may be
produced by judicious adjustment of the temporal pattern observed at a gauge or selection
of a temporal pattern from a gauge that is not physically closest to the subarea centroid.
Adjusting the assignment of temporal patterns to subareas in this manner may assist the
practitioner in achieving a more robust calibration of the model parameters to the event.
Adjustment of temporal patterns and temporal pattern assignment is therefore allowable,
particularly if meteorological evidence is provided to support the decision.
Remote sensing data, where available, may be used to estimate the space-time rainfall field
of an event for catchment modelling system calibration. If used, the space-time rainfall field
should be corrected using data from rainfall gauges, using a recommended approach for
adjusting for the mean field bias.
The space-time pattern or set of space-time patterns adopted for design flood estimation
should be chosen in a manner that, when coupled with other aspects of the catchment
modelling system, preserves the AEP of the design flood when derived from its causative
rainfall.
If there is a sufficient density of continuously recording rainfall gauges that have recorded a number of rainfall events, using this data to derive alternative (non-uniform) design spatial patterns may be considered.
For estimation of design flood events more frequent than and including the 1% AEP event,
the spatial pattern should be estimated using the spatial pattern derived from the design
rainfall grids (as discussed in Book 2, Chapter 3) across the catchment for the relevant IFD
surface for the AEP and duration. In many cases there will be little relative variation in spatial distribution between probabilities or adjacent durations. Different spatial patterns could be
applied for different durations. Alternatively, one spatial pattern may be estimated for the
critical duration and this single spatial pattern may then be applied for all durations.
For estimation of design flood events rarer than 1% AEP with durations of 6 hours and less
on catchment areas less than 1000 km2, the spatial pattern should be derived in accordance
with Woolhiser (1992) for the relevant duration. Use of different spatial patterns for different
AEP ranges may introduce inconsistencies at the adjacent limits of each method, and if this
is the case then any such inconsistencies should be smoothed in an appropriate fashion.
For estimation of design flood events rarer than 1% AEP with durations of 9 hours and
greater or on catchment areas greater than 1000 km², the spatial pattern should be derived from the Topographic Adjustment Factor (TAF) database derived from the generalised PMP method relevant for the zone in which the catchment is located. Use of different spatial
patterns for different AEP ranges may introduce inconsistencies at the adjacent limits of
each method, and if this is the case then any such inconsistencies should be smoothed in an
appropriate fashion.
For large studies and particularly for large catchments the practitioner should investigate and
analyse the variability in spatial patterns between events. Where topography is dominant or
large events are generally produced by a single rainfall mechanism there is likely to be only
moderate variability between events but for some catchments there can be significant
variations in space-time patterns between events. The practitioner should prepare and
examine maps of the spatial pattern of rainfall for each event as a whole and for time slices,
for example each 24 hour period, using an approach described in Book 2, Chapter 6,
Section 2. These spatial patterns should be compared to rainfall accumulations from
Intensity Frequency Duration analysis for a relevant duration and AEP (refer to Jordan et al.,
2015 for an example of this approach). Consistency in spatial patterns between events may
reveal that it is acceptable to apply a single spatial pattern for all design flood estimates,
particularly if it is consistent with the design rainfall analysis.
As discussed in Book 2, Chapter 4, Section 3, partial area storms should always be explicitly considered for catchments with an area exceeding 30 000 km² and should be considered for catchments larger than 5000 km².
• A set (or population) of spatial patterns across the catchment of interest from a number of
observed rainfall events; and
The spatial pattern of rainfall should be assembled for each event in the population, in
accordance with the methods discussed in Book 2, Chapter 6, Section 2. The catchment
average rainfall accumulation should be computed for each event, and continuously recording rainfall gauges located in or near the catchment should be used to estimate the duration over which
most of the total rainfall accumulation was likely to have fallen in the catchment. The
estimated catchment average depth for the event and the estimated rainfall event duration
should be used with the table of design rainfall estimates for the catchment, after application
of the applicable ARF, to estimate the AEP of the rainfall for each event.
Similarly, the temporal pattern of rainfall should be assembled for each event in the
population. The temporal pattern may be assembled at a single continuously recording rainfall gauge or from a combination of a number of continuously recording rainfall gauges located in the vicinity of
the catchment or study area. The temporal pattern should be analysed to extract the
maximum burst for a number of different durations. The rainfall accumulations over these
bursts should be compared to the design rainfall estimates at the location of the rainfall
gauge, without the application of the ARF, to estimate the AEP of the rainfall for each event.
A sample of patterns for use in the Monte Carlo simulation should be selected from the set of
historical events that are available. A sufficient number of events should be selected to allow
for a meaningfully large sample in the Monte Carlo simulation. It is expected that a minimum of five patterns would be required for each of the sets of spatial and temporal patterns. However, events should only be selected for inclusion in the sample if the AEP of their rainfall is relatively similar to the range of AEPs for which design flood estimates are being produced. Ideally, the spatial patterns and temporal patterns of events selected for the
Monte Carlo sample should have an estimated AEP that is between 1/10 and ten times the
AEP of the design flood event to be simulated. Adding more historic spatial patterns to an
ensemble does not necessarily improve the simulation accuracy of a Monte Carlo model, as
the additional patterns that are most likely to be added would be at the more common end of
the AEP range. In many catchments, design floods of interest are caused by rainfall events
Unless there is hydrometeorological evidence to the contrary, all potential spatial and
temporal patterns in the sets available for sampling should be given equal probability of
selection in the Monte Carlo simulation.
After the spatial and temporal patterns for the design rainfall burst have been selected
stochastically, the patterns should be scaled so that the catchment average rainfall depth for
the design rainfall burst matches the depth generated stochastically by the sampling
scheme.
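A simplified sketch of this sampling and scaling step is given below. The two spatial patterns, the subarea area fractions and the burst depth are illustrative values only; the essential points are that patterns are drawn with equal probability and that the selected pattern is rescaled so that its area-weighted catchment average matches the stochastically generated burst depth.

```python
import random

spatial_patterns = [                       # non-dimensional patterns (illustrative)
    {"A": 1.20, "B": 1.00, "C": 0.80},
    {"A": 1.05, "B": 1.00, "C": 0.95},
]
area_fraction = {"A": 0.3, "B": 0.4, "C": 0.3}   # subarea shares of catchment area

def sample_and_scale(burst_depth_mm, rng=random):
    pattern = rng.choice(spatial_patterns)        # equal probability of selection
    mean = sum(pattern[s] * area_fraction[s] for s in pattern)
    # rescale so the area-weighted average equals the sampled burst depth
    return {s: burst_depth_mm * pattern[s] / mean for s in pattern}

print(sample_and_scale(180.0))                    # subarea depths averaging 180 mm
```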
There has been limited assessment on methods for selection of space-time patterns for use
in Monte Carlo simulation for design flood estimation. Further research should be conducted
in this area to provide more robust guidance on the minimum number of temporal and spatial
patterns in the sampling populations, the range of AEP represented by the populations of
spatial and temporal patterns to be sampled compared to the AEP of the depth of the rainfall
burst and the relative probabilities to be applied in the selection of spatial and temporal
patterns.
It may be an option to transpose space-time rainfall patterns from an area with a good
observational network for rainfall to a catchment with a poorer observational network. If this
is done, the practitioner should only transpose (non-dimensional) space-time rainfall patterns
from an area that is subject to rainfall events that are driven by similar hydrometeorological
processes. The transposition region should be subject to similar orographic influences. In
some cases, the space-time patterns may need to be rotated to maintain consistency
between the spatial gradients in the space-time patterns and orographically influenced
gradients in the design rainfall gridded data.
The continuous rainfall simulation approach described in Book 2, Chapter 7 includes an approach to post-process the sequence of generated rainfall data for the
catchment of interest so that the characteristics of the large and extreme rainfall events in
the sequence reflect the Intensity Frequency Duration (IFD) statistics for the catchment of
interest. If such an adjustment is conducted then it is recommended that the IFD statistics
used as the basis of the adjustment are calculated for the catchment of interest after
multiplying by the ARF that is applicable for the catchment area.
Book 2, Chapter 7 recommends that the IFD adjustment is applied for a set of target
durations of either 6 minutes, 1 hour and 3 hours or for a set of durations of 6 minutes, 30
minutes, 1 hour, 3 hours, 6 hours and 12 hours. It is therefore recommended that the following procedure is adopted (a small numerical sketch is given after the list):
1. The IFD statistics for each of the target durations are calculated as the average of the
point IFD statistics across the catchment, for each of the standard AEP (1EY to 1% AEP);
2. The ARF are computed for each of the target durations, at each of the standard AEP, for
the total area of the catchment to be modelled;
3. The catchment IFD statistics are computed for each of the target durations, at each of the
standard AEP, as the product of the point IFD statistics (from step 1) and the ARF (from
step 2);
4. The catchment IFD statistics (from step 3) are applied in the modification procedure as
the depths at the selected target durations.
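A small numerical sketch of steps 1 to 4 follows; the point IFD depths and ARF values are illustrative, not the Stanley River values.

```python
# Steps 1 to 4 with illustrative numbers: catchment IFD depth = point IFD depth x ARF.
point_ifd_mm = {"1 hour": 68.0, "3 hours": 105.0}   # step 1: catchment average point IFD
arf = {"1 hour": 0.87, "3 hours": 0.91}             # step 2: ARF for the catchment area

# steps 3 and 4: catchment IFD depths passed to the modification procedure
catchment_ifd_mm = {dur: round(point_ifd_mm[dur] * arf[dur], 2) for dur in point_ifd_mm}
print(catchment_ifd_mm)                             # {'1 hour': 59.16, '3 hours': 95.55}
```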
The practitioner may be adopting a catchment model that allows for spatial distribution of the
simulated rainfall sequence across the catchment. If this is the case, it is recommended that
the generated sequence of rainfall is scaled for each portion of the model (subcatchment or
grid cell as applicable to the particular model) to reflect the spatial distribution of rainfall that
would be typically observed across the catchment. Parts of the catchment that are typically
wetter would have rainfall depths applied in the model that are larger than the generated
mean rainfall depth across the catchment but with the same timing and sequencing.
Conversely, parts of the catchment that are typically drier would have rainfall depths applied
in the model that are smaller than the generated mean rainfall depth across the catchment
but with the same timing and sequencing.
Selection of an appropriate means of deriving the spatial pattern for a continuous simulation
model that includes spatial distribution depends upon the AEP of the design events that are
of most interest and the flood response characteristics of the catchment:
• When the focus is on estimation of floods with relatively frequent AEP (around 10% or
more common) and for catchments with large moisture stores having significant relation
between antecedent rainfall and the annual maximum flood, it is recommended that the
spatial pattern applied in the model should be estimated from contours of mean annual
rainfall;
• However when the focus is on estimation on floods with rarer AEP (5% or rarer) and for
catchments where the influence of large moisture stores are less significant, it is
recommended that the spatial pattern should be selected in a manner that is consistent
with the recommendation for design event simulation (refer to Book 2, Chapter 6, Section
3 and Book 2, Chapter 6, Section 3 above).
It is recommended that until credible further studies are completed showing otherwise, for
simulations applying projected climate change ARF, spatial patterns and space-time patterns
of design rainfall should be the same as derived under existing climatic conditions.
Design rainfall estimates are developed for the Stanley River at Woodford, which has a
catchment area of 245 km2. The catchment to Woodford is the north-eastern portion of the
catchment to Somerset Dam and for this worked example it includes fifteen subcatchments
in the runoff-routing model.
A significant feature of the Stanley River catchment is the appreciable gradient in rainfall that
is typically observed during large rainfall events. Tropical cyclones, ex-Tropical Cyclones,
East Coast Lows and other rainfall producing systems typically feed moisture into the
catchment from the Pacific Ocean. Since the north-eastern part of the catchment is only 20
km from the coast but the western side of the catchment is almost 70 km from the coast, the
typical direction of storm movement and typical direction of flow of warm moist air from the
ocean results in a gradient of rainfall totals that reduce from east to west across the
catchment in most rainfall events. The strength of the rainfall gradient is enhanced by
orographic effects with the highest totals typically also occurring in the north-eastern part of
the catchment.
Figure 2.6.1. Stanley River Catchment, Showing Runoff-routing Model Subcatchments and
the Locations of Daily rainfall and pluviograph gauges
Figure 2.6.2. Rainfall totals (mm) Recorded at Rainfall Gauges for the January 2013 Event in
the Vicinity of the Stanley River Catchment
The January 2013 rainfall event was used in this worked example to demonstrate various
approaches to interpolation of spatial patterns of historical rainfall events, for the purpose of
calibration of runoff-routing models. For all of the algorithms, the rainfall totals were first
interpolated onto a 0.5 km resolution grid over the catchment. Rainfall totals for each of the 76 runoff-routing model subcatchments were then computed from the average of the rainfall totals at the grid cells that overlapped each subcatchment. Three interpolation algorithms were applied:
1. Thiessen polygons, as shown in Figure 2.6.3. Observed totals at gauges are shown as
blue circles in both panels. The top panel shows interpolation to a 0.5 km grid (red
shading), whilst the bottom panel shows calculated subcatchment average depths in mm
(red circles);
2. Inverse distance weighting, as shown in Figure 2.6.4. Observed totals at gauges are
shown as blue circles in both panels. The top panel shows interpolation to a 0.5 km grid
(red shading), whilst the bottom panel shows calculated subcatchment average depths in
mm (red circles); and
3. Ordinary Kriging, as shown in Figure 2.6.5. Ordinary Kriging was applied using a linear
semi-variogram that was fitted to observed rainfall totals at the 20 gauges from the
January 2013 event, as shown in Figure 2.6.6. Observed totals at gauges are shown as
blue circles in both panels. The top panel shows interpolation to a 0.5 km grid (red
shading), whilst the bottom panel shows calculated subcatchment average depths in mm
(red circles).
Figure 2.6.3. Application of Thiessen Polygons- Rainfall Totals for the January 2013 Event -
Stanley River Catchment
Figure 2.6.4. Application of Inverse Distance Weighting - Rainfall Totals for the January 2013
Event - Stanley River Catchment
Figure 2.6.5. Application of Ordinary Kriging - Rainfall Totals for the January 2013 Event -
Stanley River Catchment
Figure 2.6.6. Observed Semi-variogram and Fitted Linear Semi-variogram for the January
2013 Rainfall Event for Stanley River catchment, Applied in the Ordinary Kriging Algorithm
The catchment area for the Stanley River to Woodford is 245.07 km2. The ARF for 24 hour
duration was computed by applying Equation (2.4.2), with the relevant coefficients for the
East Coast North region (from the 1st row of Table 2.4.2). For the 1% AEP event the relevant
ARF is given by:
Table 2.6.1. Calculation of Weighted Average of Point Rainfall Depths for the 1% AEP 24
hour Design Rainfall Event for the Stanley River at Woodford
The catchment average design rainfall depth for 1% AEP, 24 hour duration for the Stanley
River at Woodford was therefore computed by multiplying the ARF by the weighted average
of the design point rainfall depths:
The calculation was repeated for the catchment of the Stanley River to Woodford for each
combination of standard durations between 3 and 72 hours and the 1 Exceedance per Year
to the 1% AEP. These computations are shown for the Stanley River catchment to Woodford
in Table 2.6.2.
The catchment area for the Stanley River to Somerset Dam is 1324 km2. The calculation of
catchment average design rainfall intensities, after application of areal reduction factors, is
shown in Table 2.6.3. Comparing the top panels of Table 2.6.2 and Table 2.6.3, for the
corresponding AEP and durations the weighted averages of the point rainfall depths for the
catchment to Somerset Dam are less than those for Woodford, due to the gradient in the IFD
grids. Comparing the middle panels of Table 2.6.2 and Table 2.6.3, for the corresponding AEP and durations the ARFs for the catchment to Somerset Dam are less than those for Woodford because the catchment area to Somerset Dam is larger. Hence, comparing the bottom
panels of Table 2.6.2 and Table 2.6.3, for the corresponding AEP and durations the
catchment average design rainfall depths to Somerset Dam are less than those for
Woodford.
Table 2.6.3. Stanley River Catchment to Somerset Dam: Calculation of Catchment Average
Design Rainfall Depths (bottom panel) from Weighted Average of Point Rainfall Depths (top
panel) and Areal Reduction Factors (middle panel)
Weighted Average of Point Rainfall Depths (mm)
(1EY = 1 Exceedance per Year; remaining columns are Annual Exceedance Probabilities)

Duration (hours)   1EY     50%     20%     10%     5%      2%      1%
3                  48.3    54.8    75.7    90.2    104.7   124.2   139.5
6                  61.0    70.0    99.2    119.7   140.3   168.5   190.9
12                 79.0    91.9    133.9   163.9   194.4   236.6   270.4
24                 103.8   121.9   182.2   226.1   271.4   335.1   387.0
48                 134.5   158.7   240.6   301.7   366.0   458.6   535.5
72                 153.0   180.4   274.2   345.2   420.9   531.1   624.0

Catchment Average Design Rainfall Depths (mm)

Duration (hours)   1EY     50%     20%     10%     5%      2%      1%
3                  35.5    39.9    52.5    60.4    67.4    75.9    81.8
6                  48.6    55.4    76.8    91.1    105.0   123.2   137.0
12                 66.6    77.3    111.4   135.3   159.3   191.8   217.4
24                 93.3    109.6   163.3   202.2   242.1   298.1   343.4
48                 124.3   146.6   221.7   277.5   336.0   420.0   489.5
72                 143.2   168.7   255.9   321.6   391.5   492.9   578.1
For the Stanley River catchment, this approach was demonstrated using the 24 hour
duration IFD data. For the catchment to Woodford, point rainfall depths at each of the
subcatchment centroids were divided by the weighted average of the point rainfall depths to
derive the non-dimensional spatial pattern, as computed in Table 2.6.4 and mapped in the
top panel of Figure 2.6.7. To model the 1% AEP 24 hour design flood event for the
catchment, the non-dimensional spatial pattern was multiplied by the catchment average
design rainfall depth to Woodford for this duration (468.6 mm, after application of the ARF),
as computed in Table 2.6.4 and mapped in the top panel of Figure 2.6.8.
The process was repeated for the Stanley River to Somerset Dam, with the map of the non-
dimensional spatial pattern shown in the bottom panel of Figure 2.6.7 and the map of the
design depths for the 1% AEP, 24 hour duration event in the bottom panel of Figure 2.6.8.
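A minimal sketch of the spatial pattern calculation is given below. The subcatchment centroid depths and areas are hypothetical; the catchment average design depth of 468.6 mm is the Woodford value quoted above.

```python
# Sketch of the design spatial pattern calculation described above. The centroid
# depths and subcatchment areas are illustrative placeholders.
import numpy as np

# 1% AEP, 24 hour point rainfall depths (mm) at subcatchment centroids
centroid_depths = np.array([430.0, 455.0, 470.0, 490.0, 445.0])
# Subcatchment areas (km^2), used to weight the catchment average
subcatchment_areas = np.array([40.0, 55.0, 50.0, 60.0, 40.0])

weighted_avg = np.average(centroid_depths, weights=subcatchment_areas)

# Non-dimensional spatial pattern (dimensionless, averages to about 1 over the catchment)
spatial_pattern = centroid_depths / weighted_avg

# Catchment average design depth after applying the ARF (468.6 mm for Woodford
# in the worked example above)
catchment_avg_design_depth = 468.6

# Design rainfall depths applied to each subcatchment
design_depths = spatial_pattern * catchment_avg_design_depth
print(np.round(design_depths, 1))
```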
Table 2.6.4. Calculation of Design Spatial Pattern for Stanley River at Woodford
Figure 2.6.8. Design Spatial Pattern of Design Rainfall Depths 1% AEP 24 hour Event for
Stanley River to Woodford (top panel) and Stanley River to Somerset Dam (bottom panel)
Design peak flow estimates at Somerset Dam inflow were produced from a number of Monte
Carlo simulations that were implemented within RORB. There were a number of common
elements to all of these simulations:
• all adopted the same catchment average design IFD information multiplied by the areal
reduction factor for the applicable duration from Jordan et al. (2013);
• all were run using the stratified Monte Carlo sampling scheme that is implemented within
RORB (Laurenson et al., 2010);
• all were run for rainfall burst durations of 6, 9, 12, 18, 24, 36 and 48 hours, with the peak
flow defined by the highest flow from among these durations at each AEP;
• all simulations sampled from the same non-dimensional probability distribution of initial
loss values defined by Ilahee (2005), scaled by a median initial loss of 40 mm;
• all adopted a constant continuing loss rate of 1.7 mm/hour across all subcatchments;
• all simulations adopted RORB delay parameter (kc) values of 20 for the catchment upstream of Peachester, 20 for the catchment between Peachester and Woodford, 16 for the catchment upstream of Mount Kilcoy and 45 for the residual catchment to Somerset Dam inflow.
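As a rough illustration of the loss treatment common to these simulations, the sketch below samples an initial loss from a non-dimensional distribution scaled by the 40 mm median and applies the 1.7 mm/hour continuing loss. The non-dimensional quantiles are placeholders standing in for the Ilahee (2005) distribution, and the stratified sampling and routing performed by RORB are not represented.

```python
# Minimal sketch of the loss sampling used in each Monte Carlo simulation.
# The non-dimensional initial loss quantiles below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)

# Non-dimensional initial loss distribution: IL / median(IL)
nondim_il_quantiles = np.array([0.2, 0.5, 0.8, 1.0, 1.3, 1.7, 2.3])
median_il = 40.0          # mm, as adopted in the worked example
continuing_loss = 1.7     # mm/hour, constant across subcatchments

def sample_rainfall_excess(burst_hyetograph_mm, timestep_hours):
    """Apply an IL/CL loss model to a design rainfall burst (mm per time step)."""
    il = median_il * rng.choice(nondim_il_quantiles)
    cl_per_step = continuing_loss * timestep_hours
    excess, remaining_il = [], il
    for depth in burst_hyetograph_mm:
        absorbed = min(depth, remaining_il)      # satisfy the initial loss first
        remaining_il -= absorbed
        after_il = depth - absorbed
        excess.append(max(0.0, after_il - cl_per_step) if remaining_il == 0 else 0.0)
    return np.array(excess)

# Example: a 300 mm, 24 hour burst in 1 hour steps
burst = np.full(24, 300.0 / 24)
print(sample_rainfall_excess(burst, timestep_hours=1.0))
```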
The Monte Carlo simulations differed from one another in their approach to sampling of
spatial, temporal and space-time patterns across the catchment, as shown in Table 2.6.5.
Table 2.6.5. RORB Model Scenarios Run for Worked Example on Stanley River Catchment
to Somerset Dam
Case 1
    Spatial pattern(s): Single spatial pattern derived from IFD analysis (1% AEP, 24 hour spatial pattern)
    Temporal pattern(s): Random sampling from a set of 13 temporal patterns for each duration, derived from the bursts of the corresponding duration within the 13 selected events listed in Table 1 of Jordan et al. (2015)

Case 2
    Spatial and temporal patterns (combined): Random sampling from a set of 13 space-time patterns for each duration, derived from the bursts of the corresponding duration within the 13 selected events listed in Table 1 of Jordan et al. (2015)
Sinclair Knight Merz (2013) fitted a Generalised Extreme Value (GEV) distribution to the
estimated annual maxima inflows to Somerset Dam over the period between 1955 and 2013.
The estimated inflow flood peak for the 1893 flood of 6200 m³/s was included as a censored
flow in the analysis. The distribution fitted to the estimated observed inflows was used to test
the performance of the RORB model simulations.
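For illustration only, the sketch below fits a GEV distribution to a synthetic series of annual maximum inflows and reads off design quantiles. It does not reproduce the censored treatment of the 1893 flood used in the Sinclair Knight Merz (2013) analysis.

```python
# Rough sketch of fitting a GEV distribution to annual maximum inflows and
# extracting design quantiles. The inflow series is synthetic; the censored
# handling of the 1893 flood is not reproduced here.
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum inflow series (m^3/s), 59 years
annual_max_inflows = genextreme.rvs(c=-0.1, loc=800.0, scale=500.0,
                                    size=59, random_state=1)

# Fit by maximum likelihood (scipy's shape parameter c has the opposite sign to kappa)
c, loc, scale = genextreme.fit(annual_max_inflows)

for aep in [0.05, 0.01, 0.002]:
    q = genextreme.ppf(1.0 - aep, c, loc=loc, scale=scale)
    print(f"{aep:.1%} AEP peak inflow: {q:,.0f} m3/s")
```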
Figure 2.6.9 shows that both cases of RORB model simulations provide an excellent match to the fitted flood frequency quantiles across the range between 5% and 0.2% AEP. Design peak inflow floods to Somerset Dam were insensitive to whether space-time patterns were randomly sampled or only temporal patterns were randomly sampled in the Monte Carlo simulation (Case 1 versus Case 2).
Figure 2.6.9. Flood Frequency Curves for Stanley River at Somerset Dam Inflow Derived
from Analysis of Estimated Annual Maxima and from RORB Model Simulations
It is recommended that further research is conducted into quality control of remotely sensed
estimates of the space-time pattern of rainfall.
It is recommended that further research is conducted to improve methods for mean field bias
correction of remotely sensed rainfall data. The recommendations on the approaches that
should be adopted for mean field bias correction should be updated in these guidelines in
accordance with the findings from this research.
At the time of writing, there was not an agreed optimum method for deriving space-time
rainfall patterns from rainfall gauge data for Australian catchments, although Verworn and
Haberlandt (2011) provide reasonable guidance. It is recommended that further research is
conducted to identify a superior method (or set of potential methods) that are demonstrated
to reliably produce more accurate estimation of the space-time rainfall field from gauge
observations. It may be that the optimum method depends upon meteorological
characteristics of the storm, density of rainfall gauges, orographic characteristics of the
region or other factors. It is recommended that further research is conducted to explore
these influences on the selection of optimum spatial and space-time interpolation methods
for flood model calibration and design flood estimation.
There has been limited assessment on methods for selection of space-time patterns for use
in Monte Carlo simulation schemes for design flood estimation. Further research should be
conducted in this area, to provide more robust guidance on:
• The minimum number of space-time or temporal and spatial patterns in the sampling
population(s);
• The range of AEP represented by the populations of space-time or spatial and temporal
patterns to be sampled compared to the AEP of the depth of the rainfall burst; and
• The relative probabilities to be applied in the selection of patterns from the relevant
populations.
It is recommended that further research is conducted into the validity of transposing space-
time patterns from one location to another. The research should assist in defining valid
regions over which transposition of space-time patterns is acceptable and conversely
boundaries between regions over which transposition should not occur. The research should
also consider other aspects of transposition, such as the validity or otherwise of rotating
space-time patterns and the maximum recommended angles for rotation.
Further research should be conducted into methods for stochastic generation of space-time
rainfall patterns. The research should investigate how orographic influences should be
incorporated into the stochastic generation algorithms in a way that replicates the space-time
variability of rainfall observed in historic rainfall events. Research should also develop more
definitive guidance on appropriate statistical tests to demonstrate that the stochastically
generated space-time rainfall patterns replicate the space-time statistical characteristics of
historical rainfall events that are sufficiently large to have caused flood events.
6.7. References
Abbs, D. and Rafter, T. (2009), Impact of Climate Variability and Climate Change on Rainfall
Extremes in Western Sydney and Surrounding Areas: Component 4 - dynamical
downscaling, CSIRO.
Ball, J.E. and Luk, K.C. (1998), Modelling spatial variability of rainfall over a catchment, J.
Hydrologic Engineering, 3(2), 122-130.
Barnston, A.G. (1991), An empirical method of estimating raingauge and radar rainfall measurement bias and resolution, J. Applied Meteorology, 30: 282-296.
Bradley, S.G., Gray, W.R., Pigott, L.D., Seed, A.W., Stow, C.D. and Austin, G.L. (1997),
Rainfall redistribution over low hills due to flow perturbation, J. Hydrology, 202, 33-47.
Collier, C.G. (1996), Applications of Weather Radar Systems: A Guide to Uses of Radar in Meteorology and Hydrology, 2nd ed., John Wiley, p: 383.
Ilahee, M. (2005), Modelling Losses in Flood Estimation, A thesis submitted to the School of
Urban Development Queensland University of Technology in partial fulfilment of the
requirements for the Degree of Doctor of Philosophy, March 2005.
Intergovernmental Panel on Climate Change, (2013), Climate Change 2013: The Physical
Science Basis, Contribution of Working Group I to the Fifth Assessment Report of the
Intergovernmental Panel on Climate Change [Stocker,T.F., D. Qin, G.-K. Plattner, M. Tignor,
S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge
University Press, Cambridge, United Kingdom and New York, NY, USA, p: 1535.
Jones, D., Wang, W. and Fawcett, R. (2009), High-quality spatial climate data-sets for
Australia, Australian Meteorological and Oceanographic Journal, 58: 233-248.
Jordan, P., Nathan, R. and Seed, A. (2015), Application of spatial and space-time patterns of
design rainfall to design flood estimation. Engineers Australia, Hydrology and Water
Resources Symposium 2015.
Jordan, P., Weinmann, P.E., Hill, P. and Wiesenfeld, C. (2013), Collation and Review of Areal
Reduction Factors from Applications of the CRC-FORGE Method in Australia, Final Report,
Australian Rainfall and Runoff Revision Project 2: Spatial Patterns of Rainfall, Engineers
Australia, Barton, ACT.
Laurenson, E.M., Mein, R.G. and Nathan, R.J. (2010), RORB Version 6, Runoff Routing
Program, User Manual, Version 6.14, Monash University and Sinclair Knight Merz,
Melbourne, Victoria.
Seed, A.W. and Austin, G.L. (1990), Sampling errors for raingauge derived mean-areal daily
and monthly rainfall, J. Hydrology, 118: 163-173.
Sinclair Knight Merz, (2013), Brisbane River Catchment Dams and Operational Alternatives
Study, Generation of Inflow Hydrographs and Preliminary Flood Frequency Analysis,
Revision 1, Brisbane, Queensland.
Umakhanthan, K. and Ball, J.E. (2005), Rainfall models for catchment simulation. Australian
Journal of Water Resources, 9(1), 55-67.
Urbonas, B.R., Guo, J.C.Y. and Janesekok, M.P. (1992), Hyetograph density effects on
urban runoff modelling, Proc. International Conference on Computer Applications in Water
Resources, Tamkang University, Tamsui, Taiwan, pp: 32-37.
Woolhiser, D.A. (1992), Modeling daily precipitation - progress and problems, in Statistics in
the environmental and earth sciences, edited by A.T. Walden and P. Guttorp, p: 306, Edward
Arnold, London, U.K.
Chapter 7. Continuous Rainfall
Simulation
Ashish Sharma, Ratnasingham Srikanthan, Raj Mehrotra, Seth Westra,
Martin Lambert
Chapter Status Final
Date last updated 14/5/2019
While a clear case is often present when deciding between a Flood Frequency Analysis and
an event-based approach for estimating the design flood, it is less clear when a continuous
simulation approach should be used in place of an event-based approach. In general, the
primary benefits of continuous simulation approaches arise when the catchment's antecedent moisture stores and the flood-producing rainfall event are not independent of each other, or their relationship changes over time (Blazkova and Beven, 2002; Boughton and Droop, 2003; Cameron et al., 2000; Lamb and Kay, 2004). Continuous simulation allows an explicit representation of the joint probability of antecedent moisture conditions and flood-producing rainfall, which can be challenging for event-based approaches. Therefore
key areas where continuous simulation approaches are likely to be useful include the
following:
• Catchments with large moisture stores which have a significant relationship between
antecedent rainfall and the annual maximum flood (Pathiraja et al., 2012);
• Examining the joint probability of flooding arising from the confluences of streams which are subject to varying spatial rainfall distributions;
• Situations where the relationship between historical antecedent conditions and flood-
producing rainfall are not representative of the design period. This may occur as a result
of climate change, but may also be relevant when calibrating over a period that is over-
represented in terms of El Niño or La Niña events (e.g. Pui et al. (2011));
• Situations where the initial level of flood and reservoir storages are unknown and these
influence the resulting downstream flood flows; and
Consider the example in Figure 2.7.1 which uses data for a South Australian catchment to
illustrate the workings of the three approaches used for design flood estimation. While
Figure 2.7.1 uses the 90 day antecedent rainfall to illustrate its relation with the extreme
rainfall, similar joint relationships could exist between antecedent rainfall for longer periods,
or other more subtle rainfall characteristics that are difficult to summarise using a simple
metric.
The first two panels illustrate the working of a flood frequency or event-based modelling
approach for design flood estimation. The last panel illustrates a continuous simulation
model that attempts to capture the strong relationship between extreme rainfall and the 90 day antecedent rainfall.
Figure 2.7.1. Flood Events for a Typical Australian Catchment - Scott Creek, South Australia
As highlighted in Figure 2.7.1, continuous simulation approaches for design flood estimation
require continuous rainfall sequences as the primary data input. Although continuous rainfall
data exist in some locations for periods of several decades or longer, for most locations in
Australia the continuous data is either unavailable, too short or of insufficient quality to
support continuous rainfall-runoff modelling. This chapter therefore presents the basis and
techniques for stochastically generating continuous rainfall records in a catchment. Also
discussed are:
i. Generic issues regarding the accuracy of rainfall observations and methods for identifying
errors in rainfall time series;
iii. When to generate multi-site data as compared to lumped or single site rainfall;
iv. Approaches for generating data at locations where rainfall records are not available; and
Book 2, Chapter 7, Section 2 discusses the approaches used to prepare rainfall data for use
in stochastic generation or other modelling studies. Book 2, Chapter 7, Section 3 discusses
a conceptual framework that underlies stochastic rainfall generation at point or multiple
locations. Alternatives for generation of daily rainfall are discussed Book 2, Chapter 7,
Section 4, while Book 2, Chapter 7, Section 4 discusses alternatives for disaggregation of
daily rainfall to sub-daily time scales. Alternatives that generate continuous rainfall
sequences without reference to a daily total at point and multiple locations conclude the
presentation. Worked examples illustrating the applications of some of the models presented
are included to assist with practical implementations (Refer to Book 2, Chapter 7, Section 4).
• Effect of wind, wetting, evaporation and splashing on daily rainfall measurements – The
World Meteorological Organisation (World Meteorological Organisation, 1994) states that
these factors can result in the measured daily rainfall being less than the true rainfall by
anywhere between 3% and 30%.
• Errors in tipping bucket measurements – Tipping bucket rainfall gauges are the preferred means of continuous rainfall measurement around the world. While reasonably accurate at
low rainfall intensities, tipping bucket rainfall gauges can underestimate the rainfall when
intensities are high due to the water lost as a result of the tipping motion of the rainfall
gauge. Typical errors for intensities greater than 200 mm/hr can range from 10-15% of the
true rainfall (La Barbera et al., 2002). A simple model for characterising gauge measurement errors was proposed by Ciach (2003), in which the errors are inversely proportional to the measured rainfall intensity.
multiplicative scaling) may need to be used. Note that similar comparative checks can also
be used in the context of identifying ‘odd’ rainfall gauge locations from the regional
average (slope of the double mass curve will be significantly different to 1).
• Effect of untagged multi-day accumulations in daily rainfall data – As nearly a third of the
long-term daily rainfall records are recorded at Post Offices and other public buildings, the
occurrence of multi-day readings (representing Saturday to Monday) recorded on the first
working day after the weekend is frequent. An example of one such station is illustrated in
Figure 2.7.3. Viney and Bates (2004) outline a hypothesis test for identifying the periods in
a rainfall record that reflect significant multi-day accumulations. While there is no simple
corrective procedure that can be employed, common-sense alternatives such as
comparing with data at nearby locations (after ascertaining that they do not suffer from the
same problem), and using the persistence structure of the non-accumulated data to
disaggregate the accumulated values, should be adopted. It should be noted that while
such accumulations may not affect calculations in yield or water balance studies, their
implications in flood estimation studies can be significant.
Figure 2.7.2. Double Mass Curve Analysis for Rainfall at Station A (from World
Meteorological Organisation (1994))
Figure 2.7.3. Total Rainfall Amounts for Rainfall Station 009557 over the Period 1956-1962
(from Viney and Bates (2004))
• Use of gridded rainfall products -Given the need to use catchment averages of rainfall and
potential evapotranspiration in a range of hydrologic studies, datasets of gridded rainfall
and temperature have been produced for Australia and elsewhere. Two gridded datasets
used routinely in Australia are the SILO and the Australian Water Availability Project
(AWAP) daily rainfall 5 km x 5 km gridded datasets. The SILO project (Jeffrey et al., 2001)
by the Queensland Centre for Climate Applications, Department of Natural Resources,
aimed to develop a comprehensive archive of key meteorological variables (Maximum and
Minimum Temperature, Rainfall, Class-A pan Evaporation, Solar Radiation and Vapour
Pressure) through interpolation on a 0.05° grid extending from latitude 10°S to 44°S and
longitude 112°E to 154°E. The project has also resulted in a patched daily rainfall series at
4600 locations extending back to 1890. In addition, the AWAP dataset was produced by
the CSIRO and the Bureau of Meteorology (Jones et al., 2009) at the same resolution
using a different averaging procedure. These datasets have been compared (Beesley et
al., 2009) and found to be similar in many respects, while still resulting in a dampening of high extremes due to averaging, as well as an over-simulation of the number of wet days in a year. Similar biases occur in the representation of persistence attributes, possibly
distorting the specification of antecedent conditions prior to large rainfall events. Care
must be taken when using such datasets, especially if the intention is to simulate flow
extremes for the catchment.
• Use of radar or satellite derived rainfall measurements - While the above mentioned
gridded data are based on spatial interpolation of gauged rainfall alone, another option
that has been pursued with success is to combine gauge and remotely sensed rainfall,
which is known to improve accuracy especially in remote locations with limited gauge
coverage. Examples of approaches that have produced and assessed such combined
datasets include (Chappell et al., 2013). While they suffer from the same problems as
other gridded datasets, the advantages they offer in remote locations should be taken into
consideration.
• Use of statistical interpolation techniques based on nearby daily and sub-daily gauge
records - Refer to the alternatives for continuous simulation at ungauged locations
presented later in the chapter. These alternatives use separate approaches for daily and
sub-daily continuous generation at ungauged locations. The daily alternative amounts to
identifying nearby gauges that “mirror” key characteristics that would be expected of daily
rainfall at the location of interest. These nearby gauge records are then transformed to the
current location by adjusting for any difference in their annual mean. Each nearby gauge is
assigned a probability depending on how “similar” it may be to the location of interest,
which allows characterisation of the uncertainty associated with this procedure. In the sub-
daily case, a second step is adopted. Once the daily record has been generated, it is
disaggregated using data on sub-daily fragments based on a different set of
characteristics that define the sub-daily climate of the location. More details on these
procedures are presented later.
• Normal Ratio Method – This method estimates the missing rainfall $P_x$ at gauge $x$ as a weighted average of the measured rainfall at nearby rainfall gauges (a brief sketch of this and the Quadrant Method is given after this list):

$$P_x = \frac{1}{G}\sum_{i=1}^{G}\frac{\bar{N}_x}{\bar{N}_i}P_i \qquad (2.7.1)$$

where $G$ represents the total number of nearby rainfall gauges, $\bar{N}_x$ and $\bar{N}_i$ the average annual rainfall at gauges $x$ and $i$ respectively, and $P_i$ the rainfall at gauge $i$ for the time period being filled. Care must be taken to ensure that the "host" rainfall gauges have similar climatic conditions as the gauge where the missing observations are being infilled.
• Quadrant Method – This method is related to the Normal Ratio Method, but aims to account for the proximity of the rainfall gauges to the target location. The missing rainfall $P_x$ is estimated from the rainfall $P_i$ at the nearest gauge in each of the four quadrants around the target location, weighted by the inverse square of its distance $d_i$ from that location:

$$P_x = \frac{\sum_{i=1}^{4} P_i/d_i^{2}}{\sum_{i=1}^{4} 1/d_i^{2}} \qquad (2.7.2)$$
• Isohyetal Method – This method involves drawing isohyets (lines of equal rainfall) for the
storm duration over the network of rainfall gauges available, and inferring the rainfall at the
missing rainfall gauge by interpolation. The accuracy of the Isohyetal method depends
significantly on the number of rainfall gauges used and the interpolation algorithm being
used to construct the isohyets.
• Copula based interpolation – Bárdossy and Pegram (2014) presented an alternative for
interpolating existing data to infill missing values at a station of interest. They used a
copula-based specification of the conditional probability distribution of the missing rainfall
based on values at nearby gauges. They compared their approach with both regression
and other spatial interpolation based alternatives and found it to perform better using daily
rainfall data from South Africa. Another advantage of their approach is that it can include
conditioning on exogenous variables which could include atmospheric fields that are
common to all stations in the area of interest, thereby allowing additional information on the nature of precipitation to be incorporated.
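The sketch below illustrates the Normal Ratio and Quadrant methods from the list above, using hypothetical neighbouring gauges; the mean annual rainfalls, daily totals and distances are placeholders.

```python
# Illustrative sketch of the Normal Ratio and Quadrant methods for infilling a
# missing daily rainfall value. All values are placeholders.
import numpy as np

# Neighbouring gauges: mean annual rainfall (mm), rainfall on the missing day (mm),
# and distance to the target gauge (km, one gauge per quadrant for the Quadrant method)
mean_annual = np.array([950.0, 1020.0, 880.0, 990.0])
day_rain    = np.array([12.0, 15.5, 9.0, 13.2])
distance    = np.array([8.0, 14.0, 11.0, 20.0])

mean_annual_target = 940.0   # mean annual rainfall at the gauge being infilled

# Normal Ratio method, Equation (2.7.1)
p_normal_ratio = np.mean((mean_annual_target / mean_annual) * day_rain)

# Quadrant method, Equation (2.7.2): inverse distance squared weighting
w = 1.0 / distance**2
p_quadrant = np.sum(w * day_rain) / np.sum(w)

print(f"Normal Ratio estimate: {p_normal_ratio:.1f} mm")
print(f"Quadrant estimate:     {p_quadrant:.1f} mm")
```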
The above methods are fairly intuitive and modifications of the basic logic outlined are
common. For instance, in situations where data from nearby rainfall gauges are hard to find,
the interpolation is often from previous years of record at the same rainfall gauge, the period
being chosen to represent the same season and similar antecedent rainfall conditions.
The methods suggested above should be used with care, with consideration for the
distributional changes that occur as a result of the interpolation. For instance, if the stations
used for spatial averaging are at significant distances to the station where the interpolation is
required, then the interpolated rainfall is likely to be ‘smoother’ than the rainfall that would
have occurred at that location, potentially leading to an overestimation of wet days and an
underestimation of peak rainfall. Similarly, if the interpolation is performed at each time step
independently, the dependence of rainfall from one time to the next may not be accurately
represented. These considerations attain importance particularly when short time steps
(daily and sub-daily) are considered, and when the missing periods are a significant portion
of the overall record.
Missing data within historical rainfall records can be a serious problem, the amount of which
can affect the type of model structure considered. Few researchers explain adequately how
this is dealt with. Cowpertwait (1991) described a replacement strategy to handle missing
data but it is not apparent that this approach will be adequate with significant missing or
rejected data. Katz and Parlange (1995) and Gyasi-Agyei (1999) ignore and discard months
with any missing data. As a result, valuable information could be lost, particularly if there is
limited data in the first place. For some months of the year Gyasi-Agyei (1999) discarded up
to half of the available data. With an event-based approach, discarding storm events or inter-
event times containing missing intervals should introduce no significant bias into the
calibration, provided the occurrence of this corrupted data is random. Therefore, if part of a
month of data is missing it does not invalidate the remaining good quality data in that month.
• The seasonality of a range of rainfall statistics, including wet/dry days, averages and
extremes;
• The low-frequency variability, which causes below- or above-average rainfall to persist for
multiple consecutive years;
• The short-range (day-to-day and within-day) persistence of wet and dry periods; and
• The highly skewed distribution of rainfall, with the rainfall features often of most interest in
a design flood estimation context being located at the tail of the distribution.
Simulation of these aspects of rainfall requires careful formulation of the rainfall generation
model, often by using conditional variables that enforce this variability at multiple timescales.
Finally, although the current chapter does not discuss the case of stochastic generation at
multiple locations, this added consideration would require the specification of multivariate
conditional probability distributions characterising both the temporal evolution of the process,
as well its links in space.
In general, single site rainfall generation approaches fall into the following categories:
Many alternative models exist for each of these categories, as do their extensions to
ungauged or partly gauged locations. Readers are referred to Sharma and Mehrotra (2010)
for a review on these alternatives. A subset of these alternatives is discussed in Book 2,
Chapter 7, Section 4. It should be noted that some of the sub-daily models simulate daily
rainfall very well when aggregated to daily (refer to Frost et al. (2004)).
Assuming first or low-order dependence can result in the number of wet days in a year being
similar from one year to the next. This is contrary to the nature of rainfall in Australia and
elsewhere, with considerable variations from one year to the next often modulated by low-
frequency climatic anomalies such as the El Nino Southern Oscillation (ENSO)
phenomenon. This inability of rainfall generation models to simulate observed variability at
aggregated (annual or longer) scales is referred to as “over-dispersion” (Katz and Parlange,
1998).
Table 2.7.1 (adapted from Sharma and Mehrotra (2010)) summarises the approaches used
for generation of daily rainfall. The higher-order Markov approaches listed are especially
relevant for Australia, given the significant low-frequency variability that characterises
Australian rainfall. Misrepresentation of this variability can have serious implications in the
representation of pre-burst antecedent conditions, as well as the relationship between the
rainfall extremes and the longer-range antecedent rainfall, given both are known to be
modulated by climatic anomalies responsible for such variability in rainfall time series.
The daily generation models in Table 2.7.1 are often formulated using high quality observed
rainfall records, and then regionalised for use anywhere. Regionalisation of a rainfall
generation model is accomplished either by interpolating model parameters for use at
ungauged locations, or by sampling data from other locations as representative for the
location of interest. In the discussion that follows, two methods - the regionalised Nested
Transition Probability Model (N-TPM) and the Regionalised Modified Markov Model
(RMMM), are summarised due to their widespread use in Australia and the availability of
software to facilitate implementation within the country.
The TPM has been applied in a number of studies, and exists in a regionalised form for use
anywhere in Australia. The computer program for the TPM can be obtained from the
Stochastic Climate Library as part of the e-Water Toolkit (http://toolkit.net.au/Tools/SCL).
Parameters for major city centres and recommendations for ungauged locations are
provided within the software. Table 2.7.2 and Table 2.7.3 present the number of states and
the rainfall amount associated with the highest state used for major city centres in Australia. If
the number of states is less than seven the upper limit of the last state is infinite. Figure 2.7.4
provides regional extents that are used in applying the method to other locations not
included in the tables.
Table 2.7.2. Number of States used for Different Rainfall Stations in the Transition Probability Model (Srikanthan et al., 2003)

Station          Latitude °S  Longitude °E   J  F  M  A  M  J  J  A  S  O  N  D
Melbourne        37 49        144 58         6  6  6  6  6  6  6  6  6  6  6  6
Lerderderg       37 30        144 22         6  6  6  6  6  6  6  6  6  6  6  6
Monto            24 51        151 01         6  6  6  6  6  6  6  6  6  6  6  6
Cowra            33 49        148 42         6  6  6  6  6  6  6  6  6  6  6  6
Adelaide         34 56        138 35         6  6  6  6  6  6  6  6  6  6  6  6
Perth            31 57        115 51         6  6  6  6  6  6  6  6  6  6  6  6
Sydney           33 52        151 12         7  7  7  7  7  7  7  7  7  7  7  7
Brisbane         27 28        121 06         7  7  7  7  7  7  7  7  7  7  7  7
Mackay           21 06        149 06         7  7  7  7  7  7  7  7  7  7  7  7
Kalgoorlie       30 47        21 27          5  5  5  5  5  5  5  5  5  5  5  5
Alice Springs    23 49        133 53         4  4  4  4  4  4  4  4  4  4  4  4
Onslow           21 40        115 07         4  4  4  3  4  3  4  3  3  3  3  3
Bamboo Springs   22 03        119 38         6  6  6  5  5  5  2  2  2  2  2  2
Broome           17 57        122 15         7  7  7  3  3  3  3  3  3  3  3  4
Darwin           12 27        130 48         7  7  7  7  3  2  2  2  3  7  7  7
Figure 2.7.4. Rainfall Stations used in Table 2.7.2 for the Transition Probability Model (Srikanthan et al., 2003)
Table 2.7.3. State Boundaries for Rainfall Amounts in the Transition Probability Model
State Number   Upper State Boundary Limit (mm)
1              0.0
2              0.9
3              2.9
4              6.9
5              14.9
6              30.9
7              ∞
As the Transition Probability Model requires a correction for the misrepresentation of low-frequency variability, several alternatives have been developed to address this limitation. The Nested Transition Probability Model (Srikanthan and Pegram, 2009) operates by aggregating the sequences of rainfall from the TPM first to a monthly and then to an annual time scale. Once aggregated, rainfall is modelled as a Markov order-one process at the aggregated time scale, accounting for the lag-one auto-correlation and variability that is manifested in the aggregated process. This offers an effective means of correcting variability in rainfall across a range of time scales, making the generated series more useable for hydrological applications. As with the TPM, the computer program for the Nested TPM can be obtained from the Stochastic Climate Library as part of the e-Water Toolkit (http://toolkit.net.au/Tools/SCL).
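The sketch below illustrates the nesting idea in a highly simplified form: generated daily rainfall is aggregated to months, each monthly total is adjusted towards a lag-one autoregressive model fitted to (assumed) observed monthly statistics, and the days in that month are rescaled to match. The actual Nested TPM formulation in Srikanthan and Pegram (2009) differs in detail, and the daily generator used here is a crude placeholder.

```python
# Highly simplified sketch of nesting a daily rainfall generator at the monthly
# scale. The daily generator and the monthly statistics are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def nest_monthly(daily_by_month, obs_mean, obs_std, lag1_corr):
    """daily_by_month: list of arrays of generated daily rainfall, one per month."""
    adjusted = []
    prev_z = 0.0  # standardised total of the previous month
    for days in daily_by_month:
        raw_total = days.sum()
        # AR(1) adjustment of the standardised monthly total
        z = lag1_corr * prev_z + np.sqrt(1 - lag1_corr**2) * rng.standard_normal()
        target_total = max(0.0, obs_mean + obs_std * z)
        scale = target_total / raw_total if raw_total > 0 else 0.0
        adjusted.append(days * scale)   # rescale the days within the month
        prev_z = z
    return adjusted

# Placeholder daily generator: intermittent gamma-distributed wet-day amounts
months = [rng.gamma(0.6, 8.0, size=30) * rng.binomial(1, 0.3, size=30) for _ in range(12)]
nested = nest_monthly(months, obs_mean=65.0, obs_std=30.0, lag1_corr=0.25)
print([round(m.sum(), 1) for m in nested])
```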
The algorithm for generating daily rainfall using the Modified Markov Model is presented in the box below, "Algorithm for step-wise daily rainfall generation using Modified Markov Model" (Mehrotra and Sharma, 2007a).
Algorithm for step-wise daily rainfall generation using Modified Markov Model
(Mehrotra and Sharma, 2007a)
1. For all calendar days of the year calculate the transition probabilities of the standard
first-order Markov model using the observations falling within the moving window of
31 days centered on each day. Denote these transition probabilities as p11 for
previous day being wet and p01 for previous day being dry.
2. Also estimate the means, variances and co-variances of the higher time scale predictor variables separately for occasions when the current day is wet/dry and the previous day is wet/dry. Mehrotra and Sharma (2007a) identified two such variables, namely the previous 30 day and 365 day wetness states.
3. Consider a day t. Ascertain the appropriate critical transition probability for day t based on the previous day's rainfall state in the generated series. If the previous day is wet, assign the critical probability p as p11, otherwise assign p01.
4. Calculate the values of the 30 day and 365 day wetness states for day t from the generated sequence available so far. To obtain values of the wetness states at the beginning of the simulation, randomly pick a year from the historical record and calculate the 30 day and 365 day wetness states from it.
5. Modify the critical transition probability p of step 3 using the following equation, together with the conditional means, variances and co-variances and the day t values of the higher time scale predictors for the generated sequence. Denote the modified transition probability as p*.
$$p_t^{*} = \frac{p_{1i}\,\dfrac{\exp\!\left[-\tfrac{1}{2}\left(X_t-\mu_{1,i}\right)V_{1,i}^{-1}\left(X_t-\mu_{1,i}\right)'\right]}{\sqrt{\det\left(V_{1,i}\right)}}}{p_{1i}\,\dfrac{\exp\!\left[-\tfrac{1}{2}\left(X_t-\mu_{1,i}\right)V_{1,i}^{-1}\left(X_t-\mu_{1,i}\right)'\right]}{\sqrt{\det\left(V_{1,i}\right)}}+\left(1-p_{1i}\right)\dfrac{\exp\!\left[-\tfrac{1}{2}\left(X_t-\mu_{0,i}\right)V_{0,i}^{-1}\left(X_t-\mu_{0,i}\right)'\right]}{\sqrt{\det\left(V_{0,i}\right)}}}$$

where $X_t$ is the predictor set at time $t$, the $\mu_{1,i}$ parameters represent the mean of $X \mid (R_t = 1, R_{t-1} = i)$ and $V_{1,i}$ is the corresponding variance-covariance matrix. Similarly, $\mu_{0,i}$ and $V_{0,i}$ represent, respectively, the mean vector and the variance-covariance matrix of $X$ when $R_{t-1} = i$ and $R_t = 0$. The $p_{1i}$ parameters represent the baseline transition probabilities of the first order Markov model defined by $P(R_t = 1 \mid R_{t-1} = i)$, and $\det$ represents the determinant operation.
6. Generate a uniform random number u between 0 and 1. If u ≤ p*, assign day t as wet and generate a rainfall amount for the day; otherwise assign day t as dry.
7. Move to the next date in the generated sequence and repeat steps 3 to 6 until the desired length of generated sequence is obtained.
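A sketch of the modified transition probability in step 5 is given below, treating the conditional distributions of the higher time scale predictors as multivariate normal. All parameter values are illustrative only.

```python
# Sketch of the modified transition probability (step 5 of the algorithm above),
# with the conditional distributions of the predictors treated as multivariate
# normal. Parameter values are illustrative placeholders.
import numpy as np

def modified_transition_prob(x_t, p1i, mu1, cov1, mu0, cov0):
    """Weight the baseline transition probability p1i by the likelihood of the
    predictor vector x_t under the wet (mu1, cov1) and dry (mu0, cov0) states."""
    def gauss_density(x, mu, cov):
        # (2*pi)^(d/2) normalisation omitted because it cancels in the ratio
        d = x - mu
        return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / np.sqrt(np.linalg.det(cov))
    f1 = gauss_density(x_t, mu1, cov1)
    f0 = gauss_density(x_t, mu0, cov0)
    return p1i * f1 / (p1i * f1 + (1.0 - p1i) * f0)

# Example: previous day wet (p11 = 0.55), predictors = [30 day, 365 day wetness states]
x_t = np.array([0.45, 0.32])
mu1 = np.array([0.40, 0.30]); cov1 = np.array([[0.010, 0.002], [0.002, 0.004]])
mu0 = np.array([0.25, 0.26]); cov0 = np.array([[0.012, 0.002], [0.002, 0.005]])
print(round(modified_transition_prob(x_t, 0.55, mu1, cov1, mu0, cov0), 3))
```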
Readers are referred to Mehrotra and Sharma (2007a) for details of the Modified Markov Model rainfall generation algorithm. An R package to generate daily rainfall at multiple locations given observed rainfall time series has been developed by Mehrotra et al. (2015) and is available for download from the Hydrology@UNSW Software website (http://www.hydrology.unsw.edu.au/download/software/).
7.4.1.3.1. Regionalisation
The Modified Markov Model requires a representative sample of daily rainfall for generation to proceed. This restricts its application to locations having long observed records. An attempt to regionalise the Modified Markov Model was presented in Mehrotra et al. (2012), using a similar approach to the regionalised sub-daily generation model of Westra et al. (2012) in Book 2, Chapter 7, Section 4. Unlike the regionalised version of the Nested TPM (Book 2, Chapter 7, Section 4), here the regionalisation involved identifying rainfall records for locations deemed 'similar' to the target, followed by rescaling to adjust for the changed climatology, and then pooling to take account of the relative similarity each nearby location bears to the target location. This pooled record was then used as the basis for generating the daily rainfall sequences.
As the regionalised approach relies on using data from nearby rainfall stations, it is
necessary to:
2. predict the probability that stations within a ‘neighbourhood’ of the target site are similar
by regressing against physiographic indicators such as the difference in latitude,
longitude, elevation and relative distance to coast between station pairs.
The relative distance to coast is obtained by dividing the difference in distance to coast
between two stations by the distance to coast of the target site. This is done to account for
the fact that the relative influence of distance to coast is likely to be greater for two stations
having greater proximity to the coastline.
‘Similarity’ between any two sites was assessed based on the similarity in the bivariate
probability distributions of a daily-scale attribute of interest, and the annual rainfall total.
Table 2.7.4 outlines the attributes used in the formulation of the RMMM. Each of the attributes listed was used to define similarity between stations based on a two-sample, two-dimensional Kolmogorov-Smirnov test (Fasano and Franceschini, 1987). The resulting classification of
similarity (‘1’ for similar and ‘0’ for dissimilar) for each attribute was pooled in a logistic
regression framework, using the difference in latitude for the two stations, difference in
longitude, and difference in the relative distance to coast as covariates.
Table 2.7.4 presents the daily scale attributes used to define similarity between locations. Each of these variables was estimated for each location and each year of record, and then paired to assess the best basis for defining 'similarity' between stations. Using 2708 separate rain gauge stations with at least 25 years of data, this resulted in a total of 3,665,278 station pairs.
Table 2.7.4. Daily Scale Attributes used to Define Similarity between Locations
This logistic regression framework can then be used to determine the similarity between any
two stations for the attribute of interest. Therefore, for a given target location where rainfall is
to be generated, one can now rank stations with data from most similar to least similar for
each attribute. The approach adopted in RMMM is to form an average rank using all
attributes for all nearby stations, and use the lowest S ranks to identify the stations to use as
the basis of rainfall generation. To account for the relative similarity across these S stations,
each station is selected with a probability equal to:
$$w_i = \frac{1/r_i}{\sum_{j=1}^{S} 1/r_j} \qquad (2.7.3)$$
where wi represents the weight associated with the ith station and ri the rank associated with
that station, used as the basis for probabilistically selecting nearby stations in the Modified
Markov Model. Lower ranked stations, which, by definition have rainfall attributes which are
most statistically similar to the target site, attain higher weight and therefore a higher
probability of being used in MMM. This rationale is summarised in Figure 2.7.5.
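A minimal sketch of Equation (2.7.3) and the probabilistic station selection is shown below; the station names and ranks are hypothetical.

```python
# Sketch of the rank-based weighting in Equation (2.7.3): the S most similar
# stations are weighted by the reciprocal of their average rank and one is
# selected probabilistically. Station names and ranks are hypothetical.
import numpy as np

stations = ["A", "B", "C", "D", "E"]
avg_rank = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # rank averaged across attributes

weights = (1.0 / avg_rank) / np.sum(1.0 / avg_rank)   # Equation (2.7.3)
print(dict(zip(stations, np.round(weights, 3))))

rng = np.random.default_rng(7)
chosen = rng.choice(stations, p=weights)
print("Selected station:", chosen)
```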
Figure 2.7.5. Identification of “Similar” Locations for Daily Rainfall Generation using RMMM
Once the ‘similar’ S stations have been identified, the generation of rainfall sequences at the
target location proceeds as per the generation algorithm for MMM in Algorithm for step-wise
daily rainfall generation using Modified Markov Model (Mehrotra and Sharma, 2007a), with
the inclusion of two additional steps. The first of these steps involves a rescaling of the
“similar” locations identified as described in Figure 2.7.5. The second of these steps is a
probabilistic selection of the “similar” locations, based on the weights associated with each
location. These steps are summarised in Figure 2.7.6.
Figure 2.7.6. Generation of Daily Rainfall Sequences using the Regionalised Modified
Markov Model Approach
It should be noted that the low frequency variable states (30 and 365 day wetness states in
MMM) are ascertained based on the generated sequence, and hence represent the
probabilistic average from the collation of the locations that have been selected as “similar”
for the generation procedure. The software first identifies "similar" locations to the target location of interest, and then estimates the parameters of the MMM for these locations. As
the criterion for selecting “similar” locations is defined as a function of differences in latitude,
longitude, elevation and rescaled distances from the coast, a new location with daily
observations can be included for the procedure to work. The parameters of the logistic
regression model have been ascertained using high quality daily rainfall observations, and
will be updated with significant updates in the daily rainfall datasets available in Australia.
It should also be noted that the use of actual rainfall data from similar locations, followed by a rescaling approach to account for the changed climatology, results in maximal use of observed rainfall. The use of MMM has been shown to produce generated rainfall with low frequency variability and extremes that are consistent with observations. Given not one but multiple similar locations are used, the likelihood of over-sampling rainfall attributes from a
misclassified similar location is reduced. An assessment by Mehrotra et al. (2012) indicates
that the method is able to capture the important attributes that define daily rainfall in both
gauged and ungauged locations in Australia.
Book 2, Chapter 7, Section 4 and Book 2, Chapter 7, Section 4 discuss two approaches recommended for use in Australia. Both approaches exist in a regionalised form and can be adopted at any location within the country. The Disaggregated Rectangular Intensity Pulse approach represents a sub-daily rainfall generator that is calibrated using sub-daily data and parameters regionalised for use anywhere, while the Regionalised Method of Fragments is a daily to sub-daily disaggregation approach that relies on either the observed daily rainfall or a generated daily rainfall sequence to convert to a sub-daily scale.
The Disaggregated Rectangular Intensity Pulse (DRIP) model was developed by Heneker et al. (2001) with the view to addressing several perceived deficiencies in existing event-based models, particularly with regard to the simulation of extreme rainfall and aggregation statistics. The DRIP modelling process is divided into two stages. The generation stage (Figure 2.7.7) is represented by three random variables: the dry spell or inter-event time ta, the wet spell or storm duration td, and the average intensity i, with ta and td both described by a generalised exponential distribution and the intensity (i) described by a Generalised Pareto distribution. In the second stage, the individual events are disaggregated through a constrained random walk (Figure 2.7.7b) to represent the rainfall temporal pattern for each event.
Figure 2.7.7. Disaggregated Rectangular Intensity Pulse Model (extracted from Heneker et
al. (2001))
The random walk through a non-dimensional time-depth space is illustrated in Figure 2.7.8. This is then used to disaggregate the rectangular pulse to time steps of the order of six or fewer minutes. Time during the storm is non-dimensionalised by $\tau = t/t_d$, where $t$ is the time since the start of the storm and $t_d$ is the storm duration, and depth is non-dimensionalised by $\delta = D(t)/D_{t_d}$, where $D(t)$ is the cumulative rainfall up to time $t$ and $D_{t_d}$ is the total storm depth. The random walk progresses in discrete time intervals $\Delta\tau$ from coordinate (0,0) to (1,1) in Figure 2.7.8, always with a non-negative slope. There are two possibilities for a jump from $\tau$ to $\tau + \Delta\tau$:

1. An internal dry spell (represented by a horizontal segment in Figure 2.7.8) whose probability of occurrence is defined by a probability distribution; or

2. A rainfall burst (represented by a sloping segment in Figure 2.7.8) whose non-dimensional depth $\Delta\delta$ is sampled from a probability distribution.
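The sketch below illustrates the constrained random walk in a simplified form. The probability of an internal dry spell and the distribution of burst depths are placeholders, not the calibrated DRIP distributions.

```python
# Illustrative sketch of a constrained random walk used to disaggregate a
# rectangular storm pulse. The dry-spell probability and burst depth
# distribution are placeholders.
import numpy as np

rng = np.random.default_rng(3)

def disaggregate_storm(total_depth_mm, duration_hr, dt_hr=0.1, p_dry=0.3):
    n_steps = int(round(duration_hr / dt_hr))
    jumps = np.zeros(n_steps)
    for k in range(n_steps):
        if rng.random() >= p_dry:                 # sloping segment: a rainfall burst
            jumps[k] = rng.exponential(1.0)       # placeholder burst depth distribution
    if jumps.sum() == 0:                          # guard against an all-dry walk
        jumps[-1] = 1.0
    # Normalise so the walk finishes at (1, 1) in non-dimensional time-depth space
    jumps /= jumps.sum()
    return jumps * total_depth_mm                 # depth per time step (mm)

pattern = disaggregate_storm(total_depth_mm=25.0, duration_hr=3.0)
print(round(pattern.sum(), 2), "mm over", len(pattern), "steps")
```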
To fit a probability distribution to the observed inter-event time ta and storm duration td populations, a procedure was employed to extract independent events from the continuous historical record. After analysis of correlation results, Heneker et al. (2001) adopted a minimum inter-event time of 2 hours to distinguish independent storms and inter-storm periods. This value provides a balance between ensuring consecutive events are sufficiently independent and the need to have as much calibration storm data as possible within a fixed length historical record. While different minimum inter-event times have been reported (e.g. Grace and Eagleson, 1966; Sariahmed and Kisiel, 1968; Koutsoyiannis and Xanthopoulos, 1990; Heneker et al., 2001), Heneker et al. (2001) showed that 2 hours assures independence of storm events across numerous Australian sites.
The generalised exponential distribution developed by Lambert and Kuczera (1998) was used to model the distributions of inter-event time and storm duration. The generalised exponential distribution takes the form:
$$F_X(x;\theta) = P(X \le x;\theta) = 1 - e^{-\Lambda(x,\,\theta)}, \quad x > 0 \qquad (2.7.4)$$

$$\log_e\left[1 - F_X(x;\theta)\right] = -\Lambda(x,\,\theta) = \frac{1}{\theta_1}\log_e\left(1 - \frac{\theta_1 x}{\theta_2}\right) - \theta_3 x^{\theta_4}, \quad \theta_1 < 0,\ \theta_2, \theta_3, \theta_4 > 0 \qquad (2.7.5)$$
The parameter vector $\theta$ is estimated using maximum likelihood techniques. The DRIP parameters are usually calibrated for each month of the year to capture seasonal variability in the rainfall process. Figure 2.7.9 and Figure 2.7.10 illustrate observed and fitted probability distributions for inter-event times and storm durations for Melbourne for selected months and demonstrate the good fit typically achieved by the generalised exponential distribution.
Noting that exponentially distributed data would plot as a straight line in Figure 2.7.9 and
Figure 2.7.10, the use of an exponential distribution for inter-event and storm durations
would be clearly inappropriate. A detailed comparison of the DRIP model with other point
process models is given in Frost et al. (2004).
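The sketch below evaluates the generalised exponential CDF as reconstructed in Equations (2.7.4) and (2.7.5); the parameter values are illustrative, not calibrated DRIP parameters for any site.

```python
# Sketch of the generalised exponential distribution of Equations (2.7.4) and
# (2.7.5). Parameter values are illustrative placeholders.
import numpy as np

def gen_exp_cdf(x, theta1, theta2, theta3, theta4):
    """F(x) = 1 - exp(-Lambda(x, theta)), with Lambda from Equation (2.7.5).
    Requires theta1 < 0 and theta2, theta3, theta4 > 0."""
    log_survival = (1.0 / theta1) * np.log(1.0 - theta1 * x / theta2) - theta3 * x**theta4
    return 1.0 - np.exp(log_survival)

x = np.array([0.5, 1.0, 2.0, 6.0, 24.0])   # e.g. inter-event times (hours)
print(np.round(gen_exp_cdf(x, theta1=-0.5, theta2=3.0, theta3=0.05, theta4=1.2), 3))
```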
Figure 2.7.9. Heneker et al. (2001) Model Fitted to Monthly Inter-event Time Data for
Melbourne in January
Figure 2.7.10. Heneker et al. (2001) Model Fitted to Monthly Storm Duration Data for
Melbourne in May
Recently, DRIP has been extended to any location where sufficient daily data is available,
thus greatly augmenting the domain of the approach. The basis of this regionalisation is a
‘master-target’ scaling relationship in which model calibration is undertaken at a ‘master’ site
with a long pluviograph record which is then updated and scaled to the ‘target’ site of interest
using the information from either a short pluviograph or daily rainfall record (Jennings, 2007),
with testing providing encouraging results for separations of up to 190 km between the
master and the target.
The software for DRIP is available via the Stochastic Climate Library as part of the e-Water
Toolkit (http://toolkit.net.au/Tools/DRIP).
Figure 2.7.11 illustrates the rationale behind the regionalised version of the state-based Method of Fragments procedure used in Westra et al. (2012). A sub-daily time-step of 6 minutes is used in Figure 2.7.11, although no change in the procedure is needed if an alternate sub-daily time-step is to be adopted. Here, $I(R_t)$ represents the state (wet or dry) of the rainfall on day $t$. Conditioning the selection of a "similar" day in the historical record involves selecting from a subset of days that (a) fall within a calendar window representative of the season (chosen equal to +/-15 days in Westra et al. (2012)), and (b) represent the same states $I(R_{t-1})$, $I(R_t) = 1$ and $I(R_{t+1})$. Once these subsets of days are identified, they are ranked based on their similarity with the rainfall amount that is sought to be disaggregated. This forms the sample of days from which the fragments can be resampled. Resampling proceeds probabilistically using the k-nn resampling approach of Lall and Sharma (1996). Once the fragments have been resampled, they are scaled back to rainfall amounts by multiplication with the daily rainfall total for the day being disaggregated.
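The sketch below illustrates this disaggregation step in simplified form, with synthetic candidate days and the k-nn resampling kernel of Lall and Sharma (1996); it omits the calendar window and wet/dry state conditioning described above.

```python
# Simplified sketch of the method of fragments: candidate days are ranked by the
# closeness of their daily totals to the day being disaggregated, one is resampled
# with k-nn style weights, and its fragments are rescaled. Data are synthetic.
import numpy as np

rng = np.random.default_rng(11)

def disaggregate_day(day_total_mm, candidate_totals, candidate_fragments, k=5):
    """candidate_fragments[i] are the sub-daily fractions (summing to 1) of candidate i."""
    order = np.argsort(np.abs(candidate_totals - day_total_mm))[:k]
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()                  # k-nn resampling kernel (Lall and Sharma, 1996)
    chosen = rng.choice(order, p=weights)
    return candidate_fragments[chosen] * day_total_mm

# Synthetic pool of candidate days at a 6 minute time step (240 steps per day)
totals = rng.gamma(2.0, 10.0, size=50)
fragments = rng.dirichlet(np.full(240, 0.05), size=50)
sub_daily = disaggregate_day(32.0, totals, fragments)
print(round(sub_daily.sum(), 1), "mm distributed over", sub_daily.size, "six-minute steps")
```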
Figure 2.7.11. State-based Method of Fragments Algorithm used in the Regionalised Method
of Fragments Sub-daily Rainfall Generation Procedure
The logic used to regionalise the method is similar to that adopted in the case of the Regionalised Modified Markov Model (RMMM) (Book 2, Chapter 7, Section 4). Here, regionalisation is all the more important given the paucity of sub-daily rainfall records in most parts of Australia (and the world). However, here the aim of the regionalisation is not to
identify locations having similar rainfall attributes to the target, but locations having a similar daily to sub-daily disaggregation relationship. As with the daily rainfall generation, a range of criteria were used to characterise this relationship. These are listed in Table 2.7.6. Each of these variables was estimated for each location, and then paired to assess the best basis for defining 'similarity' between stations.
Table 2.7.6. Sub-daily Attributes used to Define Similarity between Locations

Maximum Sub-daily Intensity: Maximum intensity fraction for each day for 6, 12, 30, 60, 120, 180 and 360 minute durations.

Fraction of Zeroes: Fraction of zero rainfall time-steps within each day at a 6 minute time scale.

Timing of Maximum Intensity: The timing associated with the maximum intensity fraction for the day for 6, 12, 30, 60, 120, 180 and 360 minute time steps.
Using 232 separate rain gauge stations with at least 30 years of data, a total of 26 796
station pairs were formulated for each attribute. The similarity in each attribute across each
pair was then assessed using a two sample two dimensional Kolmogorov Smirnov test.
Using a significance level of 5%, this allowed the identification of pairs where the attributes
were similar. This then allowed the identification of covariates that could be used to
distinguish “similar” locations to allow the regionalisation to proceed. Use of attributes
pertaining to the maximum sub-daily fractions at multiple durations, as well as the timing of
the maximum, allowed similarity to be defined taking both diurnal pattern characterisation
and rainfall magnitudes into account. The use of the fraction of zeroes allowed locations having dominantly convective extremes to be distinguished from those where rainfall was spread over the day.
The results of the significance testing described above were used as the basis for
formulating a logistic regression relationship for each attribute, with regression coefficients
being allowed to vary with season. The predictor variables found to be significant in defining
the relationship were the differences in latitude, longitude, elevation and the relative distance
to the coast. Based on this relationship, given any location in Australia, the user can identify
a subset of sub-daily locations having attributes that are most similar to the target location at which sequences are needed. This information is expressed as a probability, which is then used
to identify a defined number of sub-daily locations for use in the RMOF procedure.
The logistic regression of the binomial (0 for insignificant and 1 for a significant test outcome)
response for each sub-daily attribute can be expressed as:
$$\Pr(u = 1) = \mathrm{logit}(z) = \frac{e^{z}}{e^{z} + 1} \qquad (2.7.6)$$

The logit function transforms the continuous predictor variables in Table 2.7.7 to the range [0,1] as required when modelling a binomial response. In this equation, z is defined as:

$$z = \beta_0 + \beta_1\,\Delta\mathrm{lat} + \beta_2\,\Delta\mathrm{lon} + \beta_3\,(\Delta\mathrm{lat}\times\Delta\mathrm{lon}) + \beta_4\,\Delta d_{\mathrm{coast}} + \beta_5\,\Delta\mathrm{elev} \qquad (2.7.7)$$

with β representing the regression coefficients in Table 2.7.7 for the five predictor variables used (the differences in latitude, longitude, latitude × longitude, relative distance to coast and elevation between the station pair).
Table 2.7.7. Logistic Regression Coefficients for the Regionalised Method of Fragments Sub-daily Generation Model^a

Season  Sub-daily Rainfall Attribute  Intercept  Latitude  Longitude  Latitude x Longitude  Distance Coast  Elevation
DJF     6 min intensity               0.426      -0.345    -0.0377    0.0064                -0.186          -0.00089
DJF     1 hr intensity                0.823      -0.333    -0.0425    0.0093                -0.231          -0.00075
DJF     Fraction of zeros             -0.375     -0.253    -0.0318    0.0075                -0.242          -0.00065
DJF     6 min time                    0.0979     -0.137    -0.0099    0.0022                -0.453          -0.00141
MAM     6 min intensity               -0.067     -0.192    -0.0065    NS                    -0.218          -0.00130
MAM     1 hr intensity                0.308      -0.178    -0.0074    NS                    -0.107          -0.00098
MAM     Fraction of zeros             -0.806     -0.157    -0.0105    0.0025                -0.165          -0.00060
MAM     6 min time                    1.256      -0.140    -0.0226    -0.0034               -0.227          -0.00092
JJA     6 min intensity               -0.197     -0.097    -0.0110    0.0034                -0.096          -0.00198
JJA     1 hr intensity                0.471      -0.0102   -0.0204    0.0033                NS              -0.00335
JJA     Fraction of zeros             -0.365     -0.073    -0.0171    0.0031                -0.101          -0.00116
JJA     6 min time                    2.078      -0.098    -0.0321    0.0037                -0.156          -0.00069
SON     6 min intensity               0.474      -0.387    -0.0722    0.0129                NS              -0.00146
SON     1 hr intensity                0.824      -0.325    -0.0835    0.0135                NS              -0.00132
SON     Fraction of zeros             -0.382     -0.239    -0.0623    0.0104                -0.087          -0.00095
SON     6 min time                    1.028      -0.162    -0.0287    0.0042                -0.317          NS

^a All predictors were found to be statistically significant (usually with a p-value < 0.001), with the exception of several predictors labelled as NS (Not Significant). Seasons are December-January-February (DJF), March-April-May (MAM), June-July-August (JJA) and September-October-November (SON).
This allows the identification of the most to least similar sub-daily locations for each attribute
of interest, which forms the basis for identification of a subset of locations used to sample
the fragments. As multiple sub-daily attributes are considered in this choice, this subset is
selected based on a common rank averaged across all the attributes for each season. The
number of locations the fragments are pooled from depends on their respective data lengths, as a total of 500 years of data (including zeroes) is needed for the approach to work.
• For all the 1396 pluviograph stations in Australia (excluding the Sydney Airport gauge),
calculate each of the regression predictors; namely, difference in latitude, longitude,
latitude*longitude, elevation and normalised distance to coast, relative to the Sydney
Airport station;
• Having developed the 1396 x 5 predictor matrix, apply the regression model presented in
Equation (2.7.6) and Equation (2.7.7) using the regression coefficients shown in
Table 2.7.7 for each season and attribute to calculate the probability Pr(u = 1);
• Separately for each season and attribute, rank the probabilities from highest to lowest;
• For each season calculate the average rank for each station across all attributes;
• Select the S lowest ranked stations for inclusion in the disaggregation model.
This algorithm yields different choices of stations for each season, as physiographic influences may vary depending on the dominant synoptic systems occurring at different times of the year. It is noted that the selection of the size of S represents a somewhat subjective decision, as larger values of S increase the probability of selecting stations which are statistically different to the target station, whereas smaller values of S will result in small sample sizes. For this case a total of 500 years of data (including zero rainfalls) was adopted, distributed over 13 stations (S = 13).
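As a rough illustration of the ranking step described in the list above, the sketch below applies Equations (2.7.6) and (2.7.7) with the DJF "1 hr intensity" coefficients from Table 2.7.7; the predictor differences for the candidate stations are placeholders, not real station data.

```python
# Sketch of ranking candidate stations by the probability of similarity, using the
# DJF "1 hr intensity" coefficients from Table 2.7.7. Candidate predictor
# differences are placeholders.
import numpy as np

# Columns: intercept term, d_lat, d_lon, d_lat*d_lon, d_distance_to_coast, d_elevation
beta = np.array([0.823, -0.333, -0.0425, 0.0093, -0.231, -0.00075])

# Differences between three hypothetical candidate stations and the target site
candidates = np.array([
    [1.0, 0.2, 0.1, 0.02, 0.05, 10.0],
    [1.0, 1.5, 0.8, 1.20, 0.60, 150.0],
    [1.0, 4.0, 2.5, 10.0, 2.00, 600.0],
])

z = candidates @ beta
prob_similar = np.exp(z) / (np.exp(z) + 1.0)      # Equations (2.7.6) and (2.7.7)
ranking = np.argsort(-prob_similar)               # most to least similar
print(np.round(prob_similar, 3), "-> rank order:", ranking)
```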
These lowest ranked 13 stations for the summer season are shown in Figure 2.7.12. As
expected, the lowest ranked stations (i.e. those with the greatest chance of being ‘similar’ to
Sydney Airport, brown dots) are those which are most proximate to this station, generally
within a small distance to coast, and all are at low coastal elevations. In this case, therefore,
the stations appear to be selected over a wide range of latitudes, which is probably due to
the strong increases in elevation and relative distance to coast with changing longitude.
It should be noted that the RMOF approach can be expanded to use more sub-daily data than the 1396 stations used in the example for Sydney Airport presented above. New data can be included without the need to update the coefficients of the logistic regression model unless these inclusions are substantial enough to change the distributional characteristics of the data being used. This allows improvements in the representativeness of the continuous simulations as more data become available over time.
It should also be noted that the RMOF can be used at completely ungauged locations having no sub-daily or daily rainfall observations, or to disaggregate daily rainfall records at locations where sub-daily data are not available. In the first case, the use of a daily generation approach such as the RMMM is recommended to generate daily sequences that are then disaggregated using the RMOF. In the latter case, the observed daily sequence can be used directly as the basis for disaggregation.
The software for the RMOF approach is available on request from the authors at this stage,
and will be uploaded to the Hydrology@UNSW Software website after a formal review
process (http://www.hydrology.unsw.edu.au/download/software/). This document will be
updated to reflect the full location once the software is available for download.
For cases where it is necessary to have consistency between the Bureau of Meteorology
IFDs and the IFDs derived from continuous simulation, a modification in the generation
algorithms for RMMM and RMOF was proposed. The main steps involved as illustrated in
Figure 2.7.13. First, annual extreme rainfall is corrected at multiple durations so that the IFD
based on the generated rainfall matches up with the observed IFD (henceforth referred as
‘target IFD’). Second, the non-extreme rainfall (i.e. rainfall that is not part of the annual
extreme series) is corrected in such a way that the cumulative rainfall before and after
correction is maintained. The dry periods are kept the same before and after bias correction,
hence no correction is required for dry periods. As the majority of the data is in the non-
extreme category, the corrections are markedly smaller for the non-extreme case.
Due to the inter-dependence of the extreme rainfall across various durations, it is necessary
to apply the above corrections in a recursive manner, with each recursion repeating the
above steps using a new set of durations exhibiting the maximum difference between the
generated and target intensities. This recursion is applied until the following objective
function reaches a minimum:
$\mathrm{RMAE}_{AEP} = \dfrac{1}{N_D} \displaystyle\sum_{D} \dfrac{\left| \mathrm{IFD}_{D,\,AEP}^{\mathrm{gen}} - \mathrm{IFD}_{D,\,AEP}^{\mathrm{target}} \right|}{\mathrm{IFD}_{D,\,AEP}^{\mathrm{target}}} \qquad (2.7.8)$

where the summation is over the $N_D$ target durations $D$, $\mathrm{IFD}^{\mathrm{gen}}$ is the intensity derived from the generated rainfall and $\mathrm{IFD}^{\mathrm{target}}$ is the target intensity for the given AEP.
Minimisation of the RMAE in Equation (2.7.8) requires the specification of the set of target
durations to be used in its adjustment. The choice of durations is governed by the
dependence that the extremes for one duration have with the extremes for another. For
instance, 6 minute extremes are more likely to be a subset of 30 minute extremes than of,
say, 6 hour extremes. Consequently, the durations should be selected with a spacing that
maximises the independence between the extremes being evaluated. In practice, the
procedure uses two recursions with separate durations. For both recursions, three target
durations (D = 6 min, 1 hr and 3 hrs) are considered, which keeps the spacing between the
durations large enough to reduce the dependence between them. Options exist to use a
broader set of durations in the second recursion (6 min, 30 min, 1 hr, 3 hr, 6 hr, 12 hr),
although assessment with data for selected city centres in Australia indicated that the benefits
from this were not significant.
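The correction logic can be illustrated with a minimal sketch (Python/NumPy). It assumes the generated record is stored as whole years of fixed-length 6 minute depths, that the target annual maxima for the duration of interest have already been read off the target IFD, and it performs only a single correction pass at a single duration; the recursion over durations described above is indicated only by a comment. The function names and synthetic data are illustrative and do not reproduce the ARR software.

```python
import numpy as np

def annual_max_window(x, d):
    """Index and depth of the largest d-step moving-window total in one year."""
    sums = np.convolve(x, np.ones(d), mode="valid")
    i = int(np.argmax(sums))
    return i, sums[i]

def rmae(gen, target):
    """Relative mean absolute error between generated and target intensities
    (Equation (2.7.8)), evaluated over the chosen set of target durations."""
    gen, target = np.asarray(gen, float), np.asarray(target, float)
    return float(np.mean(np.abs(gen - target) / target))

def correct_one_year(x, d, target_max):
    """Scale the annual-maximum d-step burst towards its target depth, then
    rescale the remaining wet steps so the annual total is (approximately)
    preserved. Dry steps are left untouched."""
    x = x.astype(float).copy()
    i, gen_max = annual_max_window(x, d)
    if gen_max <= 0:
        return x
    win = slice(i, i + d)
    factor = target_max / gen_max
    x[win] *= factor                      # correct the extreme burst
    excess = gen_max * (factor - 1.0)     # depth added to (or removed from) the burst
    outside = np.ones(x.size, dtype=bool)
    outside[win] = False
    wet = outside & (x > 0)
    if x[wet].sum() > 0:
        x[wet] *= max(0.0, 1.0 - excess / x[wet].sum())
    return x

# Illustrative use: 20 synthetic years of 6 minute depths, 1 hour duration (d = 10)
rng = np.random.default_rng(1)
wet = rng.random((20, 1000)) < 0.1
years = np.where(wet, rng.exponential(1.0, (20, 1000)), 0.0)
d = 10
gen_max = np.array([annual_max_window(y, d)[1] for y in years])
target_max = 1.1 * gen_max                # hypothetical target annual maxima
corrected = [correct_one_year(y, d, t) for y, t in zip(years, target_max)]
print(rmae(gen_max, target_max))
# In the full procedure this pass is repeated recursively, each time using the
# durations showing the largest generated-versus-target difference, until the
# RMAE objective is minimised.
```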
Figure 2.7.13. Main Steps Involved in the Adjustment of Raw Continuous Rainfall
Sequences to Preserve the Intensity Frequency Duration relationships
The software for the post-processing approach described above is available on request from
the authors at this stage, and will be uploaded to the Hydrology@UNSW Software website
after a formal review process (http://www.hydrology.unsw.edu.au/download/software/). This
document will be updated with the full download location once the software has been made
available.
Alice Springs is in an arid region with an average annual rainfall of 280 mm. The observed
record at Alice Springs Airport exists for 67 years (1942-2008) and the sub-daily record for 57
years (1951-2007, with missing periods). Each of the statistics presented is based on 100
realisations of length 67 years.
Table 2.7.8. Statistical Assessment of Daily Rainfall from RMMM for Alice Springs using 100
Replicates 67 years Long

Attribute                                             Observed   Simulated (At-site)   Simulated (Regionalised)
Average Annual Wet Days (Nos)                         41         40                    31
Average Annual Rainfall (mm)                          279        297                   306
Average Standard Deviation of Annual Wet Days (Nos)   13         12                    15
Average Standard Deviation of Annual Rainfall (mm)    152        160                   189
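The statistics reported in Table 2.7.8 can be computed from the simulated replicates with a few lines of code. The sketch below assumes each replicate is stored as an array of daily depths with a fixed 365-day year and uses synthetic numbers purely for illustration; the wet-day threshold and array shapes are assumptions, not part of the RMMM.

```python
import numpy as np

def replicate_statistics(daily, wet_threshold=0.0):
    """Annual statistics of one replicate shaped (n_years, 365)."""
    annual_rain = daily.sum(axis=1)
    annual_wet = (daily > wet_threshold).sum(axis=1)
    return {"wet_days_mean": annual_wet.mean(),
            "wet_days_sd": annual_wet.std(ddof=1),
            "rain_mean": annual_rain.mean(),
            "rain_sd": annual_rain.std(ddof=1)}

def assessment_table(replicates):
    """Average each statistic over all replicates, as reported in Table 2.7.8."""
    stats = [replicate_statistics(r) for r in replicates]
    return {k: float(np.mean([s[k] for s in stats])) for k in stats[0]}

# Synthetic stand-in for 100 replicates of 67 years of daily rainfall
rng = np.random.default_rng(0)
wet = rng.random((100, 67, 365)) < 0.11
sims = np.where(wet, rng.gamma(0.6, 11.0, wet.shape), 0.0)
print(assessment_table(sims))
```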
Figure 2.7.14 presents annual rainfall simulations for Alice Springs using 100 replicates. The
probability distribution of annual rainfall is well represented even in the case of the
regionalised simulation where at-site data was not used. This indicates a reasonable
representation of the inter-annual variability that characterises Australian rainfall.
Figure 2.7.14. Annual Rainfall Simulations for Alice Springs using 100 Replicates
As with the annual rainfall in Figure 2.7.14, the extremes are reasonably well simulated even
for the regionalised case, except for the most extreme event on record, which the model
under-simulates in the regionalised setting (Figure 2.7.15).
It should be noted that the results of the RMMM approach use 2708 daily rainfall stations
with long records, instead of the complete daily rainfall observation dataset for Australia.
One can expect better representation of underlying rainfall attributes as better and longer
datasets are used.
A. Daily and sub-daily rainfall records at the location of interest are available – the daily time
series is disaggregated using the available at-site sub-daily time series. To obtain multiple
simulations, the same daily rainfall time series is used (at-site daily and at-site sub-daily).
B. Only a daily rainfall record at the location of interest is available - Daily time series is
disaggregated using sub-daily time series from nearby locations (regionalised sub-daily).
To obtain multiple simulations, the same daily rainfall time series is used (at-site daily and
regionalised sub-daily).
C. No daily or sub-daily rainfall record at the location of interest is available – first, multiple
realisations of the daily time series are obtained using the regionalised daily model. In the
second step, each daily time series is disaggregated using sub-daily time series from nearby
locations (regionalised daily and regionalised sub-daily). A minimal sketch of the fragments
step common to all three options follows.
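A bare-bones illustration of the fragments step that underlies all three options is given below (Python/NumPy). It simply selects a donor day from a pool of sub-daily records with a similar daily total and rescales its within-day pattern to the daily total being disaggregated; the actual RMOF additionally conditions the selection on season, the wetness state of neighbouring days and the regionalised station pool described earlier, so this sketch is illustrative only.

```python
import numpy as np

def disaggregate_day(daily_total, pool_subdaily, pool_totals, rng, n_nearest=5):
    """Disaggregate one daily total using the method of fragments: choose one
    of the n_nearest pool days with the closest daily total, and rescale its
    within-day proportions (fragments) to the target total."""
    nearest = np.argsort(np.abs(pool_totals - daily_total))[:n_nearest]
    donor = rng.choice(nearest)
    fragments = pool_subdaily[donor] / pool_totals[donor]   # proportions summing to 1
    return daily_total * fragments

# Hypothetical pool of 200 observed wet days of 6 minute data (240 steps per day)
rng = np.random.default_rng(2)
pool = rng.gamma(0.3, 1.0, size=(200, 240))
pool_totals = pool.sum(axis=1)

daily_totals = [12.0, 3.5, 25.0]                 # wet-day totals (mm) to disaggregate
sub_daily = [disaggregate_day(dt, pool, pool_totals, rng) for dt in daily_totals]
print([round(s.sum(), 1) for s in sub_daily])    # daily totals are preserved
```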
Selected results from this assessment are presented in Table 2.7.9 and Figure 2.7.15. A
deterioration in the representation of extremes at shorter durations is observed when
regionalised options are considered, especially for the smallest duration (6 minutes).
Table 2.7.9. Performance of extremes and representation of zeroes (for 6 minute time steps)
from the sub-daily rainfall generation using RMOF for at-site generation using observed sub-
daily data (Option A), at-site disaggregation using observed daily data (Option B), and the
purely regionalised case (Option C).
Average Annual Maximum Rainfall (mm) in Spell of Duration

Duration              Observed   Option A   Option B   Option C
6 min                 5.5        6.75       6.77       8.02
30 min                16.71      18.07      18.23      20.97
1 hr                  22.14      24.19      24.17      26.56
3 hr                  32.58      34.77      33.56      34.94
6 hr                  39.61      41.73      39.79      40.74
12 hr                 48.18      47.65      46.5       46.78
Percentage of zeros   98.54      98.62      98.78      98.68
The top panel presents results for Option A (at-site daily rainfall and fragments), the middle
panel presents results for Option B (regionalised daily rainfall with at-site fragments), and the
bottom panel presents results for Option C (regionalised results using 'nearby' daily as well as
sub-daily records). Dark blue dots represent observed data, the solid line represents the
median of 100 simulations, and dashed lines represent the 5 and 95 percentile simulated
values.
Figure 2.7.16. 6 minute (left column) and 6 hour (right column) Annual Maximum Rainfall
against Exceedance Probability for Alice Springs.
Results from this post-processing step for the continuous rainfall sequences from RMOF for
Alice Springs are presented in Figure 2.7.17. The broken lines (blue and green) indicate the
5 and 95 percentiles for the raw and bias-corrected data, respectively. The continuous series
that was generated did not use rainfall data from Alice Springs. In addition to representing
low frequency variability characteristics through the proper simulation of daily rainfalls, these
continuous sequences are able to mimic actual IFDs and annual rainfall totals, thus making
them suitable for continuous flow simulation.
Figure 2.7.17. Intensity Duration Frequency Relationships for Target and Simulated Rainfall
before and after Bias Correction at Alice Springs
Both the daily and the sub-daily continuous simulation alternatives discussed here will be
affected by climate change. Practitioners may need to use daily rainfall sequences that are
representative of future warmer climates, and are referred to the statistical downscaling
extensions of the RMMM daily generation approach discussed in Mehrotra and Sharma
(2010) as an alternative for generating daily sequences for any location of interest. This
generation requires selection of appropriate Global Climate Models (GCMs) and
atmospheric predictors, followed by sensible correction of GCM simulations to remove
known biases. Practitioners are referred to Sharma et al. (2013) for a review of the
approaches commonly used for these purposes.
Generation of sub-daily sequences will require modification of the RMOF to alternatives that
take into account changes to extremes at sub-daily timescales (Westra et al., 2014) as well
as changes to associated temporal patterns (Wasko and Sharma, 2015). An alternative that
can be used to accommodate these changes is presented in Westra et al. (2013). In
general, approaches for stochastically generating continuous (sub-daily) rainfall sequences
under a future climate are a rapidly evolving area of research, and detailed advice on theory
and approaches for continuous simulation under a future climate is outside the scope of
this document.
7.6. References
Arnold, J.G. and Williams, J.R. (1989), Stochastic Generation of Internal Storm Structure at
a Point, Transactions of the ASAE, 32(1), 161-167.
Beesley, C.A., Frost, A.J. and Zajaczkowski, J. (2009), A comparison of the BAWAP and
SILO spatially interpolated daily rainfall datasets, paper presented at 18th World IMACS /
MODSIM Congress, MSSANZ, Cairns, Australia.
Blazkova, S. and Beven, K. (2002), Flood frequency estimation by continuous simulation for
a catchment treated as ungauged (with uncertainty), Water Resources Research, Volume
38(8).
Boughton, W.C. (1999), A daily rainfall generating model for water yield and flood studies,
Report 99/9, 21 pp, CRC for Catchment Hydrology, Monash University, Melbourne.
Boughton, W. and Droop, O. (2003), Continuous simulation for design flood estimation - a
review, Environmental Modelling & Software, 18(4), 309-318.
Bárdossy, A. and Pegram, G.G.S. (2009), Copula based multisite model for daily
precipitation simulation, Hydrol. Earth Syst. Sci., 13(12), 2299-2314.
Bárdossy, A. and Plate, E.J. (1992), Space-time model for daily rainfall using atmospheric
circulation patterns, Water Resources Research, 28(5), 1247-1259.
Cameron, D., Beven, K., Tawn, J. and Naden, P. (2000), Flood frequency estimation by
continuous simulation (with likelihood based uncertainty estimation), Hydrology and Earth
System Sciences, 4(1), 23-34.
Carey-Smith, T., Sansom, J. and Thomson, P. (2014), A hidden seasonal switching model for
multisite daily rainfall, Water Resources Research, 50(1), 257-272.
Chapman, T.G. (1997), Stochastic models for daily rainfall in the Western Pacific,
Mathematics and Computers in Simulation, 43: 351-358.
Chappell, A., Renzullo, L.J., Raupach, T.H. and Haylock, M. (2013), Evaluating geostatistical
methods of blending satellite and gauge data to estimate near real-time daily rainfall for
Australia, Journal of Hydrology, 493: 105-114.
Charles, S.P., Bates, B.C. and Hughes, J.P. (1999), A spatiotemporal model for downscaling
precipitation occurrence and amounts, Journal of Geophysical Research-Atmospheres,
104(D24), 31657-31669.
Chin, E.H. and Miller, J.F. (1980), On the Conditional Distribution of Daily Precipitation
Amounts, Mon. Wea. Rev., 108(9), 1462-1464.
Ciach, G.J. (2003), Local random errors in tipping-bucket rain gauge measurements, Journal
of Atmospheric and Oceanic Technology, 20(5), 752-759.
Coe, R. and Stern, R.D. (1982), Fitting Models to Daily Rainfall Data, Journal of Applied
Meteorology, 21(7), 1024-1031.
Cole, J.A. and Sherriff, J.D.F. (1972), Some single- and multi-site models of rainfall within
discrete time increments, Journal of Hydrology, 17(1-2), 97-113.
Connolly, R.D., Schirmer, J. and Dunn, P.K. (1998), A daily rainfall disaggregation model,
Agricultural and Forest Meteorology, 92(2), 105-117.
Cowpertwait, P.S.P., Kilsby, C.G. and O'Connell, P.E. (2002), A space-time Neyman-Scott
model of rainfall: Empirical analysis of extremes - art. no. 1131, Water Resources Research,
38(8), 1131-1131.
Cowpertwait, P.S.P., O'Connell, P.E., Metcalfe, A.V. and Mawdsley, J.A. (1996), Stochastic
point process modelling of rainfall. II. Regionalisation and disaggregation, Journal of
Hydrology, 175(1-4), 47-65.
Das, D., Kodra, E., Obradovic, Z. and Ganguly, A.R. (2012), Mining extremes: Severe rainfall
and climate change, in Frontiers in Artificial Intelligence and Applications, edited, pp:
899-900.
Deidda, R., Benzi, R. and Siccardi, F. (1999), Multifractal modeling of anomalous scaling
laws in rainfall, Water Resources Research, 35(6), 1853-1867.
Dennett, M.D., Rodgers, J.A. and Stern, R.D. (1983), Independence of Rainfalls through the
Rainy Season and the Implications for the Estimation of Rainfall Probabilities, Journal of
Climatology, Volume 3(4), pp.375-384.
Eagleson, P.S. (1978), Climate, Soil, and Vegetation 2. The Distribution of Annual
Precipitation Derived From Observed Storm Sequences, Water Resources Research, 14(5),
713-721.
Econopouly, T.W., Davis, D.R. and Woolhiser, D.A. (1990), Parameter transferability for a
daily rainfall disaggregation model, Journal of Hydrology, 118(1-4), 209-228.
Evin, G., and Favre, A.C. (2012), Further developments of a transient Poisson-cluster model
for rainfall, Stochastic Environmental Research and Risk Assessment, 27(4), 831-847.
Feyerherm, A.M. and Bark, L.D. (1965), Statistical Methods for Persistent Precipitation
Patterns, Journal of Applied Meteorology, 4(3), 320-328.
Feyerherm, A.M. and Bark, L.D. (1967), Goodness of Fit of a Markov Chain Model for
Sequences of Wet and Dry Days, Journal of Applied Meteorology, 6(5), 770-773.
Frost, A., Srikanthan, R. and Cowpertwait, P. (2004), Stochastic generation of point rainfall
data at subdaily timescales: a comparison of DRIP and NSRP, Technical Report 04/09,
Cooperative Research Centre - Catchment Hydrology, Melbourne, Australia.
Gabriel, K.R. and Neumann, J. (1962), A Markov chain model for daily rainfall occurrence at
Tel Aviv, Q.J Royal Met. Soc., 88(375), 90-95.
Gates, P. and Tong, H. (1976), On Markov Chain Modeling to Some Weather Data, Journal
of Applied Meteorology, 15(11), 1145-1151.
Grace, R.A. and Eagleson, P.S. (1966). The Synthesis of Short-Time-Increment Rainfall
Sequences, Hydrodynamics Lab. Report 91, Dept. of Civil Engineering, Massachusetts
Institute of Technology, Cambridge, MA.
Gregory, J.M., Wigley, T.M.L. and Jones, P.D. (1993), Application of Markov models to area-
average daily precipitation series and interannual variability in seasonal totals, Clim. Dyn., 8:
299-310.
Gupta, V.K. and Waymire, E.C. (1993), A Statistical Analysis of Mesoscale Rainfall as a
Random Cascade, Journal of Applied Meteorology, 32(2), 251-267.
Haan, C.T., Allen, D.M. and Street, J.D. (1976), A Markov chain model of daily rainfall, Water
Resour. Res., 12: 443-449.
Harrold, T.I., Sharma, A. and Sheather, S.J. (2003a), A nonparametric model for stochastic
generation of daily rainfall amounts - art. no. 1343, Water Resources Research, 39(12),
1343-1343.
Harrold, T.I., Sharma, A. and Sheather, S.J. (2003b), A nonparametric model for stochastic
generation of daily rainfall occurrence - art. no. 1300, Water Resources Research, 39(10),
1300-1300.
Hasan, M.M., Sharma, A., Johnson, F., Mariethoz, G., Seed, A. (2014), Correcting bias in
radar Z-R relationships due to uncertainty in point raingauge networks. Journal of Hydrology
519(B), 1668-1676.
Hay, L.E., McCabe, G.J., Wolock, D.M. and Ayers, M.A. (1991), Simulation of precipitation by
weather type analysis, Water Resources Research, 27: 493-501.
Heaps, S.E., Boys, R.J. and Farrow, M. (2015), Bayesian modelling of rainfall data by using
non-homogeneous hidden Markov models and latent Gaussian variables, Journal of the
Royal Statistical Society: Series C (Applied Statistics), 64(3), 543-568.
Heneker, T.M., Lambert, M.F. and Kuczera, G. (2001), A point rainfall model for risk-based
design, Journal of Hydrology, 247(1-2), 54-71.
Hopkins, J.W. and Robillard, P. (1964), Some Statistics of Daily Rainfall Occurrence for the
Canadian Prairie Provinces, Journal of Applied Meteorology, 3(5), 600-602.
Hughes, J.P. and Guttorp, P. (1994), Incorporating Spatial Dependence and Atmospheric
Data in a Model of Precipitation, Journal of Applied Meteorology, 33(12), 1503-1515.
Jeffrey, S.J., Carter, J.O., Moodie, K.B. and Beswick, A.R. (2001), Using spatial interpolation
to construct a comprehensive archive of Australian climate data, Environmental Modelling &
Software, 16(6), 309-330.
Jennings S. (2007). A High Resolution Point Rainfall Model Calibrated to Short Pluviograph
or Daily Rainfall Data, PhD Thesis, The University of Adelaide, Australia.
Jha, S.K., Mariethoz, G., Evans, J., McCabe, M.F. and Sharma, A. (2015), A space and time
scale dependent non-linear geostatistical approach for downscaling daily precipitation and
temperature, Water Resources Research, available at: http://onlinelibrary.wiley.com/doi/
10.1002/2014WR016729/full [accessed 20 July 2015].
Jones, P.G. and Thornton, P.K. (1997), Spatial and temporal variability of rainfall related to a
third-order Markov model, Agricultural and Forest Meteorology, 86(1-2), 127-138.
Jones, D., Wang, W. and Fawcett, R. (2009), High-quality spatial climate data-sets for
Australia, Australian Meteorological and Oceanographic Journal, 58: 233-248.
Katz, R.W. and Parlange, M.B. (1998), Overdispersion phenomenon in stochastic modeling
of precipitation, Journal of Climate, 11(4), 591-601.
Katz, R.W., Parlange, M.B. and Tebaldi, C. (2003), Stochastic modeling of the effects of
large-scale circulation on daily weather in the southeastern US, Climatic Change, 60(1-2),
189-216.
Kavvas, M.L. and Delleur, J.W. (1981), A stochastic cluster model of daily rainfall sequences,
Water Resources Research, 17(4), 1151-1160.
Kim, D., Kim, J. and Cho, Y.-S. (2014), A Poisson Cluster Stochastic Rainfall Generator That
Accounts for the Interannual Variability of Rainfall Statistics: Validation at Various
Geographic Locations across the United States, Journal of Applied Mathematics, 2014, pp:
1-14.
Kim, Y., Katz, R.W. Rajagopalan, B., Podestá, G.P. and Furrer, E.M. (2012), Reducing
overdispersion in stochastic weather generators using a generalized linear modeling
approach, Climate Research, 53(1), 13-24.
Kleiber, W., Katz, R.W. and Rajagopalan, B. (2012), Daily spatiotemporal precipitation
simulation using latent and transformed Gaussian processes, Water Resources Research,
48(1).
Koutsoyiannis, D., Onof, C. and Wheater, H.S. (2003), Multivariate rainfall disaggregation at
a fine timescale - art. no. 1173, Water Resources Research, 39(7), 1173-1173.
La Barbera, P., Lanza, L.G. and Stagi, L. (2002), Tipping bucket mechanical errors and their
influence on rainfall statistics and extremes, Water Science and Technology, 45(2), 1-10.
Lall, U. and Sharma, A. (1996), A nearest neighbor bootstrap for time series resampling,
Water Resources Research, 32(3), 679-693.
Lall, U., Rajagopalan, B. and Tarboton, D.G. (1996), A nonparametric wet/dry spell model for
resampling daily precipitation, Water Resources Research, 32(9), 2803-2823.
Lamb, R., and Kay, A.L. (2004), Confidence intervals for a spatially generalized, continuous
simulation flood frequency model for Great Britain, Water Resources Research, 40(7).
Lambert, M. and Kuczera, G. (1998), Seasonal generalized exponential probability models
with application to interstorm and storm durations, Water Resources Research, 34(1),
143-148.
Lambert M.F., Whiting J.P. and Metcalfe, A.V. (2003). A non-parametric hidden Markov
model for climate state identification. Hydrology and Earth Systems Sciences, 7(5), pp.
652-667.
Leblois, E., and Creutin J.-E. (2013), Space-time simulation of intermittent rainfall with
prescribed advection field: Adaptation of the turning band method, Water Resources
Research, 49(6), 3375-3387.
Leonard, M., Lambert, M.F. Metcalfe, A.V. and Cowpertwait, P.S.P. (2008), A space-time
Neyman-Scott rainfall model with defined storm extent, Water Resources Research, 44(9).
Lovejoy, S. and Schertzer, D. (1990), Multifractals, universality classes and satellite and
radar measurements of cloud and rain fields, J. Geophys. Res., 95(D3), 2021-2034.
Mehrotra, R., and Sharma, A. (2006), A nonparametric stochastic downscaling framework for
daily rainfall at multiple locations, Journal of Geophysical Research-Atmospheres,
111(D15101), doi:10.1029/2005JD006637.
Mehrotra, R., and Sharma, A. (2010), Development and Application of a Multisite Rainfall
Stochastic Downscaling Framework for Climate Change Impact Assessment, Water
Resources Research, 46.
Mehrotra, R., Westra, S., Sharma, A. and Srikanthan, R. (2012), Continuous rainfall
simulation: 2 - A regionalised daily rainfall generation approach, Water Resources Research,
48(1), W01536
Mehrotra, R., Li, J., Westra, S. and Sharma, A. (2015), A programming tool to generate
multi-site daily rainfall using a two-stage semi parametric model, Environmental Modelling &
Software, 63: 230-239.
Menabde, M. and Sivapalan, M. (2000), Modeling of rainfall time series and extremes using
bounded random cascades and Levy-stable distribution, Water Resources Research, 36(11),
3293-3300.
Menabde, M., Seed, A., Harris, D. and Austin, G. (1997), Self-similar random fields and rainfall
simulation, J. Geophys. Res., 102(D12), 13509-13515.
Micevski, T. and Kuczera, G. (2009), Combining Site and Regional Flood Information Using
a Bayesian Monte Carlo Approach, Water Resources Research, doi:10.1029/2008WR007173,
available at: http://www.agu.org/journals/wr/papersinpress.shtml.
Onof, C. and Townend, J. (2004), Modelling 5-minute rainfall extremes, paper presented at
British Hydrological Society International Conference, London.
Onof, C., Chandler, R.E. Kakou, A., Northrop, P., Wheater, H.S. and Isham, V. (2000),
Rainfall modelling using Poisson-cluster processes: a review of developments, Stochastic
Environmental Research and Risk Assessment, 14(6), 384-411.
Oriani, F., Straubhaar, J., Renard, P. and Mariethoz, G. (2014), Simulation of rainfall time
series from different climatic regions using the direct sampling technique, Hydrology and
Earth System Sciences, 18(8), 3015-3031.
Over, T. M. and Gupta, V.K. (1996), A space-time theory of mesoscale rainfall using random
cascades, J. Geophys. Res., 101(D21), 26319.
Pathiraja, S., Westra, S. and Sharma, A. (2012), Why Continuous Simulation? The Role of
Antecedent Moisture in Design Flood Estimation, Water Resources Research, 48(W06534).
Pegram, G.G.S. (1980), An Autoregressive Model for Multilag Markov Chains, Journal of
Applied Probability, 17(2), 350.
Pui, A., Lal, A. and Sharma, A. (2011), How does the Interdecadal Pacific Oscillation affect
design floods in Australia?, Water Resources Research, 47(5).
Pui, A., Sharma, A., Santoso, A. and Westra, S. (2012), Impact of the El Niño Southern
Oscillation, Indian Ocean Dipole, and Southern Annular Mode on daily to sub-daily rainfall
characteristics in East Australia, Monthly Weather Review, 140: 1665-1681.
Racsko, P., Szeidl, L. and Semenov, M. (1991), A serial approach to local stochastic weather
models, Ecological Modelling, 57(1-2), 27-41.
Ramirez, J. A. and Bras, R.L. (1985), Conditional Distributions of Neyman-Scott Models for
Storm Arrivals and Their Use in Irrigation Scheduling, Water Resources Research, 21(3),
317-330.
Rodriguez-Iturbe, I., Gupta, V.K. and Waymire, E. (1984), Scale considerations in the
modeling of temporal rainfall, Water Resources Research, 20(11), 1611-1619.
Rodriguez-Iturbe, I., Cox, D.R. and Isham, V. (1987), Some Models for Rainfall Based on
Stochastic Point Processes, Proceedings of the Royal Society A: Mathematical, Physical
and Engineering Sciences, 410(1839), 269-288.
Rodriguez-Iturbe, I., Cox, D.R. and Isham, V. (1988), A Point Process Model for Rainfall:
Further Developments, Proceedings of the Royal Society A: Mathematical, Physical and
Engineering Sciences, 417(1853), 283-298.
Roldan, J., and Woolhiser, D.A. (1982), Stochastic daily precipitation models 1. A
comparison of occurrence models, Water Resources Research, 18(5), 1451-1459.
Rosbjerg, D., Madsen, H. and Rasmussen, P.F. (1992), Prediction in Partial Duration Series
with Generalized Pareto-Distributed Exceedances, Water Resources Research, 28(11),
3001-3010.
Schertzer, D. and Lovejoy, S. (1987), Physical modeling and analysis of rain and clouds by
anisotropic scaling multiplicative processes, J. Geophys. Res., 92(D8), 9693.
Seed, A.W., Srikanthan, R. and Menabde, M. (1999), A space and time model for design
storm rainfall, Journal of Geophysical Research-Atmospheres, 104(D24), 31623-31630.
Serinaldi, F. and Kilsby, C.G. (2014), Simulating daily rainfall fields over large areas for
collective risk estimation, Journal of Hydrology, 512: 285-302.
Sharma, A. and Lall, U. (1999), A nonparametric approach for daily rainfall simulation,
Mathematics and Computers in Simulation, 48(4-6), 361-371.
Sharma, A. and Mehrotra, R. (2010), Rainfall Generation, in Rainfall: State of the Science,
edited by F.Y. Testik and M. Gebremichael, p: 287, American Geophysical Union, San
Francisco.
Sharma, A., Mehrotra, R. and Johnson, F. (2013), A New Framework for Modeling Future
Hydrologic Extremes: Nested Bias Correction as a Precursor to Stochastic Rainfall
Downscaling (invited), in Climate Change Modeling, Mitigation, and Adaptation, edited by
R.Y. Surampalli, T.C. Zhang, C.S.P. Ojha, B.R. Gurjar, R. D. Tyagi and C.M. Kao, p: 698,
American Society of Civil Engineers, Reston, Virginia.
Singh, S.V. and Kripalani, R.H. (1986), Analysis of persistence in daily monsoon rainfall over
India, Journal of Climatology, 6(6): 625-639.
Sivakumar, B., Berndtsson, R., Olsson, J. and Jinno, K. (2001), Evidence of chaos in the
rainfall-runoff process, Hydrological Sciences Journal, 46(1), 131-145.
Srikanthan, R. and McMahon, T.A. (1985), Stochastic generation of rainfall and evaporation
data, Australian Water Resources Council, Technical Paper No. 84, Canberra.
Srikanthan, R., McMahon, T., Harrold, T. and Sharma, A. (2003), Stochastic Generation of
Daily Rainfall Data, CRC for Catchment Hydrology Technical Report 03, Melbourne,
Australia.
Steinschneider, S., and Lall, U. (2015), A hierarchical Bayesian regional model for
nonstationary precipitation extremes in Northern California conditioned on tropical moisture
exports, Water Resources Research, 51(3), 1472-1492.
Stern, R.D. and Coe, R. (1984), A Model Fitting Analysis of Daily Rainfall Data, Journal of
the Royal Statistical Society. Series A (General), 147(1), 1-34.
Thyer, M. and Kuczera, G. (2003a), A hidden Markov model for modelling long-term
persistence in multi-site rainfall time series. 1. Model calibration using a Bayesian approach,
Journal of Hydrology, 275(1-2), 12-26.
Thyer, M. and Kuczera, G. (2003b), A hidden Markov model for modelling long-term
persistence in multi-site rainfall time series. 2. Real data analysis, Journal of Hydrology,
275(1-2), 27-48.
Viney, N.R. and Bates, B.C. (2004), It never rains on Sunday: The prevalence and
implications of untagged multi-day rainfall accumulations in the Australian high quality data
set, International Journal of Climatology, 24: 1171-1192.
Wallis, T.W.R. and Griffiths, J.F. (1997), Simulated meteorological input for agricultural
models, Agricultural and Forest Meteorology, 88(1-4), 241-258.
Wang, Q. J. and Nathan, R.J. (2007), A method for coupling daily and monthly time scales in
stochastic generation of rainfall series, Journal of Hydrology, 346(3-4), 122-130.
Wasko, C. and Sharma, A. (2015), Steeper temporal distribution of rain intensity at higher
temperatures within Australian storms, Nature Geoscience, 8(7), 527-529.
Waymire, E. and Gupta, V.K. (1981a), The mathematical structure of rainfall representations:
1. A review of the stochastic rainfall models, Water Resources Research, 17(5), 1261-1272.
Waymire, E., and Gupta, V.K. (1981b), The mathematical structure of rainfall
representations: 2. A review of the theory of point processes, Water Resources Research,
17(5), 1273-1285.
Westra, S., Mehrotra, R., Sharma, A. and Srikanthan, R. (2012), Continuous rainfall
simulation: 1 - A regionalised sub-daily disaggregation approach, Water Resources
Research, 48(1), W01535.
Westra, S., Fowler, H.J., Evans, J.P., Alexander, L.V., Berg, P., Johnson, F., Kendon, E.J.,
Lenderink, G. and Roberts, N.M. (2014), Future changes to the intensity and frequency of
short-duration extreme rainfall, Reviews of Geophysics, 52.
Westra, S., Evans, J.P., Mehrotra, R. and Sharma, A. (2013), A conditional disaggregation
algorithm for generating fine time-scale rainfall data in a warmer climate, Journal of
Hydrology, 479: 86-99.
Wheater, H. S., Isham, V.S., Cox, D.R., Chandler, R.E., Kakou, A., Northrop, P.J., Oh, L.,
Onof, C. and Rodriguez-Iturbe, I. (2000), Spatial-temporal rainfall fields: modelling and
statistical aspects, Hydrology and Earth System Sciences, 4(4), 581-601.
Wilby, R. L., Wigley, T.M.L., Conway, D., Jones, P.D., Hewitson, B.C., Main, J. and Wilks,
D.S. (1998), Statistical Downscaling of General Circulation Model Output - a Comparison of
Methods, Water Resources Research, 34(11), 2995-3008.
Wilks, D.S. (1999a), Multisite downscaling of daily precipitation with a stochastic weather
generator, Climate Research, 11(2), 125-136.
Woolhiser, D.A. (1992), Modeling daily precipitation - progress and problems, in Statistics in
the environmental and earth sciences, edited by A.T. Walden and P. Guttorp, p: 306, Edward
Arnold, London, U.K.
Woolhiser, D.A. and Pegram, G.G.S (1979), Maximum Likelihood Estimation of Fourier
Coefficients to Describe Seasonal Variations of Parameters in Stochastic Daily Precipitation
Models, Journal of Applied Meteorology, 18(1), 34-42.
Woolhiser, D.A. and Roldán J. (1982), Stochastic daily precipitation models: 2. A comparison
of distributions of amounts, Water Resources Research, 18(5), 1461-1468.
Woolhiser, D.A. and Roldán, J. (1986), Seasonal and Regional Variability of Parameters for
Stochastic Daily Precipitation Models: South Dakota, U.S.A, Water Resources Research,
22(6), 965-978.
de Lima, M.I.P. and Grasman, J. (1999), Multifractal analysis of 15-min and daily rainfall from
a semi-arid region in Portugal, Journal of Hydrology, 220(1-2), 1-11.
BOOK 3
Peak Flow Estimation
2.8.7. Example 7: Improving poor fits using censoring of low flow data .............. 85
2.8.8. Example 8: A Non-Homogeneous Flood Probability Model ....................... 88
2.8.9. Example 9: L-moments fit to gauged data ................................................. 92
2.8.10. Example 10: Improving poor fits using LH-moments ............................... 94
2.8.11. Example 11: Fitting a probability model to POT data ............................... 98
2.9. References ........................................................................................................... 99
3. Regional Flood Methods ............................................................................................... 105
3.1. Introduction ......................................................................................................... 105
3.2. Conceptual Framework ...................................................................................... 106
3.2.1. Definition of Regional Flood Frequency Estimation ................................. 106
3.2.2. Formation of Regions .............................................................................. 106
3.2.3. Development of Regional Flood Frequency Estimation Technique ......... 107
3.2.4. Data Required to Develop Regional Flood Frequency Estimation
Technique .......................................................................................................... 108
3.2.5. Accuracy Considerations ......................................................................... 109
3.3. Statistical Framework ......................................................................................... 110
3.3.1. Region Of Influence (ROI) Approach ....................................................... 110
3.3.2. Parameter Regression Technique ........................................................... 110
3.3.3. Generalised Least Squares Regression ................................................... 111
3.3.4. Development of Confidence Limits for the Estimated Flood Quantiles .... 112
3.4. RFFE Techniques for Humid Coastal Areas ....................................................... 112
3.4.1. Data Used to Develop RFFE Technique .................................................. 112
3.4.2. Adopted RFFE Regions ........................................................................... 115
3.4.3. Adopted Estimation Equations ................................................................. 116
3.5. RFFE Techniques for Arid/Semi-Arid Areas ....................................................... 116
3.5.1. Data Used to Develop RFFE Technique .................................................. 116
3.5.2. Adopted Regions ..................................................................................... 117
3.5.3. Adopted Estimation Equations ................................................................. 118
3.6. Fringe Zones ...................................................................................................... 118
3.7. Relative Accuracy of the RFFE Technique ......................................................... 119
3.8. Bias Correction ................................................................................................... 121
3.9. Combining Regional and At-Site Quantile Estimates ......................................... 122
3.10. Impact of Climate Change ................................................................................ 122
3.11. Progressive Improvement of RFFE Model 2015 v1 .......................................... 122
3.12. RFFE Implementation and Limitations ............................................................. 123
3.12.1. Overview of RFFE Accuracy .................................................................. 123
3.12.2. RFFE Implementation ............................................................................ 124
3.13. Practical Considerations for Application of the RFFE Technique ..................... 125
3.13.1. Urban Catchments ................................................................................. 125
3.13.2. Catchments Containing Dams and Other Artificial Storage ................... 126
3.13.3. Catchments Affected by Mining ............................................................. 126
3.13.4. Catchments with Intensive Agricultural Activity ..................................... 126
3.13.5. Catchment Size ..................................................................................... 127
3.13.6. Catchment Shape .................................................................................. 127
3.13.7. Atypical Catchment Types ..................................................................... 127
3.13.8. Catchment Location ............................................................................... 128
3.13.9. Arid and Semi-arid Areas ....................................................................... 128
3.13.10. Baseflow .............................................................................................. 129
3.14. RFFE Model 2015 ............................................................................................ 129
3.15. Worked Examples ............................................................................................. 133
List of Figures
3.2.1. Peak Over Threshold series ........................................................................................ 8
3.2.2. Annual Exceedance Probability (AEP) - Exceedances per Year (EY) Relationship .. 11
3.2.3. Hydrograph for a 1000 km2 Catchment Illustrating Difficulty
of Assessing Independence of Floods ........................................................................ 13
3.2.4. Depiction of Censored and Gauged Flow Time Series Data ..................................... 15
3.2.5. Rating Curve Extension by Fitting to an Indirect Discharge Estimate ....................... 20
3.2.6. Rating Curve Extension by Slope-Conveyance Method ........................................... 21
3.2.7. Annual Average Interdecadal Pacific Oscillation Time Series ................................... 23
3.2.8. NSW Regional Flood Index Frequency Curves for Positive and Negative Interdecadal
Pacific Oscillation epochs (Kiem et al., 2003) ............................................................. 24
3.2.9. Rating error multiplier space diagram for rating curve .............................................. 39
3.2.10. Bilinear channel storage-discharge relationship ...................................................... 53
3.2.11. Simulated rainfall and flood frequency curves with major floodplain storage activated
at a threshold discharge of 3500 m³/s ................................................................................. 54
3.2.12. Comparison between true peak flow and 9:00 am flow for Goulburn River at Coggan 55
3.2.13. Comparison between true peak flow and 9:00 am flow for Hunter River at
Singleton ..................................................................................................................... 55
3.2.14. TUFLOW Flike Splash Screen ................................................................................ 56
3.2.15. Create New .fld file .................................................................................................. 57
3.2.16. Flike Editor Screen .................................................................................................. 58
3.2.17. Observed Values Screen ......................................................................................... 59
3.2.18. Import Gauged Values Screen ................................................................................ 60
3.2.19. View Gauged Values in Text Editor ......................................................................... 61
3.2.20. Observed Values screen with Imported Data .......................................................... 62
3.2.21. Rank Data Screen ................................................................................................... 63
3.2.22. General Screen – After Data Import ........................................................................ 64
3.2.23. Blank TUFLOW-Flike Screen .................................................................................. 65
3.2.24. Probability Plot ........................................................................................................ 66
3.2.25. Results File .............................................................................................................. 67
3.2.26. Probability Plot using Gumbel Scale ....................................................................... 68
3.2.27. Censoring observed values tab ............................................................................... 70
3.2.28. Configured Flike Editor ............................................................................................ 71
3.2.29. Probability plot of the Singleton data with historic
information ................................................................................................................... 72
3.2.30. Probability plot of the Singleton data with historic
information ................................................................................................................... 73
3.2.31. Gaussian prior distributions ..................................................................................... 75
3.2.32. Prior for Log-Pearson III window ............................................................................. 75
3.3.8. RFFE Model 2015 Screen Shot for Data Input for Four Mile Brook at
Netic Rd, SW WA (Region 5) .................................................................................. 136
3.3.9. RFFE Model 2015 Screen Shot for Model Output for Four Mile Brook at
Netic Rd, SW WA (Region 5) .................................................................................. 137
3.3.10. RFFE Model 2015 vs. At-site FFA Flood Estimates for Four Mile Brook
at Netic Rd, SW WA (Region 5) .............................................................................. 138
3.3.11. RFFE Model 2015 Screen Shot for Data Input for the Morass Creek at
Uplands, VIC (Region 1) ......................................................................................... 139
3.3.12. RFFE Model 2015 Screen Shot for Model Output for the Morass Creek at
Uplands, VIC (Region 1) ......................................................................................... 140
3.3.13. RFFE Model 2015 vs. At-site FFA Flood Estimates (for the Morass Creek
at Uplands, VIC) (Region 1) .................................................................................... 141
3.3.14. Standardised Residuals vs. Normal Scores for Region 1 Based on Leave-one-out Validation
for AEPs of 50%, 20%, 2% and 1% .......................................................................... 142
3.3.15. Standardised Residuals vs. Normal Scores for Region 1 Based on Leave-one-out Validation
for AEPs of 2% and 1% ............................................................................................. 143
List of Tables
3.2.1. Selected Homogeneous Probability Models Families for use in Flood Frequency
Analysis ..................................................................................................................... 27
3.2.2. Frequency factors for standard normal distribution. .................................................. 29
3.2.3. L-moments for several distributions (from
Stedinger et al. (1993)) .............................................................................................. 46
3.2.4. LH-moments for GEV and Gumbel distributions (from
Wang (1997)) ......................................................................................................... 48
3.2.5. Polynomial coefficients for use with Equation (3.2.73) .............................................. 49
3.2.6. Selected Results ....................................................................................................... 68
3.2.7. Gauged flows on the Hunter River at
Singleton ................................................................................................................... 69
3.2.8. Posterior Mean, Standard Deviation and Correlation for the LP III ........................... 72
3.2.9. Comparison of Selected Quantiles with 90% Confidence
Limits ........................................................................................................................... 72
3.2.10. Comparison of LP III Parameters with and without prior information ...................... 77
3.2.11. Selected Results ...................................................................................................... 77
3.2.12. Selected Results ..................................................................................................... 84
3.2.13. L-moment and GEV Parameter Estimates .............................................................. 93
3.2.14. Comparison of Quantiles using a Bayesian and LH-moments Inference Methods . 98
3.3.1. Summary of Adopted Catchments from Humid Coastal Areas of Australia ............ 114
3.3.2. Distribution of Shape Factors for the Selected Catchments .................................... 115
3.3.3. Details of RFFE Technique for Humid Coastal Areas of Australia .......................... 115
3.3.4. Summary of adopted stations from arid/semi-arid areas of Australia ...................... 117
3.3.5. Details of RFFE technique for arid/semi-arid regions .............................................. 118
3.3.6. Upper bound on (absolute) median relative error (RE) from leave-one-out validation of
the RFFE technique (without considering bias correction) ........................................ 121
3.3.7. Region Names for Application of the RFFE Model 2015 (see
Figure 3.3.2 for the Extent of the Regions) ......................................................... 129
3.3.8. Application Data for the Wollomombi River at Coinside, NSW (Region 1)
(Basic Input Data) .................................................................................................... 134
3.3.9. Fifteen Gauged Catchments (Used in the Development of RFFE Model 2015)
Located Closest to Wollomombi River at Coinside, NSW ....................................... 135
3.3.10. Application Data for Four Mile Brook at Netic Rd, SW Western Australia (Region 5)
(Basic Input Data) .................................................................................................. 136
3.3.11. Application Data for the Morass Creek at Uplands, VIC (Region 1) (Basic Input
Data) ........................................................................................................................ 138
3.3.12. Further Information on the Development and Testing of RFFE Technique 2015 .. 141
Chapter 1. Introduction
James Ball, Erwin Weinmann, George Kuczera
Following the concepts outlined in Book 1 for estimation of design flood parameters, where
adequate data of sufficient quality are available, it is recommended that an at-site Flood
Frequency Analysis (FFA) be used for estimation of the design peak flood discharge
quantiles. Details of suitable approaches are outlined in Book 3, Chapter 2.
For many other situations no observed data of a suitable quality for at-site Flood Frequency
Analysis are available for estimation of the desired flood quantiles. It is recommended that in
these situations, Regional Flood Frequency Estimation (RFFE) techniques be applied.
Details of suitable approaches are outlined in Book 3, Chapter 3.
While a consistent methodology for Regional Flood Frequency Estimation for any region in
Australia is outlined in Book 3, Chapter 3, designers are reminded of the guidance provided
in Book 1, Chapter 1, Section 1; namely, where circumstances warrant, flood engineers have
a duty to use other procedures and data that are more appropriate for their design flood
problem than those recommended in this Edition of Australian Rainfall and Runoff. This
guidance is particularly relevant where approaches have been developed for limited regions
of the country without the aim of being suitable for application across the whole country, or
without being subject to the same development and testing as the RFFE model proposed
herein. An example of this situation is the Pilbara Region of Western Australia, where
independent studies by Davies and Yip (2014) and Flavell (2012) have developed Regional
Flood Frequency Estimation techniques for this region.
available, the philosophy of Flood Frequency Analysis and its application underpin many of
the approaches presented in Book 3, Chapter 3 for rural ungauged catchments. It is
considered, therefore, to be a fundamental component of the estimation of peak flood
quantiles.
Presented in Book 3, Chapter 3 is a range of regional flood methods for estimation of peak
flood discharge quantiles in ungauged catchments. These techniques use the results of at-
site flood frequency analyses at gauged sites to derive peak discharge estimation
procedures for ungauged locations in the same hydrologic region. As the flood
characteristics vary considerably between different regions, a range of methods (similar
philosophical development but differing in parameter values) have been developed to suit
the specific conditions and requirements in different regions.
Unlike previous editions of Australian Rainfall and Runoff, this edition has been developed
on the assumption that users will have computing resources available. The techniques
presented in the following sections therefore require computing resources for their
implementation, and the discussion accordingly covers both the theoretical basis of the
techniques and their implementation.
In early editions of Australian Rainfall and Runoff, application of this criterion was not always
possible because of the paucity of observed flood data, technology limitations and the limited
analysis of the available data. Hence it was necessary to recommend many arbitrary
methods based purely on engineering judgement. The previous approach towards
estimation of the Rational Method runoff coefficient ("C") for urban catchments is an example
of this necessity. As discussed by Hicks et al. (2009), the approach for estimation of the
urban runoff coefficient presented by O'Loughlin and Robinson (1987) did not have a
scientific foundation but was included to provide the necessary guidance in the application of
this method.
For significant portions of Australia, this is no longer the case, and data are available for the
development of techniques that have undergone review by the profession from both a
scientific and a practical perspective. In these regions, the continued use of arbitrary design
methods and information cannot be justified.
It is worthwhile noting that the continued collection of data is necessary to enable ongoing
and continued improvements in the design methods, particularly in the robustness of
predictions and the detection of inappropriate flood quantile estimates.
1.4. Scope
This book has been prepared as a guide or manual, rather than a mandatory code of
practice. Rules and methods appropriate to various situations are presented, together with
relevant background information. Since catchments and the problems involved are diverse,
and the related technology is changing, recommendations herein should not be taken as
binding. They should be considered together with other information and local experience
when being implemented.
The contents of this book within Australian Rainfall and Runoff are intended for a wide
readership including engineers, students, technicians, surveyors and planners. Readers
should be familiar with the basic concepts of catchment hydrology and hence have a basic
knowledge of hydrology and hydraulics.
1.5. Terminology
Many terms associated with design flow estimation have been used in a loose manner, and
sometimes quite incorrectly and in a misleading fashion. As outlined in Book 1, Chapter 2,
the National Committee on Water Engineering of Engineers Australia had three major
concerns:
• Clarity of meaning;
In view of the loose and frequently incorrect manner in which many terms often are used, it
was considered that Australian Rainfall and Runoff should adopt terminology that is
technically correct, as far as this is possible and in harmony with other objectives. Even if
this terminology is not entirely popular with all users, it was considered that Engineers
Australia has a responsibility to encourage and educate engineers regarding correct and
consistent terminology. It was recognised also by the National Committee on Water
Engineering that as well as being correct technically, the terms adopted should be relatively
simple and suitable for use in practical design as this would facilitate acceptance by the
profession.
The issue of terminology is particularly relevant to the usage of the term 'model'. There are
many and varied usages of this term within the field of design flood estimation. For example,
the software used to implement a particular approach is commonly called a model by some
users, while others use the term to mean the encapsulation of the design flood estimation
approach, the calculations necessary to implement the approach (usually in software, but
possibly by hand), and the data necessary for its implementation.
In the following definitions of the terms “model”, “technique” and “approach”, the
explanations used are suitable for the guidance contained within this book.
While the major terminology is discussed in Book 1 of Australian Rainfall and Runoff, those
terms pertinent only to the contents of this book are presented herein.
1.6. References
Davies, J.R. and Yip, E. (2014), Pilbara Regional Flood Frequency Analysis, Proc.
Hydrology and Water Resources Symposium, Perth, February 2014, Engineers Australia,
pp: 182-189.
Flavell, D. (2012), Design Flood Estimation in Western Australia, Australian Journal of Water
Resources, 16(1), 1-20.
Hicks, B., Gray, S. and Ball, J.E. (2009), A Critical Review of the Urban Rational Method,
Proceedings of H2009, 32nd Hydrology and Water Resources Symposium, Engineers
Australia, ISBN 978-08258259461.
O'Loughlin, G.G. and Robinson, D.K., (1987), Urban stormwater drainage, Chapter 14 in
Australian Rainfall and Runoff - A Guide to Flood Estimation, DH Pilgrim editor, I.E.Aust,
Barton, ACT.
Chapter 2. At-Site Flood Frequency
Analysis
George Kuczera, Stewart Franks
2.1. Introduction
Flood Frequency Analysis (FFA) refers to procedures that use recorded and related flood
data to identify the underlying probability model of flood peaks at a particular location in the
catchment, which can then be used to perform risk-based design and flood risk assessment
and to provide input to regional flood estimation methods.
The primary purpose of this chapter is to present guidelines on performing Flood Frequency
Analyses1. Often judgment will need to be exercised when applying these techniques. To
inform such judgments, this chapter describes the key conceptual foundations that underpin
Flood Frequency Analysis – the practitioner will need an understanding of elementary
probability theory and statistics to get maximum benefit. In addition, a number of worked
examples are provided to aid deeper insight with the implied caveat that the examples are
not exhaustive in their scope. While it is expected that most practitioners will use software
written by others to implement the methods described in this chapter, sufficient information is
provided to enable practitioners to develop their own software applications.
In its most general form, the flood probability model can be described by its Probability
Density Function (pdf) $p(q \mid \boldsymbol{\theta}(\boldsymbol{x}))$, where $\boldsymbol{\theta}(\boldsymbol{x})$ is the vector (or list) of parameters dependent
on x, a vector of exogenous or external variables such as climate indexes. The symbol '|' is
interpreted as follows: the variable to the left of '|' is a random variable, while the variables to
the right of '|' are known values.
1The chapter represents a substantial revision of Chapter 10 of the 3rd Edition of Australian Rainfall and Runoff
(Pilgrim and Doran, 1987). Where appropriate, original contribution by Pilgrim and Doran has been retained. Major
changes include introduction of non-homogeneous probability models, replacement of product log-moments with
more efficient estimation methods, use of Bayesian methods to make better use of available flood information (such
as censored flow data, rating curve error and regional information), reduced prescription about the choice of flood
probability model, improved identification of potentially influential low flows and guidance on fitting frequency curves
to “difficult” data sets.
$P(q \le w \mid \boldsymbol{\theta}) = \int_{0}^{w} p(q \mid \boldsymbol{\theta}) \, dq \qquad (3.2.1)$
Empirically, the pdf of q is the limiting form of the histogram of q, as the number of samples
approaches infinity. Importantly, Equation (3.2.1) shows that the area under the pdf is
interpreted as probability.
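As a small numerical illustration of Equation (3.2.1), the snippet below (Python with SciPy) evaluates the non-exceedance probability of a 500 m³/s peak for an assumed, purely hypothetical Log-Normal flood model; the parameter values are arbitrary and do not represent a recommended model.

```python
from scipy import stats
from scipy.integrate import quad

model = stats.lognorm(s=0.8, scale=200.0)   # hypothetical p(q | theta)
area, _ = quad(model.pdf, 0.0, 500.0)       # area under the pdf between 0 and 500 m^3/s
print(area, model.cdf(500.0))               # both give P(q <= 500 | theta)
```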
The simplest form of flood probability model arises when the parameters $\boldsymbol{\theta}$ do not depend on
an exogenous vector x. In such a case, each flood peak is considered to be a random
realisation from the same probability model $p(q \mid \boldsymbol{\theta})$. Under this assumption, flood peaks form
a homogeneous time series.
A more complicated situation arises when flood peaks do not form a homogeneous time
series. This may arise for a number of reasons including the following:
• Rainfall and flood mechanisms may be changing over time. For example, long-term
climate change due to global warming, land use change and river regulation may render
the flood record non-homogeneous.
• Climate may experience pseudo-periodic shifts that persist over periods lasting from
several years to several decades. There is growing evidence that parts of Australia are
subject to such forcing and that this significantly affects flood risk, for example, (Franks
and Kuczera, 2002; Franks, 2002a; Franks, 2002b; Kiem et al., 2003; Micevski et al.,
2003).
The practitioner needs to assess the significance of such factors and identify appropriate
exogenous variables x to condition the flood probability model. Although this chapter
provides some guidance, it is stressed that this is an area of continuing research;
practitioners are therefore advised to keep abreast of new developments.
• Peak-Over-Threshold Series
The data in the AM series can be used to estimate the probability that maximum flood
discharge in a year exceeds a particular magnitude w. In ARR, this probability is called the
Annual Exceedance Probability AEP(w) and is formally defined as:
$$\mathrm{AEP}(w) = 1 - P(q \le w \mid \theta) = \int_{w}^{\infty} p(q \mid \theta)\, dq \qquad (3.2.2)$$
where w is the maximum flood discharge in a year. Often it is convenient to express the AEP
as a percentage X% or, alternatively for rare events, as a ratio 1 in Y. For example, the 1%
AEP is equivalent to an AEP of 1 in 100 or 0.01.
The data in the POT series can be used to estimate the probability distribution of the time to
the next peak discharge that exceeds a particular magnitude:
where t is time expressed in years and EY(q), the number of exceedances per year, is the
expected number of times in a year that the peak discharge exceeds q.
The objective is to derive the distribution of the maximum flood peak within a specified
interval of time. Referring to the continuous streamflow time series in Figure 3.2.1, let
the random variable q be a local peak discharge defined as a discharge that has lower
discharge on either side of the peak. This presents an immediate problem as any bump
on the hydrograph would produce a local peak. To circumvent this problem, we focus
on peaks greater than some threshold defined as qo. The threshold is selected so that
the peaks above the threshold are sufficiently separated in time to be statistically
independent of each other.
It is assumed that all peaks above the threshold qo are sampled from the same
distribution denoted by the pdf $p(q \mid q > q_0)$.
Suppose, over a time interval of length t years, there are k peaks over the threshold qo.
This defines the POT time series {q1,…,qk}, which consists of k independent
realisations sampled from the pdf $p(q \mid q > q_0)$.
Let w be the maximum value in the POT time series; that is, $w = \max\{q_1, q_2, \ldots, q_k\}$ (3.2.4).
For w to be the maximum value, each peak within the POT series must be less than or
equal to w. In probability theory this condition is expressed by the compound event
consisting of the intersection of the following k events:
$$q_1 \le w \;\cap\; q_2 \le w \;\cap\; \ldots \;\cap\; q_k \le w \qquad (3.2.5)$$
Because the peaks are assumed to be statistically independent, the probability of the
compound event is the product of the probabilities of the individual events. Therefore
the probability that the random variable W ≤ w in a POT series with k events occurring
over the interval t simplifies to:
$$P(W \le w \mid k, \theta) = P(q_1 \le w \cap \ldots \cap q_k \le w) = P(q_1 \le w)\, P(q_2 \le w)\cdots P(q_k \le w) = P(q \le w)^{k} \qquad (3.2.6)$$
The number of POT events k occurring over an interval t is random. Suppose that the
random variable k follows a Poisson distribution with $\nu$ being the average number of
POT events per year; that is:
$$P(k \mid t, \nu) = \frac{(\nu t)^{k} e^{-\nu t}}{k!}, \quad k = 0, 1, 2, \ldots \qquad (3.2.7)$$
Application of the total probability theorem yields the distribution of the largest peak
magnitude over the time interval with length t:
$$P(W \le w \mid t, \theta) = \sum_{k=0}^{\infty} P(W \le w \mid k, \theta)\, P(k \mid t, \nu) = e^{-\nu t\, P(q > w \mid \theta)} \qquad (3.2.8)$$
where $P(W \le w \mid t, \theta)$ is the probability that the largest peak over time interval t is less
than or equal to w. When the time interval t is set to one year, Equation (3.2.8) defines
the distribution of the AM series:
$$\mathrm{AEP}(w) = 1 - P(W \le w \mid t = 1) \qquad (3.2.9)$$
where AEP(w) is the probability of the largest peak in a year exceeding magnitude w.
We now derive the probability distribution of the time to the next flood peak which has a
magnitude in excess of w. With regard to Equation (3.2.8), if the largest peak during the
interval t is less than or equal to w, then the time to the next peak with magnitude in
excess of w must be greater than t. It therefore follows that the distribution of the time
to the next peak with magnitude exceeding w is
$$P(t_w > t) = P(W \le w \mid t, \theta) = e^{-\nu\, P(q > w \mid \theta)\, t} \qquad (3.2.10)$$
which is an exponential distribution with rate parameter $\nu\, P(q > w \mid \theta)$.
ARR defines this parameter as EY(w) which stands for Exceedances per Year, but
more strictly, is the expected number of peaks that exceed w in a year.
$$\mathrm{AEP}(w) = 1 - P(W \le w \mid t = 1) = 1 - e^{-\nu\, P(q > w \mid \theta)} = 1 - e^{-EY(w)} \qquad (3.2.11)$$
$$EY(w) = -\log_e\!\left(1 - \mathrm{AEP}(w)\right) = -\log_e\!\left(1 - \frac{1}{Y(w)}\right) \qquad (3.2.12)$$
This relationship assumes peaks in the POT series are statistically independent and
that there is no seasonality in the sense that the probability density of the POT peak
above a threshold, $p(q \mid q > q_0)$, does not change over the year. While the no-seasonality
assumption appears questionable on first inspection, in practice the threshold q0 is
selected so that the expected number of peaks exceeding the threshold q0 in any year
is of the order of 1. This is done to ensure the POT peaks are genuine floods and
statistically independent. As a consequence of the high threshold selected in practice,
the impact of seasonality is diminished.
where AEP(w) is expressed as the ratio 1 in Y(w). This relationship is plotted in Figure 3.2.2.
For AEPs less than 10% (0.1 or 1 in 10, i.e. events rarer than 10% AEP), EY and AEP are
numerically the same from a practical perspective. However, as the AEP increases beyond 10%
(i.e. for events more frequent than 10% AEP), EY increases more rapidly than AEP. This
occurs because, in years with a large annual maximum peak, the smaller peaks of that year
may exceed the annual maximum peak of other years.
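Equation (3.2.12) and its inverse are simple to evaluate. The following Python sketch (an illustration only; the function and variable names are not drawn from ARR) converts between AEP and EY and reproduces the behaviour described above.

```python
import math

def ey_from_aep(aep):
    """Exceedances per year from annual exceedance probability (Equation (3.2.12))."""
    return -math.log(1.0 - aep)

def aep_from_ey(ey):
    """Annual exceedance probability from exceedances per year (Equation (3.2.11))."""
    return 1.0 - math.exp(-ey)

# For rare events AEP and EY are numerically similar; for frequent events EY grows faster.
for one_in_y in (100, 10, 2, 1.1):
    aep = 1.0 / one_in_y
    print(f"AEP 1 in {one_in_y:>5}: AEP = {aep:.3f}, EY = {ey_from_aep(aep):.3f}")
```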
Figure 3.2.2. Annual Exceedance Probability (AEP) - Exceedances per Year (EY)
Relationship
The question arises as to when one should use the AM or POT approach. Consistent with the
guidelines provided in Book 1, the following guidance is offered:
i. AEP of interest < 10% (i.e. events rarer than 10% AEP)
AEPs, in this range, are generally required for estimation of a design flood for a structure
or works at a particular site. Use of AM series is preferred as it yields virtually identical
answers to POT series in most cases, provides a more robust3 estimate of low AEP
floods and is easier to extract and define.
ii. EY of interest > 0.2 events per year (i.e. events more frequent than 0.2 EY)
Use of a POT series is generally preferred because all floods are of interest in this range,
whether they are the highest in the particular year of record or not. The AM series may
omit many floods of interest. The POT series is appropriate for estimating design floods
with a relatively high EY in urban stormwater contexts and for diversion works, coffer
dams and other temporary structures. However, in practice, flow records are often not
available at sites where minor works with a design EY greater than 0.1 events per year are
required.
peaks are largely smaller peaks. In the case when the data is not well fitted by the chosen probability model, the fit
to the upper part of the distribution may be compromised in order to obtain a good fit to the smaller peaks where the
bulk of the data lies.
Flood peaks are the product of a complex joint probability process involving the interaction of
many random variables associated with the rainfall event, antecedent conditions and the
rainfall-runoff transformation. Peak flood records represent the integrated response of the
storm event with the catchment. They provide a direct measure of flood exceedance
probabilities. As a result, Flood Frequency Analysis is not subject to the potential for bias,
possibly large, that can affect alternative methods based on design rainfall (Kuczera et al.,
2006).
Other advantages of Flood Frequency Analysis include its comparative simplicity and
capacity to quantify uncertainty arising from limited information.
However, Flood Frequency Analysis is subject to a number of limitations:
• The true probability distribution family is unknown. Unfortunately, different models can fit
the flood data with similar capability, yet can diverge in the right hand tail when
extrapolated beyond the data.
• Short records may compromise the utility of flood estimates. Confidence limits inform the
practitioner about the credibility of the estimate.
• It may be difficult or impossible to adjust the data if the catchment conditions under which
the flood data were obtained have changed during the period of record, or are different to
those applying to the future economic life of a structure or works being designed.
Figure 3.2.3. Hydrograph for a 1000 km2 Catchment Illustrating Difficulty of Assessing
Independence of Floods
Lack of homogeneity of the population of floods is another practical problem, especially if the
data sample from the past is used to derive flood estimates applicable to the design life of
the structure or works in future. Examples of changes in collection of data or in the nature of
the catchment that lead to lack of homogeneity are:
1. Inability to allow for change of station rating curve, for example, resulting from insufficient
high-stage gauging;
5. Changes in land use such as clearing, different farming practices, soil conservation
works, re-forestation, and urbanisation.
The record should be carefully examined for these and other causes of lack of homogeneity.
In some cases, recorded values can be adjusted by means such as routing pre-dam floods
through the storage to adjust them to equivalent present values, correcting rating errors
wherever possible, or making some adjustment for urbanisation. Such decisions must be
made largely by judgement. As with all methods of flood estimation, it is important that likely
conditions during the design life are considered, instead of existing conditions at the time of
design. Some arbitrary adjustment of derived values for likely changes in the catchment may
be possible, but the recorded data must generally be accepted for analysis and design.
Fortunately, the available evidence indicates that unless changes to the catchment involve
large proportions of the total area or large changes in the storage on the catchment, the
effects on flood magnitudes are likely to be low. In addition, the effects are likely to be larger
for frequent floods than for the rare floods that are of primary interest in design.
It is important to check how the peak discharges were obtained from the gauged record.
Peak discharges may be derived from daily readings, possibly with some intermediate
readings during some floods, for part of the record, and continuous readings from the
remainder of the record. If part of the record consists of daily readings, it is necessary to
assess whether daily readings adequately approximate the instantaneous peak discharge
(refer to Book 3, Chapter 2, Section 8 for instances of adequate and inadequate
approximations). If the daily reading is deemed as an unreliable estimate of the peak
discharge during that day, the reading need not be discarded but treated as a censored
discharge.
1 if t�ℎ flood peak > threshold �
�� � = (3.2.14)
−1 if t�ℎ flood peak ≤ threshold �
They arise in a number of ways. For example, prior to gauging, water level records may be
kept only for rare floods above some perception threshold. Therefore, all we may know is
that there were na flood peaks above the threshold and nb peaks below the threshold.
Sometimes, frequent floods below a certain threshold may be deliberately excluded, since
the overall fit gets unduly influenced by small floods.
Figure 3.2.4 presents a graphical depiction of gauged and censored time series data. In the
first part of the record, all the peaks are below a threshold, while in the second part, daily
readings define a lower threshold for the peak. Finally, in the third part, continuous gauging
yields instantaneous peaks.
Figure 3.2.4. Depiction of Censored and Gauged Flow Time Series Data
2. The series is easily and unambiguously extracted. Most data collection agencies have
annual maxima on computer file and/or hard copy.
An important advantage of the POT series is that when the selected base value is sufficiently
high, small events that are not really floods are excluded. With the AM series, non-floods in
dry years may have an undue influence on the shape of the distribution. This is particularly
important for Australia, where both the range of flows and the non-occurrence of floods are
greater than in many other countries such as the United States and the United Kingdom. For
this reason it would also be expected that the desirable ratio of m to n would be lower in
Australia than in these countries (refer to Book 3, Chapter 2, Section 3).
A criterion for independence of successive peaks must also be applied in selecting events.
As discussed by Laurenson (1987), statistical independence requires physical independence
of the causative factors of the flood, mainly rainfall and antecedent wetness. This type of
independence is necessary if the POT series is used to estimate the distribution of annual
floods. On the other hand, selection of POT series floods for design flood studies should
consider the consequences of the flood peaks in assessing independence of events where
damages or financial penalties are the most important design variables. Factors to be
considered might include duration of inundation and the time required to repair flood
damage. In both cases, the size or response time of the catchment will have some effect.
The decision regarding a criterion for independence therefore requires subjective judgement
by the practitioner, designer or analyst in each case. There is often a conflict in that some flood
effects are short-lived, perhaps only as long as inundation, while others, such as the
destruction of an annual crop, may last as long as a year. It is thus not possible to
recommend a simple and clear-cut criterion for independence. The circumstances and
objectives of each study, and the characteristics of the catchment and flood data, should be
considered in each case before a criterion is adopted. It is inevitable that the adopted
criterion will be arbitrary to some extent.
While no specific criterion can be recommended, it may be helpful to consider some criteria
that were used in past studies:
• Bulletin 17B of the Interagency Advisory Committee on Water Data (1982) states that no
general criterion can be recommended and the decision should be based on the intended
use in each case, as discussed above. However, in Appendix 14 of that document, a study
by Beard (1974) is summarised in which the criterion is that independent flood peaks
should be separated by five days plus the natural logarithm of the drainage area in square
miles, with the additional requirement that intermediate discharges must drop to
below 75% of the lower of the two separate flood peaks. This may only be suitable for
catchments larger than 1000 km2. Jayasuriya and Mein (1985) used this criterion.
• The UK Flood Studies Report (Natural Environment Research Council, 1975) used a
criterion that flood peaks should be separated by three times the time to peak and that the
flow should decrease between peaks to two-thirds of the first peak.
• McIllwraith (1953), in developing design rainfall data for flood estimation, used the
following criteria, based on the rainfall causing the floods:
• For rainfalls of short duration up to two hours, only the one highest flood within a period
of 24 hours.
• For longer rainfalls, a period of 24 hours in which no more than 5 mm of rain could occur
between rain causing separate flood events.
• In a study of small catchments, Potter and Pilgrim (1971) used a criterion of three calendar
days between separate flood events but lesser events could occur in the intervening
period. This was the most satisfactory of five criteria tested on data from seven small
catchments located throughout eastern New South Wales. It also gave the closest
approximation to the above criteria used by McIllwraith (1953).
• Pilgrim and Doran (1987) and McDermott and Pilgrim (1983) adopted monthly maximum
peak flows to give an effective criterion of independence in developing a design procedure
for small to medium sized catchments. This was based primarily on the assumption that
little additional damage would be caused by floods occurring within a month, and thus
closer floods would not be independent in terms of their effects. This criterion was also
used by Adams and McMahon (1985) and Adams (1987).
The criteria cited above represent a wide range and illustrate the difficult and subjective
nature of the choice. It is stressed that these criteria have been described for illustrative
purposes only. In each particular application the practitioner, designer or analyst should
choose a criterion suitable to the analysis and relevant to all of the circumstances and
objectives.
Maximum monthly flows are an approximation to the POT series in most parts of Australia,
as the probability of two large independent floods occurring in the same month is low.
Tropical northern Australia, the west coast of Tasmania and the south-west of Western
Australia may be exceptions. It should be noted that not every monthly maximum flood will
be selected, but only those large enough to exceed a selected base discharge, as is the
case for the POT series. The monthly series has two important advantages over the POT
series, which it approximates:
2. It can be argued that a flood occurring within a month of a previous large flood is of little
concern in design, as repairs will not have been undertaken and little additional damage
will result.
With the monthly series, care is required to check any floods selected in successive months
for independence. Where the dates are close, the lower value should be discarded. The
second highest flood in that month could then be checked from the records, but this would
generally not be worthwhile. An example of use of the monthly series is described by Pilgrim
and McDermott (1982).
Seasonal flood frequencies are sometimes required. For these cases, the data are selected
for the particular month or season as for the annual series, and the flood frequency analysis
is carried out in a similar fashion to that for the annual series.
• Station-Year Method
The principal shortcoming of the regression approach is that uncertainty in the transfer
process is ignored resulting in an overstatement of information content. To guard against
this, an approximate criterion for deciding whether the regression should be used is that the
correlation coefficient of the relation should exceed 0.85 (Fiering, 1963; Matalas and Jacobs,
1964). More rigorous criteria are discussed in ARR 1987 Book 3 Section 2.6.5.
Care is needed when annual floods are used. The dates of the corresponding annual floods
on the adjacent catchments should be compared. Not infrequently, the dates are different,
resulting in a lack of physical basis for the relation. Although relationships of this type seem
to have been used in some regional flood frequency procedures, it is recommended that
regressions should only be used when the corresponding floods result from the same storm.
This problem is discussed further by Potter and Pilgrim (1971).
When floods resulting from the same storm on adjacent catchments are plotted against each
other, there is often a large scatter. Frequently, a large flood occurs on one catchment but
only a small flood occurs on the other. The scatter is generally greater than for the physically
unrealistic relation using floods which are the maximum annual values on the two
catchments but which may have occurred on different dates. The resulting relation using
floods that occurred in the same storm is often so weak that it should not be used to extend
records.
Wang (2001) describes a Bayesian approach that rigorously makes allowance for the noise
in the transfer process. This approach is considered superior to the traditional regression
transfer.
1. For large floods the rating curve typically is extrapolated or fitted to indirect discharge
estimates. This can introduce a systematic but unknown bias.
2. If the gauging station is located at a site with an unstable cross-section the rating curve
may shift causing a systematic but unknown bias.
The conceptual model of rating error presented in this section is based on Kuczera (1999)
and is considered to be rudimentary and subject to refinement. It is assumed the cross-
section is stable with the primary source of rating error arising from extension of the rating
curve to large floods.
Potter and Walker (1981) and Potter and Walker (1985) observe that flood discharge is
inferred from a rating curve which is subject to discontinuous measurement error. Consider
Figure 3.2.5 which depicts a rating curve with two regions having different error
characteristics. The interpolation zone consists of that part of the rating curve well defined by
discharge-stage measurements; typically the error Coefficient of Variation (CV) would be
small, say 1 to 5%. In the extension zone the rating curve is extended by methods such as
slope-conveyance, log-log extrapolation or fitting to indirect discharge estimates. Typically
such extensions are smooth and, therefore, can induce systematic under- or over-estimation
of the true discharge over a range of stages. The extension error CV is not well known but
(Potter and Walker, 1981; Potter and Walker, 1985) suggest it may be as high as 30%.
Figure 3.2.5 and Figure 3.2.6 illustrate two cases of smooth rating curve extension wherein
systematic error is introduced. In Figure 3.2.5, the indirect discharge estimate was below the
true discharge. In the absence of any other information the rating curve is extended to pass
smoothly through this point, thereby introducing a systematic underestimate of large flood
discharges. Even if more than one indirect discharge estimate were available, it is likely the
errors will be correlated because the same biases in estimating Manning's n, conveyance and
friction slope would be present.
In Figure 3.2.6 the rating curve is extended using the slope-conveyance method. The
method relies on extrapolating gauged estimates of the friction slope so that the friction
slope asymptotes to a constant value. Depending on how well the approach to asymptotic
conditions is defined by the data considerable systematic error in extrapolation may occur.
Perhaps of greater concern is the assumption that Manning's n and conveyance can be
reliably estimated in the overbank flow regime particularly when there are strong contrasts in
roughness along the wetted perimeter.
Though Figure 3.2.5 represents an idealisation of actual rating curve extension, two points of
practical significance are noted:
1. The error is systematic in the sense that the extended rating curve is likely to diverge from
the true rating curve as discharge increases. The error, therefore, is likely to be highly
correlated; in fact, it is perfectly correlated in the idealisation of Figure 3.2.5.
2. The interpolation zone anchors the error in the extension zone. Therefore, the error in the
extension zone depends on the distance from the anchor point and not from the origin.
This error is termed incremental because it originates from the anchor point rather than
the origin of the rating curve.
Care is needed in assessing historical floods. Only stages are usually available, and these
may be determined by flood marks recorded on buildings or structures, by old newspaper
reports, or from verbal evidence. Newspaper or other photographs can provide valuable
information. Verbal evidence is often untrustworthy, and structures may have been moved. A
further problem is that the channel morphology, and hence the stage-discharge relation of
the stream, may have changed from those applying during the period of gauged record.
It is desirable to carry out Flood Frequency Analyses both by including and excluding the
historical data. The analysis including the historical data should be used unless in the
comparison of the two analyses, the magnitudes of the observed peaks, uncertainty
regarding the accuracy of the historical peaks, or other factors, suggest that the historical
peaks are not indicative of the extended period or are not accurate. All decisions made
should be thoroughly documented.
Considerable work has been carried out in the United States on the assessment of
paleofloods. These are major floods that have occurred outside the historical record, but
which are evidenced by geological, geomorphological or botanical information. Techniques
of paleohydrology have been described by Costa (1978), Costa (1983), Costa (1986) and
Kochel et al. (1982) and more recently by O'Connell et al. (2002), and a succinct summary is
given by Stedinger and Cohn (1986). Although high accuracy is not possible with these
estimates, they may only be marginally less accurate than other estimates requiring
extrapolation of rating curves, and they have the potential for greatly extending the database
and providing valuable information on the tail of the underlying flood distribution. A
procedure for assessing the value of paleoflood estimates of Flood Frequency Analysis is
given by Hosking and Wallis (1986). Only a little work on this topic has been carried out in
Australia, but its potential has been indicated by its use to identify the five largest floods in
the last 700 years in the Finke River Gorge in central Australia (Baker et al., 1983; Baker,
1984), and for more frequent floods, by identification of the six largest floods that occurred
since a major flood in 1897 on the Katherine River in the Northern Territory (Baker, 1984).
While the use of paleoflood data should be considered, it needs to be recognised that there
are not many sites where paleofloods can be estimated and that climate changes may have
affected the homogeneity of long-term flood data.
A number of known climate phenomena impact on Australian climate variability. Most well
known is the inter-annual El Nino/Southern Oscillation (ENSO). The cold ENSO phase, La
Nina, results in a marked increase in flood risk across Eastern Australia, whereas El Nino
years are typically without large floods (Kiem et al., 2003).
There is also mounting evidence that longer-term climate processes also have a major
impact on flood risk. The Interdecadal Pacific Oscillation (IPO) is a low frequency climate
process related to the variable epochs of warming and cooling in the Pacific Ocean and is
described by an index derived from low pass filtering of Sea Surface Temperature (SST)
anomalies in the Pacific Ocean (Power et al., 1998; Power et al., 1999; Allan, 2000). The
IPO is similar to the Pacific Decadal Oscillation (PDO) of Mantua et al. (1997), which is
defined as the leading principal component of North Pacific monthly sea surface temperature
variability.
The IPO time series from 1870 is displayed in Figure 3.2.7. It reveals extended periods
where the index either lies below or above zero. Power et al. (1999) have shown that the
association between ENSO and Australian climate is modulated by the IPO: a strong
association was found with the magnitude of ENSO impacts during negative IPO phases,
whilst positive IPO phases showed a weaker, less predictable relationship.
Additionally, Kiem et al. (2003) and Kiem and Franks (2004) analysed New South Wales
flood and drought data and demonstrated that the IPO negative state magnified the impact
of La Nina events. Moreover, they demonstrated that the IPO negative phase, related to mid-
latitude Pacific Ocean cooling, appears to result in an increased frequency of cold La Nina
events. The net effect of the dual modulation of ENSO by IPO is the occurrence of multi-
decadal periods of elevated and reduced flood risk. To place this in context, Figure 3.2.8
shows regional flood index curves based on about 40 NSW sites for the different IPO states
(Kiem et al., 2003) – the 1% AEP flood during years with a positive IPO index corresponds to
the 1 in 6 AEP flood during years with a negative IPO index. Micevski et al. (2003)
investigating a range of sites in NSW found that floods occurring during IPO negative
periods were, on average, about 1.8 times bigger than floods with the same frequency
during IPO positive periods.
A key area of current research is the spatial variability of ENSO and IPO impacts. The
associations between ENSO, IPO and eastern Australian climate have been investigated
from a mechanistic approach. Folland et al. (2002) showed that ENSO and IPO both affect
the location of the South Pacific Convergence Zone (SPCZ) providing a mechanistic
justification for the role of La Nina and IPO negative periods in enhancing flood risk in
eastern Australia.
Whilst the work to date has primarily focused on eastern Australia, a substantial step change
in climate also occurred in Western Australia around the mid-1970s, in line with the IPO and
PDO indices (Franks, 2002b). However, the role of ENSO is less clear and is likely to be
additionally complicated by the role of the Indian Ocean.
The finding that flood risk in parts of Australia is modulated by low frequency climate
variability is recent. Practitioners are reminded that this is an area of active research and
therefore should keep abreast of future developments.
Figure 3.2.8. NSW Regional Flood Index Frequency Curves for Positive and Negative
Interdecadal Pacific Oscillation epochs (Kiem et al., 2003)
For AM series the missing record period is of no consequence and can be included in the
period of record, if it can be determined that the largest discharge for the year occurred
outside the gap, or that no large rainfall occurred during the gap. However the rainfall
records and streamflow on nearby catchments might indicate that a large flood could have
occurred during the period of missing record. If a regression with good correlation can be
derived from concurrent records, the missing flood can be estimated and used as the annual
flood for the year. If the flood cannot be estimated with reasonable certainty, the whole year
should be excluded from the analysis.
For POT series data, treatment of missing records is less clear. McDermott and Pilgrim
(1982) tested seven methods, leading to the following recommendations based on the
assumption that the periods of missing data are random occurrences and are independent of
the occurrence of flood peaks.
1. Where a nearby station record exists covering the missing record period, and a good
relation between the flood peaks on the two catchments can be obtained, then use this
relation and the nearby station record to fill in the missing events of interest.
2. Where a nearby station record exists covering the missing record period, and the relation
between the flood peaks on the two catchments is such that only the occurrence of an
event can be predicted but not its magnitude, then:
• For record lengths less than 20 years, ignore the missing data and include the missing
period in the overall period of record;
• For record lengths greater than 20 years, subtract an amount from each year with
missing data proportional to the ratio of the number of peaks missed to the total number
of ranked peaks in the year.
3. Where no nearby station record exists covering the missing record period, or where no
relation between flood peaks on the catchment exists, then ignore the missing data and
include the missing record period in the overall period of record.
In certain parts of Australia, the flood record is not homogeneous due to variations in long-
term climate controls.
$$q_Y = \begin{cases} \tau + \dfrac{\alpha}{\kappa}\left[1 - \left(-\log_e\!\left(1 - \dfrac{1}{Y}\right)\right)^{\kappa}\right], & \kappa \ne 0 \\[2ex] \tau - \alpha \log_e\!\left(-\log_e\!\left(1 - \dfrac{1}{Y}\right)\right), & \kappa = 0 \end{cases} \qquad (3.2.15)$$
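As an illustration, Equation (3.2.15) can be evaluated directly. The Python sketch below (the parameter values are invented for the example and are not recommendations) returns the 1 in Y AEP quantile of the GEV, reverting to the Gumbel form when $\kappa = 0$.

```python
import math

def gev_quantile(Y, tau, alpha, kappa):
    """1 in Y AEP quantile of the GEV distribution (Equation (3.2.15)).

    tau, alpha, kappa are the location, scale and shape parameters;
    kappa = 0 gives the Gumbel special case.
    """
    # Gumbel reduced variate corresponding to an AEP of 1/Y
    y = -math.log(1.0 - 1.0 / Y)
    if kappa == 0.0:
        return tau - alpha * math.log(y)
    return tau + (alpha / kappa) * (1.0 - y ** kappa)

# Example with illustrative parameters (m3/s)
print(gev_quantile(100, tau=150.0, alpha=80.0, kappa=-0.10))
```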
Table 3.2.1. Selected Homogeneous Probability Model Families for use in Flood Frequency
Analysis

Generalized Extreme Value (GEV)
pdf: $p(q \mid \theta) = \dfrac{1}{\alpha}\left[1 - \dfrac{\kappa(q - \tau)}{\alpha}\right]^{\frac{1}{\kappa} - 1} \exp\left\{-\left[1 - \dfrac{\kappa(q - \tau)}{\alpha}\right]^{\frac{1}{\kappa}}\right\}$
cdf: $P(q \le w \mid \theta) = \exp\left\{-\left[1 - \dfrac{\kappa(w - \tau)}{\alpha}\right]^{\frac{1}{\kappa}}\right\}$
Range: when $\kappa > 0$, $q < \tau + \dfrac{\alpha}{\kappa}$; when $\kappa < 0$, $q > \tau + \dfrac{\alpha}{\kappa}$
Moments: Mean$(q) = \tau + \dfrac{\alpha}{\kappa}\left[1 - \Gamma(1 + \kappa)\right]$ for $\kappa > -1$; Variance$(q) = \dfrac{\alpha^2}{\kappa^2}\left[\Gamma(1 + 2\kappa) - \Gamma^2(1 + \kappa)\right]$ for $\kappa > -\dfrac{1}{2}$, where $\Gamma$ is the gamma function

Gumbel
pdf: $p(q \mid \theta) = \dfrac{1}{\alpha}\exp\left[-\dfrac{q - \tau}{\alpha} - e^{-\frac{q - \tau}{\alpha}}\right]$
cdf: $P(q \le w \mid \theta) = \exp\left[-e^{-\frac{w - \tau}{\alpha}}\right]$
Moments: Mean$(q) = \tau + 0.5772\alpha$; Variance$(q) = \dfrac{\pi^2 \alpha^2}{6}$; Skew$(q) = 1.1396$

Log Pearson III (LP III)
pdf: $p(q \mid \theta) = \dfrac{|\beta|\left[\beta(\log_e q - \tau)\right]^{\alpha - 1} e^{-\beta(\log_e q - \tau)}}{q\,\Gamma(\alpha)}$, $\alpha > 0$
Moments: Mean$(\log_e q) = m = \tau + \dfrac{\alpha}{\beta}$; Variance$(\log_e q) = s^2 = \dfrac{\alpha}{\beta^2}$; Skew$(\log_e q) = g = \dfrac{2}{\sqrt{\alpha}}\,\mathrm{sign}(\beta)$

Exponential
pdf: $p(q \mid \theta) = \dfrac{1}{\beta}\, e^{-\frac{q - q_*}{\beta}}$, $q \ge q_*$
cdf: $P(q \le w \mid \theta) = 1 - e^{-\frac{w - q_*}{\beta}}$
Moments: Mean$(q) = q_* + \beta$; Variance$(q) = \beta^2$; Skew$(q) = 2$
Of the widely used distribution families, the GEV distribution has the strongest theoretical
appeal as it is the asymptotic distribution of extreme values for a wide range of underlying
parent distributions. In the context of flood frequency, suppose there are N flood peaks in a
year. Provided N is large and the flood peaks are identically and independently distributed,
the distribution of the largest peak discharge in the year approaches the GEV under quite
general conditions. However, it is questionable whether these assumptions are satisfied in
practice.
The number of independent flood peaks in any year may not be sufficient to ensure
asymptotic behaviour particularly for catchments that tend to aridity. Moreover, in strongly
seasonal climates, it is unlikely that the within-year independent flood peaks are random
realisations from the same probability distribution.
The GEV has gained widespread acceptance (for example, Natural Environment Research
Council (1975), Wallis and Wood (1985), and Stedinger et al. (1993)).
The LP III distribution is widely accepted in practice as it consistently fits flood data as well as, if
not better than, other probability families. It has performed best of those that have been
tested on data for Australian catchments (Conway, 1970; Kopittke et al., 1976; McMahon,
1979; McMahon and Srikanthan, 1981). It is the recommended distribution for the United
States in Bulletin 17B of the Interagency Advisory Committee on Water Data (1982).
A problem arises when the absolute value of the skew of log q exceeds 2; that is, when
$\alpha \le 1$. When $\alpha > 1$, the LP III has a gamma-shaped density. However, when $\alpha \le 1$, the
density changes to a J-shaped function. Indeed, when $\alpha = 1$, the distribution of $\log_e q$
degenerates to an exponential distribution with scale parameter $1/\beta$ and location parameter $\tau$.
For $\alpha \le 1$, the J-shaped density seems to be over-parameterised with three parameters. In such
circumstances, it is pointless to use the LP III. It is suggested that either the GEV or the
Generalized Pareto (GP) Distributions should be used as a substitute.
An analytical form of the distribution function is not available for the LP III and log-Normal
distributions. To compute the quantile qy (that is, the discharge with a 1 in Y AEP) the
following equation may be used:
$$\log_e q_Y = m + K_Y(g)\, s \qquad (3.2.16)$$
where m, s and g are the mean, standard deviation and skewness of the log discharge and
KY is a frequency factor well-approximated by the Wilson-Hilferty transformation:
$$K_Y(g) = \begin{cases} \dfrac{2}{g}\left[\left(\dfrac{g}{6}\left(Z_Y - \dfrac{g}{6}\right) + 1\right)^{3} - 1\right] & \text{if } g \ne 0 \\[2ex] Z_Y & \text{if } g = 0 \end{cases} \qquad (3.2.17)$$
for |g| < 2 and AEPs ranging from 99% to 1% AEP. The term ZY is the frequency factor for
the standard normal distribution which has a mean of zero and standard deviation of 1; ZY is
the value of the standard normal deviate with exceedance probability 1/Y. Table 3.2.2 lists ZY
for selected exceedance probabilities. Comprehensive tables of KY can be found in Pilgrim
(1987).
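The following sketch evaluates Equations (3.2.16) and (3.2.17) for the LP III. It assumes natural logarithms and uses a library routine for the standard normal deviate $Z_Y$ in place of Table 3.2.2; the log-space moment values are invented for the example.

```python
import math
from statistics import NormalDist

def lp3_quantile(Y, m, s, g):
    """Approximate 1 in Y AEP quantile for the LP III (Equations (3.2.16) and (3.2.17)).

    m, s, g are the mean, standard deviation and skewness of the natural log
    discharges; valid for |g| < 2 and AEPs between 99% and 1%.
    """
    # Standard normal deviate with exceedance probability 1/Y
    z = NormalDist().inv_cdf(1.0 - 1.0 / Y)
    if g == 0.0:
        k = z
    else:
        # Wilson-Hilferty approximation to the frequency factor K_Y(g)
        k = (2.0 / g) * (((g / 6.0) * (z - g / 6.0) + 1.0) ** 3 - 1.0)
    return math.exp(m + k * s)

# Illustrative log-space moments (natural logs)
print(lp3_quantile(100, m=5.0, s=0.8, g=-0.3))
```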
$$q_Y = \begin{cases} q_* + \dfrac{\alpha}{\kappa}\left[1 - \left(\dfrac{1}{Y}\right)^{\kappa}\right], & \kappa \ne 0 \\[2ex] q_* - \alpha \log_e\!\left(\dfrac{1}{Y}\right), & \kappa = 0 \end{cases} \qquad (3.2.18)$$
The GP distribution has an intimate relationship with the GEV. If the GP describes the
distribution of peaks over a threshold, then for Poisson arrivals of the POT peaks with $\nu$
being the average number of arrivals per year, it can be shown that the distribution of Annual
Maximum peaks is GEV with shape parameter $\kappa$, scale parameter
$$\alpha_{AM} = \alpha\, \nu^{-\kappa} \qquad (3.2.19)$$
and location parameter
$$\tau = q_* + \frac{\alpha}{\kappa}\left(1 - \nu^{-\kappa}\right) \qquad (3.2.20)$$
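A minimal sketch of this conversion is given below. It assumes the GP parameterisation of Table 3.2.1 and the relationships above, with the location relationship written in the algebraically equivalent form $\tau = q_* + \frac{\alpha}{\kappa}(1 - \nu^{-\kappa})$; the numerical values are illustrative only.

```python
def gp_pot_to_gev_am(q_star, alpha, kappa, nu):
    """Convert Generalized Pareto POT parameters to the GEV parameters of the
    Annual Maximum series for Poisson arrivals (Equations (3.2.19) and (3.2.20)).

    q_star, alpha, kappa: GP threshold (location), scale and shape;
    nu: average number of POT exceedances per year. Assumes kappa != 0.
    """
    alpha_am = alpha * nu ** (-kappa)                          # Equation (3.2.19)
    tau = q_star + (alpha / kappa) * (1.0 - nu ** (-kappa))    # Equation (3.2.20)
    return tau, alpha_am, kappa                                # GEV location, scale, shape

# Illustrative POT parameters: threshold 50 m3/s, 2 exceedances per year on average
print(gp_pot_to_gev_am(q_star=50.0, alpha=30.0, kappa=-0.1, nu=2.0))
```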
1. There is a (fixed) probability P0 that the peak flow equals the zero-threshold flow q0, which
may be zero or a near-zero flow; and
2. There is a probability 1- P0 that the peak flow exceeds the threshold. In that case the
distribution of the peak flow follows a standard probability model.
A formal definition of this model can be found in Formal Definition of Zero-Threshold Mixture
Distribution.
$$P(q \le w \mid \theta) = \begin{cases} P_0 & \text{if } w = q_0 \\[2ex] P_0 + (1 - P_0)\,\dfrac{P(q \le w \mid \theta) - P(q \le q_0 \mid \theta)}{P(q > q_0 \mid \theta)} & \text{if } w > q_0 \end{cases} \qquad (3.2.21)$$
The pdf of the mixture model can be expressed using the generalized probability
density which allows the random variable to take discrete values as well as continuous
values:
$$p(q \mid \theta) = \begin{cases} P_0 & \text{if } q = q_0 \\[2ex] \dfrac{(1 - P_0)\, p(q \mid \theta)}{P(q > q_0 \mid \theta)} & \text{if } q > q_0 \end{cases} \qquad (3.2.22)$$
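A minimal sketch of the mixture distribution function in Equation (3.2.21) is shown below, using an exponential parent model from Table 3.2.1 above the zero-threshold purely for illustration; the parameter values are invented.

```python
import math

def exp_cdf(q, q_star, beta):
    """Exponential non-exceedance probability from Table 3.2.1."""
    return 1.0 - math.exp(-(q - q_star) / beta) if q >= q_star else 0.0

def mixture_cdf(w, p0, q0, q_star, beta):
    """Zero-threshold mixture distribution (Equation (3.2.21)): probability p0 that the
    peak equals the zero-threshold flow q0, otherwise an exponential parent model
    conditioned on exceeding q0."""
    if w <= q0:
        return p0
    above = 1.0 - exp_cdf(q0, q_star, beta)
    return p0 + (1.0 - p0) * (exp_cdf(w, q_star, beta) - exp_cdf(q0, q_star, beta)) / above

# Illustrative values: 30% of years produce effectively no flood peak (q0 = 0)
print(mixture_cdf(25.0, p0=0.3, q0=0.0, q_star=0.0, beta=10.0))
```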
If the gauged record adequately samples the distribution of x, it is not necessary to identify a
non-homogeneous model. It suffices to fit a probability model to all the record to estimate the
long-term flood risk.
However, if the gauged record does not adequately sample x, significant bias in flood risk
may result if only at-site data are used. In such instances, it will be necessary to employ
regional frequency methods that take the non-homogeneity into account. This is an area of
current research. Practitioners are advised to keep abreast of new developments.
Book 3, Chapter 2, Section 8 illustrates the impact on flood risk arising from multi-decadal
persistence in climate state as represented by the IPO index. It illustrates how the
exogenous variable x can be constructed, demonstrates the serious bias in flood risk that
can arise if short records do not adequately sample different climate states and illustrates
the use of Equation (3.2.23).
If the true value of $\theta$ were known, then the pdf $p(q \mid \theta)$ can be used to compute the flood
quantile $q_Y(\theta)$. For AM series the 1 in Y AEP quantile is defined as:
$$P\!\left(q > q_Y(\theta) \mid \theta\right) = \frac{1}{Y} = \int_{q_Y(\theta)}^{\infty} p(q \mid \theta)\, dq \qquad (3.2.24)$$
However, in practice the true value of $\theta$ (as well as the distribution family) is unknown. All
that is known about $\theta$, given the data D, is summarised by a probability distribution with pdf
$p(\theta \mid D)$. Book 3, Chapter 2, Section 6 describes how this distribution may be obtained – this
distribution is the posterior distribution if performing a Bayesian analysis or the sampling
distribution if performing a bootstrap analysis.
If the true value of $\theta$ is not known, it follows that the true value of the quantile $q_Y(\theta)$ is not
known. The uncertainty about $\theta$, described by the pdf $p(\theta \mid D)$, translates into uncertainty
about the quantile, described by the quantile predictive pdf $p(q_Y \mid D)$. The question then
arises: which value from the quantile predictive pdf $p(q_Y \mid D)$ should be adopted as the flood
quantile estimate? This section presents two approaches for determining the best estimate
of a flood discharge with a 1 in Y AEP knowing the pdf $p(\theta \mid D)$.
The loss function $L\!\left(q_Y(\theta), \hat{q}_Y(D)\right)$ describes the loss or consequence when the true quantile
$q_Y(\theta)$, which depends on $\theta$, is incorrectly estimated by $\hat{q}_Y(D)$, which depends on the data D.
Because the true value of $\theta$ is uncertain, the best estimator $\hat{q}_Y(D)_{opt}$ is the one that
minimises the expected loss:
$$\hat{q}_Y(D)_{opt} = \min_{\hat{q}_Y(D)} \int_{\theta} L\!\left(q_Y(\theta), \hat{q}_Y(D)\right) p(\theta \mid D)\, d\theta \qquad (3.2.25)$$
When the loss is quadratic,
$$L\!\left(q_Y(\theta), \hat{q}_Y(D)\right) = \left[q_Y(\theta) - \hat{q}_Y(D)\right]^2 \qquad (3.2.26)$$
its expected value is referred to as the Mean Squared Error (MSE), and it can be shown that the
optimal quantile estimator is the expected value of $q_Y(\theta)$ (DeGroot, 1970):
$$E\!\left[q_Y \mid D\right] = \int_{\theta} q_Y(\theta)\, p(\theta \mid D)\, d\theta \qquad (3.2.27)$$
Stedinger (1983) observes that this integral may not exist for some of the probability
distributions used in Flood Frequency Analysis. To avoid this problem the following first-order
approximation may be used:
$$E\!\left[q_Y \mid D\right] \approx q_Y\!\left(E[\theta \mid D]\right) \qquad (3.2.28)$$
$$E[\theta \mid D] = \int_{\theta} \theta\, p(\theta \mid D)\, d\theta \qquad (3.2.29)$$
The term $q_Y\!\left(E[\theta \mid D]\right)$ is referred to as the expected parameter 1 in Y AEP quantile. When
this term is used it is understood that the quantile is computed with $\theta$ set equal to $E[\theta \mid D]$.
where a and b are the loss coefficients for under- and over-design respectively. It can be
shown that the quantile estimator that minimises the expected asymmetric linear loss must
satisfy the following (DeGroot, 1970):
$$P\!\left(q_Y(\theta) \le \hat{q}_Y(D)_{opt} \mid D\right) = \frac{a}{a + b} \qquad (3.2.31)$$
When a equals b, $\hat{q}_Y(D)_{opt}$ is the median of the quantile predictive distribution. However,
when a equals 4b, implying that the consequences of under-design are four times more
severe than those of over-design, the optimal estimator is the 80th percentile of the predictive
distribution, a far more conservative estimate than the median.
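Given a sample of quantile values drawn from the quantile predictive distribution (for example, $q_Y(\theta)$ evaluated at equally weighted posterior samples of $\theta$), the two estimators discussed above can be approximated as in the following sketch; the sample values and loss coefficients are invented for the example.

```python
import statistics

def quantile_estimators(qy_samples, a=1.0, b=1.0):
    """Estimators of the 1 in Y AEP quantile from equally weighted samples of the
    quantile predictive distribution.

    Quadratic loss -> sample mean (Equation (3.2.27));
    asymmetric linear loss with under-/over-design coefficients a and b ->
    the a/(a+b) percentile of the predictive distribution (Equation (3.2.31)).
    """
    mean_estimate = statistics.fmean(qy_samples)
    ordered = sorted(qy_samples)
    rank = min(int(round(a / (a + b) * (len(ordered) - 1))), len(ordered) - 1)
    percentile_estimate = ordered[rank]
    return mean_estimate, percentile_estimate

# Illustrative predictive sample; a = 4b makes under-design four times as costly
sample = [320, 355, 290, 410, 385, 500, 305, 450, 370, 430]
print(quantile_estimators(sample, a=4.0, b=1.0))
```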
Stedinger (1983) showed that the dependence of the flood peak pdf on uncertain
parameters can be removed using total probability to yield the design flood distribution:
$$p(q \mid D) = \int_{\theta} p(q \mid \theta)\, p(\theta \mid D)\, d\theta \qquad (3.2.32)$$
This distribution only depends on the data D (and the assumed probability family).
$$P(q > q_Y \mid D) = \int_{\theta} \int_{q_Y}^{\infty} p(q \mid \theta)\, dq\; p(\theta \mid D)\, d\theta = \frac{1}{Y} \qquad (3.2.33)$$
This quantile is greater than the expected parameter X% AEP quantile $q_Y\!\left(E[\theta \mid D]\right)$. To gain
further insight, suppose we make qY equal to the expected parameter X% AEP quantile and
compute its design flood AEP using Equation (3.2.33). Noting the inner integral is the
probability of the flood peak exceeding $q_Y$ given a particular value of $\theta$, Equation (3.2.33)
can be rewritten as:
$$P(q > q_Y \mid D) = \int_{\theta} \int_{q_Y}^{\infty} p(q \mid \theta)\, dq\; p(\theta \mid D)\, d\theta = \int_{\theta} P(q > q_Y \mid \theta)\, p(\theta \mid D)\, d\theta \ge \frac{1}{Y} \qquad (3.2.34)$$
It thus follows that the expected parameter 1 in Y AEP quantile $q_Y\!\left(E[\theta \mid D]\right)$ has an
expected AEP, given by $P(q > q_Y \mid D)$, which exceeds $\frac{1}{Y}$.
Use of the expected parameter quantile may be preferred when:
i. Sizing of a structure for a given AEP is the primary consideration and minimizing
quadratic loss, as expressed by Equation (3.2.26), is a reasonable criterion.
ii. Unbiased estimates of annual flood damages are required (Doran and Irish, 1980).
Use of the design flood quantile defined by Equation (3.2.33) may be preferred when:
i. Design is required for many sites in a region, and the objective is to attain a desired
AEP over all sites.
ii. Probability of exceedance is of primary importance for design at a single site (such as
in floodplain management).
2. If there are significant differences between the consequences of under and over-design,
consideration should be given to using Equation (3.2.31) as the flood quantile estimator.
1. Calibrating the model to the available data D to determine the parameter values
consistent with the data D.
Two calibration approaches are described involving Bayesian and L-moment techniques. For
each approach, the algorithms are documented and illustrated with worked examples.
Implementation of the algorithms in software requires specialist skill; therefore, a typical
practitioner is advised to make use of the available software. The use of the method of
product-moments applied to log flows is not recommended. The choice of calibration
methods depends on the type of data available (gauged and censored), the extent of
measurement error associated with the rating curve and the availability of regional
information about parameters.
The following steps describe the production of a probability plot for gauged Annual Maximum
floods:
• Rank the gauged discharges in descending order (that is, from largest to smallest) yielding
the series {q(1), q(2),…,q(n)} where q(i) is the rank i or the ith largest flood;
• Estimate the AEP for each q(i) using a suitable plotting position; and
For analysis of the AM series, a general formula (Blom, 1958) for estimating the AEP of an
observed flood is:
$$P_{(i)} = \frac{i - \alpha}{n + 1 - 2\alpha} \qquad (3.2.35)$$
where i is the rank of the gauged flood, n is the number of years of gauged floods and $\alpha$ is a
constant whose value is selected to preserve desirable statistical properties:
• $\alpha = 0$ yields the Weibull plotting position that produces unbiased estimates of the AEP of
q(i);
• $\alpha = 0.375$ yields the Blom plotting position that produces unbiased quantile estimates for
the normal distribution; and
• $\alpha = 0.4$ yields the Cunnane (1978) plotting position that produces nearly unbiased quantile
estimates for a range of probability families.
While there are arguments in favour of plotting positions that yield unbiased AEPs, usage
has favoured plotting positions that yield unbiased quantiles. To maintain consistency, it is
recommended that the Cunnane plotting position is used, namely:
$$P_{(i)} = \frac{i - 0.4}{n + 0.2} \qquad (3.2.36)$$
A more complete discussion on plotting positions can be found in Stedinger et al. (1993).
It is stressed that plotting positions should not be used as an estimate of the actual AEP or
EY of an observed flood discharge. Such estimates should be obtained from the fitted
distribution.
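The following sketch ranks a gauged AM series and assigns Cunnane plotting positions using Equation (3.2.36); the data values are invented and the positions are intended for plotting only.

```python
def cunnane_plotting_positions(annual_maxima):
    """Rank gauged AM floods in descending order and assign the Cunnane
    plotting position (Equation (3.2.36)) as an AEP estimate for plotting only."""
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(q, (i - 0.4) / (n + 0.2)) for i, q in enumerate(ranked, start=1)]

# Illustrative AM series (m3/s)
series = [120, 340, 95, 410, 230, 180, 510, 75, 260, 150]
for q, aep in cunnane_plotting_positions(series):
    print(f"{q:>6.1f}  AEP ~ {aep:.3f}")
```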
Judicious choice of scale for the probability plot can assist the evaluation of goodness of fit.
The basic idea is to select a scale so that the data plot as a straight line if the data is
consistent with the assumed probability model.
This is best illustrated by an example. Suppose that floods follow an exponential distribution,
then from Table 3.2.1, the distribution function is:
$$P(q \le w) = 1 - e^{-\frac{w - q_*}{\beta}} \qquad (3.2.37)$$
Substituting the plotting position estimate of the AEP of the ranked flood $q_{(i)}$ gives:
$$1 - P_{(i)} = 1 - e^{-\frac{q_{(i)} - q_*}{\beta}}, \quad \text{that is,} \quad q_{(i)} = q_* - \beta \log_e P_{(i)} \qquad (3.2.38)$$
If q(i) is plotted against $\log_e P_{(i)}$, the data will plot approximately as a straight line if they are
consistent with the exponential distribution.
• For the log normal distribution plot log q(i) against the standard normal deviate with
exceedance probability P(i). Data following a LP III distribution will plot as a curved line.
When visually evaluating the goodness of fit care needs to be exercised in judging the
significance of departures from the assumed distribution. Plotting positions are correlated.
As a result, they do not scatter about the fitted distribution independently of each other. The
correlation can induce “waves” or regions of systematic departure from the fitted distribution.
To guard against this it is suggested that statistical tests be used to assist goodness-of-fit
assessment. Stedinger et al. (1993) discuss the use of the Kolmogorov-Smirnov test, the
Filliben probability plot correlation test and L-moment diagrams and ratio tests.
The estimation of plotting positions for censored and historic data is more involved and in
some cases can be inaccurate; refer to Stedinger et al. (1993) for more details.
The core of the Bayesian approach is described below – refer to Lee (1989) and Gelman et
al. (1995) for general expositions. The data D is hypothesised to be a random realisation
from a probability model with pdf $p(D \mid \theta)$ where $\theta$ is a vector of unknown parameters. The pdf
$p(D \mid \theta)$ is given two labels depending on the context. When $p(D \mid \theta)$ is used to describe the
probability model generating the sample data D for a given $\theta$, it is called the sampling
distribution. However, when inference about the parameter $\theta$ is sought, $p(D \mid \theta)$ is called the
likelihood function to emphasise that the data D is known and the parameter $\theta$ is the object
of attention. The same notation for the sampling distribution and likelihood function is used to
emphasise its oneness.
Bayes theorem is then used to process the information contained in the data D by updating
what is known about the true value of $\theta$ as follows:
$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)} \propto p(D \mid \theta)\, p(\theta) \qquad (3.2.40)$$
The posterior density $p(\theta \mid D)$ describes what is known about the true value of $\theta$ given the
data D, prior information and the probability model. The denominator p(D) is the marginal
likelihood defined as:
$$p(D) = \int p(D \mid \theta)\, p(\theta)\, d\theta \qquad (3.2.41)$$
Usually the marginal likelihood is not computed as it does not depend on $\theta$ and serves as a
normalising constant.
2. m censored records in which ai annual flood peaks in (ai + bi) ungauged years exceeded
a threshold with true discharge si, i=1,..,m.
This data is denoted by D = {qi, i=1,..,n; (ai, bi, si), i=1,..,m}. It is shown in Likelihood function:
No-error-discharge case that the likelihood function is:
$$p(D \mid \theta) \propto \prod_{i=1}^{n} p(q_i \mid \theta)\; \prod_{i=1}^{m}\left[1 - P(q \le s_i \mid \theta)\right]^{a_i}\left[P(q \le s_i \mid \theta)\right]^{b_i} \qquad (3.2.42)$$
The likelihood function is, by definition, the joint pdf of the observed data given the
parameter vector $\theta$.
The likelihood function for the gauged data is the joint pdf of the n gauged floods.
Given the AM flood peaks are statistically independent, the likelihood can be simplified
to (Stedinger and Cohn, 1986):
$$p(q_1, \ldots, q_n \mid \theta) = \prod_{i=1}^{n} p(q_i \mid \theta) \qquad (3.2.43)$$
The likelihood of the binomial censored data relies on the fact that the probability of
observing exactly x exceedances in n years is given by the binomial distribution:
$$P(x \mid n, p) = \binom{n}{x}\, p^{x} (1 - p)^{n - x} \qquad (3.2.44)$$
Provided each censoring threshold does not overlap over time with any other censoring
threshold, the likelihood of the censored data becomes:
$$p(\text{censored data} \mid \theta) = \prod_{i=1}^{m}\left[1 - P(q \le s_i \mid \theta)\right]^{a_i}\left[P(q \le s_i \mid \theta)\right]^{b_i} \propto \prod_{i=1}^{m} P\!\left(a_i \mid a_i + b_i, \theta\right) \qquad (3.2.45)$$
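To make the structure of Equations (3.2.42) to (3.2.45) concrete, the sketch below assembles a log-likelihood from gauged annual maxima plus one block of binomial-censored data, using a Gumbel parent distribution from Table 3.2.1 purely for illustration; the data and parameter values are invented.

```python
import math

def gumbel_logpdf(q, tau, alpha):
    """log pdf of the Gumbel distribution (Table 3.2.1)."""
    z = (q - tau) / alpha
    return -math.log(alpha) - z - math.exp(-z)

def gumbel_cdf(q, tau, alpha):
    """Non-exceedance probability of the Gumbel distribution (Table 3.2.1)."""
    return math.exp(-math.exp(-(q - tau) / alpha))

def log_likelihood(theta, gauged, censored):
    """Log of Equation (3.2.42): gauged AM peaks plus censored blocks
    (a exceedances of threshold s in a + b ungauged years)."""
    tau, alpha = theta
    ll = sum(gumbel_logpdf(q, tau, alpha) for q in gauged)      # Equation (3.2.43)
    for a, b, s in censored:                                    # Equation (3.2.45)
        p_below = gumbel_cdf(s, tau, alpha)
        ll += a * math.log(1.0 - p_below) + b * math.log(p_below)
    return ll

# Illustrative data: 8 gauged peaks and one historical period of 60 years in which
# the threshold of 600 m3/s is known to have been exceeded twice.
gauged = [210, 330, 150, 480, 260, 390, 175, 540]
censored = [(2, 58, 600.0)]
print(log_likelihood((250.0, 120.0), gauged, censored))
```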
Figure 3.2.9 presents a rating error space diagram. In zone 1 (Figure 3.2.9), the
interpolation zone, it is assumed the rating error multiplier e1 equals 1 – that is, errors
within the rated part of the rating curve are deemed negligible. As a result the
estimated discharge w equals the true discharge q. However, in zone 2, the extension
zone, the rating error multiplier e2 is assumed to be a random variable with mean of 1.
The anchor point (q1, w1) separates the interpolation and extension zones. The rating
error model can be represented mathematically as:
$$w = \begin{cases} q & \text{if } q \le q_1 \\[1ex] q_1 + e_2\,(q - q_1) & \text{if } q > q_1 \end{cases} \qquad (3.2.46)$$
The rating error multiplier e2 is sampled only once at the time of extending the rating
curve. Therefore, all flood discharge estimates exceeding the anchor value of q1 (which
equals w1) are corrupted by the same rating error multiplier. It must be stressed that the
error e2 is not known – at best, only its probability distribution can be estimated. For
practical applications one can assume e2 is distributed as either a log-Normal or normal
distribution with mean 1 and standard deviation $\sigma_2$.
Figure 3.2.9. Rating error multiplier space diagram for rating curve
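The sketch below illustrates Equation (3.2.46) by corrupting a set of true discharges with a single log-normally distributed rating error multiplier in the extension zone; the anchor point, error standard deviation and flow values are illustrative choices only.

```python
import math
import random

def apply_rating_error(true_flows, anchor_q1, sigma2, seed=1):
    """Corrupt true discharges with the two-zone rating error model of
    Equation (3.2.46): no error below the anchor point, and a single
    multiplier e2 (mean close to 1) applied to the increment above it."""
    rng = random.Random(seed)
    # Log-normal multiplier with mean 1 and coefficient of variation ~sigma2
    sigma_log = math.sqrt(math.log(1.0 + sigma2 ** 2))
    e2 = math.exp(rng.gauss(-0.5 * sigma_log ** 2, sigma_log))
    return [q if q <= anchor_q1 else anchor_q1 + e2 * (q - anchor_q1) for q in true_flows]

# Illustrative true peaks (m3/s), anchor at 300 m3/s, 20% extension error CV
print(apply_rating_error([120, 280, 350, 620, 900], anchor_q1=300.0, sigma2=0.2))
```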
Data are assigned to each of the two zones, i=1,2, in the rating error space diagram.
The rating error multiplier standard deviation for the extension zone $\sigma_2$ is assigned a
value, with $\sigma_1 = 0$. There are ni annual flood peak estimates wji satisfying the zone
constraint wi-1 ≤ wji < wi, j=1,..,ni, where w0=0 and w2= ∞. In addition, there are mi
threshold discharge estimates sji for which there are aji exceedances in (aji+bji) years,
j=1,..,mi. Collectively this data is represented as:
$$D = \left\{D_i,\ i = 1, 2\right\} = \left\{w_{ji},\ j = 1, \ldots, n_i;\ \left(a_{ji}, b_{ji}, s_{ji}\right),\ j = 1, \ldots, m_i\right\}, \quad i = 1, 2 \qquad (3.2.47)$$
Following Kuczera (1999), it can be shown that for the two-zone rating error model of
Figure 3.2.9 the likelihood reduces to:
$$p(D \mid \theta, \sigma_2) = \prod_{i=1}^{2} \int_{0}^{\infty} p(D_i \mid e_i, \theta)\, p(e_i \mid \sigma_i)\, de_i \qquad (3.2.48)$$
where
$$p(D_i \mid e_i, \theta) = \prod_{j=1}^{n_i} \frac{1}{e_i}\, p\!\left(w_{i-1} + \frac{w_{ji} - w_{i-1}}{e_i} \,\Big|\, \theta\right)\; \prod_{j=1}^{m_i} P\!\left(a_{ji} \,\Big|\, a_{ji} + b_{ji},\ w_{i-1} + \frac{s_{ji} - w_{i-1}}{e_i},\ \theta\right) \qquad (3.2.49)$$
$p(e_i \mid \sigma_i)$ is the rating error multiplier pdf with mean 1 and standard deviation $\sigma_i$, and
$P(a \mid a + b, s, \theta)$ is the binomial probability of observing exactly a exceedances above the
threshold discharge s in (a+b) years. This is a complex expression which can only be
evaluated numerically. However, it makes the fullest use of information on annual flood
peaks and binomial-censored data in the presence of rating curve error. Book 3,
Chapter 2, Section 3 offers limited guidance on the choice of $\sigma_2$.
Prior information about the parameters can be represented by a multivariate normal distribution:
$$\theta \sim N\!\left(\mu_\theta, \Sigma_\theta\right) \qquad (3.2.50)$$
In the absence of prior information, the covariance matrix can be made non-informative as
illustrated in the following equation for a three-parameter model:
$$\Sigma_\theta = \begin{bmatrix} c & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & c \end{bmatrix}, \quad c \rightarrow \infty \qquad (3.2.51)$$
Informative prior information can be obtained from a regional analysis of flood data.
The use of an informative prior based on regional analysis is strongly recommended in all
Flood Frequency Analyses involving at-site data. Even with long at-site records, the shape
parameter in the LP III and GEV distribution is subject to considerable uncertainty. Regional
priors can substantially reduce the uncertainty in the shape (and even scale) parameter.
The regional procedures in Book 3, Chapter 3 are designed to express the prior information
in the form of Equation (3.2.50) for the Log Pearson III probability model. They should be
used in any Flood Frequency Analysis involving the Log Pearson III distribution unless there
is evidence that the regional prior is not applicable to the catchment of interest.
Importance sampling is a widely used method (Gelman et al., 1995) for sampling
parameters from a target probability model for which there is no algorithm to draw
random samples. The basic idea is to sample from a probability model for which a
sampling algorithm exists – the probability model is called the importance distribution
and the samples are called particles. The particles are then weighted so that they
represent samples from the target distribution. The closer the importance distribution
approximates the target, the more efficient the sampling.
Any robust search method can be used to locate the value of $\theta$ which maximises the
logarithm of the posterior probability density; that is,
$$\hat{\theta} = \arg\max_{\theta}\ \log_e p(\theta \mid D) \qquad (3.2.52)$$
where $\hat{\theta}$ is the most probable value of $\theta$. The shuffled complex evolution algorithm of
Duan et al. (1992) is a recommended search method.
The importance distribution is then approximated by the multivariate normal distribution
$$\theta \mid D \sim N\!\left(\hat{\theta}, \Sigma\right) \qquad (3.2.53)$$
where $\hat{\theta}$ is interpreted as the mean and the posterior covariance $\Sigma$ is defined as the
inverse of the Hessian:
$$\Sigma = -\left[\frac{\partial^2 \log_e p(\theta \mid D)}{\partial \theta^2}\right]^{-1} \qquad (3.2.54)$$
An adaptive difference scheme should be used to evaluate the Hessian. Particular care
needs to be exercised when selecting finite difference perturbations for the GEV and
LP III distributions when upper or lower bounds are close to the observed data.
1. Draw N particles $\theta_i$, i = 1, ..., N, from the importance distribution defined by Equation (3.2.53).
2. Calculate particle probability weights according to $w(\theta_i) = \dfrac{p(\theta_i \mid D)}{p_N(\theta_i)}$, i = 1, ..., N, where
$p_N$ is the pdf of the importance distribution.
1. Draw N samples from the posterior distribution $\{\theta_i, w_i,\ i = 1, \ldots, N\}$ where wi is the
normalised weight assigned to the sample $\theta_i$.
2. For each sample $\theta_i$ compute the quantile $q_Y(\theta_i)$ and rank the quantiles in ascending order.
3. For each ranked quantile evaluate the non-exceedance probability $\sum_{j=1}^{i} W_{(j)}$, where $W_{(j)}$ is
the weight for the jth ranked quantile $q_Y(\theta_{(j)})$.
4. The lower and upper confidence limits are approximated by the quantiles whose non-
exceedance probabilities are nearest to $\frac{\alpha}{2}$ and $1 - \frac{\alpha}{2}$ respectively.
These parameters can then be used to compute the expected parameter 1 in Y AEP
quantiles described in Book 3, Chapter 2, Section 5.
Finally, the expected AEP for a flood of magnitude qY can be estimated as:
$$E\!\left[P(q > q_Y \mid \theta)\right] = \sum_{i=1}^{N} w_i\, P(q > q_Y \mid \theta_i) \qquad (3.2.56)$$
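Assuming a set of particles $\theta_i$ with normalised weights $w_i$ is already available from the scheme above, the confidence-limit procedure and Equation (3.2.56) can be sketched as follows for a Gumbel model; the particle values and weights are invented for the example.

```python
import math

def gumbel_quantile(Y, tau, alpha):
    """1 in Y AEP quantile of the Gumbel distribution (Equation (3.2.15) with kappa = 0)."""
    return tau - alpha * math.log(-math.log(1.0 - 1.0 / Y))

def gumbel_aep(q, tau, alpha):
    """Annual exceedance probability of discharge q under a Gumbel model."""
    return 1.0 - math.exp(-math.exp(-(q - tau) / alpha))

def weighted_quantile_summary(particles, weights, Y, alpha_level=0.10):
    """Approximate confidence limits for the 1 in Y quantile from weighted particles
    (steps 1 to 4 above) and the expected AEP of the median estimate (Equation (3.2.56))."""
    qy = [gumbel_quantile(Y, tau, a) for tau, a in particles]
    order = sorted(range(len(qy)), key=lambda i: qy[i])
    cum, lower, median, upper = 0.0, None, None, None
    for i in order:
        cum += weights[i]                      # non-exceedance probability of ranked quantile
        if lower is None and cum >= alpha_level / 2.0:
            lower = qy[i]
        if median is None and cum >= 0.5:
            median = qy[i]
        if upper is None and cum >= 1.0 - alpha_level / 2.0:
            upper = qy[i]
    if upper is None:                          # guard against rounding in the weights
        upper = qy[order[-1]]
    expected_aep = sum(w * gumbel_aep(median, tau, a)
                       for w, (tau, a) in zip(weights, particles))
    return lower, median, upper, expected_aep

# Illustrative particles (tau, alpha) with equal normalised weights
particles = [(240.0, 110.0), (260.0, 120.0), (250.0, 100.0), (255.0, 130.0), (245.0, 115.0)]
weights = [0.2] * 5
print(weighted_quantile_summary(particles, weights, Y=100))
```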
• Presence of outliers in the upper or lower tail of the distribution. Outliers in Flood
Frequency Analysis represent observations that are inconsistent with the trend of the
remaining data and typically would lie well outside confidence limits; and
Poor fits to the standard probability models may arise for a variety of reasons including the
following:
1. Small AM peaks may not be significant floods and thus may be unrepresentative of
significant flood peaks;
2. By chance, one or more observed floods may be unusually rare for the length of gauged
record. Recourse to the historical flood record may be useful in resolving this issue;
4. A change in hydraulic control with discharge may affect the shape of the frequency curve
as illustrated in Book 3, Chapter 2, Section 8;
If it is decided the poor fit is due to inadequacy of the probability model, three strategies are
available to deal with the problem:
2. The data responsible for the unsatisfactory fit may be given less weight; and
Data in the upper part of the distribution is typically of more interest and therefore a strong
case needs to be made to justify reduction in weight of such data.
Cohn et al. (2013) developed a generalisation of the Grubbs-Beck test that was
recommended in Bulletin 17B (Interagency Advisory Committee on Water Data, 1982) to
identify PILFs. The multiple Grubbs-Beck test checks whether the kth smallest flow is unusually
low and, if it is, uses this discharge to define a threshold for censoring discharges below the
threshold. The test involves two steps:
1. The outward sweep starts at the median discharge and moves towards the smallest
discharge. Each flow is tested at the 0.5% significance level. If the kth smallest flow is
identified as a low outlier, the outward sweep stops; and
2. The inward sweep starts at the smallest discharge and moves towards the median. Each
discharge is tested at the 10% significance level. If the mth flow is identified as a low
outlier, the inward sweep stops.
The total number of low outliers is then the maximum of k and m - 1. The flows identified as
low outliers are treated as censored flows.
The multiple Grubbs-Beck test is recommended for general use but must be conducted in
conjunction with a visual assessment of the fitted frequency curve.
2.6.3.10. Software
The Bayesian approach to calibrating flood probability models is numerically complex and is
best implemented in a high level programming language. The web-based software called TUFLOW FLIKE supports the Bayesian methods described in this chapter. The reader is
advised that this does not preclude use of other software if it is fit for purpose.
Book 3, Chapter 2, Section 8 illustrates the identification of PILFs using the multiple Grubbs-
Beck test and fitting a LP III distribution with PILFs treated as censored discharges. It is
recommended that the multiple Grubbs-Beck test be performed in all Flood Frequency Analyses.
Book 3, Chapter 2, Section 8 illustrates how three–parameter distributions such as the GEV
and LP III can be made to fit data exhibiting sigmoidal behaviour. Because interest is in
fitting the higher discharges the low discharges are de-emphasized by treating them as
censored observations. In this example the multiple Grubbs-Beck test improved the fit but a
more severe manual censoring produced an even better fit to the right-hand tail.
The L-moment approach is simpler than the Bayesian approach but limited in capability. It is
restricted to applications involving gauged discharge data where there is no useful regional
information and rating curve errors do not require special attention.
$\lambda_1 = \mathrm{E}\left[X_{1:1}\right]$ (3.2.57)

$\lambda_2 = \frac{1}{2}\,\mathrm{E}\left[X_{2:2} - X_{1:2}\right]$ (3.2.58)

$\lambda_3 = \frac{1}{3}\,\mathrm{E}\left[X_{3:3} - 2X_{2:3} + X_{1:3}\right]$ (3.2.59)

$\lambda_4 = \frac{1}{4}\,\mathrm{E}\left[X_{4:4} - 3X_{3:4} + 3X_{2:4} - X_{1:4}\right]$ (3.2.60)
where Xj:m is the jth smallest variable in a sample of size m and E stands for expectation.
Wang (1996) justifies L-moments as follows: "When there is only one value in a sample, it
gives a feel of the magnitude of the random variable. When there are two values in a
sample, their difference gives a sense of how varied the random variable is. When there are
three values in a sample, they give some indication on how asymmetric the distribution is.
When there are four values in a sample, they give some clue on how peaky, roughly
speaking, the distribution is."
When many such samples are considered, the expectations $\lambda_1$ and $\lambda_2$ give measures of location and scale. Moreover, the L-moment ratios:

$\tau_3 = \frac{\lambda_3}{\lambda_2}$ (3.2.61)

$\tau_4 = \frac{\lambda_4}{\lambda_2}$ (3.2.62)

give measures of skewness and kurtosis respectively. Hosking termed $\tau_3$ L-skewness and $\tau_4$ L-kurtosis. Hosking also defined the L-coefficient of variation as:

$\tau_2 = \frac{\lambda_2}{\lambda_1}$ (3.2.63)
Table 3.2.3. L-moments for several distributions (from Stedinger et al. (1993))

Family | L-moments
Generalized Extreme Value (GEV) | $\lambda_1 = \xi + \frac{\alpha}{\kappa}\left[1 - \Gamma(1+\kappa)\right]$; $\lambda_2 = \frac{\alpha}{\kappa}\,\Gamma(1+\kappa)\left(1 - 2^{-\kappa}\right)$; $\tau_3 = \frac{2\left(1 - 3^{-\kappa}\right)}{1 - 2^{-\kappa}} - 3$
Gumbel | $\lambda_1 = \xi + 0.5772\alpha$; $\lambda_2 = \alpha \ln 2$; $\tau_3 = 0.1699$; $\tau_4 = 0.1504$
Generalized Pareto | $\lambda_1 = \xi^{*} + \frac{\alpha}{1+\kappa}$; $\lambda_2 = \frac{\alpha}{(1+\kappa)(2+\kappa)}$; $\tau_3 = \frac{1-\kappa}{3+\kappa}$; $\tau_4 = \frac{(1-\kappa)(2-\kappa)}{(3+\kappa)(4+\kappa)}$
The first four L-moments can be estimated from a ranked sample $x_{(1)} \leq x_{(2)} \leq \ldots \leq x_{(n)}$ as:

$\lambda_1 = \frac{1}{{}^{n}C_{1}} \sum_{i=1}^{n} x_{(i)}$ (3.2.64)

$\lambda_2 = \frac{1}{2}\,\frac{1}{{}^{n}C_{2}} \sum_{i=1}^{n} \left[{}^{i-1}C_{1} - {}^{n-i}C_{1}\right] x_{(i)}$ (3.2.65)

$\lambda_3 = \frac{1}{3}\,\frac{1}{{}^{n}C_{3}} \sum_{i=1}^{n} \left[{}^{i-1}C_{2} - 2\,{}^{i-1}C_{1}\,{}^{n-i}C_{1} + {}^{n-i}C_{2}\right] x_{(i)}$ (3.2.66)

$\lambda_4 = \frac{1}{4}\,\frac{1}{{}^{n}C_{4}} \sum_{i=1}^{n} \left[{}^{i-1}C_{3} - 3\,{}^{i-1}C_{2}\,{}^{n-i}C_{1} + 3\,{}^{i-1}C_{1}\,{}^{n-i}C_{2} - {}^{n-i}C_{3}\right] x_{(i)}$ (3.2.67)

where the combinatorial coefficient is

$ {}^{i}C_{j} = \begin{cases} \dfrac{i!}{j!\,(i-j)!} & \text{if } j \leq i \\ 0 & \text{if } j > i \end{cases}$ (3.2.68)
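A minimal sketch of the direct sample estimators in Equations (3.2.64) to (3.2.67), assuming numpy and scipy are available (the function name is illustrative):

```python
import numpy as np
from scipy.special import comb

def sample_l_moments(data):
    """Direct sample estimators of the first four L-moments
    (Equations 3.2.64 to 3.2.67)."""
    x = np.sort(np.asarray(data, dtype=float))   # x(1) <= ... <= x(n)
    n = len(x)
    i = np.arange(1, n + 1)

    l1 = x.mean()
    l2 = np.sum((comb(i - 1, 1) - comb(n - i, 1)) * x) / (2.0 * comb(n, 2))
    l3 = np.sum((comb(i - 1, 2) - 2 * comb(i - 1, 1) * comb(n - i, 1)
                 + comb(n - i, 2)) * x) / (3.0 * comb(n, 3))
    l4 = np.sum((comb(i - 1, 3) - 3 * comb(i - 1, 2) * comb(n - i, 1)
                 + 3 * comb(i - 1, 1) * comb(n - i, 2)
                 - comb(n - i, 3)) * x) / (4.0 * comb(n, 4))
    return l1, l2, l3, l4

# L-moment ratios (Equations 3.2.61 to 3.2.63): tau3 = l3/l2, tau4 = l4/l2, tau2 = l2/l1
```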
Wang (1997) generalised the L-moments to LH-moments, defined from the expectations of shifted order statistics for an integer shift $\eta$:

$\lambda^{\eta}_1 = \mathrm{E}\left[X_{(\eta+1):(\eta+1)}\right]$ (3.2.69)

$\lambda^{\eta}_2 = \frac{1}{2}\,\mathrm{E}\left[X_{(\eta+2):(\eta+2)} - X_{(\eta+1):(\eta+2)}\right]$ (3.2.70)

$\lambda^{\eta}_3 = \frac{1}{3}\,\mathrm{E}\left[X_{(\eta+3):(\eta+3)} - 2X_{(\eta+2):(\eta+3)} + X_{(\eta+1):(\eta+3)}\right]$ (3.2.71)

$\lambda^{\eta}_4 = \frac{1}{4}\,\mathrm{E}\left[X_{(\eta+4):(\eta+4)} - 3X_{(\eta+3):(\eta+4)} + 3X_{(\eta+2):(\eta+4)} - X_{(\eta+1):(\eta+4)}\right]$ (3.2.72)
Table 3.2.4 presents the relationship between the first four LH-moments and the
parameters of the GEV and Gumbel distributions.
Table 3.2.4. LH-moments for GEV and Gumbel distributions (from Wang (1997))
Family | LH-moments
Generalised extreme value (GEV) | $\lambda^{\eta}_1 = \xi + \frac{\alpha}{\kappa}\left[1 - \Gamma(1+\kappa)(\eta+1)^{-\kappa}\right]$
 | $\lambda^{\eta}_2 = \frac{(\eta+2)\,\alpha\,\Gamma(1+\kappa)}{2!\,\kappa}\left[-(\eta+2)^{-\kappa} + (\eta+1)^{-\kappa}\right]$
 | $\lambda^{\eta}_3 = \frac{(\eta+3)\,\alpha\,\Gamma(1+\kappa)}{3!\,\kappa}\left[-(\eta+4)(\eta+3)^{-\kappa} + 2(\eta+3)(\eta+2)^{-\kappa} - (\eta+2)(\eta+1)^{-\kappa}\right]$
 | $\lambda^{\eta}_4 = \frac{(\eta+4)\,\alpha\,\Gamma(1+\kappa)}{4!\,\kappa}\left[-(\eta+6)(\eta+5)(\eta+4)^{-\kappa} + 3(\eta+5)(\eta+4)(\eta+3)^{-\kappa} - 3(\eta+4)(\eta+3)(\eta+2)^{-\kappa} + (\eta+3)(\eta+2)(\eta+1)^{-\kappa}\right]$, where $\kappa \neq 0$
Gumbel | $\lambda^{\eta}_1 = \xi + \alpha\left[0.5772 + \ln(\eta+1)\right]$
 | $\lambda^{\eta}_2 = \frac{(\eta+2)\,\alpha}{2!}\left[\ln(\eta+2) - \ln(\eta+1)\right]$
 | $\lambda^{\eta}_3 = \frac{(\eta+3)\,\alpha}{3!}\left[(\eta+4)\ln(\eta+3) - 2(\eta+3)\ln(\eta+2) + (\eta+2)\ln(\eta+1)\right]$
 | $\lambda^{\eta}_4 = \frac{(\eta+4)\,\alpha}{4!}\left[(\eta+6)(\eta+5)\ln(\eta+4) - 3(\eta+5)(\eta+4)\ln(\eta+3) + 3(\eta+4)(\eta+3)\ln(\eta+2) - (\eta+3)(\eta+2)\ln(\eta+1)\right]$
For ease of computation Wang (1997) derived the following approximation for the shape parameter $\kappa$:

$\kappa = a_0 + a_1\left[\tau^{\eta}_3\right] + a_2\left[\tau^{\eta}_3\right]^{2} + a_3\left[\tau^{\eta}_3\right]^{3}$ (3.2.73)

η | a0 | a1 | a2 | a3
0 | 0.2849 | -1.8213 | 0.8140 | -0.2835
1 | 0.4823 | -2.1494 | 0.7269 | -0.2103
2 | 0.5914 | -2.3351 | 0.6442 | -0.1616
3 | 0.6618 | -2.4548 | 0.5733 | -0.1273
4 | 0.7113 | -2.5383 | 0.5142 | -0.1027
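A small sketch of Equation (3.2.73) using the coefficients tabulated above (names are illustrative):

```python
# Coefficients of Equation (3.2.73) for shift parameter eta = 0, ..., 4
KAPPA_COEFFS = {
    0: (0.2849, -1.8213, 0.8140, -0.2835),
    1: (0.4823, -2.1494, 0.7269, -0.2103),
    2: (0.5914, -2.3351, 0.6442, -0.1616),
    3: (0.6618, -2.4548, 0.5733, -0.1273),
    4: (0.7113, -2.5383, 0.5142, -0.1027),
}

def gev_shape_from_lh_skewness(tau3_eta, eta):
    """Approximate GEV shape parameter kappa from the LH-skewness for shift eta."""
    a0, a1, a2, a3 = KAPPA_COEFFS[eta]
    return a0 + a1 * tau3_eta + a2 * tau3_eta**2 + a3 * tau3_eta**3
```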
Wang (1997) derived the following estimators for the LH-moments with shift parameter $\eta$:

$\lambda^{\eta}_1 = \frac{1}{{}^{n}C_{\eta+1}} \sum_{i=1}^{n} {}^{i-1}C_{\eta}\, x_{(i)}$ (3.2.74)

$\lambda^{\eta}_2 = \frac{1}{2}\,\frac{1}{{}^{n}C_{\eta+2}} \sum_{i=1}^{n} \left[{}^{i-1}C_{\eta+1} - {}^{i-1}C_{\eta}\,{}^{n-i}C_{1}\right] x_{(i)}$ (3.2.75)

$\lambda^{\eta}_3 = \frac{1}{3}\,\frac{1}{{}^{n}C_{\eta+3}} \sum_{i=1}^{n} \left[{}^{i-1}C_{\eta+2} - 2\,{}^{i-1}C_{\eta+1}\,{}^{n-i}C_{1} + {}^{i-1}C_{\eta}\,{}^{n-i}C_{2}\right] x_{(i)}$ (3.2.76)

$\lambda^{\eta}_4 = \frac{1}{4}\,\frac{1}{{}^{n}C_{\eta+4}} \sum_{i=1}^{n} \left[{}^{i-1}C_{\eta+3} - 3\,{}^{i-1}C_{\eta+2}\,{}^{n-i}C_{1} + 3\,{}^{i-1}C_{\eta+1}\,{}^{n-i}C_{2} - {}^{i-1}C_{\eta}\,{}^{n-i}C_{3}\right] x_{(i)}$ (3.2.77)
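A hedged sketch of Equations (3.2.74) to (3.2.77); setting eta = 0 reproduces the L-moment estimators sketched earlier (the function name is illustrative):

```python
import numpy as np
from scipy.special import comb

def sample_lh_moments(data, eta=0):
    """Direct sample estimators of the first four LH-moments for shift eta
    (Equations 3.2.74 to 3.2.77)."""
    x = np.sort(np.asarray(data, dtype=float))   # x(1) <= ... <= x(n)
    n = len(x)
    i = np.arange(1, n + 1)

    l1 = np.sum(comb(i - 1, eta) * x) / comb(n, eta + 1)
    l2 = np.sum((comb(i - 1, eta + 1)
                 - comb(i - 1, eta) * comb(n - i, 1)) * x) / (2.0 * comb(n, eta + 2))
    l3 = np.sum((comb(i - 1, eta + 2)
                 - 2 * comb(i - 1, eta + 1) * comb(n - i, 1)
                 + comb(i - 1, eta) * comb(n - i, 2)) * x) / (3.0 * comb(n, eta + 3))
    l4 = np.sum((comb(i - 1, eta + 3)
                 - 3 * comb(i - 1, eta + 2) * comb(n - i, 1)
                 + 3 * comb(i - 1, eta + 1) * comb(n - i, 2)
                 - comb(i - 1, eta) * comb(n - i, 3)) * x) / (4.0 * comb(n, eta + 4))
    return l1, l2, l3, l4
```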
The selection of the best shift parameter requires some form of goodness-of-fit test.
Wang (1998) argued that the first three LH-moments are used to fit the GEV model
leaving the fourth LH-moment available for testing the adequacy of the fit. Wang (1998)
proposed the following approximate test statistic:
$z = \frac{\hat{\tau}^{\eta}_4 - \tau^{\eta}_4}{\sigma\left(\hat{\tau}^{\eta}_4 \mid \hat{\tau}^{\eta}_3 = \tau^{\eta}_3\right)}$ (3.2.78)

where $\hat{\tau}^{\eta}_4$ is the sample estimate of the LH-kurtosis, $\tau^{\eta}_4$ is the LH-kurtosis derived from the GEV parameters fitted to the first three LH-moments, and $\sigma\left(\hat{\tau}^{\eta}_4 \mid \hat{\tau}^{\eta}_3 = \tau^{\eta}_3\right)$ is the standard deviation of $\hat{\tau}^{\eta}_4$ assuming the sample LH-skewness equals the LH-skewness derived from the GEV parameters fitted to the first three LH-moments. Under the hypothesis that the underlying distribution is GEV, the test statistic z is approximately normally distributed with mean 0 and variance 1. Wang (1998) describes a simple relationship to estimate $\sigma\left(\hat{\tau}^{\eta}_4 \mid \hat{\tau}^{\eta}_3 = \tau^{\eta}_3\right)$.
Parametric bootstrap

The sampling distribution of an estimator can be approximated using the Monte Carlo method known as the parametric bootstrap:

1. Fit the probability model to the observed flood record to obtain its parameter estimates;
2. Set i=1;
3. Generate a random sample of the same length as the observed record from the fitted probability model;
4. Re-estimate the parameters (and any quantities of interest) from the generated sample;
5. Set i=i+1 and repeat steps 3 and 4 until i=N; the spread of the N re-estimated values approximates the sampling distribution of the estimator.
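As a generic, hedged illustration of the idea (the fit and simulate callables below are placeholders, not TUFLOW FLIKE routines):

```python
import numpy as np

def parametric_bootstrap(data, fit, simulate, n_replicates=1000, seed=0):
    """Generic parametric bootstrap sketch.

    fit      : callable, data -> parameter vector (e.g. an L-moment fit)
    simulate : callable, (params, n, rng) -> synthetic sample of size n
    Returns an array of re-estimated parameter vectors whose spread
    approximates the sampling distribution of the estimator.
    """
    rng = np.random.default_rng(seed)
    theta_hat = fit(data)                        # fit the model to the observed record
    n = len(data)
    replicates = []
    for _ in range(n_replicates):
        synthetic = simulate(theta_hat, n, rng)  # generate a synthetic record of equal length
        replicates.append(fit(synthetic))        # re-estimate the parameters
    return np.array(replicates)
```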
2.6.4.7. Software
Implementation of L and LH-moments requires extensive computation. The TUFLOW FLIKE software supports the L and LH-moment estimation described in this section.
$Y(i) = \frac{n + 0.2}{i - 0.4}$ (3.2.79)
where i is the rank of the gauged flood (in descending order) and n is the number of years of
record.
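A small sketch of Equation (3.2.79) as reconstructed above, returning both the 1 in Y values and the corresponding AEPs (the function name is illustrative):

```python
import numpy as np

def cunnane_plotting_positions(flows):
    """Cunnane plotting positions for a flood series (Equation 3.2.79)."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]   # rank i = 1 for the largest flood
    n = len(q)
    i = np.arange(1, n + 1)
    one_in_y = (n + 0.2) / (i - 0.4)                    # 1 in Y plotting position
    aep = 1.0 / one_in_y                                # equivalent AEP
    return q, one_in_y, aep
```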
In the first approach, the annual flow series distribution has been estimated from the POT
series on the basis that the latter considers all of the relevant data and should thus provide a
better estimate. The basis for the procedure is Equation (3.2.14), which links Poisson
arrivals of flood peaks with a distribution of flood magnitudes above some threshold.
Jayasuriya and Mein (1985), Ashkar and Rousselle (1983) and Tavares and Da Silva (1983)
explored this approach using the exponential distribution. The approach has been fairly
successful, but some results have diverged from the distribution derived directly from the
annual series. Given the concern that the fit to the right tail of the annual maximum series
may be compromised by the leverage exerted by the low flows, this approach cannot be
recommended as a replacement for the analysis of annual series data. If possible, the base
discharge for this approach should be selected so that the number of floods in the POT
series m is at least 2 to 3 times the number of years of record n. However, it may be
necessary to use a much lower value of m in regions with low rainfall where the number of
recorded events that could be considered as floods is low.
The routing model parameters were selected so that major flood terrace storage is activated
by floods of less than 1 in 100 AEP. This situation was chosen to represent a river with
multiple flood terraces with the lowest terraces accommodating the majority of floods and the
highest terrace only inundated by extreme floods.
Figure 3.2.11 presents the flood frequency curve based on 30 000 simulated years – it
shows a clear break in slope around the 1 in 100 AEP corresponding to the activation of
major flood terrace storage. Indeed the flood frequency curve displays downward curvature
despite the fact that the rainfall frequency curve displays upward curvature in the 1 in 100 to
1 in 1000 AEP range. In contrast the flood frequency curve based on 100 years of “data”
shows no evidence of downward curvature. This is because in a 100 year record there is
little chance of the major flood terrace storage being activated. Indeed without knowledge of
the underlying hydraulics one would be tempted to extrapolate the 100 year flood record
using a straight line extrapolation. Such an extrapolation would rapidly diverge from the
“true” frequency curve.
Figure 3.2.11. Simulated rainfall and flood frequency curves with major floodplain storage
activated at a threshold discharge of 3500 m3/s
Although the example idealises the dominant rainfall-runoff dynamics it delivers a very
strong message. Extrapolation of flood frequency curves fitted to gauged discharge records
requires the exercise of hydrologic judgment backed up by appropriate modelling. The
problem of extrapolation is much more general. For example, in this case, if a rainfall-runoff
approach were used with the rainfall-runoff model calibrated to small events the simulated
flood frequency curve is likely to be compromised in a similar way.
Figure 3.2.12 and Figure 3.2.13, taken from Micevski et al. (2003), compare instantaneous
annual maximum discharge against the discharge recorded at 9am on the same day for two
gauging stations in the Hunter Valley: Goulburn River at Coggan with area 3340 km2 and
Hunter River at Singleton with area 16 400 km2. The dashed line represents equality.
Figure 3.2.12 demonstrates that the true peak flow can be up to 10 times the 9:00 am flow.
In contrast the estimation error is much smaller for the larger catchment shown in
Figure 3.2.13.
Figure 3.2.12. Comparison between true peak flow and 9:00 am flow for Goulburn River at
Coggan
Figure 3.2.13. Comparison between true peak flow and 9:00 am flow for Hunter River at
Singleton
The example demonstrates the need to check the representativeness of daily readings by
comparing instantaneous peak flows against daily readings.
This example illustrates the fitting of a Log Pearson Type III (LP III) distribution to an annual maximum series for the Hunter River at Singleton. The analysis will be undertaken using TUFLOW Flike, which has been developed to undertake flood frequency analysis as described in this book; that is, it has the ability to fit a range of statistical distributions using a Bayesian Inference method.
Once TUFLOW Flike has been obtained and installed, launch TUFLOW Flike and the
screen in Figure 3.2.14 will appear.
Create and save a new .fld file in an appropriate location, such as in a folder under the job
directory, and give it a logical name, in this case Example_3.fld. A message will appear
asking if you want to create the file, select Yes. Note that the window is titled Open, but it
works for creating new files as well. Once the .fld file has been saved, the Flike Editor
window will open which will be used in the next step.
To import the flood series select the Import button and the Import gauged values window
opens as shown in Figure 3.2.18. Now select the Browse button and navigate to the
Singleton flood series. This example data are included in the TUFLOW Flike download, a
copy of which was installed in the data folder in the install location of TUFLOW-Flike. By
default, this location is C:\TUFLOW Flike\data\singletonGaugedFlows.csv. This data also
appears at the end of this example.
Once the data file has been selected, the program will return to the Import gauged values
window. As the input data format is flexible TUFLOW Flike needs to be told how to interpret
the data file. To view the format of the data, select the View button and the data will be opened in your default text editor (see Figure 3.2.19). In the example data the first line contains a header line and the data follows this. The flow values are in the first column and the year in the fourth column. Having taken note of the data structure, close the text editor and return to the Import gauged values window. It is a good habit to check the data in the text editor to ensure that the format of the data is known and that the file has not been corrupted or does not include a large number of trailing commas or whitespace. This last issue commonly occurs when deleting information from Excel files, but it is easy to fix: simply delete any trailing commas or white space in a text editor.
The next step is to configure the import of the data. As the example data has a header, the
first line needs to be skipped. Enter 1 into the Skip first __ records text field. This will skip the first line. Ensure that the Read to the end-of-file option is selected (this is
the default). Occasionally, there may be a need to specify how many records to read, in
which case this can be achieved by selecting the Read next __ records option and entering
the desired number of records to read. Next, specify which column the flood data are in by filling in the gauged values are in column __ text box; in this example data this is column 1.
Next, select the Years available in column __ text box and specify the column that this data
is in (column 4). Finally, select OK to import the data. The Import gauged values window
should look similar to Figure 3.2.18.
The Value and Year columns in the Observed values tab will now be filled with the data in
the order that they were in the data file as shown in Figure 3.2.20. The data can be sorted by
value and year using the Rank button. Selecting this button will open a new window
(Figure 3.2.21) where there are five choices to rank by; these are:
• Descending flow: Ranks the data in order of values from largest to smallest
• Ascending flow: Ranks the data in order of values from smallest to largest
• Descending year: Ranks the data in order of year from latest to earliest
• Ascending year: Ranks the data in order of year from earliest to latest
It is always a good idea to initially rank your data in descending order so you can check the largest flows. For this data series the largest value is 12 525.66 m3/s. Leave the data ranked in descending order for this example.
Note that the value name and units can be specified by entering values in the Valuename
and Unit text boxes. These titles do not affect the computations in any way; they do,
however, assist in reviewing the results, particularly when presenting results to external
audiences.
Before configuring the model it is worthwhile checking that TUFLOW Flike has interpreted
the data correctly. The number of observed data is reported in the Number of observed
data text box. In this case the number of observations or length of the data series is 31 as
shown in Figure 3.2.22. Before continuing, check that this is the case.
Next, select the probability model; the Log Pearson Type III. To do this ensure that the radio
button next to the text Log Pearson Type III (LP3) is selected (this is the default) as in
Figure 3.2.22.
The final task is to choose the fitting method. In this example the Bayesian Inference method
will be used. To do this, ensure that the radio button next to Bayesian with is selected and
the radio button next to No prior information is selected as shown in Figure 3.2.22. Again,
both of these are the defaults.
Both of these will be explored in this example and both should be consulted when
undertaking a Flood Frequency Analysis. Before we proceed with this example the length of
the x-axis in the plot needs to be specified; that is, the lowest probability (rarest event) to be
displayed. It is recommended to always enter a value greater than the 1 in Y AEP event that
you are interested in. This is specified in the Maximum AEP 1 in Y in probability plot ___
years text box. In this example, enter the 1 in 200 AEP event. By default the plot window automatically launches
when a distribution is fitted.
In addition to the plot window a report file can also be automatically launched in a text editor.
This can be quite helpful when you are developing a model, as it allows you to more readily
compare the results. To do this select the appropriate radio button next to Always display
report file as shown in Figure 3.2.22.
To fit the model, select Fit model. This will run TUFLOW-Flike and present you with a Probability Plot as well as opening the Report File in a text editor.
When fitting a flood series to a probability distribution it is essential that the results are
viewed and reviewed. This is most easily achieved by first viewing the results in the
Probability Plot. If the Probability Plot window has been closed, it can be reopened by
selecting the Option dropdown menu and then Viewplot. The plot contains information
about the fit as well as the quantile values and confidence limits. Within the plot window the
y-axis contains information on discharge (or log discharge depending on the Plot scale
selected) and the x-axis displays the Annual Exceedance Probability (AEP) in terms of 1 in Y
years. The plot displays the:
• Log-Normal probability plot of the gauged flows with plotting position determined using the
Cunnane plotting position, shown as blue triangles;
• X% AEP quantile curve (derived using the posterior mean parameters), shown as a black
line;
For the data contained in this example the resulting plot displays a good fit to the gauged
data and appears to have tight confidence limits with all gauged data points falling within the
90% confidence limits; by default the figure plots the logarithm of the flood peaks. The plot
can be rescaled to remove the log from the flow values. Select the Plotscale button and
choose one of the non-log options, that is, either Gumbel or Exponential, and the uncertainty changes as in Figure 3.2.26. This will present a more sobering perspective on the model fit
with the confidence limit appearing much larger for rarer flood quantiles. This can be
confirmed by reviewing the results in the Result file. Table 3.2.6 presents a subset of the
results found in the Result file of selected X% AEP quantiles qY and their 90% confidence
limits. For example, for the 1% AEP flood, the 5% and 95% confidence limits are
respectively 37% and 546% of the quantile qY! The 0.2% AEP confidence limits are so wide
as to render estimation meaningless. Note the expected AEP for the quantile qY consistently
exceeds the nominal X% AEP. For example, the 1% (1 in 100) AEP quantile of 19 572 m3/s
has an expected AEP of 1.35% (1 in 74).
From the information about the flood history at Singleton we can make the following
conclusions:
Note that the ungauged record length is 118 years, that is, all years from 1820 to 1937 are
included as it is assumed each year has an event. Also, note that the ungauged period
cannot overlap with the gauged period.
The historical data needs to be entered into the Censoring of observed values tab, that is,
we need to let TUFLOW-Flike know that there has been one flood greater than the 1955
flood between 1820 and 1937. So:
• The Years greater than the threshold (Yrs > threshold) is one (1) – the 1820 flood.
• The Years less than or equal to the threshold (Yrs <= threshold) is 117 – there were 117
years between 1820 and 1937 with floods less than the threshold.
Once the data has been entered, select OK which will return the main TUFLOW-Flike
window. TUFLOW-Flike performs some checks of the data to ensure that it has been
entered correctly. However, these are only checks and it is up to the user to ensure they
have correctly configured the historic censoring.
Return to the General tab by selecting Options and then Edit data and it should appear as
in Figure 3.2.28. Note the Number of censoring thresholds text field has been populated
with the number 1, so TUFLOW-Flike has recognised that censoring has been configured.
As with the previous example, check that the Always display report file radio button has
been selected.
2.8.4.5. Results
Table 3.2.8 presents the posterior mean, standard deviation and correlation for the Log
Pearson Type III parameters: m, loges and g which are respectively the mean, standard
deviation and skewness of loge(q) taken from the Report File. Comparison with Example 3
reveals that the censored data have reduced the uncertainty in the skewness (g) parameter by almost 17%. This parameter controls the shape of the distribution, particularly in the tail region
where the floods of interest are.
Table 3.2.8. Posterior Mean, Standard Deviation and Correlation for the LP III
The resulting Probability plot is shown in Figure 3.2.30. This figure displays on a log normal
probability plot the gauged flows, the X% AEP quantile curve (derived using the posterior
mean parameters), the 90% quantile confidence limits and the expected probability curve.
Compared with Example 3 the tightening of the confidence limits is noticeable.
Figure 3.2.29. Probability plot of the Singleton data with historic information
The following table (Table 3.2.9) of selected 1 in Y AEP quantiles qY and their 90%
confidence limits illustrates the benefit of the information contained in the historic data. For
example, for the 1% AEP flood the 5% and 95% confidence limits are respectively 58% and
205% of the quantile qY! This represents a major reduction in quantile uncertainty compared
with Example 3 which yielded limits of 38% and 553%. This is illustrated graphically in Figure 3.2.30.
Note that the Report File presents the Expected AEP as 1 in Y years whereas Table 3.2.9 presents the Expected AEP as a percentage.
This example highlights the significant reductions in uncertainty that historical data can offer.
However, care must be exercised to ensure the integrity of the historic information – see
Book 3, Chapter 2, Section 3 for more details.
Figure 3.2.30. Probability plot of the Singleton data with historic information
In this hypothetical example, a regional analysis of skewness has been conducted and the
expected regional skew was found to be 0.00 with a standard deviation of 0.30. This
information can be incorporated into the Bayesian analysis undertaken by TUFLOW Flike as
shown in this example.
The regional skewness (0.00) is entered into the Mean Skew of log Q text box and the
standard deviation of the regional skew (0.300) is entered into the Standard Deviation
Skew of log Q as shown in Figure 3.2.32. Note in practice careful attention to the units
being used is required.
Very large prior standard deviations are assigned to the Mean of log Q and Standard
deviation of log Q parameters to ensure there is no prior information about these parameters. If the Log Pearson III distribution has been selected, the option to import the
prior information from the ARR Regional Flood Frequency Estimation method is available
(Book 3, Chapter 3).
Figure 3.2.33 presents the probability plot for the LP III model fitted to the gauged data with
prior information on the skewness. Comparison of the results from this example with the
results from Example 3 (see Figure 3.2.34) reveals substantially reduced uncertainty in the
right hand tail.
Figure 3.2.34. Comparison between the results from Example 3 and Example 5
Table 3.2.10. Comparison of LP III Parameters with and without prior information
Table 3.2.11 presents selected AEP quantiles qY and their 90% confidence limits. This table
further illustrates the benefit of incorporating regional information. For example, for the 1%
AEP flood the 5% and 95% confidence limits are respectively 37% and 546% of the quantile
q1% when no prior information is used. These limits are reduced to 46% and 292% respectively, using prior regional information.
This example will examine the influence of PILFs and demonstrate how to use the multiple
Grubbs-Beck test to safely remove them from the flood frequency analysis.
Once this has been done and the Gauged values have been ranked in descending order
the Flike Editor window should look like Figure 3.2.35.
Once these settings have been selected, select OK and run TUFLOW Flike in the usual way.
In Figure 3.2.36, the fit to the right-hand tail is not satisfactory. The expected quantiles are
significantly greater than the gauged data; further, the largest three data points fall outside of the lower 90% confidence limits.
Figure 3.2.36. Initial probability plot for Wimmera data with GEV
On agreeing to censor these flows, TUFLOW Flike automatically performs two changes to
the inference setup:
2. A censored threshold is added, with the information that there are 27 Annual Maximum
discharges that lie below the threshold of 54.396 m3/s, which corresponds to the 28th ranked discharge.
A comparison of Figure 3.2.36 and Figure 3.2.40 shows the improved fit, in Figure 3.2.40 all
of the gauged data points fall within the 90% confidence limits. Further, censoring the PILFs
using the multiple Grubbs-Beck test has significantly altered the quantile estimates and
reduced the confidence limits as shown in Table 3.2.12. For instance, the quantile q1% when PILFs are excluded is around 21% of the initial estimate. The lower and upper confidence limits have been considerably reduced: initially they were 30% and 500% of the quantile q1%, and following the removal of PILFs they became 68% and 220% of the quantile q1%.
Figure 3.2.40. GEV fit - 56 years AM of gauged discharge - Using multiple Grubbs-Beck test
Often the poor fit of a distribution is associated with a sigmoidal probability plot as illustrated
in Figure 3.2.41. In such cases a four or five-parameter distribution, which has sufficient degrees of freedom, can be used to track the data in both the upper and lower tails of the sigmoidal curve. Alternatively, a calibration approach that gives less weight to smaller floods
can be adopted. The second approach is adopted in this example.
Figure 3.2.41. Bayesian fit to all gauged data Gumbel probability plot
Figure 3.2.41 displays the GEV Bayesian fit on a Gumbel probability plot. Although the
observed floods are largely contained within the 90% confidence limits, the fit, nonetheless,
is poor – the data exhibit a sigmoidal trend with reverse curvature developing for floods with
an AEP greater than 50%. It appears that the confidence limits have been inflated because
the GEV fit represents a poor compromise.
Now run TUFLOW Flike and fit the model. Changing the plot scale and rescaling the y-axis as above will result in Figure 3.2.42.
Figure 3.2.42 displays the fit after censoring the 5 low outliers identified by the multiple
Grubbs-Beck test. The improvement in fit is marginal at best over Figure 3.2.41.
Figure 3.2.42. Bayesian fit with 5 low outliers censored after application of multiple Grubbs-
Beck test
• A censored record consisting of 23 floods below the threshold of 250m3/s and 0 floods
above this threshold.
To do this in TUFLOW Flike there are two steps, as in Example 6, these are:
This is essentially the same process that was undertaken to exclude flows in Example 6
except it needs to be done manually. This is outlined below.
The censored record provides an anchor point for the GEV distribution – it ensures that the
chance of an Annual Maximum flood being less than 250m3/s is about 23/50 without forcing
the GEV to fit the peaks below the 250m3/s threshold. The fit effectively disregards floods
with a greater than 50% AEP and provides a good fit to the upper tail. Another benefit is the
substantially reduced 90% confidence limits which can be reviewed by examining the results
files.
Figure 3.2.43. Bayesian fit with floods below 250 m3/s threshold treated as censored
observations
$I_t = \begin{cases} 1 & \text{if } \mathrm{IPO}_t \geq \mathrm{IPO}_{thresh} \\ 0 & \text{if } \mathrm{IPO}_t < \mathrm{IPO}_{thresh} \end{cases}$ (3.2.81)

where $\mathrm{IPO}_t$ is the IPO index for year t and $\mathrm{IPO}_{thresh}$ is a threshold value equal to -0.125.
At each of the 33 NSW sites considered by Micevski et al. the AM peak flows were stratified
according to the indicator It. A 2-parameter log-Normal distribution was fitted to the gauged
flows with indicator equal to 1 – this is the IPO+ distribution. Likewise, a 2-parameter log-
Normal distribution was fitted to the gauged flows with indicator equal to 0 – this is the IPO-
distribution. Figure 3.2.44 presents the histogram for the ratio of the IPO- and IPO+ floods
for selected 1 in Y AEPs. If the IPO+ and IPO- distributions were homogeneous then about
half of the sites should have a flood ratio < 1 – Figure 3.2.44 shows otherwise.
Figure 3.2.45 and Figure 3.2.46 present log normal fits to the IPO+ and IPO- annual maximum flood data for the Clarence River at Lilydale respectively. Though the adequacy of
the log normal model to fit high floods may be questioned, in the AEP range 1 in 2 to 1 in 10
years, the IPO- floods are about 2.6 times the IPO+ floods with the same AEP.
Figure 3.2.45. Log-Normal fit to 43 years of IPO+ data for the Clarence river at Lilydale (units
ML/day).
Figure 3.2.46. Log-Normal fit to 33 years of IPO- data for the Clarence river at Lilydale (units
ML/day).
Figure 3.2.47. Log-Normal fit to 76 years of data for the Clarence river at Lilydale (units ML/
day).
To avoid bias in estimating long-term flood risk it is essential that the gauged record
adequately span both IPO+ and IPO- years. In this example, the IPO+ record is 43 years
and the IPO- record is 33 years in length. With reference to Figure 7 this length of record
appears to adequately sample both IPO epochs. This suggests that fitting to all the data will
yield a largely unbiased estimate of the long-term flood risk. Figure 3.2.47 illustrates a log
normal fit to all the data.
The marginal flood risk can be derived by combining the IPO+ and IPO- distributions using Equation (3.2.23) to give

$p\left(z \mid \boldsymbol{\theta}\right) = p\left(z \mid \boldsymbol{\theta}, x=0\right) P\left(x=0\right) + p\left(z \mid \boldsymbol{\theta}, x=1\right) P\left(x=1\right)$
The exogenous variable x can take two values, 0 or 1, depending on the IPO epoch. P(x=0),
the probability of being in an IPO- epoch, is assigned the value 33/76 based on the
observation that 33 of the 76 years of record were in the IPO- epoch. Likewise P(x=1), the
probability of being in an IPO+ epoch, is assigned the value 43/76. It follows that p(z|θ,x=0) and p(z|θ,x=1) are the log normal pdfs fitted to the IPO- and IPO+ data respectively.
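A hedged sketch of this mixture, assuming each epoch's log-Normal fit is summarised by the mean and standard deviation of log_e(flow); the function and parameter names are illustrative and the Lilydale parameter values are not reproduced here:

```python
import numpy as np
from scipy import stats

def marginal_exceedance(q, ipo_neg_params, ipo_pos_params,
                        p_neg=33.0 / 76.0, p_pos=43.0 / 76.0):
    """Marginal exceedance probability obtained by weighting the IPO- and IPO+
    log-Normal fits by the epoch probabilities P(x=0) and P(x=1)."""
    m_neg, s_neg = ipo_neg_params      # mean and std of log_e(flow), IPO- epoch
    m_pos, s_pos = ipo_pos_params      # mean and std of log_e(flow), IPO+ epoch
    exc_neg = stats.lognorm.sf(q, s=s_neg, scale=np.exp(m_neg))
    exc_pos = stats.lognorm.sf(q, s=s_pos, scale=np.exp(m_pos))
    return p_neg * exc_neg + p_pos * exc_pos
```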
The derived marginal distribution is plotted in Figure 3.2.48. It almost exactly matches the
log normal distribution fitted to all the data.
Figure 3.2.48. Marginal, IPO+ and IPO- log-Normal distributions for the Clarence River at
Lilydale
The procedure for fitting distributions by L-moments can be completed by hand, and also
using TUFLOW Flike. Both of these techniques will be outlined in this example.
The following table lists 47 ranked flows for the Styx River at Jeogla.
Ensure that the Bayesian with button is still checked. If the LH-moments fit to observed
values with radio button is checked the Censoring of observed values tab cannot be
accessed.
To remove the censoring threshold, select the Censoring of observed values tab and
select the Clear all button.
To include all the flood data, select the Observed values tab and select the Include all
button. Scroll through the data to ensure that all the crosses (x) in the Exclude column have
been removed.
TUFLOW Flike will only fit LH-moments with H >= 1 for the GEV distribution, however it will
fit L-moments (H = 0) for all distributions. Ensure that the GEV probability model has been
selected.
The configured Flike Editor should look like Figure 3.2.50. Select OK and run TUFLOW
Flike. As usual, a probability plot will appear together with the report file. Rescale the plot so
it looks like Figure 3.2.51.
Figure 3.2.51 displays the GEV L-moment fit on a Gumbel probability plot. Although the
observed floods are largely contained within the 90% confidence limits, the fit, nonetheless,
is poor with systematic departures from the data which exhibits reverse curvature.
The very significant reduction in the quantile confidence intervals is largely due to the shape
parameter Κ changing from –0.17 to 0.50. The L-moment fit in Figure 3.2.51 was a compromise; most of the small and medium-sized floods suggested an upward curvature in the probability plot which resulted in a negative GEV shape parameter (to enable upward curvature). In contrast, the LH-moment fit favoured the large-sized floods, which exhibit a downward curvature, resulting in a positive shape parameter. For positive Κ the GEV has an upper
bound. In this case the upper bound is about 2070 m3/s which is only 17% greater than the
largest observed flood.
A comparison of the quantiles derived from the Bayesian inference method with censoring of PILFs and those determined using Optimised LH-moments is presented in Table 3.2.14. The
two different inference methods produce similar results in terms of the calculated quantiles;
however, the confidence limits are smaller using the Bayesian framework. This highlights
how LH-moment results could be used to inform the selection of the censoring threshold for
PILFs in the Bayesian framework.
The first two L-moments were estimated as 226.36 and 79.2. Noting that the exponential distribution is a special case of the generalised Pareto when $\kappa = 0$, it follows from Table 3.2.3 that the exponential parameters are related to the L-moments by

$\lambda_1 = q^{*} + \beta \qquad \lambda_2 = \frac{\beta}{2}$ (3.2.83)

which yields values for $q^{*}$ and $\beta$ of 68.11 and 158.24 respectively. Therefore the probability of the peak flow q exceeding w in any POT event is

$P\left(q > w\right) = \mathrm{e}^{-\frac{w - q^{*}}{\beta}} = \mathrm{e}^{-\frac{w - 68.11}{158.24}}$ (3.2.84)
The second step obtains the distribution of annual maximum peaks. Using Equation (3.2.11),
the expected number of peaks that exceed w in a year is
$\mu_w(w) = \nu\, P\left(q > w\right) = \nu\, \mathrm{e}^{-\frac{w - q^{*}}{\beta}}$ (3.2.85)

where $\nu$ is the average number of flood peaks above the threshold $q^{*}$ per year. Taking logarithms gives

$\log_e \mu_w(w) = \log_e \nu + \frac{q^{*}}{\beta} - \frac{w}{\beta}$ (3.2.86)

A plot of $\log_e \mu_w(w)$ versus w should follow a straight line if the underlying POT distribution is
exponential.
Given that 47 peaks above the threshold occurred in 47 years, ν equals 1.0. The following
figure presents a plot of the fitted POT exponential model against the observed POT series.
Figure 3.2.53. Plot of the fitted POT exponential model against the observed POT series
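A short sketch reproducing the calculation above from the quoted L-moments; converting to an annual maximum AEP assumes the standard Poisson-arrival result referred to earlier, and small differences from the quoted 68.11 and 158.24 reflect rounding of the L-moments:

```python
import numpy as np

# First two L-moments of the POT peaks quoted in the text
lam1, lam2 = 226.36, 79.2

# Exponential special case of the generalised Pareto (Equation 3.2.83)
beta = 2.0 * lam2        # scale parameter (text quotes 158.24)
q_star = lam1 - beta     # location/threshold parameter (text quotes 68.11)

nu = 47.0 / 47.0         # average number of peaks above the threshold per year

def peaks_per_year_exceeding(w):
    """Expected number of POT peaks exceeding w per year (Equation 3.2.85)."""
    return nu * np.exp(-(w - q_star) / beta)

def annual_max_aep(w):
    """AEP of the annual maximum flood exceeding w, assuming Poisson arrivals
    of exponentially distributed peaks (standard result; an assumption here)."""
    return 1.0 - np.exp(-peaks_per_year_exceeding(w))
```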
2.9. References
ASCE (American Society of Civil Engineers) (1949), Hydrology Handbook.
Adams, C.A. (1987), Design flood estimation for ungauged rural catchments in Victoria Road
Construction Authority, Victoria. Draft Technical Bulletin.
Adams, C.A. and McMahon, T.A. (1985), Estimation of flood discharge for ungauged rural
catchments in Victoria. Hydrol. and Water Resources Symposium 1985, Inst Engrs Aust.,
Natl Conf. Publ. No.85/2, pp: 86-90.
Alexander, G.N. (1957), Flood flow estimation, probability and the return period. Jour. Inst.
Engrs Aust, 29: 263-278.
Allan, R.J. (2000), ENSO and climatic variability in the last 150 years, in El Nino and the
Southern Oscillation, Multi-scale variability, Global and Regional Impacts, edited by H.F. Diaz
and V. Markgraf, Cambridge University Press, Cambridge, UK, pp: 3-56.
Ashkanasy, N.M. and Weeks, W.D. (1975), Flood frequency distribution in a catchment
subject to two storm rainfall producing mechanisms. Hydrol. Symposium 1975, Inst Engrs
Aust, Natl Conf. Publ. (75/3), 153-157.
Ashkar, F. and Rousselle, J. (1983) Some remarks on the truncation used in partial flood
series models. Water Resources Research, 19: 477-480.
Baker, V. (1984), Recent paleoflood hydrology studies in arid and semi-arid environments
(abstract). EOS Trans. Amer. Geophys. Union, Volume 65, p: 893.
Baker, V., Pickup, G. and Polach, H.A. (1983), Desert paleofloods in central Australia.
Nature, 301: 502-504.
Beard, L.R. (1974), Flood flow frequency techniques Univ. of Texas at Austin, Center for
Research in Water Resources, Tech. Report CRWR119, October.
Beran, M., Hosking, J.R.M. and Arnell, N. (1986), Comment on 'Two-component extreme
value distribution for flood frequency analysis.' Water Resources Research, 22: 263-266.
Blom, G. (1958), Statistical Estimates and Transformed Beta-Variables. Wiley, New York, p:
176.
Cohn, T.A., England, J.F., Berenbrock, C.E., Mason, R.R., Stedinger, J.R. and Lamontagne,
J.R. (2013), A generalized Grubbs-Beck test statistic for detecting multiple potentially
influential low outliers in flood series, Water Resour. Res., 49(8), 5047-5058.
Conway, K.M. (1970), Flood frequency analysis of some N.S.W. coastal rivers. Thesis
(M.Eng Sc.), Univ.NSW.
Costa, J.E. (1978), Holocene stratigraphy in flood frequency analysis. Water Resources
Research, 4: 626-632.
Costa, J.E. (1983), Palaeohydraulic reconstruction or flashflood peaks from boulder deposits
in the Colorado Front Range. Geol. Soc. America Bulletin, 94(8), 986-1004.
Costa, J.E. (1986), A history of paleoflood hydrology in the United States, 1800-1970. EOS
Trans. Aoler Geophysical Union, 67(17), 425-430.
Cunnane, C. (1985), Factors affecting choice of distribution for flood series. Hydrological
Sciences Journal, 30: 25-36.
Cunnane, C. (1978), Unbiased plotting positions - a review. Jour. of Hydrology, 37: 205-222.
Doran, D.G. and Irish, J.L. (1980), On the nature and extent of bias in flood damage
estimation. Hydrol. and Water Resources Symposium 1980, Inst. Engrs Aust, Natl Conf.
Publ., (80/9), 135-139.
Duan, Q., Sorooshian, S. and Gupta, V. (1992), Effective and efficient global optimization for
conceptual rainfall-runoff models, Water Resources Research, 28(4), 1015-1031.
Erskine, W. D. and Warner, R.F. (1988), Geomorphic effects of alternating flood and drought
dominated regimes on a NSW coastal river, in Fluvial Geomorphology of Australia, edited by
R.F. Warner, pp. 223-244, Academic Press, Sydney.
Fiering, M.B. (1963), Use of correlation to improve estimates of the mean and variance. U.S. Geological Survey Professional Paper 434-C.
Fiorentino, M., Versace, P. and Rossi, F. (1985), Regional flood frequency estimation using
the two-component extreme value distribution. Hydrol. Sciences Jour., 30: 51-64.
Folland, C.K., Renwick, J.A., Salinger, M.J. and Mullan, A.B. (2002), Relative influences of
the Interdecadal Pacific Oscillation and ENSO on the South Pacific Convergence Zone,
Geophys. Res. Lett., 29(13), doi:10.1029/2001GL014201.
Franks, S.W. and Kuczera, G. (2002), Flood Frequency Analysis: Evidence and Implications
of Secular Climate Variability, New South Wales, Water Resources Research, 38(5),
10.1029/2001WR000232.
Franks, S.W. (2002a), Identification of a change in climate state using regional flood data,
Hydrol. Earth Sys. Sci., 6(1), 11-16.
Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. (1995), Bayesian data analysis,
Chapman and Hall, p: 526.
Hosking, J.R.M. (1990), L-moments: Analysis and estimation of distributions using linear
combinations of order statistics, J. Roy. Statist. Soc., Ser. B, 52(2), 105-124.
Hosking, J.R.M. and Wallis, J.R. (1986), Paleoflood hydrology and flood frequency analysis.
Water Resources Research, 22: 543-550.
Houghton, J.C. (1978), Birth of a parent: The Wakeby distribution for modeling flood flows.
Water Resources Research, 14: 1105-1109.
Interagency Advisory Committee on Water Data (1982), Guidelines for determining flood
flow frequency. Bulletin 17B of the Hydrology Sub-committee, Office of Water Data
Coordination, Geological Survey, U.S. Dept of the Interior.
Jayasuriya, M.D.A. and Mein, R.G. (1985), Frequency analysis using the partial series.
Hydrol. and Water Resources Symposium 1985, Inst. Engrs Aust., Natl Conf. Publ. No. 85/2,
pp: 81-85.
Kiem, A.S. and Franks, S.W. (2004), Multidecadal variability of drought risk - eastern
Australia, Hydrol. Proc., 18, doi:10.1002/hyp.1460.
Kiem, A.S., Franks, S.W. and Kuczera, G. (2003), Multi-decadal variability of flood risk,
Geophysical Research Letters, 30(2), 1035, DOI:10.1029/2002GL015992.
Kochel, R.C., Baker, V.R. and Patton, P.C. (1982), Paleohydrology of southwestern Texas.
Water Resources Research, 18: 1165-1183.
Kopittke, R.A., Stewart, B.J. and Tickle, K.S. (1976), Frequency analysis of flood data in
Queensland. Hydrol. Symposium 1976, Inst Engrs Aust., Natl Conf. Publ. No. 76/2, pp:
20-24.
Kuczera, G. (1999), Comprehensive at-site flood frequency analysis using Monte Carlo
Bayesian inference, Water Resources Research, 35(5), 1551-1558.
Kuczera, G. (1996), Correlated rating curve error in flood frequency inference, Water
Resources Research, 32(7), 2119-2128.
Kuczera, G., Lambert, M.F., Heneker, T., Jennings, S., Frost, A. and Coombes, P. (2006),
Joint probability and design storms at the crossroads, Australian Journal of Water
Resources, 10(2), 5-21.
Laurenson, E.M. (1987), Back to basics on flood frequency analysis. Civ. Engg Trans., Inst
Engrs Aust, CE29: 47-53.
Lee, P.M. (1989), Bayesian statistics: An introduction, Oxford University Press (NY).
Mantua, N. J., Hare, S.R., Zhang, Y., Wallace, J.M. and Francis, R.C. (1997), A Pacific
interdecadal climate oscillation with impacts on salmon production, Bull. Amer. Meteorol.
Soc., 78(6), 1069-1079.
Matalas, N.C. and Jacobs, B. (1964), A correlation procedure for augmenting hydrologic
data. U.S. Geological Survey Professional Paper 434-E.
McDermott, G.E. and Pilgrim, D.H. (1982), Design flood estimation for small catchments in
New South Wales. Dept of National Development and Energy, Aust Water Resources
Council Tech. Paper No.73, p: 233
McDermott, G.E. and Pilgrim, D.H. (1983), A design flood method for arid western New
South Wales based on bankfull estimates. Civ. Engg Trans., Inst Engrs Aust, CE25: 114-120.
McIllwraith, J.F. (1953), Rainfall intensity-frequency data for New South Wales stations. Jour.
Inst Engrs Aust, 25: 133-139.
McMahon, T.A. (1979), Hydrologic characteristics of Australian streams. Civ. Engg Research
Reports, Monash Univ., Report No.3/1979.
McMahon, T.A. and Srikanthan, R. (1981), Log Pearson III distribution- is it applicable to
flood frequency analysis of Australian streams? Jour. of Hydrology, 52: 139-147.
Micevski, T., Kiem, A.S., Franks, S.W. and Kuczera, G. (2003), Multidecadal Variability in
New South Wales Flood Data, Hydrology and Water Resources Symposium, Institution of
Engineers, Australia, Wollongong.
Natural Environment Research Council (1975), Flood Studies Report, Vol.1, Hydrological
Studies, London.
O'Connell, D.R., Ostemaa, D.A., Levish, D.R. and Klinger, R.E. (2002) Bayesian flood
frequency analysis with paleohydrologic bound data, Water Resources Research, 38(5),
1058, DOI:10.1029/2000WRR000028.
Pedruco, P., Nielsen, C., Kuczera, G. and Rahman, A. (2014), Combining regional flood
frequency estimates with an at site flood frequency analysis using a Bayesian framework:
Practical considerations, Hydrology and Water Resources Symp., Perth, Engineers
Australia.
Pilgrim, DH (ed) (1987) Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers, Australia, Barton, ACT, 1987.
Pilgrim, D.H. and Doran, D.G. (1987), Flood frequency analysis, in Australian Rainfall and
Runoff: A guide to flood estimation, Pilgrim, D.H. (ed), The Institution of Engineers, Australia,
Canberra.
Pilgrim, D.H. and McDermott, G.E. (1982), Design floods for small rural catchments in
eastern New South Wales. Civ. Engg Trans., Inst Engrs Aust., CE24: 226-234.
Potter, D.J. and Pilgrim, D.H. (1971), Flood estimation using a regional flood frequency
approach. Final Report, Vol.2, Report on Analysis Components. Aust. Water Resources
Council, Research Project 68/1, Hydrology of Small Rural Catchments. Snowy Mountains
Engg Corporation, April.
Potter, K.W. and Walker, J.F. (1981), A model of discontinuous measurement error and its
effects on the probability distribution of flood discharge measurements, Water Resources
Research, 17(5), 1505-1509.
Potter, K.W. and Walker, J.F. (1985), An empirical study of flood measurement error, Water
Resources. Research, 21(3), 403-406.
Power, S., Tseitkin, F., Torok, S., Lavery, B., Dahni, R. and McAvaney, B. (1998), Australian
temperature, Australian rainfall and the Southern Oscillation, 1910-1992: coherent variability
and recent changes, Aust. Met. Mag., 47(2), 85-101.
Power, S., Casey, T., Folland, C., Colman, A. and Mehta, V. (1999), Inter-decadal modulation
of the impact of ENSO on Australia, Climate Dynamics, 15(5), 319-324.
Rossi, F., Fiorentino, M. and Versace, P. (1984), Two-component extreme value distribution
for flood frequency analysis. Water Resources Research, 20: 847-856.
Slack, J.R., Wallis, J.R. and Matalas, N.C. (1975), On the value of information in flood
frequency analysis, , Water Resources. Research, 11(5), 629-648.
Stedinger, J.R. (1983), Design events with specified flood risk, Water Resources Research,
19(2), 511-522.
Stedinger, J.R and Cohn, T.A. (1986), Flood frequency analysis with historical and
paleoflood information. Water Resources Research, 22: 785-793.
Tavares, L.V. and Da Silva, J.E. (1983), Partial series method revisited. Jour. of Hydrology,
64: 1-14.
Wallis, J.R. and Wood, E.F. (1985), Relative accuracy of log Pearson III procedures. Proc. Amer. Soc. Civ. Engrs, J. of Hydraul. Eng., 111(7), 1043-1056.
Wang, Q.J. (2001), A Bayesian joint probability approach for flood record augmentation,
Water Resources Research, 37(6), 1707-1712.
Wang, Q.J. (1996), Direct sample estimators of L-moments, Water Resources Research,
32(12), 3617-3619.
Wang, Q.J. (1997), LH moments for statistical analysis of extreme events, Water Resources
Research, 33(12), 2841-2848.
Chapter 3. Regional Flood Methods
Ataur Rahman, Khaled Haddad, George Kuczera, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
3.1. Introduction
Estimation of peak flows on small to medium sized rural catchments is required for the
design of culverts, small to medium sized bridges, causeways, soil conservation works and
for various planning and regulatory purposes. Typically, most design flood estimates for
projects on small to medium sized catchments are on catchments that are ungauged or have
little recorded streamflow data. In these cases, peak flow estimates can be obtained using a
Regional Flood Frequency Estimation (RFFE) approach, which transfers flood frequency
characteristics from a group of gauged catchments to the location of interest. Even in cases
where there is recorded streamflow data it is beneficial to pool the information in the gauged
record with the RFFE information. A RFFE technique is expected to be simple, requiring only
readily accessible catchment data to obtain design flood estimates relatively quickly.
The RFFE method described in this chapter ensures that design flood discharge estimates
are consistent with the gauged records and with results for other ungauged catchments in a
region. It is recognised that there will be considerable uncertainty in estimates for ungauged
catchments because of the limited number of gauged catchments available to develop the
method and the wide range of catchment types that exist throughout Australia.
In developing the RFFE technique, a number of criteria had to be satisfied. These criteria
included:
The basis for the development of the RFFE technique recommended herein, therefore, is a
national database consisting of 853 gauged catchments. These data were used to develop
and test the RFFE technique presented in this chapter. Further details of the development of
the database and RFFE technique are provided by the references noted in Book 3, Chapter
3, Section 16.
The following sections contain a description of the conceptual and statistical framework of
the adopted RFFE technique, a computer-based application tool, referred to as ‘RFFE Model
2015’, that implements the adopted RFFE technique and a number of worked examples to
demonstrate the application of the model.
Following the guidance provided in Book 1 and Book 3, Chapter 1, users of the RFFE
technique are reminded that there are alternatives to the RFFE technique.
The limitations of the method must be recognised. The RFFE technique has been developed
using the best available database of gauged catchments throughout Australia, but the fact
remains that only a small number of gauged catchments were available to represent the
wide range of conditions experienced over an area of about 7.5 million km2. Therefore, in
accordance with the guidance in Book 1, Chapter 1 of ARR, designers and analysts have a
duty to use an alternative technique if that technique can be shown to be superior to RFFE
Model 2015 and to utilise any available local data, both formal and informal to assist in
understanding local conditions and improve upon RFFE Model 2015 estimates. In comparing
and selecting alternative methods, the uncertainty in the observed flood data due to factors
such as limitations in record length and rating curve extrapolation should be recognised.
There has been little success in the identification of ‘acceptably homogeneous regions’ in
Australia using statistical measures such as those proposed by Hosking and Wallis (1993). A
common approach to defining a fixed region has been to base the region on political
boundaries. However, these fixed regions based on state borders and other geographical
boundaries have often been found to be highly heterogeneous (Bates et al., 1998; Rahman,
1997; Haddad, 2008).
As an alternative to fixed regions, Burn (1990a), Burn (1990b), and Zrinji and Burn (1994)
proposed the Region Of Influence (ROI) approach where a location of interest (i.e. the
catchment where flood quantiles are to be estimated) is allowed to form its own region by
selecting a group of ‘nearby catchments’ in either a geographical or catchment attributes
space (catchment attributes refer to the catchment characteristics that are influential in
developing flood flows). The ROI approach attempts to reduce the degree of heterogeneity
in a proposed region by excluding sites located remotely in geographical or catchment
attributes space. The ROI approach often uses a statistical criterion to select the optimum
size of the region, such as ‘minimum model error variance’ in the regression.
Alternative methods to the Probabilistic Rational Method, such as the Index Flood Method, rely heavily on the assumption of ‘regional homogeneity’, which as previously mentioned is
satisfied poorly for Australian regional flood data. Studies on regression based RFFE
techniques for Australia (e.g. (Hackelbusch et al., 2009; Haddad et al., 2008; Haddad et al.,
2009; Haddad et al., 2011; Haddad et al., 2012; Haddad and Rahman, 2012; Micevski et al.,
2015; Palmen and Weeks, 2009; Palmen and Weeks, 2011; Pirozzi et al., 2009; Rahman,
2005; Rahman et al., 2008; Rahman et al., 2009; Rahman et al., 2011a; Rahman et al.,
2011b; Rahman et al., 2012; Rahman et al., 2015a; Rahman et al, 2015; Rahman et al.,
2015b; Rahman et al., 2015c)) have demonstrated that these techniques are capable of
providing quite accurate design flood estimates using only a few predictor variables. In
particular, it has been found that the Generalised Least Squares (GLS) based regression
technique offers a powerful statistical method which accounts for the inter-station correlation
of annual maximum flood series and across-site variation in flood series record lengths in the
estimation of the flood quantiles. Use of a GLS-based regression method also allows
differentiation between sampling error and model error and thus provides a more realistic
framework for error analysis. The GLS based quantile regression technique has been
adopted in the US (see, for example, (Stedinger and Tasker, 1985; Tasker and Stedinger,
1989; Griffis and Stedinger, 2007)).
parameters are the mean, standard deviation and skewness of the natural logarithm of the
annual maximum flood series. The PRT offers three significant advantages over the quantile
regression technique:
2. It is straightforward to combine any at-site flood information with regional estimates (see
Book 3, Chapter 2) using the approach described by Micevski and Kuczera (2009) to
produce more accurate quantile estimates; and
The quality and representativeness of the flood data determine to a large degree the
accuracy and reliability of regional flood estimates. The challenge in collating a database for
RFFE lies in maximising the amount of useful flood information, while minimising the random
and systematic error (or ‘noise’) that may be present in some flood data.
In RFFE, various sources of errors in data and their effects on final flood estimates need to
be recognised. The accuracy of flood quantile estimates at each individual gauged site
depends largely on rating curve accuracy and record length at the individual site. Thus, the
selection of a minimum record length at an individual site in the region is a very important
step in any RFFE technique; the record length should be as long as possible while retaining
enough sites in the region to make the results of RFFE useful. Also, the flood data at each
site should satisfy a number of basic assumptions e.g. homogeneity, independence and
stationarity. In the case where these assumptions are violated, appropriate measures should
be taken; suitable techniques include GLS regression to account for the inter-station
correlation (e.g. (Stedinger and Tasker, 1985; Griffis and Stedinger, 2007)), and non-
stationary Flood Frequency Analysis to account for the impacts of climate change and
changes in catchment conditions during the period of record. The data preparation issues for
a RFFE technique are discussed in more detail by Haddad et al. (2010) and (Rahman et al.,
2015a; Rahman et al, 2015; Rahman et al., 2015b; Rahman et al., 2015c). In addition to the
consideration of the data available at individual gauging stations, it is important to also
consider the representativeness of the gauged data. The available gauged catchments
represent only a sample of the range of conditions that may occur throughout the regions
where the RFFE method is developed and applied.
The transfer of flood information from gauged to ungauged catchments relies on the ability to
identify a number of key catchment and climate characteristics which determine similarities
and differences in the flood production of catchments. In the Probabilistic Rational Method
similarity is assumed to exist on the basis of geographical proximity, but in other methods an
appropriately small but informative set of predictor variables needs to be identified. The
accuracy of a RFFE technique does not necessarily increase with the number of adopted
predictor variables. For example, Rahman et al. (1999) used 12 predictor variables in an L-
moments based Index Flood Method for south-east Australia. A subsequent study by
Rahman (2005) showed that use of only 2 to 3 predictor variables can provide a similar level
of accuracy.
Because a RFFE technique typically has limited predictive power, design flood estimates
produced by it are likely to have a lower degree of accuracy than those from a well calibrated
catchment modelling system. From the investigations made by (Rahman et al., 2009;
Rahman et al., 2012; Rahman et al., 2015a; Rahman et al, 2015), it may be stated that the
relative accuracy of regional flood estimates using the RFFE model presented in this chapter
is likely to be within ±50% of the true value; however, in a limited number of cases the estimate may be in error by a factor of two or more (see Book 3, Chapter
3, Section 7). It is unlikely that any RFFE technique would be able to provide flood quantile
estimates which are of much greater accuracy given the current availability of streamflow
data (in terms of temporal and spatial coverage) and feasibility of the extraction of a greater
number of catchment descriptors using simplified methods such as GIS based techniques.
Because of the small sample of gauged catchments and the limited availability of readily obtainable catchment descriptors, it is not possible to prepare a highly detailed set of descriptor variables covering all possible conditions. Instead, a set must be selected that spans a suitable range of the critical parameters while excluding variables that do not contribute significantly to the overall performance of the RFFE technique.
For catchments having limited recorded streamflow data, the combination of at-site data with
a RFFE technique is likely to provide more accurate flood quantile estimates than either the
at-site or regional method alone. Details of how limited streamflow data can be combined
with a RFFE technique are presented in Book 3, Chapter 2. Testing has shown this improves
estimates in many cases.
An important assumption in all RFFE techniques is that the small set of predictor variables
used in the regression equations is able to explain the differences in flood producing
characteristics of the catchments in a region. Not all ungauged catchments located in the
region satisfy this basic homogeneity assumption; some catchments may have
characteristics that are substantially different from the gauged catchments in the region.
Book 3, Chapter 3, Section 13 contains further discussion on the limits of applicability of the
RFFE technique, on what constitutes an atypical catchment and recommendations on how
to derive flood estimates for such catchments.
One of the apparent limitations of the ROI approach is that for each of the gauged sites in
the region, the regional prediction equation has a different set of model parameters; hence a
single regional prediction equation cannot be pre-specified. To overcome this problem, the
parameters of the regional prediction equations for all the gauged catchment locations are
pre-estimated and integrated with the RFFE Model 2015 (see Book 3, Chapter 3, Section 14
for more details). To derive flood quantile estimates at an ungauged location of interest, the
RFFE Model 2015 uses a natural neighbour interpolation method to derive quantile
estimates based on up to the 15 nearest gauged catchment locations within a 300 km radius
from the location of interest. This ensures a smooth variation of flood quantile estimates over space.
The flood quantile QX for X% AEP is estimated from the regional LP III parameters as:

ln(QX) = M + KX S

where

M is the mean of the natural logarithms of the annual maximum flood series,
S is the standard deviation of the natural logarithms of the annual maximum flood series,
and
KX is the frequency factor of the LP III distribution for X% AEP, which is a function of the AEP and the skewness (SK) of the natural logarithms of the annual maximum flood series.
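As an illustration of this calculation, the following Python sketch computes flood quantiles from given values of M, S and SK, with the frequency factor taken as the standardised Pearson Type III deviate obtained from scipy; the parameter values shown are hypothetical placeholders, not outputs of the RFFE Model 2015.

```python
# Minimal sketch: LP III flood quantile from regional parameters M, S and SK.
# The parameter values below are hypothetical placeholders.
import numpy as np
from scipy.stats import pearson3

def lp3_quantile(aep_percent, M, S, SK):
    """Flood quantile (m3/s) for a given AEP (%), from the mean (M), standard
    deviation (S) and skewness (SK) of the log annual maximum flood series."""
    # KX: standardised Pearson Type III deviate (frequency factor) for the
    # non-exceedance probability corresponding to the AEP, given the skew SK.
    KX = pearson3.ppf(1.0 - aep_percent / 100.0, SK)
    return float(np.exp(M + KX * S))

M, S, SK = 4.2, 1.1, -0.3                      # hypothetical parameter values
for aep in (50, 20, 10, 5, 2, 1):
    print(f"{aep}% AEP: {lp3_quantile(aep, M, S, SK):.1f} m3/s")
```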
The prediction equations for M, S and SK were developed for all the gauged catchment
locations in the Humid Coastal areas using Bayesian GLS regression and model parameters
were noted. These model parameters are then integrated with the RFFE Model 2015.
yi = β0 + Σj=1..k βj xij + δi ,    i = 1, 2, ..., n    (3.3.2)
where xij (j = 1, ..., k) are explanatory variables, βj are the regression coefficients, δi is the model error, which is assumed to be normally and independently distributed with model error variance σδ², and n is the number of locations in the region. In all cases, an at-site estimate of yi, denoted ŷi, is available. To account for the error in the at-site estimate, a sampling error ηi must be introduced into the model so that:

ŷi = yi + ηi = β0 + Σj=1..k βj xij + δi + ηi    (3.3.3)

Thus the observed regression model error ε is the sum of the model error δ and the sampling error η. The total error vector has a mean of zero and a covariance matrix:

E[(δ + η)(δ + η)ᵀ] = σδ² I + Σ(ŷ)    (3.3.4)

where Σ(ŷ) is the covariance matrix of the sampling error in the estimate of the flood quantile
or the parameter of the LP III distribution and I is a (n x n) identity matrix. The covariance
matrix for ηi depends on the record length available at each location and the cross
correlation among annual maximum floods at different locations. Therefore, the observed
regression model error is a combination of time-sampling error ηi and an underlying model
error δi.
The GLS estimator of β and its covariance matrix for a known σδ² is given by:

β̂GLS = [Xᵀ(σδ²I + Σ(ŷ))⁻¹X]⁻¹ Xᵀ(σδ²I + Σ(ŷ))⁻¹ ŷ    (3.3.5)

Var[β̂GLS] = [Xᵀ(σδ²I + Σ(ŷ))⁻¹X]⁻¹    (3.3.6)
The model error variance σδ² can be estimated by either generalised Method of Moments
(MOM) or maximum likelihood estimators. The MOM estimator is determined by iteratively
solving Equation (3.3.5) along with the generalised residual mean square error equation:
(ŷ − Xβ̂GLS)ᵀ [σ̂δ²(MOM) I + Σ(ŷ)]⁻¹ (ŷ − Xβ̂GLS) = n − (k + 1)    (3.3.7)
In some situations, the sampling covariance matrix explains all the variability observed in the
data, which means the left-hand side of Equation (3.3.7) will be less than n – (k + 1) even if
σδ² is zero. In these circumstances, the MOM estimator of the model error variance is
generally taken to be zero.
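The following Python sketch illustrates the estimator in Equation (3.3.5) to Equation (3.3.7), using bisection to solve the method of moments equation for the model error variance. The predictor matrix, at-site estimates and sampling covariance matrix shown are synthetic placeholders, not RFFE data, and the upper bound used in the bisection is an arbitrary assumption.

```python
import numpy as np

def gls_fit(X, y_hat, Sigma_yhat, upper=10.0):
    """Method of moments GLS fit following Equations (3.3.5) to (3.3.7).

    X           (n x (k+1)) matrix of predictors, first column all ones
    y_hat       (n,) at-site estimates of the flood quantile or LP III parameter
    Sigma_yhat  (n x n) sampling covariance matrix of y_hat
    """
    n, p = X.shape                                   # p = k + 1

    def solve(sigma2_delta):
        Lam_inv = np.linalg.inv(sigma2_delta * np.eye(n) + Sigma_yhat)
        cov_beta = np.linalg.inv(X.T @ Lam_inv @ X)  # Equation (3.3.6)
        beta = cov_beta @ X.T @ Lam_inv @ y_hat      # Equation (3.3.5)
        r = y_hat - X @ beta
        return r @ Lam_inv @ r, beta, cov_beta       # LHS of Equation (3.3.7)

    target = n - p                                   # n - (k + 1)
    if solve(0.0)[0] <= target:
        sigma2_delta = 0.0   # sampling error already explains all the variability
    else:
        lo, hi = 0.0, upper  # bisection on the model error variance
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if solve(mid)[0] > target else (lo, mid)
        sigma2_delta = 0.5 * (lo + hi)

    _, beta, cov_beta = solve(sigma2_delta)
    return beta, cov_beta, sigma2_delta

# Synthetic illustration with three sites and one predictor (plus intercept)
X = np.array([[1.0, 4.6], [1.0, 5.3], [1.0, 6.1]])
y_hat = np.array([3.1, 3.9, 4.8])
Sigma = np.diag([0.04, 0.02, 0.03])
print(gls_fit(X, y_hat, Sigma))
```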
With the adopted Bayesian approach, it was assumed that there was no prior information on
any of the β parameters; thus a multivariate normal distribution with mean zero and a large
variance (e.g. greater than 100) was used as a prior for the regression coefficient
parameters. This prior was considered to be almost non-informative, which produced a
probability distribution function that was generally flat in the region of interest. The prior
information for the model error variance σδ² was represented by a one parameter
exponential distribution. Further description of the adopted Bayesian GLS regression can be
found in Haddad et al. (2011) and Haddad and Rahman (2012).
Figure 3.3.1. Geographical Distribution of the Adopted 798 Catchments from Humid Coastal
Areas of Australia and 55 Catchments from Arid/Semi-arid Areas
The selected streams were unregulated since major regulation (e.g. a large dam on the
stream) affects the rainfall-runoff relationship significantly by increasing storage effects.
Streams with minor regulation, such as small farm dams and diversion weirs, were not
excluded because this type of regulation is unlikely to have a significant effect on large
annual floods. Gauging stations on streams subject to major upstream regulation were
excluded from the data set. Catchments with more than 10% of the area affected by
urbanisation were also excluded from the study data set. Catchments known to have
undergone major land use changes, such as the clearing of forests or changing of
agricultural practices over the period of streamflow records were excluded from the data set.
Stations graded as 'poor quality' or with specific comments by the gauging authority
regarding quality of the data were assessed in greater detail; if stations were deemed 'low
quality' they were excluded.
The annual maximum flood series data may be affected by multi-decadal climate variability
and climate change, which are not easy to deal with. The effects of multi-decadal climate
variability can be accounted for by increasing the cut-off record length at an individual
station. However, the impacts of climate change present a serious problem in terms of the
applicability of the past data in predicting future flood frequency; this requires further
research (Ishak et al., 2013). This is further discussed in Book 3, Chapter 3, Section 10.
The data sets for the initially selected potential catchments were further examined, as detailed in Haddad et al. (2010), Rahman et al. (2015a), Rahman et al. (2015), and Rahman et al. (2015b): gaps in the annual maximum flood series were filled as far as could be justified, outliers were detected using the multiple Grubbs-Beck test (Lamontagne et al., 2013; Cohn et al., 2013), errors associated with the extrapolation of rating curves were investigated, and the presence of trends in the data was checked. From an initial number of approximately
1200 catchments, a total of 798 catchments were finally adopted from all over Australia
(excluding catchments in the arid/semi-arid areas, where some of the above criteria were
relaxed, as discussed in Section 5).
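As a simple illustration of one such check, the sketch below applies a rank correlation (Kendall's tau, closely related to the Mann-Kendall test) to a synthetic annual maximum series; the series and the 5% significance level are assumptions for illustration only and are not recommendations of this guideline.

```python
# Illustrative trend check on an annual maximum flood series using Kendall's
# tau; all data values below are synthetic.
import numpy as np
from scipy.stats import kendalltau

years = np.arange(1975, 2015)
rng = np.random.default_rng(1)
annual_max = rng.lognormal(mean=4.0, sigma=0.8, size=years.size)  # synthetic record

tau, p_value = kendalltau(years, annual_max)
print(f"Kendall tau = {tau:.3f}, p-value = {p_value:.3f}")
if p_value < 0.05:                                   # assumed significance level
    print("Possible trend: investigate before using the record in RFFE.")
```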
The record lengths of the annual maximum flood series of these 798 stations range from 19
to 102 years (median: 37 years). The catchment areas of the selected 798 catchments range
from 0.5 km² to 4325 km² (median: 178 km²). Table 3.3.1 provides a summary of the selected catchments from the Humid Coastal areas of Australia.
Table 3.3.1. Summary of Adopted Catchments from Humid Coastal Areas of Australia
State | No. of Stations | Streamflow Record Length (years) (range and median) | Catchment Size (km²) (range and median)
New South Wales & Australian Capital Territory | 176 | 20 – 82 (34) | 1 – 1036 (204)
Victoria | 186 | 20 – 60 (38) | 3 – 997 (209)
South Australia | 28 | 20 – 63 (37) | 0.6 – 708 (62.6)
Tasmania | 51 | 19 – 74 (28) | 1.3 – 1900 (158.1)
Queensland | 196 | 20 – 102 (42) | 7 – 963 (227)
Western Australia | 111 | 20 – 60 (30) | 0.5 – 1049.8 (49.2)
Northern Territory | 50 | 19 – 57 (42) | 1.4 – 4325 (352)
TOTAL | 798 | 19 – 102 (37) | 0.5 – 4325 (178)
The at-site Flood Frequency Analyses were conducted using the FLIKE software (Kuczera,
1999). The Potential Influential Low Flows (PILFs) were identified using multiple Grubbs-
Beck test (Lamontagne et al., 2013) and were censored in the Flood Frequency Analysis. A
Bayesian parameter estimation procedure with LP III distribution was used to estimate flood
quantiles for each gauged site for AEPs of 50%, 20%, 10%, 5%, 2% and 1%.
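The sketch below is a highly simplified, non-Bayesian stand-in for this at-site fitting step, estimating M, S and SK by simple sample moments of the log flows; it does not reproduce the Bayesian inference or PILF censoring performed in FLIKE, and the flow record is synthetic.

```python
# Simplified stand-in for the at-site parameter estimation step: sample
# moments of the log flows only (no Bayesian fit, no PILF censoring).
import numpy as np
from scipy.stats import skew

def lp3_log_moments(annual_max):
    """Sample M, S and SK of the natural logarithms of an annual maximum series."""
    logs = np.log(np.asarray(annual_max, dtype=float))
    return logs.mean(), logs.std(ddof=1), skew(logs, bias=False)

rng = np.random.default_rng(42)
flows = rng.lognormal(mean=4.5, sigma=0.9, size=35)   # synthetic record
print(lp3_log_moments(flows))   # feeds the frequency factor calculation shown earlier
```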
The following predictor variables were adopted in developing the prediction equations:
(i) catchment area (in km²);
(ii) design rainfall intensity at catchment centroid (in mm/h) for the 6 hour duration and 50%
AEP (50%I6h);
(iii) design rainfall intensity at catchment centroid (in mm/h) for the 6 hour duration and 2%
AEP (2%I6h);
(iv) ratio of design rainfall intensities for AEPs of 2% and 50% for the 6 hour duration (2%I6h/50%I6h); and
(v) shape factor, which is defined as the shortest distance between catchment outlet and
centroid divided by the square root of catchment area.
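For illustration, the shape factor can be computed directly from the outlet and centroid coordinates. The sketch below assumes a spherical-earth (haversine) distance and uses the Wollomombi catchment values from Table 3.3.8 purely as an example; small differences from the tabulated shape factor may arise from the distance formula adopted.

```python
# Sketch of the shape factor calculation from outlet and centroid coordinates,
# assuming a spherical-earth (haversine) distance.
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * radius_km * math.asin(math.sqrt(a))

def shape_factor(outlet_lat, outlet_lon, centroid_lat, centroid_lon, area_km2):
    # Shortest distance between outlet and centroid divided by sqrt(catchment area)
    return haversine_km(outlet_lat, outlet_lon, centroid_lat, centroid_lon) / math.sqrt(area_km2)

print(shape_factor(-30.478, 152.026, -30.352, 151.936, 376.0))
```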
Design rainfall values were extracted from the new Intensity Frequency Duration (IFD) data
for Australia as discussed in Book 2, Chapter 3 of ARR. Table 3.3.2 provides the distribution
of shape factors for the selected catchments.
Table 3.3.3. Details of RFFE Technique for Humid Coastal Areas of Australia
S = c0 + c1 ln(I6,2/I6,50)    (3.3.9)

SK = d0 + d1 ln(area) + d2 ln(I6,2/I6,50) + d3 ln(I6,2)    (3.3.10)
I6,50 is the design rainfall intensity (mm/h) at catchment centroid for 6 hour duration and 50%
AEP;
shape factor is the shortest distance between the catchment outlet and centroid divided by the square root of catchment area; and
I6,2 is the design rainfall intensity (mm/h) at catchment centroid for 6 hour duration and 2%
AEP.
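The sketch below simply evaluates Equation (3.3.9) and Equation (3.3.10) at an ungauged location; the coefficient values c0, c1 and d0 to d3, and the catchment inputs, are hypothetical placeholders, since the actual coefficients are embedded in the RFFE Model 2015.

```python
# Evaluating the parameter prediction equations at an ungauged location;
# all coefficient and input values are hypothetical placeholders.
import math

c0, c1 = 1.05, 0.30                        # hypothetical coefficients, Equation (3.3.9)
d0, d1, d2, d3 = -0.20, 0.05, -0.40, 0.10  # hypothetical coefficients, Equation (3.3.10)

area, I6_50, I6_2 = 250.0, 5.2, 11.8       # km2 and mm/h, hypothetical inputs
ratio = I6_2 / I6_50

S = c0 + c1 * math.log(ratio)                                               # Equation (3.3.9)
SK = d0 + d1 * math.log(area) + d2 * math.log(ratio) + d3 * math.log(I6_2)  # Equation (3.3.10)
print(S, SK)
```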
For Regions 1 and 2, only the model intercepts were used in Equation (3.3.9) and Equation (3.3.10), which implies a regression equation without any predictor variable. Here, the weighted average values of S and SK were adopted, with the record lengths at the stations within the ROI sub-region used as the basis for determining the weights.
The values of b0, b1, b2, b3, c0, c1, d0, d1, d2 and d3 at all the 798 individual gauged
catchment locations (in the Humid Coastal areas) were estimated and embedded in the
RFFE Model 2015. To derive flood quantile estimates at an ungauged location of interest,
the RFFE Model 2015 uses a natural neighbour interpolation method to derive quantile
estimates based on up to the 15 nearest gauged catchment locations within a 300 km radius
from the location of interest. This ensures a smooth variation of flood quantile estimates over
space.
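The sketch below is a simplified stand-in for this interpolation step: it uses inverse-distance weighting over the nearest gauged locations rather than the natural neighbour interpolation actually implemented in the RFFE Model 2015, and the site coordinates and quantile values are synthetic.

```python
# Simplified illustration only: inverse-distance weighting over the nearest
# gauged sites, not the natural neighbour method used by the RFFE Model 2015.
import numpy as np

def interpolate_quantile(target_xy, site_xy, site_q, max_sites=15, max_dist_km=300.0):
    d = np.linalg.norm(site_xy - target_xy, axis=1)      # distances in km
    nearest = np.argsort(d)[:max_sites]
    nearest = nearest[d[nearest] <= max_dist_km]
    w = 1.0 / np.maximum(d[nearest], 1e-6)               # inverse-distance weights
    return float(np.sum(w * site_q[nearest]) / np.sum(w))

sites = np.array([[10.0, 5.0], [40.0, -20.0], [120.0, 60.0]])  # synthetic eastings/northings (km)
q10 = np.array([85.0, 140.0, 60.0])                            # synthetic 10% AEP estimates (m3/s)
print(interpolate_quantile(np.array([0.0, 0.0]), sites, q10))
```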
Transmission losses (i.e. losses occurring from flow in rivers and other drainage channels)
may also result in discharge reducing in a downstream direction, particularly in the lower
river reaches of larger catchments in arid areas. The special flooding characteristics of
catchments in arid/semi-arid areas make it desirable to treat them separately from
catchments in more humid areas. In arid/semi-arid areas, annual maximum flood series
generally contain many zero values; these values were censored in flood frequency analysis.
In ARR 1987 (Pilgrim, 1987), only a few catchments were used from arid/semi-arid areas to
develop RFFE methods. Since the publication of ARR 1987, there has been little
improvement in terms of streamflow data availability in most of the arid/semi-arid areas of
Australia. There are a number of reasons for poor stream gauging coverage and quality in
arid/semi-arid areas, as follows:
1. Poorly defined water course catchments, meaning that the water courses are hard to
define and therefore gauge.
2. Large floods may be outside the water course, meaning that the cross-section may be
hard to measure and the flow may be difficult to gauge.
3. Infrequent flood events, meaning that the gauge may be operating for extended periods of
time without any flow.
4. Remote location, making it difficult to take velocity measurements during the flood events.
5. Little incentive to gauge flows because of the perceived limited water resources and the
limited demand for development of these resources.
In the preparation of the Regional Flood Frequency Estimation database, only a handful of
catchments from the arid/semi-arid areas satisfied the selection criteria. To increase the
number of stations from the arid/semi-arid areas to develop a RFFE method, the selection
criteria were relaxed i.e. the threshold streamflow record length was reduced to 10 years and
the limit of catchment size was increased from 1000 km² to 6000 km². These criteria resulted
in the selection of 55 catchments from the arid/semi-arid areas of Australia (Figure 3.3.2 and
Table 3.3.4). The selected catchments have average annual rainfall less than 500 mm. The
catchment areas range from 0.1 to 5975 km² (median: 259 km²) and streamflow record
lengths range from 10 to 46 years (median: 27 years).
led to the formation of two regions from the 55 arid/semi-arid catchments: Region 6 (11
catchments from the Pilbara area of WA) and Region 7 (44 catchments from all other arid
areas except Pilbara) (see Figure 3.3.2 for the extent of these two arid/semi-arid regions and
Table 3.3.5 for other details).
A prediction equation was developed for Q10 as a function of catchment characteristics, and
regional growth factors were developed based on the estimated at-site flood quantile. In the
arid areas, significant storm events do not typically occur every year, and some of these
events do not produce significant floods. The at-site Flood Frequency Analyses for the arid
catchments was conducted using the FLIKE software (Kuczera, 1999). The Potential
Influential Low Flows (PILFs) were identified using multiple Grubbs-Beck test (Lamontagne
et al., 2013) and were censored in the Flood Frequency Analysis. A Bayesian parameter
estimation procedure with LP III distribution was used to estimate flood quantiles for each
gauged site for AEPs of 50%, 20%, 10%, 5%, 2% and 1%. It should be noted that the Flood
Frequency Analysis procedure adopted in the Humid Coastal and arid areas was the same.
The Qx/Q10 values were estimated at individual stations; the weighted average of these
values (weighting was done based on record length at individual stations) over all the
stations in a region then defined the Growth Factors (GFx) for the region.
The adopted prediction equation expresses the index variable Q10 as a function of catchment area and design rainfall intensity, where b0, b1 and b2 are regression coefficients, estimated using ordinary least squares
regression; area represents catchment area in km², and I6,50 is the design rainfall intensity
(mm/h) at catchment centroid for 6 hour duration and 50% AEP. The values of b0, b1 and b2
and the regional Growth Factors (GFx) are embedded into the RFFE Model 2015.
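The overall structure of the arid/semi-arid estimation can be sketched as below, assuming a log-linear form for the Q10 prediction equation; the coefficients, growth factor ratios and record lengths shown are hypothetical placeholders.

```python
# Sketch of the arid/semi-arid structure: an index flood Q10 (log-linear form
# assumed here) scaled by record-length-weighted regional growth factors.
# All coefficient and data values are hypothetical placeholders.
import numpy as np

def q10_index(area_km2, I6_50, b0=-1.5, b1=0.75, b2=1.2):   # hypothetical b0, b1, b2
    return float(np.exp(b0 + b1 * np.log(area_km2) + b2 * np.log(I6_50)))

def regional_growth_factors(qx_over_q10_by_site, record_lengths):
    """Record-length-weighted average of the Qx/Q10 ratios over all sites in a region."""
    return np.average(np.asarray(qx_over_q10_by_site, dtype=float),
                      axis=0, weights=np.asarray(record_lengths, dtype=float))

ratios = [[1.0, 2.4, 3.1],          # site 1: Qx/Q10 for AEPs of 10%, 2% and 1%
          [1.0, 2.1, 2.8]]          # site 2
GF = regional_growth_factors(ratios, record_lengths=[32, 18])
print(q10_index(500.0, 4.0) * GF)   # quantiles for the three AEPs at an ungauged site
```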
For catchments located close to these regional boundaries, seven fringe zones were delineated, as shown in Figure 3.3.2. The boundary of each fringe zone with the Humid Coastal region was approximately defined by the 500 mm mean annual rainfall isohyet, while the other side was defined by the 400 mm mean annual rainfall isohyet. In drawing the boundaries, some minor adjustments were made to make them as smooth as possible.
For these fringe zones, the flood estimate at an ungauged catchment location is taken as the
inverse distance weighted average value of the flood estimates based on the two nearest
regions. The method is embedded into the RFFE Model 2015.
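A minimal sketch of this fringe zone rule is given below; the quantile values and distances are hypothetical.

```python
# Sketch of the fringe-zone rule: an inverse distance weighted average of the
# estimates from the two nearest regions. All values below are hypothetical.
def fringe_zone_estimate(q_a, dist_a_km, q_b, dist_b_km):
    w_a, w_b = 1.0 / dist_a_km, 1.0 / dist_b_km
    return (w_a * q_a + w_b * q_b) / (w_a + w_b)

# e.g. 35 km to the Humid Coastal based estimate, 60 km to the arid based estimate
print(fringe_zone_estimate(120.0, 35.0, 45.0, 60.0))
```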
The reliability of the RFFE flood quantile confidence limits described in Book 3, Chapter 3,
Section 3 was assessed empirically using standardised quantile residuals. The quantile
residual is the difference between the logarithm of flood quantile estimates obtained using
at-site Flood Frequency Analysis and the RFFE technique. The standardised quantile
residual is the quantile residual divided by its standard deviation which is the square root of
the sum of the RFFE predictive variance of the flood quantile and at-site quantile variance
(Haddad and Rahman, 2012; Micevski et al., 2015). This accounts for both the model error
(e.g. inadequacy of the RFFE model) and the sampling error (e.g. due to limitations in
streamflow record length). If the uncertainty in the log quantile estimates has been
adequately described, the standardised quantile residuals should be consistent with a
standard normal distribution.
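The calculation of standardised quantile residuals can be sketched as follows, assuming that log-space quantile estimates and their variances are available at each test catchment; the values shown are synthetic and far fewer sites are used than in the actual assessment.

```python
# Computing standardised quantile residuals; all values below are synthetic.
import numpy as np
from scipy.stats import anderson

ln_q_ffa = np.array([4.9, 5.6, 3.8, 6.2, 5.1])       # at-site FFA estimates (log space)
ln_q_rffe = np.array([5.1, 5.3, 4.1, 6.0, 5.4])      # RFFE estimates (log space)
var_ffa = np.array([0.05, 0.04, 0.08, 0.03, 0.06])   # at-site quantile variances
var_rffe = np.array([0.10, 0.09, 0.12, 0.07, 0.11])  # RFFE predictive variances

std_resid = (ln_q_ffa - ln_q_rffe) / np.sqrt(var_rffe + var_ffa)
print(std_resid)
print(anderson(std_resid, dist='norm'))              # Anderson-Darling normality check
```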
Figure 3.3.3 shows the plots of standardised residuals vs. normal scores for Region 1 for
AEPs of 10% and 5% (the plots for other AEPs for Region 1 are provided in Book 3, Chapter
3, Section 16). Plots for other regions can also be seen in Rahman et al. (2015). These
residuals were estimated without bias correction (details of bias correction are provided in
Book 3, Chapter 3, Section 8). These plots include 558 catchments from Region 1 used in
the RFFE model and some 28 catchments which were excluded from the final RFFE model
data set based on the results of preliminary analysis and unusual characteristics such as
significant natural floodplain storage. Figure 3.3.3 reveals that most of the 558 catchments
closely follow a 1:1 straight line indicating that the assumption of normality of the residuals
was not inconsistent with the evidence; this is supported by the application of the Anderson-
Darling and Kolmogorov-Smirnov tests which showed that the assumption of the normality of
the residuals cannot be rejected at the 10% level of significance. Under the assumptions of
normality, approximately 95% of the standardised quantile residuals should lie between ± 2,
which is largely satisfied. There were a few catchments with standardised residual values
close to ± 3. These correspond to instances where the RFFE confidence limits may not be
reliable; an example of such a catchment is provided in Book 3, Chapter 3, Section 15. The
same conclusion applies to the other Humid Coastal regions.
Figure 3.3.3. Standardised Residuals vs. Normal Scores for Region 1 Based on Leave One
Out Validation for AEPs of 10% and 5%
The main conclusion from this analysis is that the quantification of uncertainty in the quantile
estimates by the RFFE technique is reliable for the vast majority of the cases. Figure 3.3.3
and Book 3, Chapter 3, Section 15 serve as a reminder that some catchments may not be
adequately represented by the catchments used in the RFFE analysis. Users of the RFFE
Model 2015 should check that the catchment of interest is not atypical compared with the
gauged catchments included in the ROI used to develop the RFFE estimate. To assist users
in this regard the RFFE Model 2015 discussed in Book 3, Chapter 3, Section 14 and Book 3,
Chapter 3, Section 13 lists the RFFE Model gauged catchments located nearest to the
ungauged catchment of interest.
The accuracy of the flood quantile estimates provided by the RFFE technique is evaluated
by using the Relative Error (RE) defined as:
RE (%) = (QRFFE − QFFA) / QFFA × 100    (3.3.13)
where QRFFE is the flood quantile estimate for a given site for a given AEP by the RFFE
technique; and QFFA is the flood quantile estimate from the at-site Flood Frequency Analysis
(see Book 3, Chapter 3, Section 4 for details).
It should be noted that the Relative Error given by Equation (3.3.13) makes no allowance for
the fact that the at-site flood frequency estimates are themselves subject to sampling error.
Therefore, this error should be seen as an upper bound on the true relative error.
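The sketch below evaluates Equation (3.3.13) and the median absolute relative error summary used to report the leave-one-out results; the quantile values are synthetic.

```python
# Relative Error as defined in Equation (3.3.13), summarised as the median
# absolute RE used to report leave-one-out validation; synthetic values only.
import numpy as np

def relative_error_percent(q_rffe, q_ffa):
    q_rffe, q_ffa = np.asarray(q_rffe, dtype=float), np.asarray(q_ffa, dtype=float)
    return (q_rffe - q_ffa) / q_ffa * 100.0

q_ffa = np.array([150.0, 80.0, 310.0, 45.0])    # at-site FFA quantiles (m3/s)
q_rffe = np.array([190.0, 60.0, 360.0, 70.0])   # leave-one-out RFFE quantiles (m3/s)

re = relative_error_percent(q_rffe, q_ffa)
print(re)                                       # per-site RE (%)
print(np.median(np.abs(re)))                    # median absolute RE, as in Table 3.3.6
```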
It should be noted here that leave-one-out (LOO) validation is a more rigorous technique than split-sample validation, where the model is tested on a smaller number of catchments (e.g.
10% of the total catchments). Hence, the relative error that is generated by LOO is expected
to be higher than if split-sample validation were used. The medians of the absolute relative
error values from the LOO validation for different regions are reported in Table 3.3.6. It can
be seen that for the Humid Coastal regions, Region 5 (SW WA) has the highest relative error
(59 to 69%) and Region 3 (Humid SA) has the smallest relative error (33 to 41%).
Table 3.3.6. Upper bound on (absolute) median relative error (RE) from leave-one-out validation of the RFFE technique (without considering bias correction)
Median RE (%) by AEP
Region | 50% | 20% | 10% | 5% | 2% | 1%
Region 1: East Coast | 51 | 49 | 52 | 53 | 57 | 59
Region 2: Tasmania | 53 | 46 | 46 | 46 | 46 | 45
Region 3: Humid SA | 38 | 39 | 33 | 35 | 39 | 41
Region 4: Top End NT and Kimberley | 33 | 36 | 36 | 38 | 39 | 47
Region 5: SW WA | 61 | 59 | 66 | 68 | 68 | 69
Region 6: Pilbara | 35 | 37 | 35 | 42 | 37 | 43
Region 7: Arid and Semi-arid | 63 | 67 | 67 | 61 | 57 | 49
It must be recognised however that this is a small number of gauges in the 7.7 million km2
area of Australia. In addition to the sparse gauge coverage, there is a significant variation in
catchment types across Australia. The RFFE Model must use the available data to develop a
regional procedure that can estimate flood quantiles with the best possible accuracy.
Relevant catchment characteristics therefore may not be represented sufficiently to allow
inclusion in the regional relationship.
There are insufficient gauges to provide a representative coverage of all catchment types
throughout Australia. This is a particular concern in the arid and semi-arid interior.
The regional relationship implemented in ARR has therefore used only characteristics that
are sampled sufficiently. These characteristics are catchment area, rainfall intensity
parameters and shape factor. Other factors, such as land use, slope, soils, geology or
vegetation, are not sampled sufficiently to allow inclusion in the parameters used for the
procedure, even though they are known to be important in the estimation of flood quantiles.
The RFFE Model therefore may be regarded as providing a “generic” estimate of flood
quantiles for a range of typical catchment types. It may be expected that different flood
estimates would be derived for other catchment types that have catchment characteristics
that are dissimilar to those used in development of the method. If there were a larger sample
of gauged catchments in Australia and they sampled a wider variety of catchment types, it is
expected that the RFFE Model would incorporate a wider range of catchment characteristics
and would therefore be capable of more reliable estimates for a wider range of catchments.
The procedure is based on the analysis of the selected catchment characteristics, but
improvements in the procedure are possible with additional analysis.
Imposing a higher data quality standard will reduce the number of gauges for analysis while
lowering the standard will result in greater errors in the data used to develop the procedure.
For the development of the RFFE Model, the decision has been to impose a minimum data
quality standard, to ensure a maximum number of gauges were included.
It must be noted, though, that such data inaccuracies could not be identified during the
development of the RFFE Model as the analysis involved a large number of stations and
detailed assessment of each of these was not feasible. The meta-data and other
documentation provided by the water agencies was not sufficiently detailed to allow a routine
assessment of data quality, and it was necessary to rely on local advice as to the suitability
of the data used.
The inclusion of this data may impact on the quality of the regional results, but does improve
the representativeness of the data. While it is impossible to quantify this impact, the possible
impact needs to be borne in mind.
Longer periods of record are more likely to represent the variability in flood magnitude and
also more likely to sample large floods that occur infrequently. However, even with a record
length of more than 20 years, it is possible that the record may not sample a wide range of
flood sizes and there is still a potential source of inaccuracy when extrapolating to larger
floods. The samples from different time periods may also affect the flood quantiles since it is
well known that there are longer term variations in the distribution of flood maxima due to
inter-decadal variability in climate. This may result in differences in the design flood quantiles
that are purely a result of the sample period rather than a real difference in flood probability.
However, given the accuracy considerations discussed above, some additional testing and
review is recommended.
The additional testing and review can include the following processes:
• Review the catchment characteristics for the catchment being analysed and assess
whether this catchment is typical for catchments that have been used in the development
of the method: This review needs to consider the catchments in the local area, or
elsewhere in Australia. The most relevant characteristics to review include the catchment
shape, slope, soils and vegetation. The extent of floodplain storage, either natural or
artificial needs to be reviewed. If the target catchment has features that are distinctly
different from the range found in “typical” gauged catchments, the results from the RFFE
Model can be either discarded or adjusted to allow for the local conditions.
• Review the nearby catchments listed in the RFFE Model: These are the nearest gauged
catchments that have been used in the development of the procedure and are the gauges that will influence the results of the Region of Influence calculations.
This review will need to consider whether these nearby catchments are similar to the
target catchment or if there are any apparent outliers in this group. As with the review of
catchment characteristics, the review of the nearby catchments in the RFFE Model could
lead to an adjustment in the results, or the decision for the flood quantiles for the target
catchment to be transposed directly (allowing for differences in catchment area) from a
nearby catchment that is most similar to the target catchment.
• Review any locally available data: This data may be limited and anecdotal, such as an observation of the frequency of overtopping of a bridge, for example. Depending on the findings of the check of local observations, the RFFE Model
results may need adjustment. The results of the RFFE Model calculation can be compared
with the local observation to at least ensure that the calculated design flood quantiles are
consistent with the local observations.
The conclusion of this additional analysis is that the calculated RFFE Model flood quantiles
are not perfect, though they do provide the means to develop estimates that are consistent
with a large body of gauged records. Further consideration of the results and possible
adjustment will help to ensure a better estimate of the design flood quantiles needed for
inclusion in the analysis.
While the RFFE Model is appropriate for catchment types represented by the gauged
catchments used to develop this model, there are catchments where the method either
cannot be applied or where there may be gross error in the flood quantile estimates if the
RFFE Model were applied without adjustments. These catchment types are described below.
Therefore, the RFFE Model 2015 cannot be applied for any catchment where urbanisation
accounts for more than 10% of the catchment area, or where it is considered that there may be a significant urbanisation effect on flood discharges.
If a catchment has a dam where there will be an impact on flood discharges, the RFFE
Model cannot be applied directly. In this case, the RFFE Model can be used to calculate
design flood discharges for the “natural” catchment and then the effects of the dam can be
included by the application of a runoff-routing model where the storage effects can be
modelled directly.
In cases where mining is judged to impact a significant proportion of a catchment, the RFFE
Model can only be applied on the natural portions of the catchment. The area affected by
mining activities should be modelled by an alternative method. Assessment of the water
balance affecting the volume of runoff is needed to ensure that the effects of water control in
ponds and mine pits, and infiltration in highly modified catchments, are represented correctly.
The catchment response is also affected by drainage works and diversions. Calculation of
runoff from mining areas is a complex and specialised exercise and detailed understanding
of individual conditions is necessary. In general, a detailed runoff-routing model is needed as
well as a good understanding of the water balance in these situations. However, direct
calibration of runoff routing models on ungauged catchments is not possible, which may
result in grossly inaccurate design flood estimates.
Catchments affected by intense agricultural activities are similar to those affected by mining
and similar analysis methods are needed to calculate flood discharges.
In this case the RFFE Model cannot be applied directly but additional analysis is needed to
assess the catchment characteristics and then prepare a runoff-routing model to adjust the
generic RFFE Model results using the catchment storage and channel characteristics.
The RFFE Model can be applied to small catchments with no lower limit, though it is
recognised that there are only a few gauged catchments smaller than 10 km2 included in the
database to develop the RFFE Model. Because of the limited available data, it is likely that
there will be a greater degree of error in the quantile estimates for these smaller catchments.
The RFFE Model should not be applied for catchments larger than 1000 km2 because these
larger catchments were not generally used in the development of the method.
It is known that there are many other catchment characteristics that influence catchment
response and flood discharges. These include factors such as:
• Catchment land use including vegetation coverage: The extent of clearing or forest cover
can have a significant impact, especially in the south-west of Western Australia, for
example.
• Soils and geology: These factors influence the rainfall losses and catchment flood
response.
• Slopes: Steeper catchments respond more rapidly and therefore produce larger flood
peaks.
• Channel types and floodplain storage: Well defined channels and smaller floodplain
storage extents produce faster response and therefore larger flood peaks.
In cases where the catchment requiring the design flood estimate is judged to have
characteristics significantly different from those used in the development of the RFFE Model,
further hydrologic and hydraulic analyses may be needed to refine the generic RFFE Model results.
The RFFE Model has been developed from the analysis of gauged catchments, but in reality
all catchments exhibit some differences. Gauges need to be installed at locations where
there is a sensitive and stable control and where the flow is well constrained within a defined
stream channel. This means that larger catchments with extensive floodplain storage of
widely distributed flow for example may not be well represented in the gauged dataset.
These catchments may also have lower data quality but may well be required for design
purposes.
This means that catchments where design flood discharges are needed may be remote from
the gauged catchments used to develop the RFFE Model and may have distinct differences
from the adopted gauged catchments.
In these cases, the application of the RFFE Model may produce inaccurate results and
hence additional review and checking are necessary to confirm that the catchment being
analysed has similar characteristics to those used to develop the RFFE Model. Where there
are significant differences, a similar assessment to that needed for atypical catchments
should be considered, and an adjustment to the generic RFFE Model result may be needed.
The stream gauges in this region are often located at sites that may not be representative of
the general area, since the gauge sites are selected because of access, confined channel
and stable control, which may not be typical of the types of sites where design flood
discharges are needed.
The Pilbara region is not as diverse as the remainder of arid and semi-arid areas and the
RFFE Model estimates for catchments in the Pilbara can be adopted subject to the other
limitations discussed here.
The general arid and semi-arid region, though, is more complex. The RFFE Model results for this region are based on the best available data from the gauges, but because of the diversity of conditions and the very low density of gauging, the RFFE Model results are very uncertain.
Because of this the recommended approach for the arid and semi-arid zone requires some
additional analysis to refine the standard results. The RFFE Model data-base report
(Rahman et al., 2015a) has a listing of all of the catchments that were used in the
development of the method and the application of the RFFE Model software provides a map
of the neighbouring catchments that were used in the development of the method.
3.13.10. Baseflow
The RFFE Model has been developed using at-site flood frequency analysis, so the
estimated peak discharges include baseflow. This means that the results from the RFFE
Model will not be consistent with those from runoff-routing models where only surface runoff
is calculated. While baseflow is only a minor part of the total flood hydrograph in many areas
of Australia, there are some areas where it is more significant and it becomes more
important when considering smaller floods. When working with the two methods it is thus
necessary to make appropriate allowance for baseflow contribution to the peak and volume
of the design flood.
In the "Basic" data input option, the user needs to enter the following inputs:
i. Catchment name;
ii. Catchment outlet latitude and longitude (in decimal degrees);
iii. Catchment centroid latitude and longitude (in decimal degrees); and
iv. Catchment area (in km²).
In the "Advanced" data input option, the user can also enter the following inputs:
i. Region name (can be selected from the dropdown list, see Table 3.3.7 for region name
and Figure 3.3.2 for the extent of the regions);
Table 3.3.7. Region Names for Application of the RFFE Model 2015 (see Figure 3.3.2 for the Extent of the Regions)
Region name | Region code
Region 1: East Coast | 1
Region 2: Tasmania | 2
Region 3: Humid SA | 3
Region 4: Top End NT and Kimberley | 4
Region 5: SW WA | 5
Region 6: Pilbara | 6
Region 7: Arid and Semi-arid | 7
ii. Design rainfall intensity at catchment centroid for 50% AEP and a duration of 6 hours (in mm/h); and
iii. Design rainfall intensity at catchment centroid for 2% AEP and a duration of 6 hours (in mm/h).
The RFFE Model 2015 output contains the following principal information (see Figure 3.3.6
as an example).
Figure 3.3.5. RFFE Model 2015 Screen Shot for Data Input for the Wollomombi River at
Coinside, NSW
Figure 3.3.6. RFFE Model 2015 Screen Shot for Model Output for the Wollomombi River at
Coinside, NSW (Region 1)
i. A table of AEP (in the first column), estimated flood quantiles labelled as discharge (m3/s)
in the second column, lower confidence limit (5%) (third column) and upper confidence
limit (95%) (fourth column). The confidence limits represent the overall uncertainty in the flood quantiles estimated by the RFFE Model 2015.
ii. A second table labelled "Statistics", which shows the statistics for the regional LP III
model at the catchment of interest, which are particularly useful to combine at-site and
regional information to enhance the accuracy of flood quantile estimates (for details see
Book 3, Chapter 2).
iii. A graph that shows estimated flood quantiles and confidence limits against AEPs.
iv. A graph that shows the catchment of interest with outlet and centroid locations and
nearby gauged catchments that were included in the database used to develop the RFFE technique.
v. A download menu that allows the user to save the results and additional outputs
generated by the model.
Three worked examples are provided in Book 3, Chapter 3, Section 15 to illustrate the use of
RFFE Model 2015. The first two examples relate to catchments with no major regulation, no
major natural or artificial storage and no major land use changes over time where the RFFE
Model 2015 is directly applicable.
The third example relates to a catchment which has significant natural floodplain storage
where RFFE Model 2015 is not directly applicable. For this case, the RFFE Model
significantly overestimates the flood quantiles (as compared to at-site Flood Frequency
Analysis). Here, the RFFE Model estimates need to be adjusted to account for the storage
effect of the catchment by applying an appropriate technique (see Book 3, Chapter 3,
Section 13 for further details).
Figure 3.3.7. RFFE Model 2015 vs. At-site FFA Flood Estimates for the Wollomombi River at
Coinside, NSW (Region 1)
Table 3.3.8. Application Data for the Wollomombi River at Coinside, NSW (Region 1) (Basic Input Data)
Menu | Input
Catchment Name | Wollomombi River at Coinside
Catchment Outlet Latitude in decimal degrees | -30.478
Catchment Outlet Longitude in decimal degrees | 152.026
Catchment Centroid Latitude in decimal degrees | -30.352
Catchment Centroid Longitude in decimal degrees | 151.936
Catchment Area in km2 | 376
Table 3.3.9 shows a list of 15 gauged catchments, which is generated as part of the RFFE
Model output. In this example, these gauged catchments are located closest to the
Wollomombi River at Coinside, NSW and were used in the development of the RFFE model.
The user should compare the characteristics of the ungauged catchment of interest with
those of the nearest gauged catchments (as in Table 3.3.9) to ensure that the ungauged
catchment is not atypical.
Table 3.3.9. Fifteen Gauged Catchments (Used in the Development of RFFE Model 2015) Located Closest to Wollomombi River at Coinside, NSW
Site ID | Dist. (km) | Area (km2) | Lat. (outlet) | Long. (outlet) | Lat. (centroid) | Long. (centroid) | Record Length (years) | Mean Annual Rainfall (mm) | Shape Factor
206014 | 0.07 | 376 | -30.478 | 152.0267 | -30.352 | 151.936 | 57 | 871 | 0.8786
206001 | 18 | 163 | -30.59 | 152.1617 | -30.5244 | 152.279 | 33 | 1167 | 1.082
204030 | 24.29 | 200 | -30.26 | 152.01 | -30.1904 | 151.828 | 34 | 940 | 1.3946
206017 | 28.01 | 22 | -30.478 | 152.3183 | -30.4632 | 152.352 | 24 | 1167 | 0.8005
204008 | 31.64 | 31 | -30.405 | 152.345 | -30.4139 | 152.382 | 29 | 1249 | 0.6824
206026 | 35.67 | 8 | -30.42 | 151.66 | -30.4201 | 151.639 | 37 | 789 | 0.734
206025 | 37.68 | 594 | -30.68 | 151.71 | -30.6802 | 151.557 | 39 | 945 | 0.6194
206034 | 39.29 | 117 | -30.7 | 151.7067 | -30.7666 | 151.648 | 26 | 758 | 0.8859
418034 | 41.98 | 14 | -30.3 | 151.64 | -30.2758 | 151.65 | 29 | 818 | 0.7876
418014 | 63.83 | 855 | -30.47 | 151.36 | -30.5063 | 151.476 | 37 | 725 | 0.4171
206018 | 68.38 | 894 | -31.051 | 151.7683 | -30.9996 | 151.634 | 51 | 711 | 0.4846
204017 | 68.62 | 82 | -30.306 | 152.7133 | -30.3529 | 152.698 | 40 | 1995 | 0.6086
204037 | 72.28 | 62 | -30.09 | 152.63 | -30.1082 | 152.576 | 40 | 1586 | 0.7302
205002 | 72.50 | 433 | -30.426 | 152.78 | -30.4537 | 152.566 | 29 | 1570 | 1.0277
206009 | 81.35 | 261 | -31.19 | 151.83 | -31.2597 | 151.757 | 57 | 910 | 0.6642
Figure 3.3.8. RFFE Model 2015 Screen Shot for Data Input for Four Mile Brook at Netic Rd,
SW WA (Region 5)
Table 3.3.10. Application Data for Four Mile Brook at Netic Rd, SW Western Australia (Region 5) (Basic Input Data)
Menu | Input
Catchment Name | Four Mile Brook at Netic Rd
Catchment Outlet Latitude in decimal degrees | -34.30
Catchment Outlet Longitude in decimal degrees | 116.00
Catchment Centroid Latitude in decimal degrees | -34.318
Catchment Centroid Longitude in decimal degrees | 116.021
Catchment Area in km2 | 13.1
Figure 3.3.9. RFFE Model 2015 Screen Shot for Model Output for Four Mile Brook at Netic
Rd, SW WA (Region 5)
Figure 3.3.10. RFFE Model 2015 vs. At-site FFA Flood Estimates for Four Mile Brook at
Netic Rd, SW WA (Region 5)
Table 3.3.11. Application Data for the Morass Creek at Uplands, VIC (Region 1) (Basic Input Data)
Menu | Input
Catchment Name | Morass Creek at Uplands
Catchment Outlet Latitude in decimal degrees | -36.87
Catchment Outlet Longitude in decimal degrees | 147.70
Catchment Centroid Latitude in decimal degrees | -36.88
Catchment Centroid Longitude in decimal degrees | 147.84
Catchment Area in km2 | 471
Figure 3.3.11. RFFE Model 2015 Screen Shot for Data Input for the Morass Creek at
Uplands, VIC (Region 1)
Figure 3.3.12. RFFE Model 2015 Screen Shot for Model Output for the Morass Creek at
Uplands, VIC (Region 1)
Figure 3.3.13. RFFE Model 2015 vs. At-site FFA Flood Estimates (for the Morass Creek at
Uplands, VIC) (Region 1)
Figure 3.3.14. Standardised Residuals vs. Normal Scores for Region 1 Based on Leave-one-out Validation for AEPs of 50% and 20%
Figure 3.3.15. Standardised Residuals vs. Normal Scores for Region 1 Based on Leave-one-
out Validation for AEPs of 2% and 1%
3.17. References
Adams, C.A. (1984), 'Regional flood estimation for ungauged rural catchments in Victoria',
MEngSc Thesis, The University of Melbourne.
Ball, J.E. (2014), 'Flood estimation under climate change', 19th IAHR-APD Congress, Hanoi,
Vietnam.
Bates, B.C. (1994), 'Regionalisation of hydrological data: A review', Report 94/5, CRC for
Catchment Hydrology, Monash University.
Bates, B.C., Rahman, A., Mein, R.G. and Weinmann, P.E. (1998), Climatic and physical
factors that influence the homogeneity of regional floods in south-eastern Australia, Water
Resources Research, 34(12), 3369-3382.
Burn, D.H. (1990a), An appraisal of the region of influence approach to Flood Frequency
Analysis, Hydrological Sciences Journal, 35(2), 149-165.
Burn, D.H. (1990b), Evaluation of regional Flood Frequency Analysis with a region of
influence approach, Water Resources Research, 26(10), 2257-2265.
Cohn, T.A., England, J.F., Berenbrock, C.E., Mason, R.R., Stedinger, J.R. and Lamontagne,
J.R. (2013), A generalized Grubbs-Beck test statistic for detecting multiple potentially
influential low outliers in flood series, Water Resour. Res., 49(8), 5047-5058.
Farquharson, F.A.K., Meigh, J.R. and Sutcliffe, J.V. (1992), Regional Flood Frequency
Analysis in arid and semi-arid areas, Journal of Hydrology, 138: 487-501.
French, R. (2002), 'Flaws in the rational method', Proc. 27th National Hydrology and Water
Resources Symp., Melbourne.
Griffis, V.W. and Stedinger, J.R. (2007), The use of GLS regression in regional hydrologic
analyses, Journal of Hydrology, 344: 82-95.
Hackelbusch, A., Micevski, T., Kuczera, G., Rahman, A. and Haddad, K. (2009), 'Regional
Flood Frequency Analysis for eastern New South Wales: a region of influence approach
using generalised least squares based parameter regression', Proc. 31st Hydrology and
Water Resources Symp., Newcastle, pp: 603-615.
Haddad, K., Rahman, A. and Weinmann, P.E. (2008), 'Development of a generalised least
squares based quantile regression technique for design flood estimation in Victoria', Proc.
31st Hydrology and Water Resources Symp., Adelaide, pp: 2546-2557.
Haddad, K., Pirozzi, J., McPherson, G., Rahman, A. and Kuczera, G. (2009),'Regional flood
estimation technique for NSW: application of generalised least squares quantile regression
technique', Proc. 32nd Hydrology and Water Resources Symp., Newcastle, pp: 829-840.
Haddad, K., Rahman, A. and Kuczera, G. (2011), Comparison of ordinary and generalised
least squares regression models in regional Flood Frequency Analysis: A case study for
New South Wales, Australian Journal of Water Resources, 15(2), 59-70.
Haddad, K., Rahman, A. and Stedinger, J.R. (2012), Regional Flood Frequency Analysis
using Bayesian generalised least squares: A comparison between quantile and parameter
regression techniques, Hydrological Processes, 26: 1008-1021.
Haddad, K., Rahman, A., Weinmann, P.E., Kuczera, G. and Ball, J.E. (2010), Streamflow
data preparation for regional Flood Frequency Analysis: Lessons from south-east Australia,
Australian Journal of Water Resources, 14(1), 17-32.
Hosking, J.R.M. and Wallis, J.R. (1993), Some statistics useful in regional frequency
analysis, Water Resources Research, 29(2), 271-281.
Ishak, E., Rahman, A., Westra, S., Sharma, A. and Kuczera, G. (2013), Evaluating the non-
stationarity of Australian annual maximum floods, Journal of Hydrology, 494: 134-145.
Kuczera, G. (1999), Comprehensive at-site Flood Frequency Analysis using Monte Carlo
Bayesian inference. Water Resources Research, 35(5), 1551-1557.
Lamontagne, J.R., Stedinger, J.R., Cohn, T.A. and Barth, N.A. (2013),'Robust national flood
frequency guidelines: What is an outlier?', Proc. World Environmental and Water Resources
Congress, ASCE, pp: 2454-2466.
Micevski, T. and Kuczera, G. (2009), Combining site and regional flood information using a
Bayesian Monte Carlo approach, Water Resources Research, 45, W04405, doi:
10.1029/2008WR007173.
Micevski, T., Hackelbusch, A., Haddad, K., Kuczera, G. and Rahman, A. (2015),
Regionalisation of the parameters of the log-Pearson 3 distribution: a case study for New
South Wales, Australia, Hydrological Processes, 29(2), 250-260.
Mittelstadt, G.E., McDermott, G.E. and Pilgrim, D.H. (1987), Revised flood data and
catchment characteristics for small to medium sized catchments in New South Wales.
University of New South Wales, Department of Water Engineering, unpublished report.
Palmen, L.B. and Weeks, W.D. (2009),'Regional flood frequency for Queensland using the
Quantile Regression Technique', Proc. 32nd Hydrology and Water Resources Symp.,
Newcastle.
Palmen, L.B. and Weeks, W.D. (2011), Regional flood frequency for Queensland using the
quantile regression technique, Australian Journal of Water Resources, 15(1), 47-57.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation, Institution of Engineers Australia, Barton, ACT.
Pilgrim, D.H. and McDermott, G. (1982), Design floods for small rural catchments in eastern
New South Wales, Civil Engineering Transactions, I. E. Aust., CE24: 226-234.
Pirozzi, J., Ashraf, M., Rahman, A., and Haddad, K. (2009),'Design flood estimation for
ungauged catchments in eastern NSW: evaluation of the Probabilistic Rational Method',
Proc. 32nd Hydrology and Water Resources Symp., Newcastle, pp: 805-816.
Rahman, A., Haddad, K., Rahman, A.S., Haque, M.M., Kuczera, G. and Weinmann, P.E.
(2015) Australian Rainfall and Runoff Revision Project 5: Regional Flood Methods: Stage 3,
ARR Report Number P5/S3/025, ISBN 978-0-85825-869-3.
Rahman, A., Rima, K. and Weeks, W.D. (2008),'Development of Regional Flood Frequency
Estimation methods using Quantile Regression Technique: a case study for north-eastern
part of Queensland', Proc. 31st Hydrology and Water Resources Symp., Adelaide, pp:
329-340.
Rahman, A., Haddad, K., Kuczera, G. and Weinmann, P.E. (2009),'Australian Rainfall and
Runoff Revision Projects, Project 5 Regional Flood Methods, Stage 1 Report', No.P5/
S1/003, Nov 2009, Engineers Australia, Water Engineering, pp: 181.
Rahman, A., Haddad, K., Zaman, M., Ishak, E., Kuczera, G. and Weinmann, P.E.
(2012),'Australian Rainfall and Runoff Revision Projects, Project 5 Regional flood methods,
Stage 2 Report', No.P5/S2/015, Engineers Australia, Water Engineering, pp: 319.
Rahman, A., Bates, B.C., Mein, R.G. and Weinmann, P.E. (1999), Regional Flood Frequency
Analysis for ungauged basins in south - eastern Australia, Australian Journal of Water
Resources, 3(2), 199-207.
Rahman, A., Haddad, K., Zaman, M., Kuczera, G. and Weinmann, P.E. (2011a), Design
flood estimation in ungauged catchments: a comparison between the Probabilistic Rational
Method and Quantile Regression Technique for NSW, Australian Journal of Water
Resources, 14(2), 127-139.
Rahman, A., Haddad, K., Rahman, A.S. and Haque, M.M. (2015a),'Australian Rainfall and
Runoff Revision Projects, Project 5 Regional flood methods, Database used to develop ARR
RFFE Model 2015', Engineers Australia, Water Engineering.
Rahman, A., Zaman, M., Fotos, M., Haddad, K., Rajaratnam, L. and Weeks, W.D.
(2011b),'Towards a new Regional Flood Frequency Estimation method for the Northern
Territory', Proc. 34th IAHR World Congress, Brisbane, pp: 364-371.
Rahman, A., Haddad, K., Haque, M.M., Kuczera, G., Weinmann, P.E., Stensmyr, P.,
Babister, M., Weeks, W.D. (2015b),'The New Regional Flood Frequency Estimation Model
for Australia: RFFE Model 2015', 36th Hydrology and Water Resources Symp., 7-10 Dec,
2015, Hobart.
Rahman, A., Haque, M., Haddad, K., Rahman, A.S., Caballero, W.L. (2015c),'Database
Underpinning the RFFE 2015 Model in the new Australian Rainfall and Runoff', 36th
Hydrology and Water Resources Symposium, 7-10 Dec, 2015, Hobart.
Stedinger, J.R. and Tasker, G.D. (1985), Regional hydrologic analysis 1. Ordinary, weighted, and generalised least squares compared, Water Resources Research, 21(9), 1421-1432.
Tasker, G.D. and Stedinger, J.R. (1989), An operational GLS model for hydrologic
regression, Journal of Hydrology, 111(1-4), 361-375.
Zrinji, Z. and Burn, D.H. (1994), Flood frequency analysis for ungauged sites using a region
of influence approach. Journal of Hydrology, 153: 1-21.
BOOK 4
Catchment Simulation for
Design Flood Estimation
List of Figures
4.2.1. Catchment and Runoff Generation Processes
4.2.2. Simplified Description of the Process of Converting Rainfall to Runoff and Streamflow
4.2.3. Observed Hydrograph - Sum of the Baseflow Hydrograph and the Quickflow
4.2.4. Conceptual Relationship between Data Availability, Model Complexity and Predictive Performance (Grayson and Blöschl, 2000)
4.2.5. Spatial Representation of Physical Processes in Hydrologic Models
4.2.6. Comparison of Initial Loss in Urban and Rural Catchments
4.3.1. Elements of Three Different Approaches to Flooding
4.3.2. Simple Framework for Monte Carlo Simulation for Handling Joint Probabilities Associated with Both Losses and Temporal Patterns
4.3.3. Schematic Layout of Delatite River Catchment and Calibration to December 2010 Event
4.3.4. Comparison of Design Flood Estimates with Flood Frequency Curve for the Delatite River at Tonga Bridge
4.4.1. Joint Probability Density for a Bivariate Normal Distribution
4.4.2. Difference in the Seasonal Likelihood of Large Long Duration Rainfall Events and Large Short Duration Rainfall Events and their Concurrence with Catchment Losses
4.4.3. Generic Components that need to be Considered in Solution of Joint Probability Problems
4.4.4. Use of Standardisation to Derive a Regional Distribution Based on Catchment-specific Analyses
4.4.5. Examples of Investigations into Dependence between Flood Producing Factors based on (a) Antecedent Seasonal Rainfall Data for Catchments over 1000 km2 (Scorah et al., 2015), (b) Rainfall and Storm Surge Data (Westra, 2012), and (c) Temperature Coincident with Rainfall Maxima (Nathan and Bowles, 1997)
4.4.6. Examples of Difference in Correlation between Flow Maxima in the Namoi and Peel Rivers, Based on (a) Annual Maxima at Both Sites, and (b) Peel River Flows that are Coincident with Namoi River Maxima
4.4.7. General Framework for the Analysis of Stochastic Deterministic (Joint Probability) Problems using Monte Carlo Simulation
4.4.8. Inverse Transform Method
4.4.9. Histogram Obtained by Generating 1000 Random Numbers Conforming to a Normal Distribution with a Mean of 50 and Standard Deviation of 25, and the Resulting Distribution of Variables Obtained by the Inverse Box Cox Transformation (with λ set to 1.20)
4.4.10. Generation of Variables with a Correlation of -0.7 based on (a) Uniform and (b) Normal Distributions
4.4.11. Conditional Empirical Sampling - Storage Volume in an Upstream Dam is Correlated with the Volume in a Downstream Dam
List of Tables
4.4.1. Stochastic Generation of Correlated log-Normal Maxima
4.4.2. Derivation of Deterministic Function Relating Upstream Flows to Downstream Levels (a)
4.4.3. Calculation of Exceedance Probability of the Level Exceeding 10.4 m using the Total Probability Theorem
Chapter 1. Introduction
James Ball, Rory Nathan
The simulation of design flood hydrographs is most easily undertaken using rainfall data.
While the methods presented in Book 3 provide useful independent estimates of the peak of
simulated hydrographs, additional information is required to check the shape and volume of
design flood hydrographs. Rainfall-based methods involve the transformation of rainfalls into
a selected flood characteristic:
• continuous simulation models transform a time series of rainfall into probabilistic flood
estimates
Hydraulic models are then used to simulate flood levels from hydrologic (and/or rainfall)
inputs. The challenge with these methods is how to achieve probability neutrality, that is how
to ensure that the method used to transform rainfalls into design floods is undertaken in a
fashion that minimises bias in the resulting exceedance probabilities.
The guidance in this Book is focused on the conceptual frameworks used to derive design
flood hydrographs, rather than on the models used to transform rainfall into runoff. Detailed
guidance on the different types of hydrologic models is provided in Book 5, and guidance on
the hydraulic modelling of runoff through the catchment is provided in Book 6. Guidance on
the application of catchment modelling systems and the interpretation of the results obtained
is provided in Book 7.
Estimation of a design flood involves the derivation of the relationship between the
magnitude and probability of a given flood characteristic. The objective of the analysis is to
provide information for risk-based planning or design purposes. Such information can be
extended to provide standards-based estimates such as the Probable Maximum Flood, but
the simulation objective is still to assess the performance of a system under a particular set of
loading conditions.
In contrast, the modelling required to simulate floods using historic or forecast rainfalls
involves quite different considerations. The challenge of selecting data inputs and analysis
frameworks to estimate the exceedance probability of a particular outcome is replaced by
the difficulty of preparing data to reflect conditions specific to a particular event in time. The
focus of the analysis might be on simulating a particular historic flood, or it may involve
assessment of the antecedent conditions and forecast rainfalls associated with a flood
forecast required at a particular point in time.
Identical models might be used to simulate the response of the catchment under historic or
design conditions. The essential difference is that design flood simulations are undertaken to
derive the best estimate of the relationship between flood magnitude and exceedance
probability, whereas simulation of actual floods represents the best estimate of flood characteristics for a particular point in time. The guidance provided in Book 6 and Book 7 is
equally applicable to models used for the simulation of design or actual floods. The guidance
provided in this Book is specific to the issues involved in assigning an exceedance
probability to flood characteristics.
1.3. Scope
The scope of this Book is largely limited to the simulation frameworks used to derive design floods. Particular attention is paid to those hydrologic processes that most influence flood magnitude, and to the approaches required to estimate the exceedance probability of a flood characteristic of interest. Guidance is included on the treatment of joint probability, as the explicit analysis of the way in which factors combine to influence the frequency of floods is an essential consideration in the estimation of design floods. Worked examples are provided to assist practitioners in applying the guidance to typical real-world problems.
1.4. Outline
This book is structured as follows:
• Book 4, Chapter 2 provides a broad description of the runoff processes that contribute to
streamflows (interception, depression storage, infiltration, interflow, and groundwater
contributions), and identifies those components of most relevance to flood generation;
design floods; Book 4, Chapter 3 also describes the use of continuous (and hybrid)
simulation approaches to flood estimation, and summarises the strengths and
weaknesses of the various approaches;
• Book 4, Chapter 4 introduces the generic nature of joint probability requirements; it covers
the factors involved in the transformation of rainfall to runoff, and other factors (e.g. initial
reservoir level or tide levels) that may influence the design performance of interest; and
• Book 4, Chapter 3 and Book 4, Chapter 4 include worked examples that illustrate
application of the techniques to practical problems.
Chapter 2. Hydrologic Processes
Contributing to Floods
Anthony Ladson, Rory Nathan
Chapter Status Final
Date last updated 14/5/2019
2.1. Introduction
This chapter outlines the hydrologic processes that contribute to floods including a review of
runoff generation, baseflow contributions to flood flow, flow routing and losses. The chapter
concludes with a discussion of the conceptualisation of these processes in models and case
studies of floods in tropical and temperate rural catchments, and in urban areas.
Under Australian conditions, the ultimate cause of the large streamflows that result in floods
is usually rainfall. Other causes, such as melting of snow and ice, are less important in our
temperate climate. In places storm surge may combine with stream flows to cause flooding
as discussed in Book 6, Chapter 5.
There are particular conditions that can lead to high streamflow and flooding. A 'wet' catchment means reduced losses, so that a greater proportion of rainfall will be converted to runoff. A catchment could be wet up by a long period of low intensity rainfall, particularly when evapotranspiration is low, such as in winter. A short burst of high intensity rainfall can
lead to flooding if there are limited opportunities for rain to be lost. This is particularly the
case in catchments where impervious surfaces and piped drainage systems link runoff to
streams.
Figure 4.2.1 summarises the physical processes that can lead to floods, but floods can also
be considered stochastic events caused by the random simultaneous occurrence of unusual
conditions. The stochastic nature of flooding was illustrated in Book 1, Chapter 3 where it
was shown that flood peaks resulting from 1% AEP rainfalls ranged in magnitude from 500
m³/s to 2000 m³/s. The cause of this disparity in response is random variation in catchment
processes, such as interception and storage, and other factors such as the spatial and
temporal patterns of rainfall. The series of peak flows at a gauge are manifestations of the
joint probability of these random processes.
Here we are focussing on quickflow and the mechanisms that rapidly convert rainfall to
streamflow and so cause a flood hydrograph. Book 4, Chapter 2, Section 3 briefly discusses
the slower process of baseflow along with losses and flow routing (Figure 4.2.2).
Figure 4.2.2. Simplified Description of the Process of Converting Rainfall to Runoff and
Streamflow
• the infiltration capacity, which is the maximum rate at which water can enter the soil.
If the rainfall rate (mm/hr) is greater than the infiltration capacity, water will pond at the soil surface; if the ground is sloping, then water will run off. Runoff produced in this way is called infiltration excess overland flow, or Hortonian1 overland flow. Hortonian overland flow can provide a rapid pathway for water to be converted from rainfall to runoff. Hortonian flow is
likely to contribute to floods when catchment surfaces have low infiltration capacity, when
there is intense rainfall and where there is a rapid mechanism for runoff to reach a stream.
Usually there are some areas within a catchment that are wetter than others. Areas along
valleys and adjacent to streams may remain saturated for long periods with up-slope areas
being drier. Saturated areas enlarge and contract with the seasonal wetting and drying of a
catchment. Saturated areas may expand during a storm and then shrink once rainfall
ceases. As the amount of saturated area changes so does the source area contributing to
runoff.
The concept of partial area runoff arises because only part of a catchment may be saturated
and this area may be the only contributor to streamflow (Dunne and Black, 1970). Saturation
excess runoff can contribute to floods when source areas are large and convert intense
rainfall to runoff that flows directly to streams.
1 Hortonian runoff is named for Robert E. Horton (1875-1945), a pioneer of modern hydrology.
2.3. Baseflow
Streamflow is often divided into quickflow and baseflow. Quickflow is the characteristic rapid
response of a stream to rainfall and catchment runoff while baseflow is contributed by slow
release of stored water. Quickflow is often referred to as ‘direct runoff’ or as ‘surface runoff’
but, as noted above, can include subsurface stormflow. During floods, quickflow is of the
greatest relevance but, particularly for modelling, baseflow must be considered where it
provides a significant contribution to a flood hydrograph (Figure 4.2.3).
There are a range of processes that contribute to the conceptual baseflow hydrograph as
shown in Figure 4.2.3. The initial baseflow represents the contribution from previous events;
then as the hydrograph rises, baseflow can be depleted as water enters bank storage or is
removed by transmission loss. Later, baseflow can increase as bank storage re-enters the
stream, or through other processes such as interflow and discharge from groundwater
(Laurenson, 1975).
Generally, quickflow will be explicitly modelled by, for example, a runoff-routing model, and baseflow must then be added to produce a flood hydrograph and an unbiased estimate of the
peak flow. Baseflow provides a significant contribution to peak flows in around 70% of
Australian catchments (refer to Book 5, Chapter 4).
Figure 4.2.3. Observed Hydrograph - Sum of the Baseflow Hydrograph and the Quickflow Hydrograph
2.4. Losses
In flood hydrology, losses refer to any rainfall that is not converted to quickflow. The amount
of loss is subtracted from storm rainfall to leave the “rainfall excess”, that is, quickflow is
produced by the rainfall excess on the catchment. Some of the water accounted for in losses
is evaporated, perhaps after being intercepted by vegetation or held in surface depressions.
Some losses are infiltrated rainfall that may contribute baseflow to the stream.
Losses can be estimated for historic events. Where there are measurements of the volume
of runoff, catchment area and rainfall depth, losses can be calculated as the difference
between the volume of rainfall and the volume of the quickflow hydrograph (the flood
hydrograph with the baseflow removed). This approach was used to estimate losses for a
range of catchments as discussed in Book 5 and in earlier work on losses e.g. Hill et al.
(1998).
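As a simple illustration of this volumetric calculation, the following sketch estimates the event loss and runoff coefficient from hypothetical values of catchment area, storm rainfall and quickflow volume (the numbers are illustrative only, not from any gauged event).

# Illustrative calculation of event loss as the difference between storm
# rainfall and quickflow (all values are hypothetical).
catchment_area_km2 = 150.0    # catchment area
storm_rainfall_mm = 85.0      # catchment-average storm rainfall depth
quickflow_volume_ML = 4500.0  # volume under the quickflow hydrograph
                              # (baseflow removed), in megalitres

# 1 mm of depth over 1 km2 equals 1 ML, so depth (mm) = volume (ML) / area (km2)
quickflow_depth_mm = quickflow_volume_ML / catchment_area_km2

loss_mm = storm_rainfall_mm - quickflow_depth_mm
runoff_coefficient = quickflow_depth_mm / storm_rainfall_mm

print(f"Quickflow depth:    {quickflow_depth_mm:.1f} mm")
print(f"Event loss:         {loss_mm:.1f} mm")
print(f"Runoff coefficient: {runoff_coefficient:.2f}")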
Losses must also be predicted as part of flood forecasting and design values for losses are
required as part of design flood estimation. A variety of loss models have been developed as
discussed in Book 4, Chapter 2, Section 6 and in Book 5, Chapter 3, Section 2.
streams. Flow routing is the mathematical description of flow processes that model the
attenuation and translation of hydrographs as water moves through this network. A variety of
flood routing approaches are described in Book 5.
In general, the choice of model should depend on the amount of data that is available
(Figure 4.2.4). Models that are too simple are not able to exploit the available data, while
models that are too complex may suffer from ‘over fitting’ and have poor predictive ability.
The enduring popularity of reasonably simple hydrologic models, such as RORB, is because
they have been found to be of a complexity that matches the reasonably limited data that is
available for most catchments.
This section briefly reviews the conceptualisation of hydrologic processes leading to floods
and refers to other sections where more detail is available.
Figure 4.2.4. Conceptual Relationship between Data Availability, Model Complexity and
Predictive Performance (Grayson and Blöschl, 2000)
infiltration are available, such as those based on the Richards Equation or the Green and Ampt approach (Mein and Larson, 1973; Dingman, 2002). Evaporation can be modelled as a function of meteorological drivers, soil properties and moisture content (Soil Vegetation Atmosphere Transfer (SVAT) models) (Dolman et al., 2001).
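As an indication of what such an explicit infiltration model involves, the sketch below applies the Green and Ampt infiltration capacity equation under steady rain, in the spirit of Mein and Larson (1973). The soil parameters, rainfall rate and coarse explicit time stepping are assumptions chosen purely for illustration.

# Minimal sketch of infiltration-excess runoff using the Green-Ampt
# infiltration capacity equation under steady rain. Parameters are
# illustrative only, and the explicit time stepping is deliberately coarse.
K_s = 5.0        # saturated hydraulic conductivity (mm/hr)
psi = 110.0      # wetting front suction head (mm)
d_theta = 0.25   # moisture deficit (porosity minus initial moisture content)
rain = 30.0      # steady rainfall rate (mm/hr)
dt = 0.25        # time step (hr)

F = 1e-6         # cumulative infiltration (mm); small value avoids divide-by-zero
t = 0.0
while t < 3.0:
    f_capacity = K_s * (1.0 + psi * d_theta / F)  # infiltration capacity (mm/hr)
    f_actual = min(rain, f_capacity)              # actual infiltration rate
    runoff_rate = rain - f_actual                 # infiltration-excess runoff
    F += f_actual * dt
    t += dt
    print(f"t = {t:4.2f} hr  infiltration = {f_actual:5.1f} mm/hr  "
          f"runoff = {runoff_rate:5.1f} mm/hr")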
For flood modelling, the physics of infiltration and evaporation are seldom modelled explicitly; instead, design or observed rainfall is converted to 'rainfall excess' by subtracting losses, i.e. the portion of rainfall that does not become direct runoff.
2.6.2. Losses
The loss models used in flood modelling are often simple, based on two parameters, one to
characterise the Initial Loss (IL) (the water required to wet up the catchment) and one to
characterise the Continuing Loss (CL). The output of these models is the rainfall excess that
is then used to generate a direct flow hydrograph. Loss models can be standalone, i.e. the rainfall excess can be calculated separately, or integrated within a catchment modelling system.
The current recommendation in ARR (Book 5, Chapter 3, Section 2) is that the IL/CL model
is the most suitable for design flood estimation for both rural and urban catchments. This
model uses a constant value of initial loss and a constant value of continuing loss for a flood event.
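A minimal sketch of an IL/CL calculation applied to a hypothetical hyetograph is given below; the hyetograph and the loss parameter values are assumptions for illustration only, and design values should be taken from Book 5, Chapter 3.

# Minimal sketch of an initial loss/continuing loss (IL/CL) calculation
# applied to a hypothetical storm hyetograph (values are illustrative only).
rainfall_mm = [2.0, 8.0, 15.0, 22.0, 12.0, 6.0, 3.0]  # rainfall per time step
dt_hr = 1.0                  # time step (hours)
initial_loss_mm = 15.0       # IL: rainfall absorbed before any runoff occurs
continuing_loss_mmhr = 2.5   # CL: loss rate once the initial loss is satisfied

excess = []
remaining_il = initial_loss_mm
for depth in rainfall_mm:
    # Satisfy the initial loss first
    absorbed = min(depth, remaining_il)
    remaining_il -= absorbed
    depth -= absorbed
    # Then apply the continuing loss to any remaining rainfall
    depth = max(0.0, depth - continuing_loss_mmhr * dt_hr)
    excess.append(depth)

print("Rainfall excess (mm):", [round(e, 1) for e in excess])
print("Total excess: %.1f mm of %.1f mm rainfall" % (sum(excess), sum(rainfall_mm)))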
For urban catchments, ARR (Book 5, Chapter 3, Section 5) provides IL and CL values for
three hydrologically distinct surfaces:
• Pervious Areas - recommended loss values are the same as those for rural areas; and
Where losses must be estimated for flood forecasting, continuous simulation or other design
problems, more complex loss models may be appropriate. Potential candidate models are
discussed in Book 5, Chapter 3, Section 2.
2.6.3. Baseflow
For flood modelling, important aspects of baseflow that must be addressed are:
1. The removal of baseflow from measured hydrographs of historic flood events so that the
quickflow hydrograph can be determined; and
2. The addition of a baseflow hydrograph to modelled direct flow so the total flood
hydrograph, and particularly flood peak, can be correctly estimated.
Features of the baseflow hydrograph and the key processes are discussed in Book 5,
Chapter 4.
for an event. Procedures to estimate baseflow characteristics for design flood estimation are
provided in Book 5, Chapter 4.
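To illustrate the first of the two tasks listed above, the sketch below separates quickflow from a measured hydrograph using a single pass of a recursive digital filter of the Lyne-Hollick type, which is one widely used separation technique; the filter parameter and streamflow series are assumptions for illustration only, and the procedures in Book 5, Chapter 4 should be followed for design work.

# Minimal single-pass sketch of a recursive digital filter (Lyne-Hollick type)
# for separating quickflow from baseflow; alpha and the streamflow series
# are illustrative only.
alpha = 0.925
streamflow = [10, 12, 45, 160, 230, 180, 120, 80, 55, 40, 30, 24, 20, 17, 15]

quickflow = [0.0]  # assume the first ordinate is all baseflow
for i in range(1, len(streamflow)):
    qf = (alpha * quickflow[i - 1]
          + 0.5 * (1.0 + alpha) * (streamflow[i] - streamflow[i - 1]))
    # Constrain quickflow to lie between zero and the total streamflow
    qf = min(max(qf, 0.0), float(streamflow[i]))
    quickflow.append(qf)

baseflow = [q - f for q, f in zip(streamflow, quickflow)]
print("Quickflow:", [round(v, 1) for v in quickflow])
print("Baseflow: ", [round(v, 1) for v in baseflow])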
2.6.4. Routing
The purpose of flow routing in models is to provide a calculated estimate of the hydrograph
at the downstream end of a reach given a hydrograph at the upstream end. This section
briefly reviews catchment processes that are represented by routing methods in flood models; further information on these methods is provided in Book 5, Chapter 5.
At any point in a stream, at a particular time during a flood event, the water flowing past will
be contributed by a variety of pathways and processes that all come together to make up the
flow at that instant. If we traced each drop of water within the flow, all would have originated
as rainfall but have been on a variety of journeys through the catchment and travelled at
different speeds: one drop of streamflow may have started as rainfall on the water surface a
short distance upstream, another may have come from rain falling on saturated soil beside
the river bank; yet another may have originated from a previous storm event and travelled to
the stream via groundwater.
Streamflow derived from rainfall passes through various storages. Groundwater represents
long-term storage. There is also temporary storage, lasting as long as a flood event,
consisting of water in transit in each element of the drainage system including water in the
main stream, tributaries, hill slopes and overland flow. Water can be temporarily stored on
floodplains and in retarding basins. There is also riverbank storage, water wetting up the
bank profile at the start of an event and later flowing back into the stream as the water level
drops.
This process description suggests routing models would need to be highly complex to
represent the large number of pathways, flow speeds, and storage characteristics. However,
surprisingly, simple mathematical approaches can be used to represent the movement of
water along the different catchment pathways. Catchment response is usually highly
damped so that short-term fluctuations in rainfall have little influence on the streamflow
hydrograph and individual pathways do not need to be explicitly modelled. Instead, the
dominant effect of routing is attenuation and translation which can be well represented by
average response over longer time periods.
Routing of flows in a catchment may be achieved using hydrologic or hydraulic methods, and
the various approaches to this are discussed in Book 5, Chapter 5. The simplest
representation of routing in models is hydrologic routing which combines continuity with a
relationship between storage and flow. With this approach, flow paths in a catchment are
divided into a series of elements, where the volume of storage at any time is related to the
discharge in each element. Differences between rural and urban streams may be
represented by parameters which control the amount of water that is stored temporarily for a
given flow rate. Hydrologic routing methods cannot easily accommodate backwater effects,
and thus they are not well suited to situations which are influenced by tides and storm
surges, or reaches in which waves propagate upstream due to the effects of large tributary
inflows and waterway constrictions.
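One simple and widely known hydrologic routing scheme of this kind is the Muskingum method, which combines continuity with a linear relationship between reach storage and weighted inflow and outflow. The sketch below routes a hypothetical inflow hydrograph through a single reach; the parameter values and hydrograph are assumptions for illustration only.

# Minimal sketch of hydrologic (Muskingum) routing of an inflow hydrograph
# through a river reach; K, x and the inflow hydrograph are illustrative only.
K = 3.0    # storage constant (hours)
x = 0.2    # weighting factor between inflow and outflow (0 to 0.5)
dt = 1.0   # routing time step (hours)

denom = 2.0 * K * (1.0 - x) + dt
c0 = (dt - 2.0 * K * x) / denom
c1 = (dt + 2.0 * K * x) / denom
c2 = (2.0 * K * (1.0 - x) - dt) / denom   # note c0 + c1 + c2 = 1

inflow = [10, 20, 50, 110, 180, 150, 100, 70, 50, 35, 25, 18, 14, 12, 10]
outflow = [inflow[0]]                     # assume an initial steady state
for i in range(1, len(inflow)):
    o = c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[i - 1]
    outflow.append(o)

print("Peak inflow: %.0f m3/s, peak outflow (attenuated): %.1f m3/s"
      % (max(inflow), max(outflow)))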
Hydraulic routing provides an increase in complexity and a reduction in the requirements for
simplifying assumptions. Unsteady modelling of flows in two dimensions can be undertaken
by solving the depth-averaged equations that describe the conservation of mass and
momentum. These two dimensional (2D) models are described in more detail in Book 6,
Chapter 4, Section 5 along with one dimensional (1D) unsteady models and coupled 1D/2D
approaches. The limitations and appropriate use of these procedures, and others, are
described in detail in Book 6.
It is possible to combine hydrologic and hydraulic routing. Hydrologic models have a short
run time which facilitates the use of Monte Carlo approaches while hydraulic models are
better able to represent complex routing situations. If a hydraulic model can be used to
establish a storage discharge relationship, then this can be included in the hydrologic model
which can then be run multiple times as part of an ensemble or Monte Carlo analysis. The
use of 1D hydraulic models with short run time coupled with hydrologic models for Monte
Carlo modelling is also possible.
Lumped models (left panel of Figure 4.2.5) treat a drainage area as a single unit and use
catchment averaged values of inputs and parameters. For example, spatially averaged
rainfall is used as the main driver with single average values for initial and continuing loss.
Simple routing approaches are used perhaps based on the passage of a hydrograph
through a single storage or separate storages for surface water and groundwater. Lumped
models are less common in design flood estimation or flood forecasting.
Distributed models (right panel of Figure 4.2.5) use a more spatially explicit approach,
usually based on a grid that may be of a consistent size and shape across a study area or
may be varied adaptively. Distributed models require inputs and parameters for each grid
cell; the advantage is that results can then be produced for each grid cell. For example, two
dimensional unsteady hydraulic routing approaches are commonly applied to grids to create
spatially detailed information on flow depths, velocities and flood hazard in rural and urban
areas.
2.7. Examples
Three case studies are provided that outline flood runoff processes in:
• A tropical rural catchment;
• A temperate rural catchment; and
• Urban areas.
At the time the South Creek catchment was instrumented, it was expected that there would
be little or no overland flow. The steep, well drained and permeable slopes, along with high annual rainfall (> 4000 mm) and a restricting layer at shallow depth, were expected to result in
the upper layers of the soil profile becoming saturated, suggesting ideal conditions for lateral
subsurface stormflow. However, this was found not to be the case.
Measurements showed that overland flow was the dominant runoff process. For example,
during storms in January and March 1976, over 90% of runoff was produced by overland
flow. Although rain infiltrated into the soils, the restricting layer at 200 mm depth led to a
perched water table and caused saturation at the surface. Exfiltration and further rainfall
landing on saturated areas, which covered most of the catchment, led to overland flow
(Bonell and Gilmour, 1978).
The dominance of overland flow has implications for modelling of South Creek and similar
catchments. The routing approach must be suitable for rapid flow down steep slopes that
results in short lag times between rainfall and streamflow (Bonell et al., 1979). A constant
continuing loss model may be suitable although this would need to be tested.
During dry periods, runoff has not been observed, even during summer storms with rainfall
intensities up to 50 mm/hr. When the catchment is dry, there may be local areas where
rainfall intensity exceeds infiltration capacity, but any runoff that is produced enters the soil
by running down surface cracks or infiltrating further down-slope and never reaches the
catchment outlet.
Runoff only occurs after the catchment wets up, cracks close, and a zone of saturated soil
provides a link to the catchment outlet. During wet periods, the soils at the bottom of swales
saturate creating variable source areas that expand with additional rainfall. For example,
during the storms of 29 and 30 July 1996 approximately half of the rainfall was converted to
runoff (Western and Grayson, 2000).
The runoff production processes at Tarrawarra have implications for modelling of this type of
catchment. For runoff-routing models, spatially explicit soil moisture accounting will be
important as the water content of soils has a strong influence on losses (Western and
Grayson, 2000). For event models, seasonal estimates of losses may be necessary. A
proportional loss model may be more appropriate than one that relies on constant continuing
loss.
In the analysis of events in urban areas, a significant feature is that initial loss values are small. Boyd et al. (1993) analysed 763 events in urban areas. For most of these, the initial
loss was less than 1 mm. The average initial loss weighted by the number of events was
0.62 mm. Considering initial loss on individual catchments, information summarised in
Table 5.3.5 shows that 70% of catchments have an initial loss of 1 mm or less (refer also
Book 5, Chapter 3, Section 5).
It is possible to compare the initial loss on rural and urbanised catchments. Data for
Australian catchments is summarised in the Appendix to Book 5 (Book 5, Chapter 3, Section
8). A density plot of this data shows the substantially lower initial loss for urban catchments
and the concentration near 0 mm. For rural catchments, the mean initial loss across all
catchments is 32 mm but the high standard deviation (16.8 mm) means the density is spread
across a wide range of values.
During larger events in urban areas, pervious surfaces also produce runoff either through
infiltration excess or saturation excess processes. Many pervious surfaces in urban areas
are compacted because they are walked or driven on, decreasing infiltration capacity and
increasing the proportion of rainfall running off.
Along with these catchment processes, the piped drainage system in urban areas efficiently
delivers water to streams. Piped drainage represents an extension of the drainage network
so that even areas distant from the original natural waterways contribute flow to those
waterways. In highly urbanised catchments every impervious surface will be drained to the
stream.
In addition to catchment changes, the modification to urban streams also changes the
transfer of flood flows. Modified urban streams have less attenuation, transmission losses
are reduced and water travels more quickly. The result is a substantial increase in the
magnitude and frequency of flooding. Further details are provided in Book 9.
Modelling urban hydrology can be challenging because of the variety of different surface
types and variation in connections between surfaces and drains. There are parallel flow
paths with different routing characteristics. Some water will pass through the piped system
while some will flow overland. Water can surcharge out of pipes or enter pipes at various
locations in the catchment. Flow behaviour, and even catchment area, depends on flood
magnitude. Modelling approaches for urban areas are discussed in Book 9.
2.8. References
ASCE (American Society of Civil Engineers) (1975), Aspects of hydrological effects of
urbanisation. ASCE Task Committee on the effects of urbanisation on low flow, total runoff,
infiltration, and groundwater recharge of the Committee on surface water hydrology of the
Hydraulics Division. Journal of the Hydraulics Division, (5), 449-468.
Abbott, M.B., Bathurst, J.C., Cunge, J.A., O'Connell P.E. and Rasmussen, J. (1986), An
introduction to the European hydrological system- Systeme Hydrologique Europeen, SHE, 1:
history and philosophy of a physically-based, distributed modelling system. Journal of
Hydrology, volume 87(1-2), pp.45-59.
Beven, K.J. (2002), Towards an alternative blueprint for a physically based digitally simulated
hydrologic response modelling system, Hydrological Processes, 16: 189-206.
Bonell, M. and Gilmour, D.A. (1978), The development of overland flow in a tropical
rainforest catchment. Journal of Hydrology, 39: 365-382.
Bonell, M., Gilmour, D.A. and Sinclair, D.F. (1979), A statistical method for modelling the fate
of rainfall in a tropical rainforest catchment. Journal of Hydrology, 42: 251-267.
Boyd, M.J., Bufill, M.C. and Knee, R.M. (1993), Pervious and impervious runoff in urban
catchments. Hydrological Sciences Journal, 38(6), 463-478.
Cordery, I. and Pilgrim, D.H. (1983), On the lack of dependence of losses from flood runoff
on soil and cover characteristics. Hydrology of Humid and Tropical Regions with Particular
Reference to the Hydrological Effects of Agriculture and Forestry Practice (Proceedings of
the Hamburg Symposium). Hamburg, IAHS Publication No.140.
Dolman, A.J., Hall, A.J., Kavvas, M.L., Oki, T. and Pomeroy, J.W. (2001), Soil-Vegetation-
Atmosphere transfer schemes and large-scale hydrologic models. Proceedings of an
international symposium held during the sixth Scientific Assembly of the International
Association of Hydrological Sciences (IAHS) at Maastricht, The Netherlands 18-27 July
2001. IAHS Publication No.270.
Dunne, T. and Black, R.D. (1970), Partial area contributions to storm runoff in a small New
England watershed. Water Resources Research, 6(5), 1296-1311.
Espey, W.H. and Winslow, D.E. (1974), Urban flood frequency characteristics. Journal of the
Hydraulics Division. American Society of Civil Engineers, 100(HY2), 279-293.
Ferguson, B.K. and Suckling, P.W. (1990), Changing rainfall-runoff relationships in the
urbanizing Peachtree Creek Watershed, Atlanta, Georgia. Water Resources Bulletin, 26(2),
313-322.
Haan, C.T., Johnson, H.P. and Brakensiek, D.L. (1982), Hydrologic modelling of small
watersheds. American Society of Agricultural Engineers. St Joseph, Michigan.
Harris, E.E. and Rantz, S.E. (1964), Effect of urban growth on streamflow regimen of
Permanente Creek Santa Clara County, California: Hydrological effects of urban growth.
Geological survey water-supply paper 1591-B. US Department of the Interior.
Hill, P.I., Mein, R.G. and Siriwardena, L. (1998), How much rainfall becomes runoff: loss
modelling for flood estimation. Cooperative Research Centre for Catchment Hydrology.
Hollis, G.E. (1975), The effect of urbanisation on floods of different recurrence interval.
Water Resources Research, 11: 431-435.
Laurenson, E.M. (1975), Streamflow in catchment modelling. In: Chapman, T.G. and Dunin,
F. X. (eds) Prediction in catchment hydrology. Australian Academy of Science, pp.149-164.
Laurenson, E.M. and Pilgrim, D.H. (1963), Loss rates for Australian catchments and their significance. Journal of the Institution of Engineers, Australia, 35(1-2), 9-24.
McDonnell, J.J. (2013), Are all runoff processes the same? Hydrological Processes, 27:
4103-4111.
Mein, R.G. and Goyen, A.G. (1988), Urban runoff. Civil Engineering Transactions, 30:
225-238.
Mein, R.G. and Larson, C.L. (1973), Modelling infiltration during steady rain. Water
Resources Research, 9(2), 384-394.
Pilgrim, D.H. and Cordery, I. (1993), Flood runoff. In: Maidment, D.R. (ed) Handbook of
Hydrology. McGraw-Hill, pp: 9.1-9.42.
Tholin, A.L. and Keifer, C.J. (1959), The hydrology of urban runoff. Journal of the Sanitary
Engineering Division, 85(SA2), 47-106.
Weiler, M., McDonnell, J.J., Tromp-van Meerveld, I. and Uchida, Taro. (2005), Subsurface
Stormflow. In: Anderson, M.G. (ed.) Encyclopedia of Hydrological Sciences. John Wiley and
Sons.
Western, A.W. and Grayson, R.B. (1998), The Tarrawarra data set: soil moisture patterns,
soil characteristics and hydrological flux measurements. Water Resources Research 34(10),
2765-2768.
Western, A.W. and Grayson, R.B. (2000), Soil moisture and runoff processes at Tarrawarra.
In: Grayson, R. B. and Blöschl G. (eds) Spatial patterns in catchment hydrology:
observations and modelling. Cambridge University Press, pp: 209-246.
Chapter 3. Types of Simulation
Approaches
Rory Nathan, Fiona Ling
3.1. Introduction
Rainfall-based models are commonly used to extrapolate flood behaviour at a particular
location using information from a short period of observed data. This can be done using
either event-based or continuous simulation approaches.
Event-based approaches are based on the transformation of a discrete rainfall event into a
flood hydrograph using a simplified model of the physical processes involved. This requires the
application of two modelling steps, namely: a runoff production model to convert the storm
rainfall input at any point in the catchment into rainfall excess or runoff at that location, and a
hydrograph formation model to simulate the conversion of these local runoffs into a flood
hydrograph at the point of interest. The rainfall event is described by a given depth of rainfall
occurring over a selected duration, where it is necessary to specify the manner in which the
rainfall varies in both time and space. The input rainfall may represent a particular observed
event, or else it may represent the depth of rainfall with a specific Annual Exceedance
Probability (ie. a design rainfall). The former approach is most commonly used for model
calibration and flood forecasting, the latter approach is used to estimate flood risk for design
and planning purposes. The defining feature of such models is that they are focused on the
simulation of an individual flood event, and that antecedent (and baseflow) conditions need
to be specified in some explicit fashion.
In contrast, continuous simulation approaches transform a long time series of rainfall (and
other climatic inputs) into a corresponding series of streamflows. Such time series may span
many weeks or years, and may represent behaviour that reflects the full spectrum of flood
and drought conditions. Such models comprise simplified representations of catchment
processes, and most usually involve the simulation of soil moisture and its control over the
partitioning of rainfall into various surface and subsurface contributions to recharge and
streamflow. Once simulated, information on the frequency and magnitude of flood behaviour
needs to be extracted from the resulting time series using the same methods adopted for
historical streamflow data.
The relative strengths and weaknesses of these approaches are outlined in Book 1, Chapter
3. The following sections provide information on simulation approaches relevant to each
approach, where guidance on their calibration and application is presented in Book 7. Event-
based models may be implemented in a variety of ways, and three approaches of increasing
sophistication are described in Book 4, Chapter 3, Section 2 to Book 4, Chapter 3, Section 2.
The Simple Event approach is first described in Book 4, Chapter 3, Section 2, and this
includes discussion of the main elements that are common to all event-based approaches.
The Ensemble Event approach (Book 4, Chapter 3, Section 2) provides a simple means to
accommodate variability of a selected input, and this is followed by description of Monte
Carlo approaches in Book 4, Chapter 3, Section 2, which provide a rigorous treatment of the
joint probabilities involved in estimation of design floods. Continuous Simulation approaches
are described in Book 4, Chapter 3, Section 3, and hybrid approaches based on a mixture of
event- and continuous schemes are briefly described in Book 4, Chapter 3, Section 4. The
performance, strengths and limitations of the different approaches are discussed in Book 4,
Chapter 3, Section 5 and Book 4, Chapter 3, Section 6, and finally, the elements of a worked
example are presented in Book 4, Chapter 3, Section 7.
• A design storm of preselected AEP and duration: historically it has been most common to
only consider the most intense parts of complete storms (“design burst"), where the
average intensity of the burst is determined from rainfall Intensity Frequency Duration
(IFD) data (Book 2, Chapter 2). This information is generally available as a point rainfall
intensity, and it is necessary to apply an Areal Reduction Factor (Book 2, Chapter 4) to
correctly represent the areal average rainfall intensity over a catchment;
• Temporal patterns to distribute the design rainfall over the duration of the event, and this
can include additional rainfalls before the start (and after the end) of the burst to represent
complete storms (Book 2, Chapter 5);
• Spatial patterns to represent rainfall variation over a catchment that occurs as the result of
factors such as catchment topography and storm movement (Book 2, Chapter 4); and
• Loss parameters that represent soil moisture conditions in the catchment antecedent to
the event and the capacity of the soil to absorb rainfall during the event (Book 5, Chapter
5).
A range of event-based models are available to convert rainfalls into a flood hydrograph,
though in general these models provide highly simplified representations of the key
processes relevant to flood generation:
• A loss model is used to estimate the portion of rainfall that is absorbed by the catchment
and the portion that appears as direct runoff (Book 5, Chapter 3). This loss is typically
attributed to a range of processes, including: interception by vegetation, infiltration into the
soil, retention on the surface (depression storage), and transmission loss through the
stream bed and banks; and
The most commonly applied event-based approach is the Design Event approach which
assumes that there is a critical rainfall duration that produces the design flood for a given
catchment. This critical duration depends on the interplay of catchment and rainfall
characteristics; it is not known a priori but is usually determined by trialling a number of
rainfall durations and then selecting the one that produces the highest flood peak (or
volume) for the specific design situation.
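This search for the critical duration can be expressed as a simple loop over candidate burst durations, as sketched below. The function run_event_model is a hypothetical placeholder for whatever flood event model is adopted, and its response is fabricated purely so the example runs; in practice each call would apply the design rainfall depth, temporal pattern and losses for the nominated duration.

# Sketch of the critical duration search used in the Design Event approach.
# run_event_model() is a hypothetical placeholder for the adopted event model.
def run_event_model(duration_hr, aep):
    # Placeholder response: peaks rise then fall with duration (illustrative only)
    return 300.0 - 2.0 * (duration_hr - 9.0) ** 2

candidate_durations = [1, 2, 3, 6, 9, 12, 18, 24, 36, 48]   # hours
aep = 0.01   # 1% AEP

peaks = {d: run_event_model(d, aep) for d in candidate_durations}
critical_duration = max(peaks, key=peaks.get)

print("Peak flow by duration:", {d: round(q) for d, q in peaks.items()})
print("Critical duration: %d hours, design peak: %.0f m3/s"
      % (critical_duration, peaks[critical_duration]))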
An important consideration in the application of this approach is that the inputs defining the
Design Event should be selected to be probability neutral. This involves selecting model
inputs and parameter values such that the 1 in X AEP design rainfalls are converted to the
corresponding 1 in X AEP floods. The task of defining a typical combination of flood
producing factors for application in the ‘Design Event’ approach is made particularly difficult
by the fact that flood response to rainfall is generally non-linear and can be highly non-linear.
This means that average conditions of rainfall or loss are unlikely to produce average flood
conditions. The probability neutrality of inputs can only be tested if independent flood
estimates are available for comparison; for more extreme events, the adopted values of
probability neutral inputs must be conditioned by physical and theoretical reasoning.
The following guidance presents three approaches to dealing with probability neutrality,
namely:
• Simple Event, where all hydrologic inputs are represented as single probability neutral
estimates from the central range of their distribution;
• Ensemble Event, where the dominant factor influencing the transformation is selected
from a range of values representing the expected range of behaviour, and all other inputs
are treated as fixed; and
• Monte Carlo Event, where all key factors influencing the transformation are stochastically
sampled from probability distributions or ensembles, preserving any significant
correlations between the factors, and probability neutrality is assured (for the given set of
inputs) by undertaking statistical analysis of the outputs.
The key differences between these approaches are illustrated in Figure 4.3.1. Book 4, Chapter 3, Section 2 to Book 4, Chapter 3, Section 2 describe each of these procedures in turn,
though it is worth noting here the essential similarities between the three methods as shown
in Figure 4.3.1. It is seen that these three methods use the same source of design rainfalls
and the same conceptual model to convert rainfall into a flood hydrograph. The process
involved in calibrating a conceptual model to historic events is common to all three approaches; they differ only in how selected inputs are treated when deriving design floods.
Spatial patterns of rainfall generally have a lower influence on flood characteristics than
temporal patterns, and consequently simpler approaches are used to accommodate the joint
probabilities involved. For most practical situations it is assumed sufficient to adopt a fixed
non-uniform pattern that reflects the systematic variation arising from topographic influences
(Book 2, Chapter 4).
For estimating losses, various types of models ranging from a simple loss model to complex
conceptual runoff-routing models are available (Hoang et al., 1999; Hill et al., 2012). Loss
models most suited for design purposes generally involve specification of a parameter (such
as initial loss) that is related to soil moisture conditions in the catchment prior to the onset of
the storm. They also generally involve specification of a loss term related to the infiltration of
a proportion of storm rainfall during the event (e.g. continuing loss or proportional loss). The
most comprehensive analyses of design loss values available to date have been undertaken
by Kuczera et al. (2006) and Newton and Walton (2000), and guidance on suitable loss
values to adopt is provided in Book 5, Chapter 5. The selected loss values can have a large
influence on the resulting flood characteristic, and the adoption of regional estimates does
not guarantee unbiased estimates of the resulting floods; for this reason it is also desirable
to reconcile design values with independent flood frequency estimates where possible (as
discussed in Book 5, Chapter 5).
The direct runoff simulated by the loss model is then routed through the catchment to
generate the design flood hydrograph. The hydrograph corresponding to the rainfall burst
duration that results in the highest peak (the critical rainfall duration) is taken as the design
flood hydrograph, and it is assumed to have the same Annual Exceedance Probability as its
causative rainfall. It needs to be stressed that probability neutrality is an untested
assumption with the simple event approach, and without reconciliation with flood frequency
estimates using at-site or transposed gauged maxima, there is no way of determining how
the selected inputs may have biased the outcome.
In summary, the only probabilistic variable considered with the Simple Event approach is
average rainfall intensity or depth, while other inputs (e.g. losses, rainfall temporal and
spatial patterns) are represented by fixed values drawn from the central tendency of their
distribution (Rahman et al., 1998; Nathan et al., 2002; Rahman et al., 2002a; Kuczera et al.,
2006; Nathan et al., 2003).
In concept the approach could be extended to take account of factors that are non-uniformly
distributed, though here it would be necessary to carefully weight the outcome by the relative
likelihood of the different values selected, or else select the input values in a way that
reflects the form of their distribution. For example, if a sample of ten initial loss values were
selected, then it would be necessary to weight each result by the probability of each loss
value occurring, which could be determined (for example) from the cumulative distribution of
losses presented in Book 5, Chapter 5; alternatively, the distribution of losses could be
divided into ten equally likely exceedance percentile ranges, and the results then be given
equal weighting.
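The small sketch below contrasts these two options, using a hypothetical set of initial loss values taken at the mid-points of ten equally likely percentile ranges and a hypothetical non-linear flood response; both the loss values and the response function are assumptions for illustration only.

# Sketch of the two weighting options described above for a non-uniformly
# distributed input (initial loss). The loss distribution and the flood
# response function are hypothetical.
import statistics

def peak_flow_for_loss(initial_loss_mm):
    # Hypothetical non-linear flood response: larger losses give smaller peaks
    return max(0.0, 400.0 - 6.0 * initial_loss_mm - 0.05 * initial_loss_mm ** 2)

# Losses sampled at the mid-points of ten equally likely exceedance percentile
# ranges (assumed values); each result then receives equal weight.
loss_at_percentile_midpoints = [5, 9, 13, 17, 21, 26, 31, 38, 47, 60]  # mm

peaks = [peak_flow_for_loss(il) for il in loss_at_percentile_midpoints]
expected_peak = statistics.mean(peaks)   # equal weighting of equi-probable samples

# For comparison, the peak obtained using only the median loss value:
median_loss_peak = peak_flow_for_loss(statistics.median(loss_at_percentile_midpoints))

print("Expected peak from ten equi-probable losses: %.0f m3/s" % expected_peak)
print("Peak using the median loss only:             %.0f m3/s" % median_loss_peak)

The difference between the two printed values illustrates why, with a non-linear flood response, a single central loss value is unlikely to be probability neutral.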
It is expected that the approach is most suited to the consideration of temporal patterns, as
suitable ensemble sets of patterns are readily available (as described in Book 2, Chapter 5).
Flood magnitudes are generally very sensitive to temporal patterns and thus the ensemble
approach provides a straightforward, if somewhat tedious, means of avoiding the
introduction of bias due to this source of variability. Extending the ensemble method to
consider other inputs, jointly or otherwise, would appear to introduce additional problems
which are probably most easily handled by Monte Carlo approaches.
In the most general Monte Carlo simulation approach for design flood estimation, rainfall
events of different duration are sampled stochastically from their distribution. The simulated
design floods are then weighted in accordance with the observed frequency of occurrence of
rainfall events of different durations that produced them. This avoids any positive bias of
estimated flood probabilities which may be associated with the application of the critical
rainfall duration concept (Weinmann et al., 2000; Weinmann et al., 2002; Rahman et al.,
2002b). The application of this generalised approach relies on the derivation of new design
data for rainfall events that are consistent with a new probabilistic definition of storm ‘cores’
or complete storms (Hoang et al., 1999). Such design rainfall data is currently not available,
thus limiting the application of the generalised approach. To obviate the need for this,
Nathan et al. (2002) and Nathan et al. (2003) adapted the approach to separately consider
different rainfall durations; the resulting peak flows are then enveloped to select the critical
event duration, consistent with the ‘critical rainfall duration’ concept used in traditional design
flood estimation practice. This is the approach further described herein. Whilst adherence to
the ‘critical duration’ concept could possibly introduce systematic bias into the results, it has
the advantage of ensuring consistency with existing design approaches and allows much of
the currently available design data to be readily used.
Undertaking a Monte Carlo simulation requires three sets of key decisions, followed by a
simulation step that involves construction of the derived flood frequency curve. The overall
steps involved are as follows:
i. Select an Appropriate Flood Event Simulation Model - The criteria for selecting an
appropriate model are similar to those used with the traditional Design Event approach
and are described in Book 5. The selected model should be able to be run in batch mode
with pre-prepared input files or be called from the Monte Carlo simulation application.
Models with fast execution speeds are well suited to Monte Carlo simulation; complex
models with slow run-times can still be utilised, though generally they need to be invoked
within a stratified sampling scheme (Book 4, Chapter 4, Section 3) to ensure that the
simulations times are within practical constraints.
ii. Identify the Model Inputs and Parameters to be Stochastically Generated - The stochastic
representation of model inputs should focus on those inputs and parameters which are
characterised by a high degree of natural variability and a non-linear flood response.
Examples include rainfall temporal pattern, initial loss and reservoir storage content at the
start of a storm event. If the assessment indicates limited variability and essentially linear
system response, then there may be little to be gained from extending the Monte Carlo
simulation approach to include such additional inputs or parameters.
iv. Undertake Monte Carlo Simulation - The design inputs and parameters exhibiting
significant variability are sampled in turn from their distributions allowing for significant
correlations, and the resulting combination of inputs and parameters is then used in a
simulation model run. Only those inputs that have a significant influence on the results
need to be stochastically sampled, and other inputs can be treated as fixed (usually
average or median) values. For Monte Carlo simulation involving several stochastic
variables, many thousands of simulations are required to adequately sample the inherent
variability in the system, and thus for most practical problems some thought is required to
minimise disc storage space and simulation times.
v. Construct the Derived Flood Frequency Curve - Once the required number of runs has
been undertaken, it is necessary to analyse the results to derive the exceedance
probabilities of different flood magnitudes. Where very simple models are used or the
probabilities of interest are not extreme – more frequent than, say, 1 in 100 Annual
Exceedance Probabilities (AEP) – the simulation results can be analysed directly using
frequency analysis (as described in Book 3, Chapter 2). Alternatively, in order to estimate
rarer exceedance probabilities (or use more complex models with slow execution speeds)
it is desirable to adopt a stratified sampling approach to derive the expected probabilities
of given event magnitudes, as described in Book 4, Chapter 4.
An example flowchart for the last two steps is illustrated in Figure 4.3.2. This flowchart
represents the high level procedure relevant to the consideration of the joint probabilities
involved in the variation of loss parameters and temporal patterns. The starting point for this
simple Monte Carlo simulation is the Step “A” in Figure 4.3.2. The loss and temporal patterns
are then sampled and combined with fixed values of other inputs for simulation using a flood
event model. Once many thousands of combinations of rainfall depth, losses and temporal
patterns have been undertaken, the resulting flood maxima are analysed to derive unbiased
estimates of flood risk (represented by Step “B”, Figure 4.3.2). Suitable sampling schemes
and analyses relevant to these steps are described in Book 7, Chapter 7, where additional
variables (such as reservoir level or rainfall spatial pattern) can be included as additional
sampling steps as required.
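A highly simplified sketch of the sampling loop between Steps "A" and "B" is given below for a single rainfall duration and AEP. The loss distribution, the temporal pattern ensemble and the event model are all hypothetical placeholders, and only summary statistics of the simulated peaks are reported; deriving exceedance probabilities across the full range of rainfall depths requires the stratified analysis described in Book 4, Chapter 4.

# Highly simplified sketch of the Monte Carlo sampling loop between Steps "A"
# and "B" for a single rainfall duration and AEP. All inputs are placeholders.
import random
random.seed(1)

design_rainfall_mm = 95.0                              # burst depth for the chosen duration/AEP
temporal_patterns = ["tp%d" % i for i in range(10)]    # ensemble of 10 patterns

def run_event_model(rain_mm, initial_loss_mm, pattern):
    # Placeholder transformation of rainfall to peak flow (illustrative only)
    pattern_factor = 0.8 + 0.04 * temporal_patterns.index(pattern)
    return max(0.0, (rain_mm - initial_loss_mm) * 3.5 * pattern_factor)

peaks = []
for _ in range(5000):
    il = random.gammavariate(2.0, 12.0)        # stochastic initial loss (mm)
    tp = random.choice(temporal_patterns)      # stochastic temporal pattern
    peaks.append(run_event_model(design_rainfall_mm, il, tp))

peaks.sort()
print("Simulated peaks: mean %.0f, 10th pctl %.0f, 90th pctl %.0f m3/s"
      % (sum(peaks) / len(peaks), peaks[len(peaks) // 10], peaks[9 * len(peaks) // 10]))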
Figure 4.3.2 also depicts the relationship between Monte Carlo schemes and the other
simpler event-based methods discussed above. The blue-shaded shapes represent the
steps involved in the traditional Simple Event (or Design Event) approach, where the flood
characteristic obtained from a single simulation using the selected inputs (Step “C”) is
assumed to have the same Annual Exceedance Probability as its causative rainfall. The
ensemble approach is shown as an added loop: in this example the simulation would be
repeated for each available temporal pattern, and the results would be averaged (at Step
“C”) to yield the flood characteristic of interest, where again it is assumed that the Annual
Exceedance Probability of the calculated flood is the same as its causative rainfall. The 2nd
and 3rd last shapes represent the additional steps required to implement a Monte Carlo
scheme.
It should be noted that the steps involved between points A and B in Figure 4.3.2 represent
the scheme required to consider the joint probabilities associated with the variability of
selected inputs. It represents the characterisation of aleatory uncertainty, which is the
(irreducible) uncertainty associated with variability inherent in the selected inputs. However,
Monte Carlo schemes can also be used to consider epistemic uncertainty, and the additional
steps involved in this are shown by the first and last steps in Figure 4.3.2. Epistemic (or
reducible) uncertainty is due to lack of knowledge, and is associated with errors in the data
or the simplifications involved in representing the real world by a conceptual model. In
essence, the consideration of aleatory uncertainty allows the derivation of a single
(probability neutral) “best estimate” of flood risk, and consideration of epistemic uncertainty
allows the characterisation of confidence limits about this best estimate. In Figure 4.3.2, the inner (blue-shaded) shapes show the steps involved in the Simple Event approach, the dashed lines indicate the additional iteration required for the Ensemble Event approach, and the outer (dark blue-shaded) iteration loop shows the extension of the approach to estimate confidence limits.
Figure 4.3.2. Simple Framework for Monte Carlo Simulation for Handling Joint Probabilities
Associated with Both Losses and Temporal Patterns
In general, while the information required to characterise aleatory uncertainty can be readily
obtained from the observed record, this is not the case with epistemic uncertainty. Indeed it
is quite difficult to obtain information on the likely errors associated with input data or model
parameterisation, and it is very difficult to characterise the uncertainty associated with model
structure. Accordingly, the guidance presented here focuses on the assessment of aleatory
uncertainty as it is considered that this approach can be readily understood and applied by
practitioners with the appropriate skills. Thus, while it seems reasonable to regard the use of
Monte Carlo procedures to accommodate hydrologic variability as “best practice” for many
practical design problems, its use to derive confidence limits is expected to remain the
domain of more academic specialists for the foreseeable future.
The Continuous Simulation method of estimating the design flood is similar in intent to the
event-based Monte Carlo approach discussed in Book 4, Chapter 3, Section 2. Both
methods seek to adequately simulate the interactions between flood producing (rainfall and
catchment characteristics) variables (Kuczera et al., 2006). Conceptually, the differences
between the two methods arise in how wet and dry periods are sampled and incorporated
into the process of estimating the design flood. In the event-based Monte Carlo method
runoff-routing models are used to simulate the interactions occurring only during the storm
(wet period) event. There is implicit consideration of the influence of dry periods in sampling
the catchment-rainfall interactions (antecedent conditions, temporal patterns, storm
durations) from exogenously derived distributions of initial conditions (Kuczera et al., 2006).
The Continuous Simulation method, on the other hand, accounts for these interactions
through direct simulation of the processes occurring in the catchment over an extended
period (Kuczera et al., 2006; Boughton et al., 1999; Cameron et al., 1999). The Continuous
Simulation method is also applicable in situations where the critical event duration extends
over many weeks or months, as is the case for systems with large storage capacity but
limited outflow capacity.
The Continuous Simulation method of estimating the design flood involves running a
conceptual runoff-routing model for a long period of time such that all important interactions
(covering the dry and wet periods) between the storm (intensity, duration, temporal pattern)
and the catchment characteristics are adequately sampled to derive the flood frequency
distribution. In general, pluviograph data of hourly resolution (or less) is used to drive the
runoff-routing models. In most cases the period of record of pluviograph data rarely exceeds
20 years; therefore, rainfall data is extended by using stochastic rainfall data generation. The
runoff-routing model is calibrated using flow data, where available, and the calibrated model
is then used to generate a long series of simulated flow. Finally the simulated flow is then
used to extract the Annual Maximum Series and estimate the derived flood frequency curve.
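The skeleton of these final steps is sketched below: a placeholder rainfall-runoff model is run over a long synthetic daily rainfall series, the Annual Maximum Series is extracted, and simple plotting positions give the empirical exceedance probabilities. The rainfall generator and single-store model are crude assumptions for illustration only; in practice a calibrated model and a properly fitted stochastic rainfall generator would be used.

# Skeletal sketch of the continuous simulation workflow (illustrative only).
import random
random.seed(2)

def generate_daily_rainfall(n_years):
    # Placeholder synthetic rainfall; in practice a stochastic generator
    # calibrated to observed data would be used (Book 2, Chapter 7).
    return [[random.expovariate(1 / 6.0) if random.random() < 0.3 else 0.0
             for _ in range(365)] for _ in range(n_years)]

def simulate_flow(year_of_rain, store=50.0, capacity=120.0, k=0.3):
    # Very simple single-store water balance: runoff spills when the store
    # exceeds capacity, plus a linear release from the store.
    flows = []
    for rain in year_of_rain:
        store += rain
        spill = max(0.0, store - capacity)
        store -= spill
        release = k * store
        store -= release
        flows.append(spill + release)
    return flows

annual_maxima = [max(simulate_flow(year)) for year in generate_daily_rainfall(1000)]
annual_maxima.sort(reverse=True)
# Plotting positions of rank/(N + 1) give the empirical AEP of each maximum,
# so the 10th largest of 1000 annual maxima has an AEP of about 1%.
print("Approx. 1%% AEP daily flow index: %.1f" % annual_maxima[9])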
Important components of the Continuous Simulation approach are further discussed in the
following sections:
In such cases stochastic rainfall generation has been used to provide a long time series of
synthetic rainfall (Boughton et al., 1999; Cameron et al., 1999; Droop and Boughton, 2002;
Haberlandt and Radtke, 2013). The synthetic data set thus generated is designed to be
statistically indistinguishable from observed rainfall data (Kuczera et al., 2006).
There are well established methods to generate stochastic data at a coarse time scale.
However, generating fine resolution synthetic data that can reproduce the statistics of the
observed rainfall series at various temporal scales (annual, monthly, daily and hourly) is
challenging (Srikanthan and McMahon, 2001; Boughton and Droop, 2003; Kuczera et al.,
2006). Therefore, a commonly used approach is to generate the synthetic rainfall data at a
daily time step first, and then disaggregate to a sub-daily time step by using functional
relationships between daily and sub-daily rainfall statistics. Boughton (1999) used the
Transition Probability Matrix (TPM) model to generate thousands of years of daily rainfall
data and then disaggregated the daily data to an hourly time-step using the sub-daily rainfall
statistics derived from IFD curves and temporal patterns. Kuczera et al. (2006) tested the
ability of the DRIP rainfall generating model (Heneker et al., 2001) to reproduce observed
rainfall statistics at different levels of aggregation (hourly to yearly) and found that the model
was able to reproduce the observed rainfall statistics satisfactorily for the large storms.
Techniques are available for generating daily rainfalls at any site in Australia (Book 2,
Chapter 7); thus the inputs required for continuous simulation models can be developed for
catchments without adequate at-site rainfall data.
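A minimal sketch of a two-state (wet/dry) daily rainfall generator of the transition-probability type referred to above is given below. The transition probabilities and the wet-day depth distribution are assumptions for illustration only; in practice they would be estimated from observed records, usually on a monthly or seasonal basis, and the daily series would then be disaggregated to sub-daily time steps as described above.

# Minimal sketch of a two-state (wet/dry) daily rainfall generator of the
# transition-probability type. All parameter values are illustrative only.
import random
random.seed(3)

p_wet_given_dry = 0.25    # P(wet today | dry yesterday)
p_wet_given_wet = 0.60    # P(wet today | wet yesterday)
mean_wet_day_depth = 8.0  # mm, for an exponential wet-day depth model

def generate_daily_series(n_days):
    series, wet = [], False
    for _ in range(n_days):
        p_wet = p_wet_given_wet if wet else p_wet_given_dry
        wet = random.random() < p_wet
        series.append(random.expovariate(1.0 / mean_wet_day_depth) if wet else 0.0)
    return series

synthetic = generate_daily_series(365 * 100)   # 100 years of synthetic data
annual_total = sum(synthetic) / 100.0
wet_fraction = sum(1 for d in synthetic if d > 0) / len(synthetic)
print("Mean annual rainfall: %.0f mm, wet-day fraction: %.2f"
      % (annual_total, wet_fraction))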
The three factors that need to be considered when selecting a continuous simulation model
for flood estimation are:
1. The ability of the model to represent the physical processes occurring in the catchment
(model complexity);
3. The amount of data and computational resources available to properly describe and
calibrate the model (model parsimony).
Useful guidance on the trade-offs involved in matching model complexity with data
availability is provided by Vaze et al. (2012).
Lack of observed data is a major problem for calibration of the rainfall generation model or
the runoff-routing model. In the case of the rainfall generation model, for example, the short
rainfall data sets generally available are unlikely to include extreme rainfall events caused by
various rain producing mechanisms (for example cyclones vs. thunderstorms) and to sample
the full range of natural variability.
Newton and Walton (2000) further applied the CSS in a large (13 000 km2), semi-arid
catchment in Western Australia. They compared the design estimates produced by the CSS
to the observed flood frequency curve and found that the design flood estimates
overestimated the observed flood frequency curve for more frequent floods. They speculated
that the discrepancy between observed flood frequency curve and the CSS result might be
due to the sampling problem; the observed flood frequency curve was estimated based on a
shorter period (31 years) of data, while the rainfall generation model was calibrated to longer
(93 years) data. The observed streamflow data covered a relatively dry period and did not
represent the total climatic variability over a longer period.
There have been other applications of Continuous Simulation approaches for estimation of
the derived flood frequency curve, for example Haberlandt and Radtke (2013), Cameron et
al. (1999) and Droop and Boughton (2002), to catchments of various sizes and
characteristics. In all cases stochastic rainfall generators were used to extend the rainfall
data. Although different rainfall generation and process models were used, all report that the
derived distribution curve produced by the method was able to provide a satisfactory match
to the observed flood frequency curves for large floods. However, in all cases described, the
ability of the model to properly reproduce extreme flood events has not been confirmed, due
to lack of data for extreme events.
The results of the event-based modelling showed that in general an initial loss-continuing
loss model run using both the Monte Carlo and Ensemble approaches performed well in
reproducing the at-site flood frequency curve over the range of catchments tested, for AEPs from 50% to 1%. The exception to this was that the Monte Carlo model did not
perform well for one catchment (located in the south-west of Western Australia) where the
flow response to rainfall events varied widely. SWMOD (Water and Rivers Commission,
2003) was used as an alternative loss model for this catchment, and it was found that use of
this model improved the results significantly over the initial loss-continuing loss model.
Sih et al. (2008) also evaluated the performance of Monte Carlo and Ensemble Event
approaches, and they included comparison with the traditional Simple Event method. They
tested the three methods on seven catchments covering the temperate and tropical regions
of Australia, and considered both long duration (24 hours and longer) and short duration
(less than 6 hours) storms. The Simple Event method was found to generally underestimate the peak flows. On the basis of the seven catchments considered, the Simple Event method underestimated the Monte Carlo solution by around 10% to 15%, although in some cases the method underestimated peak flows by between 50% and 70%. Sih et al.
(2008) found much closer agreement between the Ensemble Event and Monte Carlo
approaches, where generally the Ensemble Event method was found to underestimate the
Monte Carlo solution by around 5%.
The results of the method testing on continuous simulation models by Ling et al. (2015)
found that while it was possible to calibrate the models to reproduce the overall flow regime
of the catchments, the highest flow peaks were markedly underestimated and the simulated
flood frequency curve calculated from simulated Annual Maximum Series provided a very
poor fit to the observed flood frequency curve. Weighting the calibration to the largest events
in the series reduced the ability of the model to reproduce the overall flow regime, and
provided only slight improvements in the accuracy of the derived frequency curves. It was
found that the models could be calibrated directly to selected quantiles of the observed flood
frequency curve, but this resulted in a very poor representation of hydrograph behaviour and
large biases in flood volume. This testing clearly illustrated the multi-criteria nature of the
calibration problem (Gupta et al., 1998), and showed that it is difficult to obtain a very good fit
to both the flood frequency curve and hydrograph behaviour. Furthermore, comparison of the
calibrated parameters resulting from the different calibration approaches also showed large
differences in values, indicating a trade-off between reproducing the hydrograph and the
best representation of the flood frequency curve.
Ling et al. (2015) investigated the effect of record length on model performance. The results
from the two test catchments showed that even when twenty years of data are available at a site, the model results can vary significantly depending on the period of record used in the analysis. This is particularly evident when one period is noticeably drier or
wetter than the other. This highlights the need to investigate how representative the
available flow data is in the context of any available long-term rainfall records. Both the
Monte Carlo and Ensemble Event approaches gave similar results.
Ling et al. (2015) also investigated the efficacy of applying the methods to ungauged
catchments. The results of the investigation by Ling et al. (2015) illustrated that even when
data is available from a neighbouring gauged catchment, care must be taken in transposing
inputs and parameters from similar gauged catchments. When parameters were transferred
between models from dissimilar catchments, the results of both the Monte Carlo and
Ensemble Event approaches were very poor. From these tests it is concluded that only
catchments with similar climatic conditions, catchment sizes and catchment characteristics
should be considered for providing model parameters for ungauged catchments.
The Simple Event method has been the most commonly used approach to date in Australia.
It is simple to apply, and information on the required design inputs - design rainfalls, single
temporal patterns of average variability, and median design losses - are readily available for
most locations in Australia. The probability neutrality assumption is maintained by selecting
single “representative” values of the inputs; however, without independent information there
is no way of knowing whether this assumption has been satisfied. Thus, while simple and
easy to apply, the method is lacking in robustness and defensibility.
The Ensemble Event method represents a modest increase in complexity. Rather than
undertaking a single run for each combination of event AEP and duration, it is necessary to
undertake ten or so simulations and average the outcome; if single hydrographs are required
for design purposes then these can be obtained by simple scaling of a hydrograph obtained
from a representative event. The method does involve a little more tedium for practitioners,
though most modelling software can be configured for batch processing, and the additional
computational burden is of no consequence. The method is most readily suited to the
consideration of temporal patterns, where testing has shown that in natural catchments it
yields similar estimates to those derived from more rigorous approaches. While the
approach represents an appreciable improvement over Simple Event methods, the approach
does suffer from the limitation that it is not well suited to considering the influence of
additional stochastic factors that may have an influence on the derived flood estimates. In
natural catchments this includes the estimation of floods which are heavily influenced by the
joint occurrence of highly variable losses and temporal patterns, catchments in which natural
lake (or snowpack) levels are subject to variable antecedent conditions, or catchments
where it is necessary to consider seasonal variation in individual inputs. In disturbed
catchments the method is unable to consider the influence of variable initial reservoir levels
on dam outflows, the likelihood of debris blocking culverts and bridge waterway areas, or the
influence of controlled discharges from infrastructure works that may be subject to some
variability.
In contrast, Monte Carlo methods are well suited to the consideration of multiple sources of
variability from natural or anthropogenic sources. Once the simulation scheme has been
established, it is easily expanded to consider additional factors of importance. For example,
the same sampling scheme can be used to accommodate the variability associated with
seasonality of storm occurrence or temporal patterns, drawdown in a reservoir, or blockage
factors. The information required to characterise aleatory uncertainty (ie. hydrologic
variability) is often available in the historic record: if there is sufficient information available to
simulate a process with a deterministic model, then the necessary information required to
characterise variability can be readily obtained (or generated). Importantly, it is a simple
matter to expand a simulation scheme to allow for correlations between the stochastic
factors modelled. Thus, if there is information available that suggests that the dominant
season is dependent on event severity, or that the available airspace in a reservoir
decreases with event severity, then this is easily accommodated by using a conditional
sampling scheme. The limitation of the method is that specialist modelling skills are required
to develop bespoke Monte Carlo schemes, and that additional effort is required to ensure
that the distributions used to characterise variability are appropriate for the conditions being
simulated. The method can be expanded to include consideration of epistemic uncertainty
(e.g. uncertainty in the routing parameters or in the design estimate of rainfall depth), but the
necessary information for such schemes can be difficult to obtain and justify.
Hybrid models have the potential to combine the benefits of both continuous and event
approaches, though at this stage insufficient investigations have been undertaken to
determine whether such schemes provide demonstrable benefits over other approaches.
The runoff-routing model was fitted to three historic flood events, and the results for the
largest event (December 2010) are also shown in Figure 4.3.3. The initial loss parameters
fitted to the three events were 25, 10, and 15 mm, and the corresponding continuing loss
parameters were 2.5, 1.5, and 2.5 mm/hr.
Figure 4.3.3. Schematic Layout of Delatite River catchment and Calibration to December
2010 Event
Three different approaches were used to derive design estimates using the calibrated runoff-
routing model. The Simple Event approach used a single temporal pattern of average
variability, along with a single set of loss parameters obtained from calibration to the three
historic events. The Ensemble Event approach replaced the single temporal pattern with a
sample of 19 patterns derived from rainfall events that have occurred in the inland region of
south-east Australia, and used the same loss parameters as used in the Simple Event
method. Monte Carlo results were obtained using the same set of temporal patterns as used
in the Ensemble Event approach; the continuing loss parameter was held constant, and the
initial loss was sampled from a non-dimensional distribution of initial losses (Hill et al., 2015)
with a median loss value set equal to the value adopted for the Simple Event method. The
results from these three approaches are shown in Figure 4.3.4 where it is seen that the
Monte Carlo approach yields estimates that are very similar to the quantiles obtained from
Flood Frequency Analysis. The Ensemble Event estimates are similar to but lower than
those obtained using Monte Carlo analysis, and the Simple Event estimates are substantially
higher. It is worth noting that all design flood estimates rarer than about 5% AEP lie within
the confidence limits associated with the Flood Frequency Analysis.
Also shown in Figure 4.3.4 are the results obtained from Continuous Simulation. A number
of conceptual models were trialled and the Sacramento model (Burnash et al., 1973) was
found to provide the best results. Rainfall inputs to the model were obtained using gridded
rainfall data (Jones et al., 2009) and mean monthly areal potential evapotranspiration inputs
were obtained from the Bureau of Meteorology (Chiew et al., 2002). The model was initially
calibrated to daily streamflows using 20 years of historic data, and then adjusted to
reproduce the instantaneous peak flows over the same period. The model was used to
derive 101 years of simulated streamflows using the gridded rainfall data, and a Generalised
Extreme Value distribution was then fitted to the annual maxima extracted from the time
series. The results are shown in Figure 4.3.4, where it is seen that the design estimates are
substantially lower than the results obtained from the event-based approaches. The derived
flood frequency curve generally lies along the lower confidence limits of the frequency curve
fitted using gauged maxima.
While no general conclusions should be drawn from this example about the relative efficacy
of the different methods used, the results do illustrate the range of estimates obtained for a
well gauged catchment. They indicate the degree of ‘model uncertainty’ that generally
remains unknown when only a single simulation method is employed. The largest event
used to fit the runoff-routing model occurred in December 2010 and has a peak similar in
magnitude to the 2% AEP event determined from Flood Frequency Analysis. The period of
record used to calibrate the Sacramento model spanned a representative range of climatic
conditions. The data used in this example are more extensive than is typically available, yet the design estimates still vary by about a factor of two.
Figure 4.3.4. Comparison of Design Flood Estimates with Flood Frequency Curve for the
Delatite River at Tonga Bridge
3.8. References
Abbott, M.B., Bathurst, J.C., Cunge, J.A., O'Connell, P.E. and Rasmussen, J. (1986), An introduction to the European hydrological system - Systeme Hydrologique Europeen, SHE, 1: history and philosophy of a physically-based, distributed modelling system. Journal of Hydrology, 87(1-2), 45-59.
Beven K. J., Calver A. and Morris E. M. (1987). The Institute of Hydrology Distributed Model.
Technical Report No. 98 Institute of Hydrology, Wallingford, UK.
Boughton, W.C. (1999), A daily rainfall generating model for water yield and flood studies. Report 99/9, 21 pp, CRC for Catchment Hydrology, Monash University, Melbourne.
Boughton, W. and Droop, O. (2003), Continuous simulation for design flood estimation-a
review. Environmental Modelling & Software, 18(4), 309-318. Available at: http://
linkinghub.elsevier.com/retrieve/pii/S1364815203000045 [Accessed October 3, 2013].
Boughton, W.C., Muncaster, S.H., Srikanthan, R., Weinmann, P.E., Mein, R.G. (1999)
Continuous Simulation for Design Flood Estimation - a Workable Approach. Water 99: Joint
Congress; 25th Hydrology & Water Resources Symposium, 2nd International Conference on
Water Resources & Environment Research; Handbook and Proceedings; pp: 178-183.
Burnash, R.J.C., Ferral, R.L. and McGuire, R.A. (1973), A Generalised Streamflow
Simulation System - Conceptual Modeling for Digital Computers. U.S. Department of
Commerce, National Weather Service and State of California, Department of Water
Resources.
Cameron, D.S, Beven, K.J, Tawn, J, Blazkova, S. and Naden, P. (1999), Flood frequency
estimation by Continuous Simulation for a gauged upland catchment (with uncertainty).
Journal of Hydrology, 219(3-4), 169-187. Available at: http://linkinghub.elsevier.com/
retrieve/pii/S0022169499000578.
Chiew, F.H.S. and McMahon, T.A. (2002), Modelling the impacts of climate change on
Australian streamflow. Hydrol. Processes, 16: 1235-1245.
Chiew, F.H.S., Wang, Q.J., McConachy, F., James, R., Wright, Q. and deHoedt, G. (2002),
Evapotranspiration maps for Australia. Hydrology and Water Resources Symposium,
Melbourne, May 20-23 2002.
Garavaglia, F., Gailhard, J., Paquet, E., Lang, M., Garcon, R. and Bernardara, P. (2010),
Introducing a rainfall compound distribution model based on weather patterns sub-sampling
Hydrol. Earth Syst. Sci., 14: 951-964.
Green, J.H., Walland, D.J., Nandakumar, N. and Nathan, R.J. (2003), The Impact of
Temporal Patterns on the Derivation of PMPDF and PMF Estimates in the GTSM Region of
Australia. Presented at Hydrology and Water Resources Symposium, Wollongong, NSW,
November 2003.
Gupta, V.K. and Sorooshian, S. (1985), The relationship between data and the precision of
parameter estimates of hydrologic models. Journal of Hydrology, 81: 57-77.
Gupta, H.V., Sorooshian, S. and Yapo, P.O. (1998), Toward improved calibration of
hydrological models: multiple and noncommensurable measures of information. Water
Resour. Res., 34(4), 751–763.
Haberlandt, U. and Radtke, I. (2013), Derived flood frequency analysis using different model
calibration strategies based on various types of rainfall-runoff data - a comparison.
Hydrology and Earth System Sciences Discussions, 10(8), 10379-10417. Available at: http://
www.hydrol-earth-syst-sci-discuss.net/10/10379/2013/ [Accessed October 13, 2013].
Heneker, T.M., Lambert, M.F. and Kuczera, G. (2001), A point rainfall model for risk-based
design, J Hydrology, 247(1-2), 54-71.
Hill, P.I., Graszkiewicz, Z., Sih, K., Nathan, R.J., Loveridge, M. and Rahman, A. (2012),
Outcomes from a pilot study on modelling losses for design flood estimation. Hydrology and
Water Resources Symposium 2012, Sydney.
Hill, P.I., Graszkiewicz, Z., Loveridge, M., Nathan, R.J. and Scorah, M. (2015), Analysis of
loss values for Australian rural catchments to underpin ARR guidance. Hydrology and Water
Resources Symposium 2015. Hobart. 9-10 December 2015.
Hoang, T.M.T., Rahman, A., Weinmann, P.E., Laurenson, E.M. and Nathan, R.J. (1999),
Joint Probability Description of Design Rainfalls. 25th Hydrology and Water Resources
Symposium, Brisbane, 1999.
Jones, D. A., Wang, W. and Fawcett, R. (2009), High-quality spatial climate data-sets for
Australia. Australian Meteorological and Oceanographic Journal, 5: 233-248.
Kuczera, G., Lambert, M., Heneker, T., Jennings, S., Frost, A. and Coombes, P. (2006), Joint
probability and design storms at the crossroads. In 28th International Hydrology and Water
Resources Symposium, Wollongong, pp: 10-14.
Ling, F., Pokhrel, P., Cohen, W., Peterson, J., Blundy, S. and Robinson, K. (2015), Australian
Rainfall and Runoff Project 12 - Selection of Approach and Project 8 - Use of Continuous
Simulation for Design Flow Determination, Stage 3 Report.
MGS Engineering Consultants (2009), General Stochastic Event Flood Model (SEFM),
Technical Support Model. Manual prepared for the United States Department of Interior,
Bureau of Reclamation Flood Hydrology Group.
Nathan, R.J. (1992), The derivation of design temporal patterns for use with generalized
estimates of Probable Maximum Precipitation. Civil Engng. Trans., I.E. Aust., CE34(2),
139-150.
Nathan, R.J. and Weinmann, P.E. (2004), An improved framework for the characterisation of
extreme floods and for the assessment of dam safety. Hydrology: Science & Practice for the
21st Century, Vol 1. Proc. British Hydrol. Soc., London, pp: 186-193.
Nathan, R.J., Weinmann, P.E. and Hill, P.I. (2002), Use Of A Monte Carlo Framework To
Characterise Hydrological Risk, ANCOLD Bulletin, (122), 55-64.
Nathan, R., Weinmann, E., Hill, P. (2003), Use of Monte Carlo Simulation to Estimate the
Expected Probability of Large to Extreme Floods, proc. 28th Int. Hydrology and Water Res.
Symp., Wollongong, pp: 1.105-1.112.
Newton, D. and Walton, R. (2000), Continuous Simulation for Design Flood Estimation in the
Moore River Catchment, Western Australia. In Hydro 2000 Hydrology and Water Resources
Symposium. Canberra, ACT, pp: 475-480.
Paquet, E., Garavaglia, F., Gailhard, J. and Garçon, R. (2013), The SCHADEX method: a
semi-continuous rainfall-runoff simulation for extreme flood estimation, J Hydrol., 495: 23-37.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation, Institution of Engineers, Australia, Barton, ACT.
Pilgrim, D.H. and Cordery, I. (1975), Rainfall temporal patterns for flood designs, Proc. Amer.
Soc. Civ. Engrs., J. Hydraulics Div., 100(1), 81-95.
Pilgrim, D.H., Cordery, I. and French, R. (1969), Temporal patterns of design rainfall for
Sydney, Institution of Engineers, Australia, Civil Eng. Trans., CE11: 9-14.
Rahman, A., Hoang, T M.T., Weinmann, P.E. and Laurenson, E.M. (1998), Joint probability
approaches to design flood estimation: a review. CRC for Catchment Hydrology, Report
199808, Melbourne, Australia.
Rahman, A., Weinmann, P.E. and Mein, R.G. (2002a), The use of probability-distributed
initial losses in design flood estimation. Australian Journal of Water Resources. 6(1), 17-29.
Rahman, A., Weinmann, P., Hoang, T.M. and Laurenson, E. (2002b), Monte Carlo simulation
of flood frequency curves from rainfall. Journal of Hydrology, 256(3-4), 196-210. http://
doi.org/10.1016/S0022-1694(01)00533-9.
Schaefer, M.G. and Barker, B.L. (2002), Stochastic Event Flood Model (SEFM), Chapter 20,
Mathematical Models of Small Watershed Hydrology and Applications, Water Resources
Publications, LLC, V.J. Singh (editor).
Schaefer, M.G. and Barker, B.L. (2004), Stochastic Event Flood Model (SEFM) - Users
Manual, MGS Engineering Consultants Inc, prepared for US Bureau of Reclamation Flood
Hydrology Group.
Sih, K., Hill, P.I., Nathan, R.J., (2008), Evaluation of simple approaches to incorporating
variability in design temporal patterns. Water Down Under 2008 (incorporating 31st
Engineers Australia Hydrology and Water Resources Symposium), pp: 1049-1059.
Srikanthan, R. and McMahon, T.A. (2001), Stochastic generation of annual, monthly and
daily climate data: A review. Hydrology and Earth System Sciences, 5(4), 653-670, Available
at: http://www.hydrol-earth-syst-sci.net/5/653/2001/.
Vaze, J., Jordan, P., Beecham, R., Frost, A., Summerell, G. (2012), Guidelines for rainfall-
runoff modelling: Towards best practice model application.
Water and Rivers Commission (2003), SWMOD Version 2 A Rainfall Loss Model for
Calculating Rainfall Excess, User Manual (Version 2.11).
Weinmann, P.E., Rahman, A., Hoang, T.M., Laurenson, E.M. and Nathan, R.J. (2000),
Monte Carlo Simulation of flood frequency curves. Proc., Hydro 2000, 3rd International
Hydrology and Water Resources Symposium, IE Aust, pp: 564-569.
Weinmann, P.E., Rahman, A., Hoang, T.M.T., Laurenson, E.M. and Nathan, R.J. (2002),
Monte Carlo Simulation of Flood Frequency Curves from Rainfall - The Way Ahead. Aust.
Journal of Water Resources, 6(1), 71-79.
Chapter 4. Treatment of Joint
Probability
Rory Nathan, Erwin Weinmann
Chapter Status: Final
Date Last Updated: 14/5/2019
4.1. Introduction
In many applications of flood simulation it is necessary to understand and apply the basic
probability concepts involved when a range of factors combine to produce a flood event or
when different events occur jointly. Such applications range from the stochastic simulation of
design flood events allowing for the joint probabilities of several key flood producing or flood
modifying factors, to typical situations where flood risk results from various combinations of
flood events that have different causes or occur at different locations.
Book 4, Chapter 4, Section 2 introduces basic probability concepts that are applied in flood
simulation methods and in determining flood risks for situations where several factors or
events interact. It then describes typical practical applications where the interaction of
different factors or events need to be considered and points to other sections where
individual applications are treated in more detail. Book 4, Chapter 4, Section 3 is devoted to
introducing Monte Carlo simulation as the most practical and flexible method of deriving
distributions that result from the interaction of several stochastic components. Book 4,
Chapter 4, Section 4 illustrates the application of joint probability concepts to a typical flood
estimation problem.
Aleatory uncertainty represents the natural variability inherent in most hydrologic systems. In
the context of design flood estimation, this usually involves consideration of natural variability
in the characteristics of storm rainfalls (depths, temporal and spatial patterns), antecedent
conditions (as they relate to initial losses, water levels in natural lake systems and snowpack
characteristics), coincident streamflows (or levels) at the confluence of two streams, and the
influence of tide levels on estuarine flood behaviour. Aleatory uncertainty associated with
anthropogenic causes is also commonly a factor that needs to be considered. Perhaps the
most common factor to be considered in design flood estimation is initial reservoir levels in
dams (either singly or in cascade), though this can include consideration of the reliability of
operating equipment (e.g. spillway gates and other forms of outlet works), and debris
blockage of waterway areas provided for spillways, drainage works and bridges. Factors
which vary randomly over time are termed stochastic variables.
Epistemic uncertainty, on the other hand, relates to the uncertainty arising from a lack of
knowledge about hydrologic factors and their governing processes. In the context of design
flood estimation, epistemic uncertainty is commonly associated with errors involved in rating
curves (ie. in the relationship used to estimate streamflows from gauged levels), in the
estimation of catchment rainfalls from point observations, and the uncertainties involved in
estimating model parameters from a limited number of relevant events. An important source
of epistemic uncertainty arises from the need for extrapolation. That is, there may be an
adequate amount of information available at a particular site for estimating the exceedance
probability of frequent floods, but additional uncertainty is introduced when transposing such
information to an ungauged location, or when extrapolating to events much larger than have
occurred in the historic record. As the degree of extrapolation increases, so does the
uncertainty in the appropriateness of the configuration, or indeed of the conceptual structure,
of the model being used. Such uncertainties arise from lack of knowledge, and as such can
be reduced over time with collection of relevant data and increases in our understanding.
This Chapter only considers the influence of aleatory uncertainty on joint probability, and
consideration of epistemic uncertainty is discussed in Book 1, Chapter 2 and Book 7,
Chapter 9. The focus of this chapter is on the use of techniques that minimise the
introduction of bias in the exceedance probability of the final design estimate. Such
estimates will always contain uncertainty due to lack of knowledge, but the methods
presented here are intended to make best use of the information on natural variability that
we do have.
The interaction of these different factors can be described by a joint probability distribution
(Benjamin and Cornell, 1970; Haan, 1974). A bivariate probability distribution describes the
joint probability of two variates x and y, and this case is the simplest to visualise (refer to
Figure 4.4.1). Each of the two variables has a marginal probability distribution, f(x) and f(y), which represents the probability distribution without considering the influence of the other variable. At a particular value of one variable, say at x0, the distribution of the other variable y can be said to be conditioned on x, and this is referred to as the conditional probability distribution of y:
f(y | x = x0)   (4.4.1)
The marginal distributions are illustrated in Figure 4.4.1 for the probability densities of a
bivariate normal distribution in x and y (with means of 70 and 50, respectively), where the
conditional probability distribution is shown for x = 90.
It is clear from the figure that the marginal probability distribution of y can be obtained by
integrating the conditional probability distributions of y for all values of x. For independent
events, the distribution of one variable is not conditioned on the other, and all conditional
distributions are thus identical to the marginal distributions of that variable.
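The relationship between the marginal and conditional distributions can be explored numerically. The short Python sketch below samples from a bivariate Normal distribution with the means quoted above; the standard deviations, correlation and sample size are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Means of x and y as quoted in the text; the standard deviations and the
    # correlation coefficient are assumed purely for illustration.
    mean = [70.0, 50.0]
    sd_x, sd_y, rho = 15.0, 10.0, 0.6
    cov = [[sd_x**2, rho * sd_x * sd_y],
           [rho * sd_x * sd_y, sd_y**2]]

    xy = rng.multivariate_normal(mean, cov, size=200_000)
    x, y = xy[:, 0], xy[:, 1]

    # Marginal distribution of y (the influence of x is ignored altogether)
    print("marginal mean and sd of y:", y.mean(), y.std())

    # Conditional distribution of y given x = 90, approximated by a narrow slice of x
    near_90 = np.abs(x - 90.0) < 1.0
    print("conditional mean and sd of y | x = 90:", y[near_90].mean(), y[near_90].std())

With the assumed positive correlation, the conditional distribution is narrower than the marginal distribution of y and is centred above its marginal mean.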
The joint probability distribution concepts can also be applied to deal with the joint
occurrence of events that are simulated separately. Examples of such applications include
the interactions of riverine (or overland) flooding and sea level anomalies (Book 6, Chapter
5), the joint probability of reservoir inflows and initial storage contents, and the joint
consideration of mainstream and tributary floods.
The general solution approach to joint probability problems and the selection of factors or
events to be included in the joint probability framework are discussed in Book 4, Chapter 4, Section 2.
Analytical approaches are available to deal with relatively simple joint probability
applications. A special case is where component probability distributions can be considered
to be independent of each other. In this case the joint probability can be evaluated simply by
multiplying the component probabilities from the marginal distributions. However, in practice
most joint probability applications are more complex and are most readily addressed by
Monte Carlo simulation. In this approach the joint probability distribution is derived by
randomly sampling from the (marginal) component distributions and simulating the system
response a sufficient number of times to define the output distribution over the range of
interest. The method can readily deal with several component distributions and correlations
between them. This is the practical joint probability approach dealt with separately in Book 4,
Chapter 4, Section 3 Monte Carlo Simulation.
A typical example is to divide the range of rainfall input magnitudes into a number of
intervals and then to calculate the probability of a particular flood outcome conditional on this
range of rainfall inputs. Key variables for other flood estimation applications may also be
partitioned in a similar way.
The marginal exceedance probability of the flood outcome of interest X can then be calculated by the application of the Total Probability Theorem (Haan, 1974):

P(X > x) = Σi P(X > x | Ci) P(Ci)   (4.4.2)

where the term P(X > x | Ci) denotes the conditional probability that the flood outcome X generated from the interval Ci exceeds x, and the term P(Ci) represents the probability that the conditioning variable falls within the interval i. For Equation (4.4.2) to be applicable, the set of conditioning events Ci needs to be mutually exclusive (meaning no overlap) and collectively exhaustive (meaning that the probabilities of the conditioning events have to add up to 1.0).
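As a simple numerical illustration of Equation (4.4.2), the following Python fragment evaluates the marginal exceedance probability for an assumed partition of rainfall magnitudes into four intervals; all of the probability values are invented for the purpose of the example.

    # Assumed probabilities that the conditioning rainfall falls within each interval Ci;
    # the intervals are mutually exclusive and the probabilities sum to 1.0.
    p_interval = [0.50, 0.30, 0.15, 0.05]                  # P(Ci)

    # Assumed conditional probabilities that the flood outcome X exceeds x,
    # given that the rainfall falls within each interval.
    p_exceed_given = [0.001, 0.02, 0.20, 0.70]             # P(X > x | Ci)

    assert abs(sum(p_interval) - 1.0) < 1e-9               # collectively exhaustive

    # Total Probability Theorem: P(X > x) = sum over i of P(X > x | Ci) * P(Ci)
    p_exceed = sum(pc * pe for pc, pe in zip(p_interval, p_exceed_given))
    print(p_exceed)                                        # 0.0715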
Typical applications of conditional probability concepts and the Total Probability Theorem are
further discussed in Book 4, Chapter 4, Section 2.
The combined exceedance probability of this specific flood outcome from either Event A OR
Event B can then be calculated as:
P(A+B) = P(A) + P(B) − P(A)P(B)   (4.4.3)
where P(A)P(B) represents the exceedance probability of Events A and B occurring together.
For events of relatively small AEPs this product is quite small and can generally be
neglected. The combined exceedance probability of several events can be evaluated in an
analogous fashion. An example involving several events is when flood frequency curves for
different seasons are combined to determine the annual frequency curve.
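For example, if independent summer and winter flood frequency curves indicate exceedance probabilities of 0.02 and 0.05 for the same flood magnitude (assumed values only), the combined annual exceedance probability follows directly from Equation (4.4.3), as shown in the short sketch below.

    # Assumed seasonal exceedance probabilities of the same flood magnitude
    p_summer = 0.02    # P(A)
    p_winter = 0.05    # P(B)

    # Equation (4.4.3): probability of the magnitude being exceeded in either season
    p_combined = p_summer + p_winter - p_summer * p_winter
    print(p_combined)  # 0.069; the joint term P(A)P(B) = 0.001 is small but not negligible here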
It is important to note that for Equation (4.4.3) to be applicable, the different events being
considered have to be defined in terms of the same magnitude (not exceedance probability).
In the example situations discussed above the interest is on the combined probability of
occurrence of separate events. When the reliability of a linear structure such as a road or
railway is being considered, the interest is not on the combined probability of exceedance of
a given flood standard at different locations but on the combined non-occurrence probability
or survival probability. Under the assumption of independent occurrences of damaging
events at different locations, the overall reliability of the linear structure can be calculated as
the product of the non-exceedance probabilities of a damaging event at different locations.
The combined risk of failure of the structure can then be determined as the complement of
the overall reliability.
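A hypothetical calculation of this kind is sketched below; the annual exceedance probabilities of a damaging flood at each of four crossings are assumed values, and damaging events at the different crossings are assumed to occur independently.

    # Assumed AEPs of a damaging flood at each crossing along the route
    aep_crossings = [0.02, 0.05, 0.01, 0.10]

    # Overall reliability is the product of the non-exceedance (survival) probabilities
    reliability = 1.0
    for aep in aep_crossings:
        reliability *= (1.0 - aep)

    # Combined risk of closure somewhere along the route is the complement of the reliability
    combined_risk = 1.0 - reliability
    print(round(combined_risk, 3))   # about 0.170, noticeably larger than the worst single AEP of 0.10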
Book 4, Chapter 4, Section 2 provides further discussion of this particular form of probability
calculations.
It is commonly required to estimate flood risk downstream of a storage, where the outflow
peak is dependent on the initial water level. If the variation in initial water level is small, such
as in a retarding basin or small on-line storage, then it may be appropriate to adopt a typical
starting storage from the central range of conditions. However, the relationship between
inflow and outflow can be highly non-linear, thus in general it cannot be expected that
adoption of a mean initial water level will provide an unbiased estimate of outflows. A
maximum water level could be used and justified on the basis that it provides a
conservatively high estimate of flood risk, but introducing conservatism in intermediate steps
of the analysis should generally be avoided as the compounding effects of such assumptions
can undermine the validity of any risk-based decisions. If the initial water level does have an
appreciable impact on the outflow flood, ie. when the available flood storage is large
compared to the flood volume, then it will be necessary to give explicit consideration to the
joint probabilities involved. Detailed guidance on this type of problem is provided in Book 8,
Chapter 7, and worked examples using both analytical and numerical schemes are provided
in Book 8, Chapter 8, Section 4. The general computational elements involved in the Monte
Carlo solution to this type of problem are discussed in Book 4, Chapter 4, Section 3;
particular attention is drawn to the need for conditional sampling (Book 4, Chapter 4, Section
3) as it is possible that the storage level associated with a given exceedance probability tends
toward a maximum value as the event magnitude increases.
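A deliberately simplified sketch of the joint treatment of inflow volumes and initial storage levels is given below; the distributions, the available flood storage and the routing behaviour are all assumptions adopted only to illustrate the structure of such a calculation, and are not a substitute for the detailed guidance referenced above.

    import numpy as np

    rng = np.random.default_rng(seed=2)
    n_sims = 100_000

    # Assumed empirical distribution of initial fill (fraction of the flood storage occupied)
    initial_fill = rng.choice([0.2, 0.5, 0.8, 1.0], size=n_sims, p=[0.1, 0.3, 0.4, 0.2])

    # Assumed distribution of inflow flood volumes (arbitrary units)
    inflow_volume = rng.lognormal(mean=3.0, sigma=0.5, size=n_sims)

    flood_storage = 30.0   # assumed airspace available when the storage is fully drawn down

    # Crude deterministic component: volume not absorbed by the remaining airspace spills
    spill = np.maximum(inflow_volume - (1.0 - initial_fill) * flood_storage, 0.0)

    # Probability that the spill exceeds a downstream threshold of interest
    print(np.mean(spill > 10.0))

In practice the deterministic component would be an actual storage routing model, and the initial level may need to be sampled conditionally on event severity, as noted above.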
Flood levels in estuarine regions may be dependent on the combined influence of storm
surge and tide levels. The degree of influence depends on a number of factors, but the lower
limits of such flood estimates are determined by assuming that fluvial flood levels are wholly
independent of the ocean level; conversely, the upper limits of such flood estimates are
derived using the assumption of complete dependence, that is, that fluvial floods will always
coincide with ocean levels of the same exceedance probability. Book 6, Chapter 5 provides a
practical approach to the solution of this class of problem. This guidance assists the
practitioner to determine whether consideration needs to be given to the dependence of flood
levels on ocean conditions, and if so, then site-specific estimates for any location on the
Australian coastline can be determined using a software tool (http://p18.arr-software.org/)
based on the bivariate extreme value distribution. A Monte Carlo solution could be
developed by generating correlated variates in combination with a stratified sampling
scheme using the procedures described in Book 4, Chapter 4, Section 3 and the
dependence parameters described in Book 6, Chapter 5. In concept, the spreadsheet based
worked example presented in Book 4, Chapter 4, Section 4 is directly applicable to this type
of problem, the only difference being that the correlation term relating to tributary flows
replaces the dependence term governing coincident ocean levels. Regardless of the
approach used, any solution of this type of problem will require the undertaking of
deterministic modelling to obtain flood levels for different combinations of riverine flood and
storm tides.
Another common problem arises when considering the influence of tributary flows at a
confluence relevant to the region of interest. There are a number of solutions to this class of
problem, and the degree of complexity required will depend greatly on the sensitivity of
the outcome to selected simplifying assumptions. If the focus is on mainstream flows, then it
may be sufficient to estimate the tributary contribution by estimating the average flood inflow
coincident with mainstream flow conditions; Book 8, Chapter 8, Section 5 presents a simple
worked example for this based on the use of a bivariate log-Normal distribution. Conversely,
if the focus is on tributary flows, then the assumption that there is an average flood in the
mainstream that is coincident with local flooding is likely to yield a biased outcome. This is
because any variation in mainstream floods may have a large influence on local flood levels,
at least for the region susceptible to backwater influences. The worked example presented in
Book 4, Chapter 4, Section 4 is directly applicable to this type of problem, the only difference
in application is that levels computed using hydraulic modelling (final column of Table 4.4.2)
relate to upstream levels in the tributary, rather than downstream of the confluence. It should
be noted that the inputs to this worked example may be derived by either Flood Frequency
Analysis or rainfall-based modelling. It would be expected that the deterministic relationship
between mainstream flows and flood level is most easily obtained from some form of
hydraulic modelling, but if gauged information is available for a range of historic events, then
a suitable deterministic function may be obtained directly through analysis of the data, thus
obviating the need for hydraulic modelling. An example of such an analysis is provided by
Laurenson (1974).
The general form of solutions to the above problems all conform to the conceptual
framework described in Book 4, Chapter 4, Section 3. The sub-sections following this
framework provide for parametric and non-parametric approaches to characterising the input
distributions, and allow for the additional consideration (if required) of conditional dependencies. The generic procedures covered here are intended to cover
situations not specifically catered for in the methods presented elsewhere in ARR, as
discussed above.
complete range of conditions that might apply. It is common in hydrology to consider both
conditional and unconditional probabilities, and care is required when interpreting and
reporting such analyses to avoid confusion.
For example, conditional probability estimates are often required for the estimation of flood
risk for construction activities. Flood risk varies seasonally throughout the year, and
construction works may be scheduled to occur in a season of low flood risk. In this case it is
appropriate to estimate conditional flood probabilities relevant to the particular season of
interest; such analyses might involve undertaking Flood Frequency Analysis using flood
maxima that have occurred over the months scheduled for construction, or else a rainfall-
based approach might be used in which seasonal design rainfalls are used in combination
with season-specific losses. The flood risk estimates derived from such analyses are
conditional upon the season considered, and without additional analyses it is not possible to
convert these estimates to annual risks.
The nature of the additional analyses required to derive unconditional estimates of annual
risk depends on whether the conditioning events are mutually exclusive or not. Estimating
annual flood risks based on seasonal analyses represents a mutually exclusive set of
estimates, as clearly the annual maximum event cannot occur in two different seasons in the
one year. Being mutually exclusive, the annual risk that a flood exceeds a given value is
obtained by the simple addition of the individual seasonal exceedance probabilities.
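For example, if seasonal analyses indicate (using assumed values) that the probability of the annual maximum flood occurring in summer and exceeding a given level is 0.010, and the corresponding probability for winter is 0.015, then the annual exceedance probability of that level is simply 0.010 + 0.015 = 0.025.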
It is often the case, however, that the conditioning events are not mutually exclusive. A
common example of this is the estimation of flood immunity along a length of linear
infrastructure, such as a major road or railway line. Here, the annual maximum event may
well occur at multiple locations along its length, and thus the annual risk that access
between two locations might be disrupted cannot be obtained by simply summing the
estimates made at each individual crossing. Instead, some account must be given to the
dependence of the factors that give rise to the individual floods. The probability of closure for
an existing length of infrastructure is not simply equal to the exceedance probability of the
most vulnerable crossing as this ignores the contribution of flood exceedance probabilities
from rainfalls that may occur from other independent weather systems. Whether or not the
degree of dependence needs to be considered depends on the significance of the outcome
when the initiating events are considered to be wholly dependent or independent. The
greater the difference between these two extremes, the greater is the need to complicate the
solution by the explicit consideration of the dependencies involved.
Practitioners need to decide the appropriate level of complexity required to come up with a
practical solution in a manner that is proportionate to the nature of the problem and the
available resources. The simplest approach is to assume that the factors of most importance
are highly correlated and that alternative combinations of conditions contribute little to the
overall flood risk. With reference to Figure 4.4.2, it is seen that in temperate climates it might
be expected that large long duration rainfall events occur at times when soil moisture is high
and consequently catchment losses are low; conversely short duration (thunderstorm)
events might occur when losses are high. As long as due care is given to matching the design inputs to the dominant mechanism of interest, it may be appropriate to derive rainfall-based flood estimates on an annual basis. Conversely, if the
design loading of interest is sensitive to a mix of storm durations and catchment conditions,
then it may be warranted to derive rainfall-based estimates on a seasonal basis and
compute annual risks by summation of the seasonal exceedance probabilities.
Figure 4.4.2. Difference in the Seasonal Likelihood of Large Long Duration Rainfall Events
and Large Short Duration Rainfall Events and their Concurrence with Catchment Losses
The analytical approach required to accommodate conditional probabilities when the events
are not mutually exclusive is more complex. There are a number of different approaches that
can be used, and in any given design situation the best approach to adopt depends on the
nature and importance of the problem. Monte Carlo simulation in combination with
evaluation of the Total Probability Theorem provides a general solution to problems involving
conditional probabilities, and details on how to undertake such an approach are provided in
Book 7, Chapter 9. However, different approaches are often available and the choice of
solution does somewhat depend on the skills and experience of the practitioner. For
example, while the assessment of flood immunity along a length of linear infrastructure could
be solved by generating correlated rainfall inputs for use with event-based models, the use
of gridded rainfall fields in combination with continuous simulation obviates the need to
explicitly consider the joint probabilities involved (Jordan et al., 2015). Other approximate
approaches that explicitly consider correlation in rainfall events have also been applied
(Fricke et al., 1983), and a simple analytical example demonstrating a similar approach is
provided in Book 8, Chapter 7, Section 3 and Book 8, Chapter 8, Section 5.
The techniques presented in this Book can also be applied to events which are mutually
exclusive; however, again it may be appropriate to adopt simpler approaches. For
example, a discussion of the specific issues involved in computing annual risks from
analyses undertaken on a seasonal basis is provided in Book 8, Chapter 7, Section 4; this
approach is applicable to any design in which the conditional contributions are mutually
exclusive, where the relative importance of the different factors may vary with event severity.
transformation is deterministic in the sense that the model will always yield the same
outcome for a given set of inputs, antecedent conditions, and parameter values.
The general form of this concept is shown in Figure 4.4.3 for three different examples. In one
example, the stochastic component represents the flood frequency distributions of two
tributaries, where the deterministic component represents the manner in which the flows
combine at their confluence. For a reservoir, the stochastic inputs might represent the
frequency distribution of inflows and initial storage levels, where the deterministic component
represents the relationship between inflows, storage and outflows. In hydraulic modelling,
stochastic inputs may be used to represent inflows to a stream reach as well as the tide
levels for a downstream boundary condition, where the deterministic component is governed
by the hydraulic equations that predict flood level as a function of streamflow, reach
characteristics and boundary conditions.
A variety of approaches are available for solving this general type of problem. Laurenson
(1974) provides a general solution based on the matrix multiplication of a probability
distribution of a stochastic input with a transition matrix derived from the deterministic
operation of the system. The method is very general and suited to numerical solution.
Careful effort is required to develop the elements of the transition matrix, and additional
conditional probability terms need to be evaluated to allow for correlations in the inputs.
The joint occurrence of correlated stochastic factors can be evaluated using bivariate
distributions, and there are numerous applications in the water resources literature where
these have been used. The methodology used to assess the coincidence of catchment
flooding and extreme storm surge for the coastline of Australia was developed using such an
approach (Zheng et al., 2014), and is covered in detail in Book 6, Chapter 5. There are fewer
examples where multivariate extreme distributions are used, and possibly the use of copula functions in combination with univariate distributions affords a more practical approach (Favre et al., 2004; Genest and Favre, 2007; Chen et al., 2012). Kilgore et al. (2010) review a range of methods and develop a general methodology, intended for use by practitioners, for estimating joint probabilities of coincident flows at stream confluences based on the use of copulas.
However, the development and application of such approaches do require considerable statistical skill, and they are not well suited for application by the majority of practitioners.
Also, regardless of the methods used to characterise the extreme (possibly correlated)
behaviour of the inputs, it is still necessary to model the deterministic component to
determine how the various inputs combine to yield outputs of different magnitudes.
Developing such response functions over the range of inputs required is itself a demanding
task, and there is advantage if this can be done in such a way that leads directly to the
exceedance probabilities of interest.
Monte Carlo techniques provide a structured means of generating outputs for a wide range
of inputs, and if formulated correctly they represent a generic solution to the problem
illustrated in Figure 4.4.3. With this approach, inputs are randomly sampled many hundreds,
or thousands of times, and used in conjunction with a model of the deterministic component
to obtain a distribution of the required outputs. Statistical analysis is then used to estimate
the exceedance probability of the output variable of interest.
One of the main attractions of Monte Carlo methods is that the modelling tools and
hydrologic concepts involved are essentially identical to those used in traditional
approaches. Differences only arise in the manner in which the inputs are handled and the
results analysed. Once the necessary framework has been developed, the factors of most
importance can be modelled as stochastic inputs, and those of lesser importance can be set
at fixed values. Many practitioners are used to developing automated means for running
simulation models; such approaches can be adapted to Monte Carlo simulation by using
simple probability models to generate the inputs, and straightforward statistics to analyse the
outputs. The approach thus represents a powerful means of capturing the influence of
variability on hydrologic systems in a manner that requires only a modest increase in the
level of modelling sophistication.
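The structure of such a scheme can be illustrated with a deliberately simple Python sketch for the confluence example of Figure 4.4.3; the log-Normal parameters, the correlation between the two streams and the use of simple flow addition as the deterministic component are all assumptions adopted only to show the sequence of sampling, simulation and statistical analysis.

    import numpy as np

    rng = np.random.default_rng(seed=3)
    n_years = 100_000          # each simulation is treated here as one year's flood (an assumption)

    # Stochastic inputs: correlated log-Normal flood peaks for the mainstream and tributary
    mu = [5.0, 3.5]            # assumed means of ln(peak flow)
    sd = [0.6, 0.8]            # assumed standard deviations of ln(peak flow)
    rho = 0.5                  # assumed correlation between the log flows
    cov = [[sd[0]**2, rho * sd[0] * sd[1]],
           [rho * sd[0] * sd[1], sd[1]**2]]
    log_q = rng.multivariate_normal(mu, cov, size=n_years)
    q_main, q_trib = np.exp(log_q[:, 0]), np.exp(log_q[:, 1])

    # Deterministic component: here simply addition of the two flows; in practice this
    # would be a hydrologic or hydraulic model run for each sampled combination of inputs
    q_downstream = q_main + q_trib

    # Statistical analysis of the outputs: downstream flows for selected exceedance probabilities
    for aep in (0.5, 0.1, 0.01):
        print(aep, np.quantile(q_downstream, 1.0 - aep))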
The common attribute of stochastic factors that influence flood response is that at any given
point in time their state is uncertain. With sufficient data it is possible to estimate their
average state and other characteristics related to their range and variability, and possibly the
nature of their dependence on the magnitude of other factors. Often natural factors vary in a
systematic fashion with the time of day or season, and they may be correlated. For example,
initial loss might range between 0.1 and five times its median value but 70% of the time it
might range between 0.5 and 1.5 times the median; average summer losses might also be
expected to be twice the magnitude of winter losses, and because of the likelihood of rainfall
occurring before intense rainfall bursts, it might be that initial loss values vary inversely (ie.
are negatively correlated) with rainfall depth.
When considering the use of regional information it is often useful to standardise the data in
some form to allow transposition from one site to another. An example of this relevant to
flood estimation is the distribution of losses, as illustrated in Figure 4.4.4. While the typical
magnitude of losses varies from one catchment to another, standardising these values (by
simply dividing by the median value for the catchment) reveals that the likelihood that the
catchment is wetter or drier relative to typical conditions is similar for a wide variety of
catchment types (Hill et al., 2015). The representation of temporal pattern increments as a
proportion of total burst depth rather than, say, as an absolute depth in mm, is another
example of how regional information can be pooled to represent variability.
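The standardisation step itself is straightforward, as the short sketch below shows; the loss samples from the two gauged catchments and the at-site median are assumed values.

    import numpy as np

    # Assumed initial loss samples (mm) from two gauged catchments
    losses_a = np.array([5.0, 12.0, 18.0, 25.0, 40.0, 60.0])
    losses_b = np.array([2.0, 6.0, 9.0, 13.0, 22.0, 35.0])

    # Standardise by each catchment's median so that the samples can be pooled regionally
    pooled = np.concatenate([losses_a / np.median(losses_a),
                             losses_b / np.median(losses_b)])

    # Rescale the pooled, dimensionless distribution by the median loss estimated
    # for the catchment of interest (assumed here to be 20 mm)
    print(np.sort(pooled) * 20.0)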
Investigation into how flood producing factors vary with flood severity can be of particular importance, as the information available at the location of interest is often limited. For
example, it may be suspected that reservoir levels will be higher at the start of extreme
rainfall events as these may be more likely to occur during wetter (La Niña) periods.
Evidence for this might be obtained by examining historical correlations between initial reservoir levels and the magnitude of subsequent large rainfalls, but if such information is limited then it may be more
appropriate to “trade space for time” by examining correlations between seasonal rainfalls
and extreme storms over a wide region (once the data has been standardised to allow for
systematic variation in rainfall depths). An illustration of this by Scorah et al. (2015) for
south-eastern Australia is shown in Figure 4.4.5(a).
Two other examples of similar investigations are provided in Figure 4.4.5. The middle panel
of Figure 4.4.5 shows the dependence of storm surge on rainfall maxima for an investigation
into the interaction between coastal processes and severe weather events (Westra, 2012),
and the right-hand panel illustrates the variation in temperature coincident with rainfall
maxima for the consideration of the joint probabilities involved in rainfall-on-snow events
(Nathan and Bowles, 1997).
Figure 4.4.5. Examples of Investigations into Dependence between Flood Producing Factors
based on (a) Antecedent Seasonal Rainfall Data for Catchments over 1000 km2 (Scorah et
al., 2015), (b) Rainfall and Storm Surge Data (Westra, 2012), and (c) Temperature Coincident with Rainfall Maxima (Nathan and Bowles, 1997)
Figure 4.4.6. Examples of Difference in Correlation between Flow Maxima in the Namoi and
Peel Rivers, Based on (a) Annual Maxima at Both Sites, and (b) Peel River Flows that are
Coincident with Namoi River Maxima
A general and very accessible introduction to Monte Carlo methods can be found in
Burgman (2005), and more comprehensive and practical guidance is provided in Vose
(2000) and Saucier (2000); the latter reference includes C++ source code for a collection of
various distributions of random numbers suitable for performing Monte Carlo simulations.
Hammersley and Handscomb (1964) provide a more advanced theoretical treatment of the
subject, and useful discussion on the advantages of using Monte Carlo methods to estimate
design floods can be found in Weinmann et al. (2002), Kuczera et al. (2003), and Weinmann
and Nathan (2004).
It should be noted that while there are advantages to developing a simulation framework
using high level computing languages such as Python, C++ and Fortran, it is quite feasible to
initiate the required design runs and undertake the required statistical analyses using
standard spreadsheet software. Robinson et al. (2012) applied such a framework to the
solution of the joint probabilities involved in the simulation of extreme floods and reservoir
drawdown. At its simplest, any practitioner familiar with the techniques required to prepare
batched command scripts and use spreadsheet formulae will be able to implement the
procedures described herein.
The following sections outline the main steps involved in developing a Monte Carlo solution
of joint probability problems. The sections follow the sequence of steps shown in
Figure 4.4.7, which refers to the stochastic deterministic components of the general
catchment modelling system as illustrated in Figure 4.4.3. It should also be noted that this
scheme is a generalisation of the Monte Carlo framework depicted in Figure 4.3.2 of Book 4,
Chapter 3; specifically, the scheme shown in Figure 4.4.7 represents the treatment of natural
variability in rainfall-based flood estimation, where no account is given to epistemic
uncertainty in the data, parameters, or modelling components.
Figure 4.4.7. General Framework for the Analysis of Stochastic Deterministic (Joint
Probability) Problems using Monte Carlo Simulation
The generation scheme makes use of the inverse transformation approach. This can be
applied to either formally defined probability models, or else to empirical “data-driven”
distributions. The basis of the inverse transformation approach is to generate the required
probability density function f(x) through uniform sampling of the inverse of the cumulative
distribution function F(x) (ie. the function which gives the probability P of x being less than a
specified value).
The two-step process for doing this is illustrated in Figure 4.4.8, and the algorithm can be summarised as follows:
1. Generate a uniformly distributed random number U between 0 and 1;
2. Calculate the value (x) of the inverse of the cumulative distribution function, F-1(U).
This process is illustrated in Figure 4.4.8 for three random numbers. The first random
number generates a value near the tail of the distribution, and the next two yield values that
are more centrally tended. For illustration purposes the input random numbers (U) in
Figure 4.4.8 are shown as being equally spaced, but on exit the transformed numbers are
unequally spaced, in conformance with the adopted distribution. Inverse functions of a
number of useful distributions (Normal, log-Normal, Beta, Gamma) are provided in standard
spreadsheet software (see example in Book 4, Chapter 4, Section 4). If an empirical
distribution is used then values can be simply interpolated from a look-up table comprised of
values of the cumulative density function (also see example in Book 4, Chapter 4, Section
4).
y = (x^λ − 1) / λ, when λ ≠ 0;   y = ln(x), when λ = 0   (4.4.4)

where λ is a parameter determined by trial and error to ensure that the skewness of the transformed distribution is zero. A noteworthy special case of this transformation arises when λ is set to zero, as the transformation is then equivalent to taking logarithms of the data. Fitting the parameter λ is most easily achieved by optimisation or the use of "solver" routines that are commonly available in spreadsheet programs. To illustrate the use of the inverse transformation method, consider a variable that has been transformed using a Box-Cox lambda of 1.2, where the resulting normally-distributed variates have a mean of 50 and a standard deviation of 25.
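A minimal Python sketch of this sampling sequence is given below; the NumPy and SciPy routines used, and the clipping of the rare cases where the back-transformed argument would be negative, are assumptions of this illustration rather than prescribed practice.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(seed=4)
    LAMBDA, MEAN, SD = 1.2, 50.0, 25.0   # values quoted in the text
    n = 1000

    # Step 1: generate uniformly distributed random numbers U between 0 and 1
    u = rng.uniform(0.0, 1.0, size=n)

    # Step 2: invert the Normal cumulative distribution function to obtain the
    #         transformed (Box-Cox domain) variates, y = F-1(U)
    y = norm.ppf(u, loc=MEAN, scale=SD)

    # Step 3: back-transform to the original domain by inverting Equation (4.4.4):
    #         x = (lambda*y + 1)**(1/lambda); negative arguments are clipped to zero
    x = np.power(np.clip(LAMBDA * y + 1.0, 0.0, None), 1.0 / LAMBDA)

    # Step 4: retain the sampled values for use as stochastic inputs to the model
    #         (e.g. to plot a histogram analogous to Figure 4.4.9)
    print(x.mean(), np.percentile(x, [5, 50, 95]))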
The above four steps can be repeated many hundreds (or thousands) of times as required for input to a model. The outcome of 1000 such repetitions is provided as a histogram in Figure 4.4.9.
Details of the Normal distribution are provided in all statistics textbooks and thus further
information will not be presented here. Source code for estimation of the cumulative Normal
distribution is freely available (Press et al., 1993) and the function is available in spreadsheet
software.
Lastly, it is worth noting that the uniform distribution is also of practical use in flood
hydrology. A simple random number generator that varies uniformly between 0 and 1 can be
directly applied to the sampling of temporal, or space-time, patterns of rainfall that are
considered equally likely to occur.
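As a simple illustration, a uniform random number can be mapped directly to the index of one of a set of equally likely temporal patterns; the number of patterns assumed below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n_patterns = 10                                   # e.g. ten equally likely temporal patterns (assumed)
u = rng.uniform(size=5)                           # uniform random numbers between 0 and 1
pattern_index = (u * n_patterns).astype(int)      # each pattern is selected with probability 1/n_patterns
```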
Stochastic sampling from empirical data sets (such as losses and reservoir drawdown) allows
this information to be treated in a more defensible manner than simple adoption of a single
best estimate or representative value. It is also useful for sampling from “pragmatic”
distributions, such as rainfall frequency curves that extend beyond 1 in 2000 AEP and which
are not based on a theoretical distribution function (Book 2, Chapter 2). The steps involved
can be summarised as follows:
1. Sort empirical data into either ascending or descending order as appropriate, and assign
a cumulative probability value to each. If there are n data values, then the largest data
value (x1) is assigned an exceedance probability F(x1), the second largest (x2) is assigned
an exceedance probability F(x2), and so on till the last value, represented by xn and F(xn);
2. Generate a uniform random number (U) between 0 and 1;
3. Identify the pair of consecutive ranked values (xi and xi+1) whose exceedance probabilities F(xi) and F(xi+1) bracket U;
4. Return the value obtained by linear interpolation between these two ranked values:

$$x = x_i + \left(x_{i+1} - x_i\right)\frac{U - F(x_i)}{F(x_{i+1}) - F(x_i)}$$
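The following sketch illustrates this empirical sampling algorithm; the data values are hypothetical, and Weibull-type plotting positions are assumed in Step 1.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([5., 12., 18., 22., 25., 31., 40., 55., 63., 80.])   # hypothetical losses (mm)

# Step 1: rank the data (descending) and assign exceedance probabilities
x_ranked = np.sort(data)[::-1]                    # x1 is the largest value
n = len(x_ranked)
F = np.arange(1, n + 1) / (n + 1)                 # exceedance probability of each ranked value

# Steps 2-4: generate uniform random numbers and interpolate linearly within the table
u = rng.uniform(size=1000)
u = np.clip(u, F[0], F[-1])                       # values outside the tabulated range would otherwise need extrapolation
x_gen = np.interp(u, F, x_ranked)                 # F increases while the ranked values decrease
```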
While simple to implement, the use of empirical distributions in Monte Carlo simulation does
require care. Most importantly, it is necessary to ensure that the data sample being used is
relevant to the whole range of conditions being simulated. For example, if the data set is
comprised of initial reservoir levels recorded over a short historic period, then these may not
be relevant to the assessment of extreme flood risks under a different set of operating rules.
It is seen in Step 4 of the above algorithm that values within each interval are obtained by
linear interpolation. This is normally quite acceptable, though obviously the less linear the
relationship between the data values and their corresponding exceedance probabilities the
less defensible is such an approach. Accordingly, in some cases it is best to first transform
the data and/or the exceedance probabilities assembled in Step 1 of the algorithm. Many
hydrological variables are approximately log-Normally distributed, and thus it is often
desirable to undertake the interpolation in the log-Normal domain. To this end, the ranked
data values are transformed into logarithms (it does not matter what base is used) and the
exceedance probabilities are converted to standard normal variates (that is, using the inverse of
the standard normal cumulative distribution). Step 2 of the above algorithm would thus need
to be replaced with U = U(zmin, zmax), where zmin and zmax represent the standard normal
deviates corresponding to F(x1) and F(xn), ie. the adopted limits of the exceedance probability
range.
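A minimal sketch of this variant is given below; the data values are again hypothetical and Weibull-type exceedance probabilities are assumed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = np.array([5., 12., 18., 22., 25., 31., 40., 55., 63., 80.])   # hypothetical data

x_ranked = np.sort(data)[::-1]
F = np.arange(1, len(data) + 1) / (len(data) + 1)      # exceedance probabilities

log_x = np.log(x_ranked)                               # interpolate on the logarithms of the data
z = norm.ppf(1.0 - F)                                  # standard normal variates for each exceedance probability
order = np.argsort(z)

u = rng.uniform(z.min(), z.max(), size=1000)           # modified Step 2: U = U(zmin, zmax)
x_gen = np.exp(np.interp(u, z[order], log_x[order]))   # interpolate in the log-Normal domain and back-transform
```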
Care is also required when sampling from the tails of the distribution. Empirical data sets are
of finite size and, if the generated data are to fall between the upper and lower limits of the
observed data, the cumulative exceedance probability of the first ranked value F(x1) should
be zero, and that of the last ranked value F(xn) should be 1.0. Thus use of empirical data
sets is appropriate for those inputs whose extremes of behaviour are not of great relevance
to the output. Losses, for example, are zero bounded, and thus the difference in flood peak
between a loss exceeded 95% of the time and that exceeded 99.999% of the time may well
be of no practical significance. However, if an empirical approach is being used for the
generation of rainfalls that are defined for between 1 in 2 and 1 in 100 AEP, then it is
inevitable that more than half the random numbers generated in Step 2 of the above
algorithm can be expected to lie outside the specified range of rainfalls. As long as the
probability range of interest lies well within the limits specified, then rainfall values can be
obtained by some form of appropriate extrapolation; however, if this approach is used then
checks should be undertaken to ensure that the extrapolated values do not influence the
results of interest.
3. Return:

$$x = \frac{x_{\min} + x_{\max}}{2} + X\,\frac{x_{\max} - x_{\min}}{2}, \qquad y = \frac{y_{\min} + y_{\max}}{2} + Y\,\frac{y_{\max} - y_{\min}}{2}$$

where X and Y are the correlated variates generated in the preceding steps, xmin and xmax
are the lower and upper bounds of the first variate and ymin and ymax are the corresponding
bounds of the other.
Application of the above algorithm is illustrated in Figure 4.4.10(a). The bounds along the x-
axis are 5 and 130, and those along the y-axis (for the mid-point of the x distribution) are 30
and 75. Figure 4.4.10 illustrates the results for the generation of 2000 correlated variates
where the correlation coefficient (ρ) adopted is −0.7.
Figure 4.4.10. Generation of Variables with a Correlation of -0.7 based on (a) Uniform and
(b) Normal Distributions
The above algorithm can easily be adapted to the generation of correlated variates that
conform to some specified distribution. For the Normal distribution, the required algorithm is:
1. Independently generate two normal random variates with a mean of zero and a standard
deviation of 1: X= N(0,1) and Z= N(0,1);
2. Combine the two variates to obtain a standard normal variate (Y) that has the required
correlation (ρ) with X, ie. Y = ρX + √(1 − ρ²) Z;
3. Return:

$$x = \mu_x + \sigma_x X, \qquad y = \mu_y + \sigma_y Y$$

where μx and μy are the means of the two distributions and σx and σy are the required
standard deviations.
Application of the above algorithm is illustrated in Figure 4.4.10(b). The input parameters to
this example are ρ = −0.7, μx = 70, σx = 10, μy = 50 and σy = 10, and as before a
total of 2000 correlated variates are generated. Any distribution could be used in lieu of the
Normal distribution, or else the variates of interest could be transformed into the normal
domain.
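The following sketch uses the parameter values quoted above for Figure 4.4.10(b); the construction of the correlated standard normal variate in Step 2 (Y = ρX + √(1 − ρ²) Z) is the standard bivariate Normal approach assumed here.

```python
import numpy as np

rng = np.random.default_rng(7)
rho = -0.7
mu_x, sd_x = 70.0, 10.0
mu_y, sd_y = 50.0, 10.0
n = 2000

X = rng.standard_normal(n)                   # Step 1: two independent N(0,1) variates
Z = rng.standard_normal(n)
Y = rho * X + np.sqrt(1.0 - rho**2) * Z      # Step 2: correlated standard normal variate
x = mu_x + sd_x * X                          # Step 3: scale by the required means and standard deviations
y = mu_y + sd_y * Y

print(np.corrcoef(x, y)[0, 1])               # should be close to -0.7
```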
The approach that can be followed to stochastically sample from such a data set can be
described as follows:
1. Identify the “primary” variable that is most important to the problem of interest, and
prepare a scatter plot of the two variables with the primary variable plotted on the x-axis
(as shown in Figure 4.4.11);
2. Divide the primary variable into a number of ranges such that variation of the dependent
variable (plotted on the y-axis) within each range is reasonably similar; in the example
shown in Figure 4.4.11 a total of seven intervals has been adopted as being adequate.
This provides samples of the secondary variable that are conditional on the value of the
primary variable;
3. Stochastically generate data for the primary variable using the empirical approach as
described in Book 4, Chapter 4, Section 3;
4. Derive an empirical distribution of the dependent data for each of the conditional samples
identified in Step 2 above (that is, undertake Step 1 of the empirical approach as
described in Book 4, Chapter 4, Section 3 for each of the intervals); thus, for the example
shown in Figure 4.4.11 a total of seven separate empirical distributions of upstream
storage levels are prepared;
5. For each generated value of the primary variable, stochastically sample from the
conditional distribution corresponding to the interval that it falls within; for example, if a
downstream storage volume of 1500 ML was generated in Step 3 above, then the empirical
distribution of upstream storage levels prepared for the interval containing 1500 ML would
be sampled to obtain the corresponding value of the secondary variable.
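A minimal sketch of this conditional sampling procedure is given below. The paired observations are synthetic, seven equal-count intervals are assumed, and simple resampling with replacement is used in place of the empirical inverse-transform approach of Book 4, Chapter 4, Section 3.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical paired observations of a primary variable (e.g. inflow volume, ML)
# and a dependent secondary variable (e.g. storage level)
primary = rng.gamma(2.0, 800.0, size=500)
secondary = 0.02 * primary + rng.normal(0.0, 10.0, size=500)

# Step 2: divide the primary variable into intervals (seven equal-count bins assumed)
edges = np.quantile(primary, np.linspace(0.0, 1.0, 8))
obs_bin = np.clip(np.digitize(primary, edges[1:-1]), 0, 6)

# Step 3: stochastically generate the primary variable (simple resampling used here)
gen_primary = rng.choice(primary, size=2000)
gen_bin = np.clip(np.digitize(gen_primary, edges[1:-1]), 0, 6)

# Steps 4 and 5: sample the secondary variable from the conditional (within-bin) sample
gen_secondary = np.empty_like(gen_primary)
for i in range(7):
    pool = secondary[obs_bin == i]
    gen_secondary[gen_bin == i] = rng.choice(pool, size=int((gen_bin == i).sum()))
```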
The results from application of the above procedure are illustrated in Figure 4.4.11 for 2000
stochastic samples (shown by the blue “+” symbols). The 2000 correlated values are
stochastically generated based on information contained in 500 observations. It is seen that
the correlation structure in the observed data set is preserved reasonably well by this
procedure.
The first approach, based on direct sampling, is the most straightforward to implement. It is
well suited to the analysis of problems that can be computed quickly, or else to more
complex problems in which the probability range of interest is limited to reasonably frequent
events. As a rule of thumb, the number of simulations required is around 10 to 100 times the
largest average recurrence interval of interest. That is, if the rarest event of interest has an
annual exceedance probability of 0.001, then it will be necessary to generate between
10 000 and 100 000 stochastic samples in order to derive a stable result.
The second approach, based on stratified sampling, does require more effort to implement. It
can still be formulated using a “batch” file approach, though additional care needs to be
taken with how the inputs are formulated and the results analysed. The benefit of this effort
is that the number of runs required to estimate the exceedance probability of rare events is
considerably fewer; indeed the algorithm can be designed so that a similar number of runs is
required regardless of the range of probabilities of interest.
Further information on these two approaches is provided in the next two sections. It is worth
noting that other approaches could be used; for example Diermanse et al. (2014) derive
estimates using importance sampling, which is similarly efficient to the stratified sampling
discussed below.
2. Assign a rank (i) to each peak value; 1 to the highest value, 2 to the next highest, and so
on, down to rank N;
3. Calculate the plotting position (p) of each ranked value using either the Weibull (Equation
(4.4.5)) or the Cunnane (Equation (4.4.6)) formulae:
$$p = \frac{i}{N + 1} \tag{4.4.5}$$

$$p = \frac{i - 0.4}{N + 0.2} \tag{4.4.6}$$
If the design focus is on estimating the probability of a given flood magnitude then the
Weibull formula (Equation (4.4.5)) should be used as this provides an unbiased estimate
of the exceedance probability of any distribution. Alternatively, if the focus is on the
magnitude associated with a given exceedance probability then the Cunnane formula
(Equation (4.4.6)) is preferred as this provides approximately unbiased quantiles for a
range of distributions.
4. Construct a probability plot of the ranked peaks against their corresponding plotting
positions. The plot scales should be chosen so that the frequency curve defined by the
plotted values is as linear as possible. In many hydrological applications the ranked
values may be plotted on arithmetic or log scales and the estimated exceedance
probabilities (the plotting positions) are plotted on a suitable probability scale. Most
popular spreadsheet programs do not include probability scales and thus, for probability
plots conforming approximately to the Normal or log-Normal distribution, it is necessary to
convert the plotting positions to standard normal variates (using the inverse Normal function
described earlier) and to plot these against the ranked values on a linear scale.
5. The magnitude associated with a given exceedance probability (if the Cunnane plotting
position is used) or else the exceedance probability associated with a given magnitude (if
the Weibull plotting position is used) can be interpolated directly from the probability plot.
For convenience, a suitable smoothing function (ie. polynomial equation) can be fitted to
the plotted values in the region of interest to simplify the estimation of design values. The
function is used merely to interpolate within the body of the plotted points and thus, as
long as there is no bias in the fit, it matters little what function is used (polynomial
functions are quite suitable).
If desired, the maxima can be fitted using a traditional probability model (Book 3, Chapter 2),
but given that sufficient simulations need to be undertaken to yield a stable estimate, there is
little point in doing so.
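The ranking, plotting position and interpolation steps described above can be sketched as follows; the peak values are hypothetical and the sample is kept small for brevity.

```python
import numpy as np

peaks = np.array([410., 655., 320., 780., 1230., 560., 905., 470., 610., 840.])   # hypothetical simulated maxima

ranked = np.sort(peaks)[::-1]                  # Step 2: rank 1 assigned to the highest value
i = np.arange(1, len(ranked) + 1)
N = len(ranked)

p_weibull = i / (N + 1)                        # Equation (4.4.5): unbiased exceedance probabilities
p_cunnane = (i - 0.4) / (N + 0.2)              # Equation (4.4.6): approximately unbiased quantiles

# Step 5 (quantile form): interpolate the magnitude for a given exceedance probability
q_10pct = np.interp(0.10, p_cunnane, ranked)   # plotting positions increase as magnitude decreases
```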
Adoption of a stratified sampling approach ensures that the computational effort is always
focused on the region of interest and, if the simulation scheme is configured carefully, then it
will usually be possible to apply Monte Carlo simulation to most practical problems.
The approach follows the same logic as represented in the flow chart of Figure 4.4.7, the
only difference is that samples of the stochastic variable that is of most importance to the
output are generated over specific probability ranges. It matters little how the ranges are
defined, and they can be varied to suit the probabilities of interest. It is simplest to
divide the domain into M intervals uniformly spaced over the standardised normal probability
domain (Detail A in Figure 4.4.12). It should be noted that adopting this approach does not
make any distributional assumption about the variable; it simply provides the means to
distribute the simulations evenly across the probability domain. Typically 50 intervals should
suffice, though care is required to ensure that there is adequate sampling over the region of
most interest.
In the example illustrated in Figure 4.4.12, rainfall is used as the primary stochastic variable.
Within each interval N rainfall depths are stochastically sampled and for each rainfall depth a
model simulation is undertaken using an appropriate set of stochastic inputs (Detail B in
Figure 4.4.12). The number of simulations specified in each interval (N) is dependent on the
number of inputs being stochastically generated and their degree of variability, but in general
it would be expected that between 50 and 200 simulations should be sufficient to adequately
sample from the range of associated inputs.
The model results are recorded for all simulations taken in each interval (Detail C in
Figure 4.4.12). These results are assessed using the Total Probability Theorem (Book 4,
Chapter 4, Section 2) to yield expected probability estimates of the flood frequency curve. In
all, if the rainfall frequency curve is divided into 50 intervals and 200 simulations are
undertaken in each interval, a total of 10 000 runs is required. The same number of
simulations could be used whether the upper limit of exceedance probability is 1 in 100 or 1
in 10⁶, and it is merely necessary to ensure that a representative number of combinations is
sampled within each rainfall range of interest. If the distribution of different rainfall durations
is known, the Total Probability Theorem can also be used to give appropriate weighting to
separate flood simulations for different rainfall duration intervals.
For the scheme illustrated in Figure 4.4.12, the expected probability that a flood peak (Q)
exceeds a particular value q can be calculated from the Total Probability Theorem:

$$P(Q > q) = \sum_{i=1}^{M} P(Q > q \mid R_i)\,P(R_i)$$

where the term P(Ri) represents the probability that rainfall occurs within the interval i, and
the term P(Q > q | Ri) denotes the conditional probability that the flood peak Q generated using
a rainfall depth from within this interval Ri exceeds q. The term P(Ri) is simply the width of
the probability interval under consideration (this will be different for each of the M intervals
considered), and P(Q > q | Ri) can be calculated merely as the proportion of exceedances, n, in
the sample of N simulations within interval i (ie. as n/N). A representative value of R can be
used for all N simulations within the interval, though a smoother frequency curve can be
obtained if R is sampled within the interval using a uniform distribution.
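The structure of such a calculation is sketched below; the rainfall frequency curve, the deterministic “model” and the stochastic loss input are hypothetical stand-ins, and the special treatment of the first and last intervals described in the following paragraph is not applied.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
M, N = 50, 200                                    # number of rainfall intervals and simulations per interval
z_edges = np.linspace(-3.0, 6.0, M + 1)           # interval bounds in the standard normal domain (assumed range)

def rainfall_from_z(z):
    return 40.0 * np.exp(0.5 * z)                 # hypothetical rainfall frequency curve

def flood_peak(rain, rng):
    loss = rng.uniform(0.0, 30.0, size=rain.shape)   # hypothetical stochastic loss input
    return 2.5 * np.maximum(rain - loss, 0.0)        # hypothetical deterministic transformation

q = 300.0                                         # flow threshold of interest
total_prob = 0.0
for i in range(M):
    p_interval = norm.cdf(z_edges[i + 1]) - norm.cdf(z_edges[i])     # P(Ri)
    z = rng.uniform(z_edges[i], z_edges[i + 1], size=N)              # sample rainfall within the interval
    peaks = flood_peak(rainfall_from_z(z), rng)
    p_cond = (peaks > q).mean()                                      # P(Q > q | Ri) = n/N
    total_prob += p_interval * p_cond                                # Total Probability Theorem

print(total_prob)
```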
In order to ensure that the total probability domain is sampled, it is necessary to treat the first
and last intervals differently from the intermediate ones. The issue here is that the full
extents of the end intervals have to be adequately sampled, and on the assumption that
these boundary intervals are distant from the probability region of interest, we can estimate
their contribution to the total probability in a pragmatic fashion. For the last interval P(RM) is
evaluated as the exceedance probability of its lower bound, and for the first interval P(R1) is
evaluated as the non-exceedance probability of its upper bound. Also, for the first interval
P(Q > q | R1) is replaced by the geometric mean of P(Q > q | R1*) and, say, 0.1 × P(Q > q | R1*),
where R1* is the rainfall value at the upper bound of the interval. Similarly, for the last
interval the term P(Q > q | RM) is replaced by the geometric mean of P(Q > q | RM*) and 1.0,
where RM* is the rainfall value at the lower bound of the interval. Thus, we are assuming for
the lowest interval that as the frequency of the rainfall event becomes very high the
likelihood that the flow threshold is exceeded trends towards a very low value, in this case
taken as one tenth the probability of P(Q > q | R1*); and for the uppermost interval we
assume that the likelihood of the threshold being exceeded trends towards a value of 1.0 (ie.
a certainty). The geometric mean is used in place of the arithmetic mean as here we are
assuming a highly non-linear variation over the interval.
Figure 4.4.12. Manner in which Stratified Sampling is Applied to the Rainfall Frequency
Curve
4.4. Example
The example below shows how the concepts described in this chapter may be used to solve
a commonly encountered practical problem. The example is based on real data, but has
been adapted somewhat to more easily illustrate the concepts involved.
The case study involves a township that is located below the confluence of two rivers
(Figure 4.4.13). Both rivers are gauged, and one (referred to here as the “mainstream”) is
larger than the other (the “tributary”). Flood frequency information has been derived for the
two gauging sites, and the main focus of the study is to derive 1% AEP flood levels below
the confluence, immediately upstream of the town. A one-dimensional (HEC-RAS) model
has been developed for the valley to allow flood levels to be determined throughout the
town. The portion of the model of most relevance to this problem is shown by blue shading in
Figure 4.4.13.
The analysis of this problem follows the components as outlined in Figure 4.4.7. Flood levels
upstream of the town may be the result of a large flood in the mainstream with a small
tributary flood, or a large flood in the tributary with average flow conditions in the
mainstream; more commonly, it might be expected that the downstream levels are a function
of different extremes of flooding in both contributing rivers. Flood Frequency Analysis was
undertaken on the Annual Maxima Series derived at both gauges, and it was found that a
log-Normal distribution provided an adequate fit to both (Figure 4.4.14(a)). An analysis of the
coincident flow maxima at both sites indicated that the correlation between flood peaks was
0.6, and a scatter plot of the historic peaks used to make this inference is shown in
Figure 4.4.14(b).
Figure 4.4.14. (a) Flood Frequency Curves for the Mainstream and Tributary gauging sites,
and (b) Correlation between Historic Flood Peaks and Sample of Generated Maxima
The first step in the process is to generate the correlated stochastic inputs relevant to the
two branches of the stream. This is done using the procedure outlined in Book 4, Chapter 4,
Section 3 in conjunction with the inverse transform method (Book 4, Chapter 4, Section 3).
The first ten rows of the simulation are shown in Table 4.4.1. Uniform random numbers are
provided in Columns 2 and 3, and Columns 4 and 5 show the corresponding values of the
inverse cumulative Normal distribution (the standard normal variates). Column 6 shows the
correlated value of the standard normal variate, which is obtained from the procedure
outlined in Book 4, Chapter 4, Section 3; however, as here a correlated standard normal
variate is generated rather than a correlated uniform variate, the two input variables are X =
N(0,1) and Z = N(0,1), ie. Columns 4 and 5, not Columns 2 and 3. The corresponding maxima
in the mainstream and the tributary are shown in Columns 7 and 8, and are obtained by
scaling the N(0,1) variates by the relevant means and standard deviations of the log-Normal
distributions, e.g. x = μx + σx X. The mean and standard deviation for both streams are shown
at the top of the table in Columns 4 and 5, and the results shown in Columns 7 and 8 have
been transformed back into the arithmetic domain by taking the anti-log of x. The results of
applying these steps 5000 times are shown in Figure 4.4.14(b).
The next step in the process is to derive the deterministic component of the system. To this
end, representative flows were input into a HEC-RAS model of the stream and the resulting
levels were obtained. Seven pairs of simulations were undertaken as shown in Figure 4.4.15
and Table 4.4.2. A multiple regression model was fitted to this information, and the resulting
relationship is depicted in Figure 4.4.15. This function is used in Column 9 of Table 4.4.1 to
obtain the flood level resulting from the stochastic maxima provided in Columns 7 and 8.
A probability plot of the ranked 5000 stochastic flood levels (using the Weibull plotting
position formula) is depicted in Figure 4.4.16. The 1% AEP flood level may be found by
simple linear interpolation of these results, and is found to be a level of 10.55 m. Also shown
in Figure 4.4.16 is the dependence of this estimate on the degree of correlation between the
mainstream and tributary peaks, where it is seen that if the peaks are assumed to be fully
independent or dependent the flood level estimate varies between 10.40 and 10.73 m,
respectively.
It is worth noting that trials were undertaken to determine how many simulations were
required to yield stable estimates of the quantiles. In this example, there was no difference in
results if 1000 or 5000 simulations were used, though below this number the estimates
started to become unstable.
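The overall workflow of this example can be sketched as follows; the log-Normal parameters and the level function standing in for the fitted HEC-RAS regression are hypothetical, so the computed level will not reproduce the 10.55 m quoted above.

```python
import numpy as np

rng = np.random.default_rng(2024)
n, rho = 5000, 0.6

# Hypothetical log-Normal parameters (log10 domain) for the two flood frequency curves
mu_m, sd_m = 2.6, 0.30        # mainstream
mu_t, sd_t = 2.2, 0.35        # tributary

X = rng.standard_normal(n)
Z = rng.standard_normal(n)
Y = rho * X + np.sqrt(1.0 - rho**2) * Z          # correlated standard normal variate

q_main = 10.0 ** (mu_m + sd_m * X)               # stochastic annual maxima (arithmetic domain)
q_trib = 10.0 ** (mu_t + sd_t * Y)

def level(qm, qt):
    # hypothetical stand-in for the multiple regression fitted to the HEC-RAS results
    return 4.0 + 1.6 * np.log10(qm + 0.8 * qt)

ranked_levels = np.sort(level(q_main, q_trib))[::-1]
p = np.arange(1, n + 1) / (n + 1)                # Weibull plotting positions
level_1pct = np.interp(0.01, p, ranked_levels)   # interpolate the 1% AEP level
print(level_1pct)
```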
Lastly, an estimate of the exceedance probability can be obtained using stratified sampling
and use of the Total Probability Theorem. To this end, the probability domain was divided
into 10 divisions, and 20 simulations were undertaken in each (totalling 200 simulations).
The boundaries of the ten divisions are shown in Columns 2 and 3 of Table 4.4.3, where the
limits have been uniformly distributed between standard normal variates of 1 and 4. The
calculations are undertaken as described in Book 4, Chapter 4, Section 3 for the level
threshold of 10.4 m, where the conditional probability terms are based on the exceedance
probability of flows in the mainstream. The probability of an event occurring in each of the
ten bins is shown in Column 4, and this is determined from the exceedance probabilities
associated with each of the bins. For example, the probability that a flow in the mainstream
lies within the first bin is simply the difference between 0.90320 and 0.84134 (= 0.06185),
which are the probabilities of the normal distribution that correspond to the standard normal
variates of 1.00 and 1.30. The number of times that a level exceeds 10.4 m in each bin is
given in Column 5, and the corresponding conditional probability is shown in Column 6,
which is computed by dividing by the number of samples in each bin (which in this case is
20). The product of the conditional probability term (Column 6) and the interval width
(Column 4) is given in Column 7, and the summation is provided at the bottom of the table. It
is thus seen that the probability of the level exceeding 10.4 m is estimated to be 0.0149
(or around 1 in 70). A comparison between three such estimates and the results obtained
from simple simulation is shown in Figure 4.4.16, from which it is seen that the results
obtained are similar.
Table 4.4.3. Calculation of Exceedance Probability of the Level Exceeding 10.4 m using the
Total Probability Theorem
Column 1 Column 2 Column 3 Column 4 Column 5 Column 6 Column 7
Bin Zmin Zmax p[Mi] Num[H>h] p[H>h|Mi] p[H>h|Mi] × p[Mi]
1 1.00 1.30 0.061855 0 0.00 0.000000
2 1.30 1.60 0.042001 0 0.00 0.000000
3 1.60 1.90 0.026083 0 0.00 0.000000
4 1.90 2.20 0.014813 4 0.20 0.002963
5 2.20 2.50 0.007694 15 0.75 0.005770
6 2.50 2.80 0.003655 20 1.00 0.003655
7 2.80 3.10 0.001588 20 1.00 0.001588
8 3.10 3.40 0.000631 20 1.00 0.000631
9 3.40 3.70 0.000229 20 1.00 0.000229
10 3.70 4.00 0.000076 20 1.00 0.000076
Total 0.014911
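The arithmetic in Table 4.4.3 can be reproduced directly from the standard Normal distribution function, with the exceedance counts in Column 5 taken from the table:

```python
import numpy as np
from scipy.stats import norm

z_edges = np.linspace(1.0, 4.0, 11)                        # bin boundaries at standard normal variates 1.0 to 4.0
p_bin = norm.cdf(z_edges[1:]) - norm.cdf(z_edges[:-1])     # Column 4: p[Mi] for each of the ten bins

n_exceed = np.array([0, 0, 0, 4, 15, 20, 20, 20, 20, 20])  # Column 5, taken from the table
p_cond = n_exceed / 20.0                                   # Column 6: conditional probability p[H>h|Mi]

print((p_bin * p_cond).sum())                              # ≈ 0.0149, ie. roughly 1 in 70
```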
Figure 4.4.16. (a) Derived Frequency Curve of Downstream Levels, and (b) Dependence of 1%
Annual Exceedance Probability Level on Degree of Correlation between Flood Peaks
4.5. References
Benjamin, J.R. and Cornell, C.A. (1970), Probability, Statistics and Decisions for Civil
Engineers. McGraw-Hill, New York.
Box, G.E.P. and Cox, D.R. (1964), An Analysis of Transformations. Journal of the Royal
Statistical Society (Series B), 211-243, discussion 244-252.
Chen, L., Singh, V.P., Shenglian, G., Hao, Z. and Li, T. (2012), Flood Coincidence Risk
Analysis Using Multivariate Copula Functions, J. Hydrol. Eng., 742-755.
70
Treatment of Joint Probability
Diermanse, F.L.M., Carroll, D.G., Beckers, J.V.L., Ayre, R. and Schuurmans, J.M. (2014), A
Monte Carlo framework for the Brisbane river catchment flood study. Hydrology and Water
Resources Symposium 2014. Barton, ACT: Engineers Australia.
Favre, A.C., El Adlouni, S., Perreault, L., Thiémonge, N. and Bobée, B. (2004), Multivariate
hydrological frequency analysis using copulas. Water Resour. Res., 40(1).
Fricke, T.J., Kennedy, M.R. and Wellington, N.B. (1983), The Use of Rainfall Correlation in
Determining Design Storms for Waterways on a Long Railway Line. In: Hydrology and Water
Resources Symposium (15th : 1983 : Hobart, Tas.). National conference publication
(Institution of Engineers, Australia), (83/13), 215-219.
Genest, C., and Favre, A. (2007), Everything You Always Wanted to Know about Copula
Modeling but Were Afraid to Ask. Journal of Hydrologic Engineering, 12(4), 347-368.
Haan, C.T. (1974), Statistical Methods in Hydrology, Second Edition. Iowa State Press.
Hammersley, J.M. and Handscomb, D.C. (1964), Monte Carlo Methods, Methuen and Co,
London; John Wiley and Sons, New York.
Hill, P.I., Graszkiewicz, S., Loveridge, M., Nathan, R. and Scorah, M. (2015), Analysis of loss
values for Australian rural catchments to underpin ARR guidance. Proc. 36th Hydrology and
Water Resources Symposium, Hobart.
Jordan, P., Nathan, R., Weeks, B., Waskiw, P., Heron, A., Cetin, L., Rogencamp, G.,
Stephens, C. and Russell, C. (2015), Estimation of flood risk for linear transport
infrastructure using continuous simulation modelling. Proc. 36th Hydrology and Water
Resources Symposium, Hobart.
Kilgore, R.T., Thompson, D.B. and Ford, D.T. (2010), Estimating Joint Probabilities of Design
Coincident Flows at Stream Confluences. National Cooperative Highway Research Program
Report 15-36, Washington D.C.
Kuczera, G., Lambert, M., Heneker, T., Jennings, S., Frost, A., Coombes, P. (2003), Joint
probability and design storms at the crossroads. Proc, 28th Int. Hydrology and Water
Resour. Symp. 10-14 Nov, Wollongong.
Nathan, R.J. and Bowles, D.S. (1997), A probability-neutral approach to the estimation of
design snowmelt floods. Hydrology and Water Resources Symposium: Wai-Whenua,
November 1997, Auckland, pp: 125-130.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P. (1993), Numerical recipes in
Fortran: the art of scientific computing. Cambridge University Press.
Robinson, K., Ling, F., Peterson, J., Nathan, R., Tye, I. (2012), Application of Monte Carlo
Simulation to the Estimation of Large to Extreme Floods for Lake Rowallan. Proceedings of
the 34th Hydrology & Water Resources Symposium. ISBN 978-1-922107-62-6. Engineers
Australia.
71
Treatment of Joint Probability
Scorah, M., Lang, S., and Nathan, R. (2015), Utilising the AWAP gridded rainfall dataset to
enhance flood hydrology studies. Proc. 36th Hydrology and Water Resources Symposium,
Hobart.
Vose, D. (2000), Risk analysis: a quantitative guide. John Wiley and Sons Ltd, Chichester.
Weinmann, P.E. and Nathan, R.J. (2004), The continuing challenge of estimating extreme
floods from extreme design storms. Advances in Hydro-Science and Engineering, Volume
VI, Proc. of the 6th Intern. Conf. on Hydro-Science and Engineering, Brisbane May 31 - June
3, 2004.
Weinmann, P.E., Rahman, A., Hoang, T.M.T., Laurenson, E.M. and Nathan, R.J. (2002),
Monte Carlo Simulation of Flood Frequency Curves from Rainfall - The Way Ahead.
Australian Journal of Water Resources, 6(1), 71-79.
Westra, S. (2012), Australian Rainfall and Runoff - Revision Project 18: Interaction of
Coastal Processes and Severe Weather Events. ARR Report Number P18/S2/010,
Engineers Australia.
Zheng, F., Westra, S., Sisson, S. and Leonard, M. (2014), Flood risk estimation in Australia's
coastal zone: modelling the dependence between extreme rainfall and storm surge.
Hydrology and Water Resources Symposium, Perth 2014.
BOOK 5
Flood Hydrograph Estimation
List of Figures
5.1.1. Conceptual Representation of Flood Formation Processes in the Most Commonly used
Event-based Flood Hydrograph Estimation Models (courtesy R Nathan) .................. 4
5.2.1. Conceptual Representation of the Runoff Routing Process in a Node-Link Type Model 10
5.3.1. Physical Processes which Contribute to Rainfall Loss .............................................. 17
5.3.2. Initial Loss – Constant Continuing Loss Model ......................................................... 20
5.3.3. Initial Loss - Proportional Loss Model ....................................................................... 21
5.3.4. The model structure of the Australian Representative Basins model based on the work
of Chapman (1968). Diagram obtained from
Black and Aitken (1977) and Mein and McMahon (1982). .................................... 25
5.3.5. Distinction between Storm and Burst Initial Loss ...................................................... 28
5.3.6. Example of a Directly Connected Impervious Surface (Left)
and an Indirectly Connected Impervious Surface (Right) ........................................ 32
5.3.7. Schematic of Rainfall Depth v Runoff, from Boyd et al. (1993) ................................. 34
5.3.8. Cumulative Discharge Plot from Giralang (ACT), showing Cumulative Rainfall,
Observed Runoff and Estimated EIA runoff (Phillips et al, 2014) ............................ 35
5.3.9. Overview of Regression Analysis Approach ............................................................. 36
5.3.10. Example Regression Analysis for Albany Drain Catchment in Western Australia ... 38
5.3.11. Representation of Dayaratne (2000) Relationship for DCIA .................................... 41
5.3.12. Re-Analysis of Boyd et al. (1993) Data ................................................................... 42
5.3.13. Australian data from Phillips et al (2014) and
Boyd et al. (1993) ................................................................................................ 42
5.3.14. Compilation of Available EIA Data ........................................................................... 45
5.3.15. Example Sample Area Analysis for Residential and Commercial
Land Use for the Giralang Catchment (ACT) .......................................................... 47
5.3.16. Regions Adopted for Loss Prediction Equations ..................................................... 50
5.3.17. Seasonality of Average Gridded Soil Moisture in Each Defined Region (Using Gridded Data) 51
5.3.18. Recommended Median ILs (mm) ............................................................................. 54
5.3.19. Recommended Median CL (mm/hr) ........................................................................ 55
5.3.20. Storm Magnitude from Phillips et al (2014) ............................................................. 59
5.3.21. Summary of Initial Losses for Urban Catchments (from
Phillips et al (2014)) ............................................................................................. 60
5.3.22. Indirectly Connected Area Continuing Loss Estimates (from
Phillips et al (2014)) ............................................................................................. 62
5.3.23. Indirectly Connected Area Proportional Loss Estimates (from
Phillips et al (2014)) ............................................................................................... 64
5.3.24. Horton Loss Model with Different Soil Classifications &
AMC Numbers ........................................................................................................... 66
5.3.25. Variation in Location but Not Shape of Initial Loss Distribution
Nathan et al. (2003) ................................................................................................. 68
5.5.3. Prism Storage and Wedge Storage in a River Reach on the Rising Limb of the
Hydrograph ............................................................................................................ 127
5.5.4. Werribee River Example – Inflow Hydrograph and Observed and Calculated Outflow
Hydrographs ....................................................................................................... 130
5.5.5. Werribee River Example – Impact of Routing Through Concentrated Storage (X = 0)
or Fully Distributed Storage (X = 0.5) .................................................................. 131
5.5.6. Graphical Estimation of X and K (after Laurenson (1998)) ..................................... 133
5.5.7. Werribee River Example – Application of the Kalinin-Miljukov Method (Two Routing
Reaches of 10 km Length) .................................................................................. 137
5.5.8. Storage-Discharge Relationships with Different Degrees of Non-linearity .............. 141
5.5.9. Effect of Non-linearity of Storage-Discharge Relationship on Routed Hydrographs
(after Pilgrim (1987)) ............................................................................................... 142
5.6.1. Isochrones and Time-Area Curve ............................................................................ 149
5.6.2. Time-Area Relationship for Example 2 .................................................................... 152
5.6.3. Calculation of Surface Runoff Hydrographs using Time-Area Approach ................ 154
5.6.4. Unit Hydrograph Calculation – example with 3 periods of rainfall excess and unit
hydrograph with 5 ordinates (after Laurenson (1998)). ........................................... 157
5.6.5. Poor Averaging of Unit Hydrographs ....................................................................... 160
5.6.6. (a) Clark and (b) Nash models of runoff routing ...................................................... 164
5.6.7. Clark Runoff Routing Example- resulting hydrographs for the five different
durations .............................................................................................................. 165
5.6.8. Original Laurenson Runoff Routing Model (South Creek catchment) (a) Isochrones of
storage delay time (b) Time-area diagram ............................................................ 166
5.6.9. Node-link type representation of a catchment in runoff routing models: map view
and schematic representation of node-link network in RORB and WBNM ........... 167
5.6.10. Components of a runoff routing model .................................................................. 169
5.6.11. Hillslope representation in kinematic wave routing models (a) actual catchment (b)
model representation (from HEC-HMS Manual) ................................................... 174
5.6.12. Routing of rainfall excess hydrograph through a series of nonlinear storages (after
Laurenson et al. (2010)) ........................................................................................ 177
5.6.13. Conceptualisation of generation of runoff hydrograph from a grid cell .................. 179
List of Tables
5.3.1. Summary of Different Approaches for Estimating Loss Values ................................. 30
5.3.2. Summary of Effective Impervious Areas Results ...................................................... 39
5.3.3. Overview of International Literature on EIA/TIA ........................................................ 43
5.3.4. EIA Results of Alley and Veenhuis (1983) for 19 Catchments in Denver,
as summarised in Shuster et al. (2005) ............................................................... 44
5.3.5. Range of Values Used in developing ILs Prediction
Equations ................................................................................................................ 53
5.3.6. Range of Values Used in Developing CL Prediction Equations ................................ 53
5.3.7. Total Storms identified for Analysis Phillips et al (2014) ............................................ 58
5.3.8. Comparison of Initial Loss Literature Values with Rural
Recommended Values .......................................................................................... 61
5.3.9. Urban Continuing Loss Values Compared with Rural Continuing Loss
Values .................................................................................................................... 62
5.3.10. Soil Classifications in Horton Model ........................................................................ 65
5.3.11. Antecedent Moisture Condition Number .................................................................. 65
5.3.12. Horton Loss Model Parameters ............................................................................... 66
5.3.13. Standardised Loss Factors (Hill et al., 2014a) ........................................................ 70
5.3.14. Median Loss Values for Rural Catchments ............................................................. 85
5.3.15. Summary of EIA Results from (Boyd et al., 1993) ................................................... 90
5.3.16. EIA Initial Loss Estimates from Various Studies ...................................................... 91
5.4.1. AEP Scaling Factors, FAEP, to be applied to the 10% AEP
Baseflow Peak Factor and the Baseflow Volume Factor to determine the Baseflow Peak Factor
for events of various AEPs ...................................................................................... 107
5.4.2. Key Surface Runoff Characteristics for the 10% AEP Design Flood Event at
Eumundi .................................................................................................................. 113
5.4.3. Key Surface Runoff Characteristics for the 1% AEP Design Flood Event at Dirk
Brook ....................................................................................................................... 115
5.4.4. Calculation of Baseflow Factors for the 1% AEP Design Event for the Dirk
Brook ....................................................................................................................... 115
5.4.5. Calculation of Baseflow and Total Streamflow Characteristics for the 1% AEP Event
for the Dirk Brook Catchment .................................................................................. 116
5.5.1. Calculations for the Muskingum Routing Example .................................................. 129
5.6.1. Conversion Factors ................................................................................................. 151
5.6.2. Calculation of Surface Runoff Hydrograph using Triangular Time-Area
Curve ................................................................................................................... 153
5.6.3. Methods used in different runoff routing modelling systems to derive the overland
flow hydrograph ..................................................................................................... 171
5.6.4. Routing models used in different runoff routing modelling systems to route flows
through channel, stream and floodplain reaches .................................................... 175
Chapter 1. Introduction
James Ball, Erwin Weinmann
The more general aspects of catchment simulation for design flood estimation are covered in
Book 4, and the chapters in this book deal specifically with the models and design inputs
required to transform the event-based design rainfall inputs from Book 2 into design flood
hydrographs at catchment locations of interest.
This book is an extension of the material covered in Book 3, which deals with the calculation
of design flood peak discharges. While peak discharges (without flood hydrographs) are
adequate for many applications, such as calculating bridge or culvert capacity, flood
hydrographs are essential for many other applications. These applications include those
where floodplain storage or artificial storage is an important issue or where the movement
and modification of flood events through a catchment is of interest. With the increasing
implementation of more advanced hydrological modelling systems and more complex
analysis requirements, guidance on flood hydrograph modelling is becoming increasingly
important.
The flood hydrograph methods described here provide an alternative to the flood peak
discharge methods covered in Book 3 and allow cross-checking between the two
approaches. There is therefore a place for both peak flow and flood hydrograph estimation
for different applications.
2. Assessment of data requirements and data availability, data collation and checking;
7. Validation of the calibrated model to ensure that it is fit for the intended purpose – Book 7,
Chapter 6;
8. Application of the model with design rainfalls (Book 2), design losses (Book 5, Chapter 3)
and design baseflows (Book 5, Chapter 4) to estimate design flood hydrographs - Book 7,
Chapter 7;
10. The modelled design flood hydrographs will generally form the inputs to a hydraulic model
of the study area.
The following chapters of Book 5 introduce the important hydrologic modelling principles that
are applied in Steps 3 to 5 of the overall process. Book 5, Chapter 3 (Losses) and Book 5,
Chapter 4 (Baseflow Models) also provide guidance on the design values required in Step 7.
Detailed application guidance relating to the other steps is provided in Book 7.
In the hydrograph formation phase, the routing of flood contributions from subareas through
the various stream reaches, floodplains and natural or artificial storages is modelled by
hydrologic or hydraulic routing models of different complexity (Book 5, Chapter 5).
Some flood hydrograph modelling approaches represent the catchment only as a single unit
(lumped models). However, the models now typically applied in the event-based simulation
approach are semi-distributed in nature; they represent the catchment being modelled by a
number of sub-catchments or subareas, where the degree of spatial resolution used typically
varies between around 10 and 100 subareas. The processes involved in the runoff generation
phase are modelled at the sub-catchment or subarea scale, and the resulting runoff
hydrographs are then routed along the different stream reaches and storages in the
catchment to the point of interest. Node-link type runoff-routing models are the most
common form of these models, where the nodes represent the subareas and stream
junctions, and the links the routing reaches (Book 5, Chapter 6). In addition to providing a
more detailed and physically based approach to hydrological modelling, distributed models
allow the assessment of flood hydrographs for points within the main catchment as well as at
the outlet, whereas lumped models allow calculation of hydrographs only at the catchment
outlet.
Figure 5.1.1 depicts a schematic representation of how the flood formation processes are
conceptualised in event-based flood hydrograph estimation models.
1.2. Scope
This book of Australian Rainfall and Runoff provides background information on the basic
elements that make up event-based flood hydrograph estimation models, and an overview of
the modelling systems most commonly used in Australia. This introductory information is
intended to equip practitioners with a clearer understanding of the simplifications and
assumptions involved in different model components. Book 5, Chapter 3 and Book 5,
Chapter 4 also give guidance on the design loss and baseflow inputs for use with event-
based flood hydrograph estimation models. Detailed guidance on other aspects of applying
these models to practical flood hydrograph estimation problems is provided in Book 7.
As in other books, the guidance provided here should not be interpreted as being
prescriptive, as unusual catchment conditions may require special considerations.
Importantly, the application of the models and design data for flood hydrograph estimation
should be informed by a good understanding of general hydrologic principles and concepts
relevant to flood estimation as well as specific interpretation of local flood data.
Chapter 2. Catchment Representation
Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
2.1. Introduction
The representation of a catchment in a flood hydrograph estimation model is, by necessity,
highly conceptualised and aims to represent those features and characteristics that are most
influential in determining the overall flood response of the catchment. The distribution of
storm rainfall over different parts of the catchment and the flood response to it may vary
considerably, depending on the details of topography, vegetation cover, land use and
drainage network characteristics. However, as the hydrograph inputs from different parts of
the catchment are progressively combined on their way to the catchment outlet, only some
of these differences in response characteristics are directly reflected in the combined
hydrographs at downstream points of interest. Different catchment modelling approaches
have therefore evolved to find an appropriate compromise between required model
complexity and spatial resolution on the one hand, and desirable modelling efficiency on the
other. These different modelling approaches can be applied with different degrees of
complexity, and often simpler methods may be quite appropriate.
As is explained in more detail in Book 5, Chapter 5 and Book 5, Chapter 6, the various forms
of temporary flood storage available in different parts of the catchment play a key role in
determining how the runoff inputs from different parts of the catchment are transformed into
the flood hydrograph at the catchment outlet. The effect of catchment storage on the routing
of hydrographs is twofold and involves:
i. translation of the hydrograph peak and other ordinates forward in time and
ii. attenuation of the peak as the hydrograph moves along the stream network.
The different catchment representations in flood hydrograph estimation models can therefore
be classified on the basis of how the different forms of temporary flood storage are
conceptualised and in how much detail they are represented in the model. Other factors
such as losses, which determine the flood volume (covered in Book 5, Chapter 3) and
baseflow, which may modify the flood hydrograph shape and volume (covered in Book 5,
Chapter 4), must also be a part of the modelling of flood hydrographs.
In situations where the interest is only on the combined hydrograph at the catchment outlet
and where good flood records for current conditions are available at that point, modelling of
the catchment as a single ‘lumped’ response unit may be sufficient. This is the approach
adopted by a number of traditional flood hydrograph estimation methods such as the unit
hydrograph approach, the time-area approach and the Clark and Nash models of runoff-
routing. In these modelling approaches, discussed further in Book 5, Chapter 2, Section 2,
the rainfall excess input is ‘lumped’ for the whole catchment and then transformed by some
routing method to a hydrograph at the catchment outlet.
For catchments with more complex runoff production and flood hydrograph formation
characteristics it is more usual to adopt a semi-distributed rather than a lumped modelling
approach. Adoption of a semi-distributed approach allows the key factors that influence flood
behaviour to be represented at the subarea or sub-catchment scale.
Finally, developments in computing power and the availability of digital terrain information
now allow a fully distributed representation of catchments in grid-based models. The
emerging rainfall-on-grid modelling approach is discussed in Book 5, Chapter 2, Section 4.
The peak flow estimation methods described in Book 3 can be regarded as “lumped” type
models, since the flood peaks are calculated at a single point only, without the internal
characteristics of the catchment being considered.
In the runoff generation phase of lumped models, the conceptual loss and baseflow models
described in Book 5, Chapter 3 and Book 5, Chapter 4 can be applied using the assumption
that the rainfall inputs, losses and baseflow contributions are the same over the whole
catchment. Such a simplifying assumption may be appropriate when rainfall and streamflow
data are only available from a single gauge and it is thus difficult to infer any internal
variation of the rainfall and runoff generation characteristics.
1. The Time-Area Approach (Book 5, Chapter 6, Section 2) - in which the different degree of
translation (time lag) experienced by runoff from different parts of the catchment is
modelled by dividing the catchment into a number of areas with the same delay time to
the catchment outlet (‘isochronal areas’). The runoff inputs to the different sub-areas are
then lagged accordingly to represent the translation effects of the total catchment, but this
routing method does not provide for any attenuation of peak flows on their way to the
catchment outlet.
2. The Unit Hydrograph Approach (Book 5, Chapter 6, Section 3) - which converts rainfall
excess inputs to a flood hydrograph by applying a transfer function (the unit hydrograph).
The transfer function is generally inferred from the analysis of observed rainfall inputs and
streamflow outputs, and there is limited potential to relate its parameterisation to
measurable catchment characteristics, though the parameters for the method may be
developed from recorded data.
3. Other lumped flood hydrograph estimation methods - that involve use of a single
(concentrated) linear storage (e.g. Clark) or a distributed form of storage represented by
a cascade of several linear storages (e.g. Nash). Variations of these methods with non-
linear storages are also used. The fundamental flood routing concepts applied in these
methods are further explained in Book 5, Chapter 5, Section 2 to Book 5, Chapter 5,
Section 4.
While lumped flood hydrograph estimation models have the advantage of simplicity, they are
limited in their application to the following situations:
• Catchments with relatively uniform spatial rainfall, loss and baseflow characteristics or
where the variation of these characteristics between events is relatively minor, so that the
derived unit hydrograph or other model parameters are applicable to a range of design
events;
• Applications where a flood hydrograph is only required at the catchment outlet, as for the
design of drainage structures on roads and railway lines; and
• Applications that do not require extrapolation to the range of Very Rare to Extreme floods.
To the extent that they adopt a ‘black box’ approach (ie. the functions used to transform
rainfalls to streamflows do not have direct links to physical catchment characteristics),
lumped models depend on the availability of observed flood hydrographs for their calibration.
The scope for application to ungauged catchments is thus more limited but the Clark-
Johnstone synthetic unit hydrograph method (Book 5, Chapter 6, Section 3) has in the past
been widely used for catchments on the east coast of Australia (Cordery et al., 1981).
The semi-distributed (node-link type) models described in the next section offer a broader
range of application but are also more demanding in terms of model development, data
requirements and understanding/skill of the practitioner. The lumped flood hydrograph
estimation approach can be seen as a simplified version of the semi-distributed flood
hydrograph estimation approach, and it is worth noting that the most widely used runoff-
routing models (described in Book 5, Chapter 6, Section 4) can be configured to represent
catchment response in a lumped fashion.
Because of their flexibility and ability to calculate flood hydrographs throughout the
catchment and to model land use and catchment changes, as well as being relatively
straightforward to establish and run, node-link type models are currently the most widely
used modelling approach for flood hydrograph estimation in Australia. A range of ready-to-
use modelling systems (Book 5, Chapter 6, Section 4) are available to set up models for
catchments of different size and complexity. These modelling systems allow the influential
features of the catchment and its drainage network to be represented in the model.
Figure 5.2.1. Conceptual Representation of the Runoff Routing Process in a Node-Link Type
Model
Nodes:
• Input nodes for hyetographs of rainfall excess or hydrographs of direct runoff (and possibly
baseflow) from model sub-areas;
• Junction nodes – where different branches of the drainage network join (or where
diversion flows re-enter);
• Diversion nodes – points where some of the flow is diverted or abstracted from the
network;
• Nodes for hydrograph outputs (including at gauging locations for comparison of modelled
and observed hydrographs).
Links:
The catchment sub-division should also have regard to the prevailing land uses in different
parts of the catchment and should aim at sub-areas that are essentially homogeneous in
terms of their runoff characteristics. This is particularly relevant in urban or urbanising
catchments. Large areas with immediate runoff response, such as natural lakes and
reservoirs also require special consideration. Book 7, Chapter 4 provides more detailed
guidance.
i. The input hyetograph (areal average rainfall excess) for the sub-area is directly converted
into a runoff hydrograph at a representative point within the sub-area (usually at or close
to the centroid). This assumes that all runoff from the sub-area reaches this input node
without any delay and there is no flow attenuation within the sub-area. Any translation and
attenuation effects occurring within the hill slope elements thus need to be represented in
the routing through the drainage network from the subarea input node to downstream
points of interest; and
ii. The input hyetograph to the sub-area is transformed to an output hydrograph using one of
the lumped catchment models introduced in Book 5, Chapter 2, Section 2. Different
models use time-area, unit hydrograph or different forms of storage routing or kinematic
wave routing concepts for this transformation.
Method (i) has the advantage of simplicity in that it avoids having to determine additional
model parameters for runoff from contributing areas. In catchments with relatively uniform
land use, when the interest is mainly on flood hydrographs at the catchment outlet for a
limited range of flood magnitudes, this method can be expected to provide satisfactory
results. However, it may provide conservatively high estimates of hydrograph peaks at
internal points in the upper parts of the catchment due to the neglect of routing effects in the
contributing areas of the catchment. Book 5, Chapter 6, Section 4 provides more detailed
discussion of this method.
Method (ii) allows for better representation of processes that contribute to hydrographs in the
upper parts of the catchment but it requires additional parameters. It is also better able to
deal with the effects of significant land use changes in parts of the catchment and with
changed runoff behaviour in Very Rare to Extreme flood events (Book 8, Chapter 5). A more
detailed discussion of this approach is provided in Book 5, Chapter 6, Section 4.
Two distinctly different groups of flood routing approaches are used in node-link type flood
hydrograph estimation models:
i. Hydrologic Routing Approaches - these flood routing methods are based on the storage
routing principles described in Book 5, Chapter 5, Section 4 (linear storage routing) and
Book 5, Chapter 5, Section 5 (non-linear storage routing). An important characteristic of
hydrologic flood routing models is that their parameters are generally inferred from
observed flood hydrographs, but it is possible to infer their parameters through close links
with the physical characteristics of the routing reaches.
ii. Hydraulic Routing Approaches - these flood routing methods are based directly on the full
unsteady flow equations or various simplified forms of these equations, as described in
Book 5, Chapter 5, Section 5. Their parameters are inferred from the cross-sectional
characteristics of streams, channels and floodplains, and the hydraulic characteristics of
controlling features. The kinematic wave and diffusion wave approaches described in
Book 5, Chapter 5, Section 5 are the most widely used hydraulic routing approaches
incorporated into flood hydrograph estimation models.
The hydrologic routing approaches have the advantage that, by deriving their parameters
from observed hydrographs, they represent an integrated routing response from the complex
stream and floodplain system that is often too complex to be represented in detail. However,
application of such calibrated parameters to conditions outside the ones reflected in the
observed hydrographs (ie. for changed catchment conditions or significantly different flood
magnitudes) involves assumptions that may not be justifiable. The hydraulic routing methods
have closer links to the physical characteristics of the routing reaches, but their application
still involves a significant degree of conceptualisation and some form of calibration.
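As a minimal illustration of the hydrologic (storage) routing concept, the following Python sketch routes an inflow hydrograph through a single linear storage (S = kQ) using an explicit finite-difference step of the continuity equation. The lag parameter and inflow hydrograph are hypothetical values chosen only to show the attenuation and translation of the peak; calibrated storage routing models use more sophisticated formulations.

```python
# Minimal sketch of hydrologic routing through a single linear storage, S = k*Q.
# Continuity dS/dt = I - Q gives k*dQ/dt = I - Q, solved here with an explicit step.
# The lag parameter k and the inflow hydrograph are illustrative values only.

def linear_storage_route(inflow, k_hours, dt_hours):
    """Route an inflow hydrograph (m^3/s) through a linear storage with lag k (hours)."""
    outflow = [inflow[0]]  # assume an initial steady state (Q = I)
    for i in range(1, len(inflow)):
        q_prev = outflow[-1]
        # Explicit finite-difference step of k*dQ/dt = I - Q
        outflow.append(q_prev + dt_hours / k_hours * (inflow[i - 1] - q_prev))
    return outflow

if __name__ == "__main__":
    inflow = [5, 20, 60, 100, 80, 50, 30, 18, 10, 6, 5, 5]  # example hydrograph (m^3/s)
    routed = linear_storage_route(inflow, k_hours=3.0, dt_hours=1.0)
    for t, (qi, qo) in enumerate(zip(inflow, routed)):
        print(f"t={t:2d} h  inflow={qi:6.1f}  outflow={qo:6.1f} m^3/s")
```

The routed hydrograph shows the peak reduced and delayed relative to the inflow, which is the behaviour that calibrated routing parameters are intended to reproduce.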
Modelling of the flood routing effects over a range of flow conditions requires a clear
understanding of the flood dynamics so that the simulated response can be appropriately
matched to the actual flooding behaviour. Specifically this means that the adopted network
should represent any breakout flows and bypass flows occurring during larger floods, as well
as the effects of significant floodplain storage areas activated during large events. The
additional storage availability in large flood events may be counter-balanced by increased
flow efficiency as the flow depth increases. Where backwater effects are likely to have a
significant impact not only on flood levels but also on the routing of flood flows through the
drainage network, hydraulic routing methods based on the full unsteady flow equations may
be required.
The model should also represent the varying impact of flow restrictions for different flow
magnitudes. In some cases the results of detailed hydraulic modelling may be required to
develop a clear understanding of the changes in flood flow behaviour with flood magnitude,
so that they can be adequately reflected in the hydrologic catchment model.
One important limitation of node-link type models is that the different routing elements are
conceptualised as one dimensional flow links. This means that a dominant flow direction
needs to be assumed when the routing elements are defined. In cases where the flow
direction changes for floods of different magnitudes, it may be necessary to introduce more
complexity into the channel network so that different flowpaths are activated at different flow
magnitudes. The two dimensional rainfall-on-grid approaches discussed in Book 5, Chapter
2, Section 4 are in principle better equipped to deal with changes in flow direction during a
flood event and between flood events of different magnitude, but this advantage may be offset by the difficulty of using such models to adequately represent loss processes and
roughness characteristics at the scale of individual grid cells.
Whatever routing method is used, it is important to ensure that any application of the model
outside its range of calibration is guided by consideration of changes in hydraulic
characteristics and then reflected in the adopted network conceptualisation and parameter
values. This is further discussed in Book 7, Chapter 4 and Book 7, Chapter 5.
Different models vary in the degree of detail adopted in modelling the runoff generated from
rainfall falling on a grid element. In principle the method allows more physically-based
representations of runoff processes; however, this is only likely to be valid at larger depths of
overland flow. Such models currently represent saturation and ponding processes in a
simplistic fashion, and similar simplifications are adopted in modelling the baseflow
contribution at the scale of individual cells.
The routing of runoff from individual cells through the catchment and the stream network is
then based on the principles of two dimensional dynamic wave modelling introduced in Book
5, Chapter 5, Section 5 and described in more detail in Book 6, Chapter 4, Section 7.
Particular issues to be dealt with in these models are the significantly larger data
requirements than for node-link type models to give a realistic representation of the
catchment, characterisation of hydraulic roughness for different catchment elements, and
how to deal with computational stability problems that arise when runoff is generated from
initially dry cells.
As the direction of the flow between cells is determined as part of the solution process at
each time step, drainage paths do not need to be pre-defined as in traditional one
dimensional runoff-routing approaches. The application of hydraulic methods in the runoff-
routing process also means that there is no need for linking the hydrologic model with a
hydraulic model of the floodplain area.
In most catchments the catchment boundaries and the drainage network are quite well
defined in the upper part of the catchment, and traditional runoff-routing models can thus
adequately describe the flood hydrograph formation for these parts of the catchment. A
‘hybrid’ approach, where a two dimensional model is only used for runoff-routing in the flatter
or urbanised parts of the catchment that are influenced by complex hydraulic controls, may
be the most efficient approach in these situations.
Rainfall-on-grid models and their advantages and limitations are discussed in more detail in Book 5,
Chapter 6, Section 5. While grid-based methods are apparently more directly based on
physical catchment data than alternative methods, they still need to be calibrated with
recorded data, and there is still uncertainty in the results from their application. This is
discussed further in Book 7.
At the current stage of development of these models and with the limited level of experience
gained with their practical application, it is considered premature to recommend their general
use in these Guidelines. However, it is expected that further development and testing will
allow rainfall-on-grid models to be more widely applied.
2.5. References
Cordery, I, Pilgrim, D.H. and Baron, B.C. (1981), Validity of use of small catchment research
results for large basins. Instn. Engrs. Australia, Civil Engg. Trans., CE23: 131-137.
Chapter 3. Losses
Peter Hill, Rhys Thomson
3.1. Introduction
This chapter provides advice on loss models and values for design flood estimation. It deals
with the fundamental hydrologic question – how much rainfall becomes runoff? Design
floods are typically derived either using flood frequency analysis or rainfall-based flood event
models. Continuous simulation is covered in Book 4 and the focus of this chapter is losses
for event based design flood estimation.
The loss is just one of a number of inputs to the design process (such as the critical storm
duration, areal reduction factor, spatial pattern, temporal pattern, runoff routing model, model
parameters and treatment of baseflow) that can affect the magnitude of the calculated
design flood. These other inputs are discussed in other books.
• Book 5, Chapter 3, Section 2 – discusses how loss processes are represented in different
conceptual loss models, ranging from empirical models through to more complex process
models
• Book 5, Chapter 3, Section 3 – discusses the selection of conceptual loss models and
different approaches to estimating loss values
• Book 5, Chapter 3, Section 4 – describes the estimation of effective impervious areas for
urban catchments
• Interception by vegetation;
More details on the runoff process are described in Book 4, while this section focuses on
specific processes associated with estimation of losses.
Runoff has generally been considered to consist of surface runoff produced by rainfall
excess which occurs at the ground surface when the rainfall intensity exceeds the infiltration
capacity. This is known as Horton-type runoff and Fleming and Smiles (1975) provide a
review of infiltration theory and its application to practical hydrology.
Over the last twenty years the classical concept of storm runoff has been challenged as a
result of observations on natural catchments during storm periods and many detailed studies
of instrumented plots and small areas. Two alternative types of storm runoff mechanism
have been proposed:
• Saturated overland flow occurs when, on part of the catchment, the surface horizon of the soil becomes saturated as a result of either the build-up of a saturated zone above a soil horizon of lower hydraulic conductivity or the rise of a shallow water table to the surface; and
• The other type of storm runoff is throughflow, which is water that infiltrates into the soil and
percolates rapidly, largely through macropores such as cracks, root holes and worm and
animal holes, and then moves laterally in a temporarily saturated zone above a layer of
low hydraulic conductivity. It reaches the stream channel quickly and differs from other
subsurface flow by the rapidity of its response and possibly by its relatively large
magnitude.
Associated with the recognition of these two alternative types of storm runoff, is the concept
that storm runoff may be generated from only a small part of many catchments. Additionally,
this source area may vary in its extent from time to time, in different seasons and during the
progress of a storm.
There are no practical methods for estimating storm losses and runoff that would take
explicit account of different runoff processes, partial and variable source areas and small-
scale variations in characteristics. The various existing methods assume uniform or average
conditions and do not account for the non-homogeneity of the catchment. Although each model simplifies the overall process to a different degree, it is important to understand the complexity of these physical processes when reviewing the different loss models that are available (refer Book 5, Chapter 3, Section 2).
The majority of research on transmission losses for rural catchments has focused on long-term losses relevant to water resource modelling and planning, rather than flood estimation. For
example, there has been work done on gaining and losing river reaches within the Murray
Darling Basin. Boughton (2015) explored transmission losses for 100 catchments from the
east-coast of Australia.
The research on transmission losses tends to focus on specific reaches and hence the results
are site specific. For very large arid catchments, transmission losses can be substantial, for
example Knighton and Nanson (1994) found transmission losses of 75% for a reach of the
Cooper Creek. However, for most design flood applications the channel losses will not be
significant and can be combined with other processes that are implicitly covered by lumped
conceptual models.
Urban catchments may also be subject to transmission losses. While transmission losses are generally less pronounced in urban catchments than in rural catchments, they can occur along the drainage system. These may result from leakage through ageing infrastructure, particularly where the soil around the drainage system is highly permeable.
• Empirical Models - designed to ensure depths of direct runoff and rainfall excess are in
equilibrium. These types of models have minimal factors that would influence the values
for an individual catchment.
• Process Models - attempt to represent the complex behaviour of losses within the
catchment, and consider flow through the soil layers and over the catchment surface.
Given their complexities, process models have a large number of parameters that makes
them difficult to apply to estimate design floods. In Australia, there is limited experience in
applying process models for design flood estimation and therefore they are not covered in
this section.
In many of these models, the initial loss occurs at the beginning of the storm, prior to the
commencement of surface runoff. It is assumed to be composed of interception losses,
depression storage and infiltration before the soil surface is saturated; a continuing loss rate
is then applied for the remainder of the storm. This model is consistent with the concept of
runoff produced by infiltration excess, ie runoff occurs when the rainfall intensity exceeds the
infiltration capacity of the soil.
These models apply the losses directly to the rainfall, subtracting from the rainfall itself, to
produce a rainfall excess that is subsequently applied to the hydrological model. This
concept of rainfall excess is important, as it does not consider the changing catchment
characteristics during the period of rainfall (compared with Simple and Process models).
There is typically a wide range of initial loss values observed for a catchment (Rahman et al.,
2002; Phillips et al, 2014; Hill et al., 2014a). This variability reflects the importance of
antecedent conditions but uncertainties in the estimation of the timing and distribution of the
catchment average rainfall also contribute to the range of values. This potential variability in
the initial loss value is an important consideration, particularly in application of historical
storms to hydrological models.
Constant loss rates are most applicable to large storm events, where a significant proportion of rainfall
becomes runoff. Figure 5.3.2 provides an example of the application of a typical Initial Loss –
Continuing Loss model.
Despite the simple conceptual nature of the IL/CL model, there are a number of challenges
in estimating continuing loss directly from recorded streamflow and rainfall.
The continuing loss rate should not be estimated simply from a water balance, ie the total loss (rainfall less runoff) minus the initial loss, divided by the duration of the event. This will underestimate the loss rate; as illustrated in Figure 5.3.2, there will likely be timesteps in which the rainfall is less than the continuing loss rate (or even zero) and hence the full loss is not taken up.
Although not immediately apparent, the definition of CL also means that its magnitude is
dependent on the timestep used in the analysis. This is because as the timestep reduces,
there is an increased likelihood that there will be some timesteps in which the rainfall depth
is less than the CL rate. Thus, to achieve the same volume of rainfall excess, the CL will
typically need to be increased for shorter timesteps. This is discussed further in Book 5,
Chapter 3, Section 7.
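The dependence of the continuing loss on the analysis timestep can be illustrated with a short sketch. The hyetograph and loss values below are hypothetical; the sketch simply applies an initial loss followed by a constant continuing loss rate at each timestep, and then repeats the calculation with the same rainfall aggregated to a longer timestep.

```python
# Sketch of an Initial Loss - Continuing Loss (IL/CL) model applied to a hyetograph.
# It illustrates why the effective CL depends on the timestep: at short timesteps the
# rainfall often falls below the CL rate, so the full loss cannot be taken up.
# All rainfall depths and loss values are hypothetical.

def rainfall_excess(rain_mm, dt_hours, il_mm, cl_mm_per_h):
    """Return the rainfall excess series after applying IL (mm) and CL (mm/h)."""
    excess = []
    il_remaining = il_mm
    cl_per_step = cl_mm_per_h * dt_hours
    for depth in rain_mm:
        taken = min(depth, il_remaining)  # satisfy the initial loss first
        il_remaining -= taken
        depth -= taken
        excess.append(max(0.0, depth - cl_per_step))  # then apply the continuing loss
    return excess

if __name__ == "__main__":
    rain_30min = [0.5, 2.0, 6.0, 12.0, 4.0, 1.0, 0.0, 3.0]  # mm per 30 minutes
    rain_1h = [rain_30min[i] + rain_30min[i + 1] for i in range(0, len(rain_30min), 2)]

    ex_30 = rainfall_excess(rain_30min, dt_hours=0.5, il_mm=10.0, cl_mm_per_h=2.5)
    ex_60 = rainfall_excess(rain_1h, dt_hours=1.0, il_mm=10.0, cl_mm_per_h=2.5)

    # The finer timestep yields more excess for the same CL rate, so the CL value
    # would need to be increased at the shorter timestep to preserve the excess volume.
    print(f"Excess at 30 min step: {sum(ex_30):.2f} mm")
    print(f"Excess at  1 h  step: {sum(ex_60):.2f} mm")
```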
Proportional loss models are consistent with runoff being generated by saturated overland
flow. This assumes that runoff is generated from saturated portions of the catchment; this contributing area is expected to increase with the duration and severity of the storm (Mein and O'Loughlin, 1991; Mag and Mein, 1994).
For the IL/CL model this would suggest that the continuing loss should decrease as the
event progresses and such a reduction with duration (as a surrogate for volume of
infiltration) is observed from the empirical analysis of data by Ishak and Rahman (2006) and
Ilahee and Imteaz (2009). The variation of continuing loss with event duration is discussed
further in Book 5, Chapter 3, Section 7.
A number of studies such as Eastgate et al. (1979) and Rajendran et al. (1982) have applied
the SCS to Australian soils. There is however a lack of information on how Australian soils
are classified in the SCS hydrologic groups, which limits its application in Australia.
• Stochastic variation of initial water storage status between events (different antecedent
conditions); or
These models are run in a continuous or semi-continuous fashion (updated during an event)
and therefore can explicitly account for antecedent conditions, as well as for variation within
an event.
These models are based on the assumption that the catchment consists of many individual
storage elements with a soil moisture capacity. The depth of water in each element
increases with rainfall and decreases with evaporation. When rainfall exceeds the storage
capacity, direct runoff is produced. The model assumes that the soil moisture is redistributed
between the elements between rainfall events.
The simplest form assumes a linear distribution of soil moisture in the catchment, from zero
to its maximum capacity. This form of probability distributed model is incorporated in the ReFH model used in the UK. However, this approach assumes that a portion of the catchment has zero storage capacity and hence there is no initial loss. Many catchments in arid and semi-arid areas exhibit a significant initial loss and therefore the conceptual model is extended such that the capacity varies between a minimum and a maximum value across the catchment. The simpler models assume that the capacities vary linearly, while other models have introduced a shape parameter to describe the form of the variation in capacity.
3.2.3.6. SWMOD
SWMOD (Soil Water balance MODel) was developed by Stokes (1989) for the Northern Jarrah
forest of Western Australia, where saturation excess overland flow is held to be the dominant
runoff mechanism for storm events (Water and Rivers Commission, 2003). The model
incorporates the ability of different landforms in the catchment to store water during the
storm event. When the accumulated rainfall is greater than its infiltration capacity, the sub-
catchment will generate saturation-excess overland flow. Infiltration capacity is assumed to
vary within an area due only to soil depth.
F = Fmax − (Fmax − Fmin) × (1 − a)^(1/b) (5.3.1)
Where:
F is the infiltration (storage) capacity (mm) that is not exceeded over a proportion a of the sub-catchment area;
Fmax and Fmin are the maximum and minimum infiltration capacities (mm) within the sub-catchment; and
b is a shape parameter describing how the capacity varies with area.
The infiltration capacity is taken to mean the maximum depth of water that can be stored in
the soil column. Where the accumulated rainfall is greater than the infiltration capacity, that fraction of the sub-catchment will have saturation-excess overland flow.
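A minimal sketch of how Equation (5.3.1) can be applied is given below: inverting the capacity distribution gives the fraction of the sub-catchment whose storage capacity is exceeded by the accumulated rainfall, and hence the fraction generating saturation-excess overland flow. The capacity and rainfall values are hypothetical.

```python
# Sketch of the capacity distribution in Equation (5.3.1):
#   F(a) = Fmax - (Fmax - Fmin) * (1 - a)**(1/b)
# where a is the proportion of the area with capacity <= F(a). Inverting for a gives
# the saturated (runoff-producing) fraction once accumulated rainfall P exceeds F.
# All parameter values are hypothetical.

def capacity(a, f_min, f_max, b):
    """Infiltration capacity (mm) not exceeded over a fraction a of the area."""
    return f_max - (f_max - f_min) * (1.0 - a) ** (1.0 / b)

def saturated_fraction(p_mm, f_min, f_max, b):
    """Fraction of the area generating saturation-excess flow for accumulated rainfall P."""
    if p_mm <= f_min:
        return 0.0
    if p_mm >= f_max:
        return 1.0
    # Invert Equation (5.3.1) with F(a) set equal to the accumulated rainfall
    return 1.0 - ((f_max - p_mm) / (f_max - f_min)) ** b

if __name__ == "__main__":
    f_min, f_max, b = 20.0, 120.0, 2.0  # mm, mm, shape parameter (hypothetical)
    for p in (10.0, 40.0, 80.0, 130.0):
        print(f"Accumulated rainfall {p:5.1f} mm -> saturated fraction "
              f"{saturated_fraction(p, f_min, f_max, b):.2f}")
```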
Large infiltration ponds (10 to 15 m²) were used in conjunction with a ring infiltrometer and
a well permeameter to determine the infiltration characteristics of a complex lateritic soil
profile in the Jarrah forest of Western Australia (Ruprecht and Schofield, 1993). The logs
from the construction of observation bores were able to characterise the shape of the b parameter. Hill et al. (2014b) outlines the application of SWMOD to 38 catchments across
Australia.
Various approximations of the complex equations for water movement in the soil (such as the Philip, Green-Ampt and Horton models) are used to express the reduction of infiltration capacity with time (Maidment, 1992).
Shukla et al. (2003) analysed ten infiltration models including Green-Ampt and Horton’s
models, using double-ring infiltrometer tests and reported that Horton’s model gave the best
results for most land use conditions.
Siriwardene et al. (2003) undertook field infiltrometer tests at 21 sites in eight Victorian urban
catchments in order to estimate the infiltration parameters related to Horton’s infiltration
model. They acknowledge the difficulty in selecting representative values for the infiltration
parameters because of the 'significant variability with respect to soil type and land use in the
catchment'.
Mein and Goyen (1988) note that despite the obvious attraction of using Simple Models, 'the
problem is to specify parameters (which relate to soil type) and initial conditions which are
satisfactory for design use on a given catchment. In practice, the uncertainties of soil
behaviour and the areal variability of soil properties do not justify the use of anything more
than the simplest model'.
ft = fc + (f0 − fc) × e^(−kt) (5.3.2)
where:
ft is the infiltration capacity (mm/h) at time t;
f0 is the initial infiltration capacity (mm/h);
fc is the final (equilibrium) infiltration capacity (mm/h); and
k is a decay coefficient (1/h).
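A short sketch of Equation (5.3.2) is given below; the parameter values are hypothetical and are used only to show the exponential decline of infiltration capacity from f0 towards fc.

```python
# Sketch of Horton's infiltration equation (5.3.2): f(t) = fc + (f0 - fc) * exp(-k*t).
# Parameter values are hypothetical illustrations only.
import math

def horton_infiltration(t_hours, f0, fc, k):
    """Infiltration capacity (mm/h) at time t (hours)."""
    return fc + (f0 - fc) * math.exp(-k * t_hours)

if __name__ == "__main__":
    f0, fc, k = 60.0, 8.0, 1.5  # mm/h, mm/h, 1/h
    for t in (0.0, 0.5, 1.0, 2.0, 4.0):
        print(f"t = {t:3.1f} h  f = {horton_infiltration(t, f0, fc, k):5.1f} mm/h")
```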
3.2.4.2. Green-Ampt
The most commonly used approximate theory-based infiltration model is that developed by Green and Ampt (1911), which applies Darcy’s law to an assumed sharp wetting front and is further discussed in Mein and Larson (1973), Chu (1978), Lee and Lim (1995) and King (2000).
William (1994) describes a pilot study of nine Victorian catchments to determine whether application of the Green-Ampt equation provides superior results to simplified models when applied at the catchment scale. Although the Green-Ampt equation was successfully
applied to each catchment, the results produced were not on average superior to those
produced using the simple empirical models.
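A minimal sketch of the Green-Ampt relationship is given below: the infiltration rate depends on the cumulative infiltrated depth through f = Ks(1 + ψΔθ/F). The soil parameters are hypothetical, ponded conditions are assumed throughout, and the cumulative depth is advanced with a simple explicit step rather than the implicit solution normally used in practice.

```python
# Sketch of the Green-Ampt infiltration model: f = Ks * (1 + psi * d_theta / F),
# where F is cumulative infiltration (mm). The cumulative depth is advanced with a
# simple explicit step and ponding is assumed throughout.
# Soil parameters are hypothetical illustrations only.

def green_ampt_rates(ks, psi, d_theta, dt_hours, n_steps):
    """Return (time, infiltration rate) pairs under continuous ponding."""
    results = []
    cum_f = ks * dt_hours  # small initial cumulative depth (mm) to start the recursion
    for step in range(1, n_steps + 1):
        rate = ks * (1.0 + psi * d_theta / cum_f)  # mm/h
        cum_f += rate * dt_hours
        results.append((step * dt_hours, rate))
    return results

if __name__ == "__main__":
    ks = 10.0      # saturated hydraulic conductivity (mm/h)
    psi = 110.0    # wetting front suction head (mm)
    d_theta = 0.3  # moisture deficit (dimensionless)
    for t, f in green_ampt_rates(ks, psi, d_theta, dt_hours=0.25, n_steps=8):
        print(f"t = {t:4.2f} h  infiltration rate = {f:6.1f} mm/h")
```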
The ARBM is structured to represent the passage of water over and through the catchment,
as illustrated in Figure 5.3.4. It is based on Chapman’s work (Chapman, 1968; Chapman,
1970), which originally sought to optimise certain parameters, while measuring others.
However, developers Boyd et al. (1993) began optimising all parameters as it was believed
that measurements were difficult, uncertain, costly and impractical (Mein and McMahon,
1982).
A feature of the ARBM is its ability to estimate the antecedent moisture conditions for each wetting event. This is done by simulating soil moisture depletion by
evaporation between rainfall events (Fleming, 1974). It is expected that the parameters used
would be related to physical catchment characteristics, therefore making the model
applicable to any Australian gauged or ungauged catchment.
In practice, the optimised parameters have not proved to be unique. Mein and McMahon (1982), however, do not believe that this particular model produces outcomes that are any different from other process models developed for the same purpose.
Figure 5.3.4. The model structure of the Australian Representative Basins model based on
the work of Chapman (1968). Diagram obtained from Black and Aitken (1977) and Mein and
McMahon (1982).
These models typically estimate the losses from rainfall and the generation of streamflow by
simulating the wetting and drying of the catchment on a daily, hourly and occasionally sub-hourly basis. Continuous simulation eliminates the need to select representative values of loss, since the loss is explicitly included in the modelling. The focus of loss conceptualisation in continuous rainfall-runoff simulation models is therefore less on the detailed representation of the loss process and more on representing its effect on the production of floods.
The majority of continuous simulation applications are for flood forecasting, rather than for
design flood estimation. The development of stochastic rainfall generation techniques has
encouraged their application for design flood estimation (Boughton et al., 1999; Kuczera et
al., 2006).
Thus, the key objectives of loss models and their parameterisation for design flood
estimation are to:
• Close the volume balance in a probabilistic sense such that the volume of the design flood
hydrograph for a given AEP should match the flood volume derived from frequency
analysis of flood volumes;
• Produce a realistic time distribution of runoff to allow the modelling of peak flow and
hydrograph shape;
• Reflect the effects of natural variability of runoff production for different events on the
same catchment, to avoid probability bias in flood estimates; and
• Reflect the variation of runoff production with different catchment characteristics to enable
application to ungauged catchments.
As discussed in Book 5, Chapter 3, Section 2, there is a range of conceptual loss models, derived with different conceptualisations and varying degrees of complexity, from simple lumped rainfall excess models to more detailed process models.
The above objectives are important when selecting loss models for design flood estimation.
It is therefore helpful to consider the following criteria when selecting a loss model for design
flood estimation:
• The model produces a temporal distribution of rainfall-excess that is consistent with the
effect of the processes contributing to loss;
• Suitable for extrapolation beyond calibration and is hence applicable to estimate floods
over the required range of AEPs;
• Small number of parameters that need to be selected (preferably no more than 2);
Consideration of such issues has typically resulted in the adoption of simple rainfall excess models.
Dyer et al. (1994) compared the performance of the constant continuing loss and
proportional loss models for 24 catchments using RORB and found that the proportional loss
model resulted in generally improved calibrations. This finding was supported by Hill et al.
(1996), who calibrated RORB models for 11 Victorian catchments. However, analyses undertaken by Phillips et al (2014) and Hill et al. (2014a) were inconclusive as to which model is best. Even for catchments where one of the loss
models was preferred for a majority of events, there were some events for which the
alternate model was preferred. Similarly, there was no obvious relationship between the
preference for a particular model and hydroclimatic or catchment characteristics that could
explain the preference for a particular approach.
A number of Australian studies have demonstrated that the IL/CL model is suitable for
design flood estimation, in that it can be used to derive design flood estimates over a range of AEPs. However, it is often difficult to derive unbiased estimates of floods using the
IL/PL model over a range of AEPs. Specifically, the IL/PL model has the potential to
underestimate peak flows for events rarer than used in the derivation of the values; this
suggests that PL should vary with the AEP of the event.
Furthermore, studies that have analysed a large number of events and catchments such as
Phillips et al (2014) and Hill et al. (2014a) have found that there can be a large variation in
PL values, which makes it difficult to recommend a representative value for design (refer Book 5, Chapter 3, Section 5).
Given the difficulties in characterising how PL should vary with AEP, it is considered that the
IL/CL model is the most suitable of these simple rainfall excess models for design flood
estimation for both rural and urban catchments. Probability distributed loss models such as
SWMOD demonstrate promise and should also be considered for rural catchments where
there is reliable and consistent description of hydraulic properties of soils.
If alternate loss models are to be adopted then they should be evaluated against the above
criteria. An important consideration is how the loss model performs when extrapolated to
events outside of the range of events used in deriving the loss values.
The conceptual difference between the initial loss for a rainfall burst (ILb) and for a storm
(ILs) is illustrated in Figure 5.3.5. The initial loss for the storm is assumed to be the depth of
rainfall prior to the commencement of surface runoff. The initial loss for the burst however, is
the portion of the storm initial loss, which occurs within the burst.
If pre-burst rainfalls are included, then the design rainfalls will represent (near) complete
design storms and therefore the storm losses can be directly applied without adjustment.
The design values recommended in Book 5, Chapter 3, Section 5 are intended for
application with complete storms and therefore require the pre-burst depths to be included.
However, if design bursts, rather than complete storms, are used in design then the burst
initial loss needs to be reduced to account for the pre-burst rainfall. For the same reason, the
initial moisture content for storage capacity models (such as Horton and SWMOD) needs to
be increased to account for this pre-burst rainfall.
This has implications for all design flood situations, but is particularly important for design
situations where the outcome is sensitive to the flood volume, such as the design of
retarding basins (Rigby and Bannigan, 1996). The failure to recognise the rainfall prior to
design rainfall bursts has the potential to significantly underestimate the design flood.
The events to be analysed should be selected carefully to ensure that the sample of events
is not biased. The selection of high runoff events for loss derivation is likely to be biased
towards wet antecedent conditions (ie. losses tend to be too low). Ideally, events should be
selected on the basis of rainfall to remove this bias. However the selection and analysis of
events by rainfall is problematic because it requires consideration of a representative
duration of the rainfall and there may be little or no runoff generated from some intense
bursts of rainfall if the antecedent conditions are dry.
The main limitation of deriving losses directly from the analysis of recorded data is that they
may not be compatible with the other design inputs and hence suitable for design flood
estimation. That is, although the loss values may reflect the loss response observed for a
number of events on the catchment, this does not guarantee that their application with other
design inputs results in unbiased estimates of floods. For this reason, it is also desirable to
reconcile design values with independent flood frequency estimates where possible (refer
Book 5, Chapter 3, Section 3).
Loss values have been estimated for a large number of urban (Phillips et al, 2014) and rural
catchments (Hill et al., 2014a). These two studies represent the most comprehensive
regional studies of losses covering Australia and hence the recommended loss values
summarised in Book 5, Chapter 3, Section 5 are largely based upon these studies.
As with the estimation of losses from a single site, their application with other design inputs
does not guarantee unbiased estimates of floods and for this reason, it is therefore also
desirable to reconcile design values with independent flood frequency estimates where
possible (refer Book 5, Chapter 3, Section 3).
The fundamental limitation of this approach is that all the uncertainty in each of the
design inputs (e.g. IFD, ARF, temporal patterns, spatial patterns), modelling (model
conceptualisation and parameterisation) and the flood frequency analysis (e.g. rating curve,
choice and fitting of the distribution) is reflected in the resulting loss values. The loss simply
becomes an error term to compensate for all of the uncertainty and biases in all other inputs.
It is therefore not surprising that the values derived from such an approach (e.g. Walsh et al.
(1991); Flavell and Belstead (1986)) typically display a large range and relating such values
to physical catchment characteristics (that should influence infiltration and interception) has
proven intractable.
A further limitation of such an approach is that the resulting loss values are a function of the
design flood estimation method itself and are therefore only suitable for application with the
same set of inputs. For example, if new or alternate information is available on any of the
inputs such as IFD, ARF then the analysis needs to be repeated. Thus, the values only work
with a single combination of design inputs.
• Difficulty in selecting an unbiased sample of events
• Different combinations of loss values can result in the same flood estimates but have different impacts when applied outside of the magnitude used for reconciliation
If there is a long-term stationary streamflow record at the site, reconciliation of design values
(Option 3) is preferable but if the distribution of loss values is required this will typically need
to be inferred from previous studies (Option 2). In the majority of cases, there will be insufficient streamflow data available at the site and therefore a combination of regional information
(Option 2) and reconciliation of design values with regional flood frequency estimates
(Option 3) will typically be the most appropriate approach.
For urban catchments it is more difficult to obtain independent flood frequency estimates and
therefore values will often need to be inferred from at-site data (Option 1) or values obtained
from regional information (Option 2).
• impervious areas (e.g. roofs and paved areas) which are directly connected to the
drainage system – referred to as Direct Connected Impervious Areas (DCIA).
• impervious areas which are not directly connected, runoff from which flows over
pervious surfaces before reaching the drainage system (e.g. a roof that discharges onto
a lawn) – referred to as Indirectly Connected Impervious Areas (ICIA).
• Pervious areas that interact with Indirectly Connected Impervious Areas, such as nature
strips, garden areas next to paved patios, etc.
• Pervious areas consisting of parklands and bushland that do not interact with impervious
areas.
Figure 5.3.6. Example of a Directly Connected Impervious Surface (Left) and an Indirectly
Connected Impervious Surface (Right)
Traditionally, the Total Impervious Area (TIA) has been used to represent the impervious area generating runoff. This is despite research dating back to the 1970s identifying the importance of the Effective Impervious Area (EIA) over the TIA (refer to Cherkaver (1975); Beard and Shin (1979)).
Use of the TIA, which includes impervious areas with no direct connection to the drainage
network, can result in the overestimation of urban runoff volumes and peak flows. Although
definitions vary, the EIA is generally considered to be representative of the area of the
catchment that generates a rapid runoff response in rainfall events. It incorporates the
impervious area with a hydraulic connection to the drainage network (DCIA), plus a
contribution comprising discharges from an impervious area onto a pervious area (ICIA),
which rapidly saturates and acts in a similar manner to an impervious area. The EIA
therefore provides a more realistic measure of the impervious area that generates runoff at
the catchment outlet.
It is noted that in Phillips et al (2014), the Indirectly Connected Area incorporated all
residential components of the catchment outside of the EIA. Only areas such as large
parklands, bushland areas etc were separated out of the analysis. This followed a
detailed review of the behaviour, identifying only two discernible responses from within the
urban components of a catchment. This approach is recommended.
Figure 5.3.7. Schematic of Rainfall Depth v Runoff, from Boyd et al. (1993)
Figure 5.3.8. Cumulative Discharge Plot from Giralang (ACT), showing Cumulative Rainfall,
Observed Runoff and Estimated EIA runoff (Phillips et al, 2014)
• Regression analysis of streamflow and rainfall records, where sufficient data exists;
This method compares flow records with rainfall records from a representative rain gauge located within or very near the catchment. The key to this method is isolating the runoff that
occurs only from the EIA, and not from the other impervious and pervious areas. A method
for doing this is detailed in Phillips et al (2014), with an overview of the general approach
provided in Figure 5.3.9. An example of the output of this kind of analysis is provided in
Figure 5.3.10.
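The essence of the regression approach can be sketched as follows: for small events in which only the effective impervious area responds, event runoff depth (expressed over the whole catchment) varies approximately linearly with event rainfall depth, and the slope of a fitted line approximates the EIA as a fraction of the catchment area. The event data and catchment area below are hypothetical.

```python
# Sketch of estimating EIA from a rainfall-runoff regression: for small events where
# only the effective impervious area responds, catchment-average runoff depth is roughly
# linear in rainfall depth and the slope approximates EIA / total area.
# The event data and catchment area are hypothetical.

def least_squares(x, y):
    """Ordinary least squares slope and intercept."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

if __name__ == "__main__":
    rainfall_mm = [4.0, 7.5, 10.0, 14.0, 18.0, 22.0]  # event rainfall depths
    runoff_mm = [0.9, 2.1, 2.9, 4.5, 5.8, 7.4]        # catchment-average runoff depths
    slope, intercept = least_squares(rainfall_mm, runoff_mm)
    catchment_area_km2 = 3.2
    print(f"Slope (EIA as a fraction of catchment area): {slope:.2f}")
    print(f"Approximate initial loss on the EIA: {-intercept / slope:.1f} mm")
    print(f"Estimated EIA: {slope * catchment_area_km2:.2f} km^2")
```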
As identified in Phillips et al (2014), the key requirements for this method are:
• A sufficient gauging (both rainfall and flow) record to undertake the analysis. Phillips et al
(2014) adopted a 10 year record as a minimum, although the technique that was applied
for the EIA estimation could potentially be used for much shorter records. Where shorter
records are adopted, the data should also be checked to ensure that there is a sufficient range of rainfall event magnitudes to create a reasonable regression.
• The catchment must have an acceptable gauge rating. Further details on this are
discussed in Book 3, Chapter 2. It is noted that because of the technique to isolate EIA
events, very large events are generally excluded due to the presence of pervious area
runoff. In the absence of any detailed studies on this, a gauge that has an acceptable
rating up to around an AEP of 20% may provide a reasonable representation, as long as
all events above the acceptable flow level of the gauge are excluded. It is important that
engineering judgement is applied in reviewing the suitability of the data set.
• A relatively small catchment area. A catchment area of 5 km² was used as a target
catchment area for the Phillips et al (2014) analysis, although there is no strict guide as to
what is appropriate. Larger catchment areas result in a number of potential issues:
• Greater difficulty in isolating the EIA runoff due to longer lag periods from upper
catchment areas;
• More potential for baseflow which will need to be excluded from the analysis;
• Spatial variation of rainfall becomes more important, and therefore more gauges should
potentially be used in the analysis. This will require spatial averaging techniques for
rainfall, as discussed in Book 2, Chapter 4.
• A relatively stationary upstream catchment during the period of record (ie. minimal
changes in land-use, development intensity etc.).
Figure 5.3.10. Example Regression Analysis for Albany Drain Catchment in Western
Australia
The EIA/ TIA ratio has been found in a number of studies (e.g. Phillips et al (2014); Ball and
Powell (1998); Boyd et al. (1993)) to be a good indicator, removing the variability of total
imperviousness to create a measure that can be extrapolated to other catchments.
Australian Estimates
This study identified that EIA is typically 55 to 65% of the TIA for most of the catchments
identified. This range was recommended in the study for estimating the EIA for most
Australian catchments. Based on a sensitivity analysis undertaken within Phillips et al (2014)
of some of the key assumptions, the estimates of EIA are expected to fall within +/- 5% to
10% of this estimated range.
It is noted that one catchment from the ACT was identified to have a higher ratio of 74% to
80%. It was theorised that this is likely due to the higher degree of connected surfaces (as
discussed in Goyen (2000)), although there were insufficient additional catchments to
confirm this hypothesis.
The Phillips et al (2014) study also estimated the DCIA using GIS methods, which primarily
included road, roof and driveways (where these driveways drained to the street). It is noted
that the road and roof area represents the majority of this area. The analysis suggested that
the EIA was roughly in the range of 70 to 80% of this area, suggesting that not all of the roof
and road area contributed to runoff.
In order to derive the estimates of EIA/ TIA, Phillips et al (2014) used detailed mapping of
different land-uses and aerial photography to estimate the TIA. This was undertaken by
taking representative areas within the catchments, detailing the impervious areas, and then
extrapolating this to the wider catchment based on land-use mapping which was also
derived from aerial photography.
Ball and Powell (1998) estimated the EIA for Powells Creek in NSW (also analysed by the
Phillips et al (2014) research). The analysis of rainfall and runoff data was undertaken by
also comparing the antecedent moisture (AMC) in the catchment, based on rainfall in the
days leading up to the storm event (refer Book 5, Chapter 3, Section 4). The analysis
showed an EIA percentage of the total catchment area (not the TIA) ranging from 35% to
44% depending on the AMC. Based on the TIA estimated in Table 5.3.2 for the same
catchment, this represents an EIA/ TIA of around 53% to 67%, depending on the AMC. This
range is very similar to that found under Phillips et al (2014), both for Powells Creek as well
as for the wider catchments analysed. Similarly, Chiew and McMahon (1999) undertook an
analysis of Powells Creek and found an EIA to total catchment area of around 40%.
Zaman and Ball (1994) undertook a study on Salt Pan Creek in NSW (Southern Sydney).
This study estimated the EIA using two alternative methods. The first looked at estimates
using orthophoto maps and applying estimates of the EIA to different land-uses. The second
analysed rainfall and runoff records to estimate the EIA. Both methods estimated an
approximate EIA to Total Area of 39% to 41% of the total catchment area. However, it is
noted in the description of this catchment that there are open space areas, making it difficult
to directly compare this to other studies. Also, there was no clear description of the TIA to be
able to provide a comparative EIA/TIA ratio. Chiew and McMahon (1999) by comparison
estimated an EIA to Total Area of 27% for this catchment.
Boyd et al. (1993) analysed 26 catchments, 9 located within Australia (Sydney, Canberra
and Melbourne), while the others were international (USA, Canada, UK, Japan and a
number of European countries). This study undertook a similar analysis to that of Phillips et
al (2014). A regression analysis was undertaken on the EIA/Total Area versus the TIA/Total
Area and estimated an EIA/TIA ratio of around 74%. A summary of the results is
provided in Book 5, Chapter 3, Section 8.
One key thing to note from this study is that there were a number of catchments where the
EIA was identified as being greater than the TIA. This is unlikely to be the case in reality,
unless the pervious areas have little infiltration. This was not discussed in the paper, but it is assumed that the TIA estimated from aerial photography may not have been
appropriate. The data from this study was re-analysed where EIA > TIA catchments are
excluded from the analysis, with the results shown in Figure 5.3.12 (the Australian
catchments are circled for reference). This was also mapped against Urban Area (ie
excluding large pervious areas like bushland and parks), rather than Total Area, to normalise
it with the Phillips et al (2014) study and exclude the effects of bushland etc. in the
catchments. The re-analysed EIA/TIA ratio is 71%, which is closer to the 55% to 65% range identified in Phillips et al (2014).
Dayaratne (2000), which is also referenced in O'Loughlin and Stack (2014), obtained
relationships with housing density from modelling storms on 16 gauged residential
catchments in four Victorian municipalities:
It is important to note that this study was based on a range of 7 to 14 houses per hectare.
Beyond this range, the equation has significant limitations, as demonstrated in Figure 5.3.11
(ie DCIA reduces with increasing households per hectare for households greater than
around 15 per hectare. Also, DCIA reduces below 0% for less than 5 households per
hectare). The Phillips et al (2014) results are also shown on this graph for reference.
A further review of just the Australian data from the Boyd et al. (1993) study was compared
with that of the Phillips et al (2014) study. The results of this review are shown in
Figure 5.3.13 (and removing those catchments which overlap in both studies). The EIA/
Urban Area from this analysis is estimated to be around 60%.
Figure 5.3.13. Australian data from Phillips et al (2014) and Boyd et al. (1993)
International Estimates
The research by Alley and Veenhuis (1983) shows close correlation with the Australian
studies for residential catchments (between 53% and 77%, refer Table 5.3.4). This study also
incorporated commercial and industrial land-uses, indicating 94% and 77% respectively. As
many of the studies have been undertaken in residential (or predominantly residential)
areas, this provides a useful comparison for commercial and industrial land-uses.
Alley and Veenhuis (1983) also derived a relationship between the TIA/TA ratio and the EIA/TA ratio based on all 19 catchments analysed. This equation is as follows:
EIA/TA = 0.15 × (TIA/TA)^1.41 (5.3.5)
It is noted that when applying this relationship both the TIA/TA and EIA/TA should be expressed as percentages (ie the fractional ratio multiplied by 100).
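A short worked example of Equation (5.3.5) is given below; the TIA/TA value is hypothetical.

```python
# Worked example of Equation (5.3.5): EIA/TA = 0.15 * (TIA/TA)**1.41,
# with both ratios expressed as percentages. The TIA/TA value is hypothetical.

def eia_ta_percent(tia_ta_percent):
    return 0.15 * tia_ta_percent ** 1.41

if __name__ == "__main__":
    tia_ta = 40.0  # total impervious area as a percentage of total area
    eia_ta = eia_ta_percent(tia_ta)
    print(f"TIA/TA = {tia_ta:.0f}%  ->  EIA/TA = {eia_ta:.1f}%  "
          f"(implied EIA/TIA = {eia_ta / tia_ta:.2f})")
```

For a TIA/TA of 40%, the relationship gives an EIA/TA of about 27%, corresponding to an EIA/TIA ratio of roughly 0.68, which sits within the residential range reported above.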
In the Boyd et al. (1993) research, 23 of the 26 catchments were predominantly residential
(although some had low rise apartments). Non-residential catchments included Sample
Road (highway, industrial), Fort Lauderdale (large shopping mall) and Vika (city centre). Both
Sample Road and Fort Lauderdale resulted in much higher EIA/TIA, with 74% and 98%
respectively. With Fort Lauderdale being nearly completely impervious, this high value of
EIA/ TIA would seem reasonable.
Pompano Creek (USA) in the Boyd et al. (1993) data was identified as having limited pipe
infrastructure, and where water flowed through grass swales prior to reaching this
infrastructure. This may provide some basis for suggesting greater infiltration through WSUD
style features, although there is insufficient data to draw any detailed conclusions.
A compilation of the available data (where individual catchment data is available) is provided
in Figure 5.3.14. Both the Alley and Veenhuis (1983) (the equation for which was derived
from data from Denver USA) and the USA/ Canada data from Boyd et al. (1993) generally
align, and are generally higher than the European and Australian data. The reason for the
difference is unknown, and may come down to variations in methods as well as catchment
characteristics and rainfall patterns.
Table 5.3.4. EIA Results of Alley and Veenhuis (1983) for 19 Catchments in Denver, as
summarised in Shuster et al. (2005)
EIA/TIA
Based on the international literature, and in the absence of any local streamflow and rainfall
data, an EIA/ TIA ratio of 50% to 70% would appear to be appropriate for the large majority
of urban catchments. Most values from the recent Phillips et al (2014) study fit within a more
refined range of 55% to 65%, and this range could be used if the catchments are similar to
those described in Phillips et al (2014) (primarily single lot residential).
• Whether the roof areas are connected to the stormwater infrastructure. Where this is not
the case, it is likely to be on the lower end of the range; and
• If the drainage infrastructure is piped or whether WSUD features (eg swales) are adopted.
Some international studies would suggest that large lengths of drainage swales (rather
than pipes) result in a lowering of EIA/TIA, although there is insufficient data to
adequately characterise this effect.
The following additional points should be noted in relation to the recommended range of
values:
• This range has been adopted primarily from residential catchments, and generally
catchments with TIA/ total area of between 30% to 70%. There are no Australian
catchments in the literature identified with percentage impervious greater than 70%;
• In one situation in Phillips et al (2014), a catchment in Canberra exhibited EIA/ TIA in the
74% to 80% range. It is possible that this catchment had a higher proportion of
“connected” areas, although there is insufficient data to explore this further; and
• Results from US based catchments suggest that for highly impervious industrial and
commercial areas, there is a higher level of connectivity, resulting in a much higher EIA/
TIA. In one catchment, nearly 100% EIA/ TIA was observed (although the total
imperviousness was also around 100%). However, this was not observed in data from
Europe, and the US data does not appear to correlate with European or Australian results.
There are insufficient results from Australia with catchments with industrial/ commercial
land-uses and high total imperviousness to compare with the international studies.
However, it may be appropriate to adopt higher values for EIA/TIA for highly impervious
industrial, commercial as well as metropolitan areas (ie total imperviousness greater than
80%).
The above method estimates the EIA as a proportion of the TIA, and therefore needs a
reasonable estimate of the TIA. Most of the research undertaken as summarised in Book 5,
Chapter 3, Section 4 estimated the TIA based on a detailed analysis of aerial imagery. In
most applications, this will be the most appropriate method for estimating TIA. Further
discussion on the use of GIS and mapping methods to estimate TIA is provided in Book 5,
Chapter 3, Section 4.
• Level 1 – undertake mapping of land-use areas within a catchment, and use references to
derive the proportion of TIA based on these land-use types; and
The Level 2 analysis provides a higher level of certainty, but requires additional work in
undertaking the mapping.
2. Identify small representative areas within the catchment that represent the different land-
uses;
3. Undertake detailed mapping of these representative areas. Refer Figure 5.3.15 for an
example. Use this mapping to estimate the TIA for the sample area;
4. Apply the estimates from Step 3 to the land-use areas from Step 1, to identify the overall
TIA within the catchment; and
5. Using this TIA estimate, the EIA can be estimated from the recommended ratio of EIA/
TIA from Book 5, Chapter 3, Section 4.
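A minimal sketch of Steps 1 to 5 is given below; the land-use areas, sampled TIA fractions and the adopted EIA/TIA ratio are hypothetical values chosen only to illustrate the arithmetic.

```python
# Sketch of Steps 1-5: land-use areas (Step 1), TIA fractions from mapped sample areas
# (Step 3), catchment TIA (Step 4) and EIA from an adopted EIA/TIA ratio (Step 5).
# All areas, fractions and the adopted ratio are hypothetical.

land_use_areas_ha = {"residential": 180.0, "commercial": 25.0, "open space": 95.0}
sampled_tia_fraction = {"residential": 0.45, "commercial": 0.90, "open space": 0.05}

tia_ha = sum(area * sampled_tia_fraction[use] for use, area in land_use_areas_ha.items())
eia_ha = 0.60 * tia_ha  # adopting an EIA/TIA ratio of 60% (within the 50-70% range)

total_ha = sum(land_use_areas_ha.values())
print(f"TIA = {tia_ha:.0f} ha ({100 * tia_ha / total_ha:.0f}% of the catchment)")
print(f"EIA = {eia_ha:.0f} ha ({100 * eia_ha / total_ha:.0f}% of the catchment)")
```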
Figure 5.3.15. Example Sample Area Analysis for Residential and Commercial Land Use for
the Giralang Catchment (ACT)
Phillips et al (2014) showed that the large majority of what was estimated to be DCIA using
GIS mapping of the catchments analysed was road and roof area. The EIA calculated from the gauged rainfall and streamflow records was typically only 70 to 80% of this GIS-based DCIA estimate.
The outcomes of these studies effectively suggest that areas that are traditionally thought of
as DCIA (e.g. roof areas) are in fact not directly connected. This is likely due to a number of
factors, such as transmission loss (e.g. cracks in stormwater pipes), poorly connected roof
drainage, drainage swales along roads etc.
The result is that the use of GIS methods to estimate DCIA and subsequently EIA can be
problematic. In general, if GIS methods are to be used, then a reduction factor will be
required in order to convert the estimated DCIA to the EIA.
In the absence of other information or data, a range of 70% to 80% could be adopted in
converting the GIS DCIA estimate to EIA.
For the regression analysis identified in Book 5, Chapter 3, Section 4, it is important that changes in the catchment over time be taken into account. Ideally a stationary catchment (i.e. limited or no change in development density) over the period of the gauging record should be used to estimate the EIA.
For GIS methods, consideration should be made on changes to the catchment since the
date of the aerial photography. Where the aerial photography is older, it may no longer be
representative of the existing development. Alternatively, for analysis of historical flooding,
the aerial photography may not be representative of the development at that time.
The literature on estimating EIA/TIA ratios is based on catchments with reasonably long flow
gauge records (typically greater than 10 years), and with stationary catchment conditions
(i.e. limited new development). WSUD has increased in prevalence in design over the last 10
years or so. As a result, it is unlikely that any of these catchments in the research
incorporate a large proportion of WSUD features, and even if one or two did, there would be
insufficient data to establish any trends.
Therefore, while it is expected that WSUD may influence the estimate of EIA in catchments,
there is insufficient data at this time to fully understand the impact.
The recommendations in this chapter have been drawn largely from regional studies of loss
values undertaken by Phillips et al (2014) for urban catchments and Hill et al. (2014a), Hill et
al. (2015), and Hill et al. (2016) for rural catchments.
These studies concluded that the IL/CL model is typically the most suitable for design flood
estimation and hence the recommendations in this chapter relate to this model. If other loss
models are to be used for design then it is important that consideration is given to the
requirements discussed in Book 5, Chapter 3, Section 2.
The loss values recommended in this chapter are intended for application to complete
design storms. Thus the initial loss is denoted as ILs to indicate that it is applicable to a
complete storm. However, if design bursts, rather than complete storms, are used in design
then the burst initial loss needs to be reduced to account for the pre-burst rainfall (refer Book 5, Chapter 3, Section 2).
Initial attempts to derive prediction equations using all 35 catchments across Australia resulted in considerable uncertainty in the estimated loss values and therefore prediction equations were developed for different regions, which were based upon soil moisture characteristics from AWRA-L. Assessing regions on the basis of differences in soil moisture characteristics
provides a more logical basis for regionalisation than rainfall alone, as changes in soil
moisture reflect the combined influence of climate regime and catchment storage.
The hydrologic similarity was assessed on the basis of two measures representing the
seasonality and magnitude of variations in soil moisture. Regional differences in soil
moisture characteristics were determined using cluster analysis, and mapping of the
identified groups revealed that catchments allocated to the same group were located in
largely geographically contiguous regions.
Four regions were defined (refer Figure 5.3.16). Regions 1 and 3 represent the primary
summer- and winter-dominant regions, and region 4 largely represents catchments in the
south-west of Western Australia. Region 2 represents a more uniform climate: while the
region is very large, information is only available on catchment losses for a small eastern
portion of this region. The seasonality of average gridded soil moisture in each of the 4
regions is shown in Figure 5.3.17.
Figure 5.3.17. Seasonality of Average Gridded Soil Moisture in Each Defined Region (Using
Gridded Data)
Multi-linear regression was used to develop prediction equations for ILs and CL in each of
the four regions. Given the relatively small number of catchments in each region, the number
of independent variables was limited to a maximum of two. The resulting prediction
equations are:
Region 1
There are 7 catchments in Region 1 and the prediction equations of loss parameters are
displayed below. Initial loss is a function of maximum storage capacity of the shallow soil
layer while CL is a function of mean annual PET and surface soil hydraulic conductivity.
Where:
Region 2
There are 9 catchments in Region 2, most of which are located near the coast. The prediction
equations for this region are:
Where:
SOLPAWHC is the average plant available water holding capacity across the catchment (mm)
Region 3
There are 11 catchments in Region 3 and their loss parameters were estimated as follows:
Where:
s0_wtr is the soil moisture in the surface store in winter season (mm)
Region 4
There are 8 catchments in this region and the prediction equations are presented as follows:
Where:
SOLPAWHC is the average plant available water holding capacity across the catchment (mm)
The above equations were applied to the relevant regions in Australia using independent
variables derived for a grid size of 15 km x 15 km. Given the uncertainty in the prediction
equations and the desire to have smooth variations in loss across catchment areas, the
gridded values were smoothed using a window of 45 km x 45 km.
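The spatial smoothing described above amounts to a moving-average window over the gridded loss values: with a 15 km grid, a 45 km x 45 km window corresponds to averaging each cell with its immediate neighbours (a 3 x 3 window). The grid values in the sketch below are hypothetical.

```python
# Sketch of smoothing gridded loss values with a 45 km x 45 km moving window on a
# 15 km grid, ie a 3 x 3 cell average. The grid values are hypothetical.

def smooth_3x3(grid):
    """Return a 3 x 3 moving average of a 2D list, shrinking the window at the edges."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neighbours = [
                grid[r][c]
                for r in range(max(0, i - 1), min(rows, i + 2))
                for c in range(max(0, j - 1), min(cols, j + 2))
            ]
            out[i][j] = sum(neighbours) / len(neighbours)
    return out

if __name__ == "__main__":
    il_grid = [  # median ILs values (mm) on a 15 km grid, hypothetical
        [30, 32, 45, 50],
        [28, 35, 48, 55],
        [25, 30, 40, 52],
    ]
    for row in smooth_3x3(il_grid):
        print("  ".join(f"{v:5.1f}" for v in row))
```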
Based upon the range of values used in the derivation of the prediction equations, the
median loss values were constrained so that the ILs varied between 0 and 80 mm and the
CL between 0 and 10 mm/h.
The range of values for the independent variables and the loss values for the 35 catchments
used to derive the prediction equations is summarised in Table 5.3.5 and Table 5.3.6.
For areas with mean annual rainfalls less than 350 mm (shown in grey in both figures) there are no
recommendations for design loss information because the prediction equations were
developed using data from wetter catchments. Recommended loss values can be accessed
via the ARR Data Hub (Babister et al (2016), accessible at http://data.arr-software.org/).
It should be noted that the recommended values were derived based upon only 35
catchments and the standard error of the estimates ranges between 20% and 50%.
Because of the limited number of catchments available, the prediction equations are based
upon one or two independent variables. However, it is anticipated that a wide range of
characteristics combine to influence the loss values for a particular catchment and therefore
judgement is recommended when selecting suitable values for use in design. For example
for catchments with very dense vegetation, it would be expected that the loss values would
be higher. Similarly, steep catchments with little vegetation would be expected to have lower
loss values. Any such adjustment from the regional values should be done giving
consideration to the range of loss values obtained in Hill et al. (2014a) and other studies and
the implications on the design flood estimates.
Lastly, it is important to note that the recommended loss values in the above figure relate to
the median for a particular catchment. It is expected that the loss for any particular event
could lie well outside of this range. For many catchments, the storm initial loss for any
particular event could range from nearly zero, if the storm occurs on a wet catchment, to
more than 100 mm if there is little antecedent rainfall.
Kemp and Wright (2014) noted high loss rates for the Gammon Ranges in mid-north South
Australia. They attributed the high values to the runoff processes being dominated by the
large amounts of storage within the gravel bed of the tributaries and main streams which
absorbs a significant amount of runoff from the hillsides. Thus the initial loss represents loss
in the tributaries and main stream in addition to that occurring within the catchment. This
explanation is supported by observations in arid western New South Wales (Cordery et al,
1983).
Board et al. (1989) estimated losses for the Emily Creek and Todd River catchments in central Australia by calibrating a RORB model to 3 floods. The ILs varied from 10 to 60 mm and the CL varied from 1.5 to 4.5 mm/h. It should be noted that the events were selected as the largest floods and hence the sample is likely to be biased towards wet antecedent conditions, which indicates that the ILs values are likely to underestimate the median value.
The median loss values in the Pilbara are some of the highest in Australia. There was only 1
catchment from the Pilbara included in the Hill et al. (2014a) study but information is also
available from Pearcey et al. (2014) which documents the calibration of RORB models to 19
catchments in the Pilbara region. Although the events in Pearcey et al. (2014) were selected
on the basis of streamflow rather than rainfall (and hence potentially biased towards wet
antecedent conditions), they support high values of loss.
For the Pilbara, Flavell and Belstead (1986) recommended IL values of approximately 40 to 50 mm and a CL of 5 mm/h. It should be noted that the loss values were derived by reconciling rainfall-based estimates with flood frequency analysis and thus the IL reflects a burst initial loss. A higher initial loss would be expected if complete storms were adopted, so the range of IL reported by Flavell and Belstead (1986) should be considered a lower limit of the expected ILs values.
3.5.2.4. South-Western WA
The runoff characteristics of much of south-west Western Australia are different from those found in many other parts of Australia. The highly permeable soils and large soil water storages of the south-west landforms mean that the continuing loss rates tend to be high. The dominant contribution to runoff is believed to be saturated areas in the broad valley floors, which represent a relatively small proportion of the total catchment. For catchments where runoff is predominantly generated via this mechanism, a storage capacity loss model such as SWMOD should be applied to estimate the rainfall excess (refer Book 5, Chapter 3, Section 2). For other catchments in south-west WA the IL/CL model is recommended.
For south-west WA the rainfall and losses are markedly seasonal in nature and it should be noted that the 160 events analysed by Hill et al. (2014a) were biased towards the winter months, with 70% of them occurring in May, June or July. Considerations for the selection of seasonal loss values are discussed in Book 5, Chapter 3, Section 7.
their derivation. Furthermore, in each study the sample of events was selected based upon
rainfall rather than streamflow (to avoid any bias towards wet antecedent conditions).
For some studies the CL was estimated as the volume of loss (less the ILs) divided by the duration of the event after the commencement of surface runoff. As discussed in Book 5, Chapter 3, Section 2 this will underestimate the CL as there are likely to be timesteps where rainfall is less than the CL rate. Values of CL from such studies were removed from the dataset and hence there are fewer values of CL than ILs provided in Book 5, Chapter 3, Section 8.
The analysis in Phillips et al (2014) found initial losses ranging from 1 to 3 mm across the country, with no real identifiable trend between the different regions (although the data was limited). A similar approach was undertaken by Kemp and Lipp (1999), Ball and Zaman (1994) and Chiew and McMahon (1999), and these studies identified typical values in the order of 0 to 1 mm.
Bufill and Boyd (1992) analysed 16 catchments, 10 within Australia and 6 international
catchments. Their analysis identified initial loss by estimating the mean initial loss across all
events for these catchments. It is important to note that this is a slightly different way of
undertaking the analysis from Phillips et al (2014), and therefore can result in slightly
different results. However, this study found that typical initial loss rates for the 10 catchments
within Australia are around 0 to 1 mm. These catchments were subsequently re-analysed
using the EIA technique in Boyd et al. (1993).
Bufill and Boyd (1992) also analysed 6 international catchments, using the method noted
above, and found an initial loss ranging from around 0.5 to 1 mm. Boyd et al. (1993) had 17
international catchments in their analysis, with initial loss values typically ranging from 0 to
1.3 mm, although two catchments in the US had 3.7 mm and 6.1 mm.
A summary of the different initial loss estimates available from the different studies is provided in Book 5, Chapter 3, Section 8. It is important to note that there is only one
catchment from the northern parts of Australia (i.e. Monsoonal North, Wet Tropics, Pilbara)
or the central portion of Australia (Rangelands). However, given the consistency of the
estimates across the regions, and with international estimates, it is unlikely that the initial
loss for EIA in these areas will be significantly different.
All studies assumed that the ongoing losses on the EIA were effectively zero.
One of the key challenges in urban catchments is the lack of gauged urban catchments with
reasonable records and relatively stable development. In addition to this, many rainfall
events do not produce any Indirectly Connected Area runoff, which can make it difficult to
obtain sufficient data to determine appropriate losses.
Phillips et al (2014) used the same catchments as those identified in Book 5, Chapter 3,
Section 4 for EIA. In order to isolate events with flow generated from the Indirectly
Connected Area, they selected events where the flow generated was 10% higher than the
flow estimated by calculating the runoff from the EIA area alone. A number of other criteria
were adopted. Further details on the storm event selection are identified in Phillips et al
(2014). A summary of the number of events identified for each of the catchments is
summarised in Table 5.3.7.
Two catchments from the EIA analysis in Phillips et al (2014) were excluded:
• Parra Hills (SA) – only a limited number of storms were identified. The number of storms
was too small to provide any meaningful analysis of the losses.
• Kinkora Road (VIC) – further analysis of the data suggested some unusual behaviour, with
periods of runoff with no rainfall and vice versa for the selected events. This catchment
was therefore not included in any further analysis.
One of the key challenges in the use of the urban data in Phillips et al (2014), and with many
other studies, is that the length of record is relatively small. This, together with the filtering
method, reduces the number of large storm events to estimate losses. Figure 5.3.20 shows
the storm magnitude from the selected events from Phillips et al (2014), based on the ARR
1987 AEPs (storms not shown are less than 63% AEP or 1 EY).
Based on the selected storms, Phillips et al (2014) identified that mean storm initial losses across the range of storms for the catchments analysed were generally in the range of 20 to 30 mm, with the majority between 10 and 40 mm, and appeared fairly consistent across most of the catchments. A summary of the results is provided in Figure 5.3.21, providing an indication of the ranges. Phillips et al (2014) noted that the Argyle Street (TAS) catchment was not well suited to the analysis that was undertaken, due to large pervious areas and bushland. As
with the rural catchments, there is significant variation in the results, which is driven by
numerous factors such as antecedent conditions and temporal pattern of the storm.
The storms in Phillips et al (2014) were real storms, and may differ from the design storm temporal patterns. Furthermore, the higher loss values may be skewed by events in which low rainfall depths continued for a prolonged period at the start of the event. On this basis, the storm initial loss for design storms may be lower than the range estimated in Phillips et al (2014).
Boyd et al. (1994) fitted a model for urban catchment runoff to 3 catchments in Australia (all
in ACT). This model conceptualised that the urban catchment was comprised of EIA,
pervious area that contributed for small rainfall events (<40 mm) and pervious areas that
contributed for larger rainfall events (>40mm). For the “small pervious” area, the study
indicated initial losses of 0 to 4 mm, while for the “large pervious” area, the study indicated
30 to 50 mm. It is noted that in other studies the “small pervious” area is effectively lumped
into the EIA (such as Phillips et al (2014)), so that the 30 to 50 mm initial loss would be
generally consistent with the initial loss concept for the Indirectly Connected Area in this
chapter. However, a key challenge with the Boyd et al. (1994) catchments is that they are low density (5 to 25% total impervious fraction), and the model applied effectively incorporates a proportional loss into the fraction of “small pervious” and “large pervious” areas.
Kemp and Lipp (1999) analysed three catchments in South Australia. For the Indirectly
Connected Areas, they were not able to identify any clear runoff events from these areas for
these catchments. Based on the available research, this paper recommended 45 mm of
initial loss be adopted for Adelaide, although this could go as high as 60 mm. However,
these estimates were identified as preliminary, and there were no runoff records to verify them.
Interestingly, Phillips et al (2014) were also unable to identify sufficient events with runoff
from the Indirectly Connected Area for South Australia.
Figure 5.3.21. Summary of Initial Losses for Urban Catchments (from Phillips et al (2014))
Conceptualisation
As per the conceptualisation in this chapter, the Indirectly Connected Area is composed of impervious and pervious areas interacting with each other. The pervious area would be expected to tend towards the rural losses, although these will be modified by urban disturbance of the pervious area (such as the import of top-soil, differing vegetation, etc.). However, the impervious area would be expected to lessen the initial loss across the combined area.
A comparison of the initial loss derived in the literature with the recommended ILs for Rural
catchments is provided in Table 5.3.8. This table also provides the ratio of the Indirectly
Connected Impervious Area over Indirectly Connected Area. Key points:
• The initial loss values are similar to the recommended rural loss values. Generally, the values are in the order of 60 to 80% of the recommended median rural ILs, although obviously there is significant scatter in both data sets;
• There is insufficient data to determine any relationship between the ICIA/ICA ratio and initial loss, or its relation to the recommended median rural loss.
• The catchments from Boyd et al. (1994) have almost no ICIA, and the median values for
ILs are in the range of the recommended median rural loss.
• There is insufficient data covering all regions as identified in the rural section.
Table 5.3.8. Comparison of Initial Loss Literature Values with Rural Recommended Values

Catchment | Region | Median ILs (mm) | Reference | Recommended Rural ILs (mm) | ICIA/ICA
Giralang (ACT) | Murray Darling | 17 | Phillips et al (2014) | 23 | 9%
Powells Creek (NSW) | East Coast | 24.5 | Phillips et al (2014) | 33 | 43%
Albany Drain (WA) | South-West WA | 18 | Phillips et al (2014) | 31 | 18%
McArthur Park (NT) | Monsoonal North | 18.9 | Phillips et al (2014) | 25 | 17%
Argyle Street (TAS) | South-East Coast | 7.9 | Phillips et al (2014) | 27 | 6%
Long Gully Creek (ACT) | Murray Darling | 34 | Boyd et al. (1994) | 23 | 0%
Mawson (ACT) | Murray Darling | 49 | Boyd et al. (1994) | 23 | 5%
Curtin (ACT) | Murray Darling | 31 | Boyd et al. (1994) | 18 | 0%
South Australia (3 catchments) | South-Central SA | 45 | Kemp and Lipp (1999) | 23 | –
Recommendation
Based on the limited available information, it is recommended that a median ILs of 60 to 80%
of the recommended rural catchment ILs be adopted.
It is noted that this may trend towards 100% as the proportion of impervious area in the
Indirectly Connected Area reduces. Based on the data that is available, this might occur
when the impervious area drops below 5% of the total Indirectly Connected Area.
The constant continuing losses estimated in Phillips et al (2014) generally ranged from 0 to 4 mm/h across the catchments. This excludes the catchment in the Northern Territory, which was influenced by the presence of a large detention basin that affected the results. However, more recent analysis of this catchment with regard to timestep (see Book 5, Chapter 3, Section 7) suggests that for a 6 minute interval the CL estimate is 2.8 mm/h, which is within the range of the other catchments. Phillips et al (2014) also advised caution with the results of the Tasmanian catchment (Argyle Street), which is influenced by a large bushland area component, significantly larger than the urban area of the catchment.
Both NSW and ACT had median constant continuing loss values in the order of 2.5 mm/h. WA exhibited higher continuing losses, with a median value close to 4 mm/h.
A comparison of the median continuing loss values with those of the recommended values
for the rural catchments is provided in Table 5.3.9. It is difficult to draw any real conclusions
based on the limited data set, other than to note that they are generally in the same range.
Table 5.3.9. Urban Continuing Loss Values Compared with Rural Continuing Loss Values
Figure 5.3.22. Indirectly Connected Area Continuing Loss Estimates (from Phillips et al
(2014))
Recommendations
In the absence of other data, the following is recommended where appropriate gauging data
is not available:
• For south-eastern Australia, a typical value of 2.5 mm/h, with a range of 1 to 3 mm/h, would be appropriate. The value should be adjusted based on engineering judgement and a review of catchment characteristics such as soil types and the interaction of indirectly connected impervious areas with pervious areas.
• Similar to initial losses, where the impervious proportion of the indirectly connected area is
very low, it may be appropriate to adopt the rural continuing losses. However, there is
insufficient data to confirm this.
The challenge in the research for these areas is identifying the runoff component from these
portions, which are typically dominated by runoff from impervious areas. However, there is
nothing to say that these areas will behave the same as rural areas. Areas like parks and
sporting fields are highly disturbed from their natural state, and therefore may exhibit very
different characteristics. However, with little research and information available, the losses
for the rural catchment provide the best estimate that is available at this time.
Therefore, in the absence of better information, it is recommended to adopt the loss values
for rural catchments from Book 5, Chapter 3, Section 5.
Two alternative models that are commonly applied in urban environments are the
proportional loss models and the Horton loss models. These two loss models are described
in the following sections.
The initial loss/proportional loss (IL/PL) model is described in Book 5, Chapter 3, Section 2.
The initial loss for the model should be adopted as per Book 5, Chapter 3, Section 5.
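As an illustration of how an IL/PL model converts rainfall to rainfall excess (a minimal sketch only; the function and parameter names are illustrative and the values are not recommended design values), the initial loss is satisfied first and a fixed proportion of the remaining rainfall in each timestep is then treated as loss:

```python
# Sketch of an initial loss / proportional loss (IL/PL) calculation.
# il_mm: initial loss (mm); pl: proportional loss as a fraction (e.g. 0.6).
def rainfall_excess_il_pl(rainfall_mm, il_mm, pl):
    """Return rainfall excess (mm) per timestep for an IL/PL loss model."""
    excess = []
    remaining_il = il_mm
    for depth in rainfall_mm:
        absorbed = min(depth, remaining_il)     # satisfy the initial loss first
        remaining_il -= absorbed
        net = depth - absorbed
        excess.append(net * (1.0 - pl))         # a proportion pl of net rainfall is lost
    return excess

# Example: 6 timesteps of rainfall (mm), IL = 15 mm, PL = 0.6
print(rainfall_excess_il_pl([5, 10, 20, 8, 2, 0], il_mm=15, pl=0.6))
```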
In addition to testing the IL/CL loss model, Phillips et al (2014) also tested the proportional loss model. A summary of the results from this assessment is provided in Figure 5.3.23. As identified in Book 5, Chapter 3, Section 5, some care should be taken with the interpretation of the results from Tasmania (Argyle Street) and the Northern Territory (McArthur Park).
The key challenge with the results of this analysis is that they range from median proportional losses of around 45% through to 80%. This makes it difficult to provide general guidance for catchments.
As discussed in Book 5, Chapter 3, Section 3, the greatest challenge in applying the IL/PL
model for design flood estimation is understanding how the proportional loss varies with
AEP. Great care should therefore be exercised if the IL/PL is to be applied to events outside
of the range of events used in the derivation of the values. For this reason it is generally not
considered appropriate for estimating rare or extreme events (ie AEP < 1%).
Therefore, it is recommended that this method only be used where suitable data is available
to calibrate the loss model, either for a specific catchment or for a similar catchment nearby.
Alternatively, should more research become available this could assist in informing
appropriate parameters for design.
Figure 5.3.23. Indirectly Connected Area Proportional Loss Estimates (from Phillips et al
(2014))
The Horton loss model is described in Book 5, Chapter 3, Section 2, with the equation below repeated for reference.

f_t = f_c + (f_0 − f_c) e^{−kt}    (5.3.14)

where:
f_t is the infiltration (loss) rate at time t (mm/h);
f_0 is the initial infiltration rate (mm/h);
f_c is the final infiltration rate (mm/h);
k is the shape (decay) factor (h-1); and
t is the time from the start of rainfall (h).
This model is for pervious areas only. The hydrological models that use this model separate out the Indirectly Connected Impervious Areas from the Indirectly Connected Pervious Areas, treating the losses separately. An example of this is ILSAX, as detailed in O'Loughlin and Stack (2014). O'Loughlin and Stack (2014) has been used as the key reference for this section of the chapter, and the parameters reported here are based on this reference.
The Horton model assumes that the loss (or infiltration) rate decreases over the duration of the storm. The shape of this decay function is described by the k value, which is typically assumed to be 2 (h-1). The remaining parameters that describe the decay curve are the initial infiltration rate (f0) and the final infiltration rate (fc). These are defined by the soil characteristics. The soil classifications used are described in Table 5.3.10, which are based on numerous references and reproduced from O'Loughlin and Stack (2014).
In applying the model, a “starting point” is required for the analysis. This represents the
infiltration rate at the start of the storm, which is based on the Antecedent Moisture Condition
(AMC). The AMC can be categorised from Table 5.3.11 (based on O'Loughlin and Stack
(2014)).
Using the soil classification and the AMC number, the Horton Loss Model parameters can be
defined based on Table 5.3.12. The resulting loss models for the different classifications,
together with the AMC numbers, are shown in Figure 5.3.24.
Table 5.3.12. Horton Loss Model Parameters for Different Soil Types and AMC Numbers

Parameter | Soil Type A | Soil Type B | Soil Type C | Soil Type D
Initial rate f0 (mm/h) | 250 | 200 | 125 | 75
Final rate fc (mm/h) | 25 | 13 | 6 | 3
Shape factor k (h-1) | 2 | 2 | 2 | 2

Initial infiltration rates (mm/h) for AMCs
AMC 1 | 250 | 200 | 125 | 75
AMC 2 | 162.3 | 130.1 | 78 | 40.9
AMC 3 | 83.6 | 66.3 | 33.7 | 7.4
AMC 4 | 33.1 | 30.7 | 6.6 | 3.0
Figure 5.3.24. Horton Loss Model with Different Soil Classifications & AMC Numbers
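The decay described by Equation (5.3.14) can be illustrated with a short sketch using the parameter values in Table 5.3.12; the example below (illustrative only) computes the infiltration rate through a storm for a Soil Type C catchment starting at AMC 3:

```python
import math

# Table 5.3.12 parameters (mm/h) for soil types A-D; k = 2 per hour for all types
HORTON_PARAMS = {
    "A": {"f0": 250.0, "fc": 25.0},
    "B": {"f0": 200.0, "fc": 13.0},
    "C": {"f0": 125.0, "fc": 6.0},
    "D": {"f0": 75.0, "fc": 3.0},
}
# Starting (AMC-dependent) infiltration rates (mm/h) for Soil Type C
AMC_START_C = {1: 125.0, 2: 78.0, 3: 33.7, 4: 6.6}

def horton_rate(f_start, fc, k, t_hours):
    """Equation (5.3.14): infiltration rate (mm/h) at time t after the start of rainfall."""
    return fc + (f_start - fc) * math.exp(-k * t_hours)

# Infiltration capacity for a Soil Type C catchment at AMC 3, every 30 minutes for 3 hours
fc = HORTON_PARAMS["C"]["fc"]
f_start = AMC_START_C[3]
for step in range(7):
    t = step * 0.5
    print(f"t = {t:3.1f} h  f = {horton_rate(f_start, fc, k=2.0, t_hours=t):6.1f} mm/h")
```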
O'Loughlin and Stack (2014) report that this method has been used in the calibration of a number of ILSAX models to gauged catchments. While no references are provided, it is anticipated that this calibration is for whole hydrological and hydraulic models, which include a number of parameters, not just those relating to losses.
O'Loughlin and Stack (2014) report that Siriwardene et al. (2003) compared the infiltration rates with those measured with infiltrometers at eight urban gauged catchments in Victoria. These measured rates were slightly higher than those reported in Table 5.3.12.
However, the research only focused on the soils, and did not look at the other components
that loss models are trying to represent (such as those identified in Book 5, Chapter 3,
Section 2).
More research is required to compare the effectiveness of this loss model with that of the constant continuing loss model.
The degree of variability in the loss values reflects both natural variability in the factors contributing to loss (initial state of catchment wetness, seasonal effects on vegetation) and the impacts of errors in rainfall and streamflow data. As long as these errors are of a random rather than systematic nature, they should not bias the estimated loss distribution.
Approaches to representing this variability can be grouped into parametric and non-parametric and are discussed in the following sections.
Figure 5.3.25. Variation in Location but Not Shape of Initial Loss Distribution (Nathan et al., 2003)
The standardised distributions of storm initial loss and continuing loss from Hill et al. (2014a)
are shown in Figure 5.3.26 and the values presented in Table 5.3.13. These standardised
loss distributions are remarkably consistent for the different regions across Australia, which
demonstrates that while the magnitude of losses may vary between different regions, the
shape of the distribution does not.
Figure 5.3.26. Regional average standardised loss distributions (Hill et al., 2014a)
A number of studies have investigated different candidate distributions for storm initial loss
from samples of catchments from Victoria (Rahman et al., 2002), Queensland (Tularam and
Ilahee, 2007) and from around Australia (Hill et al., 2013). Based on these studies the four-
parameter Beta distribution is recommended for its flexibility and because its parameters
lend themselves readily to physical interpretation. The lower limit can be set to zero, thus
reducing the number of parameters to 3.
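A minimal sketch of sampling storm initial losses from such a distribution is given below; the shape parameters, upper limit and median ILs are illustrative assumptions only and are not recommended design values:

```python
# Illustrative sampling of storm initial loss from a Beta distribution.
# The shape parameters and upper limit below are placeholders, not design values.
import numpy as np
from scipy.stats import beta

a, b = 2.0, 3.0          # illustrative Beta shape parameters
upper_limit = 3.0        # upper limit of the standardised loss (multiple of the median)
median_ils = 25.0        # median storm initial loss for the catchment (mm)

# Sample standardised losses on [0, upper_limit], then rescale so the sample median is 1
standardised = beta.rvs(a, b, loc=0.0, scale=upper_limit, size=10000)
standardised /= np.median(standardised)

il_samples = standardised * median_ils   # storm initial losses (mm) for use in simulation
print(round(float(np.median(il_samples)), 1))   # approximately 25.0 mm
```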
There has been comparatively less attention paid to the distribution of continuing loss. Based upon a case study of three catchments in Queensland, Ilahee and Rahman (2003) found that the continuing loss values could be approximated by an exponential function. Ishak and Rahman (2006) investigated the probabilistic nature of continuing loss for four Victorian catchments; none of the distributions fitted the observed distribution satisfactorily, although the four-parameter Beta distribution provided the best approximation. Hill et al. (2013) investigated different distributions for 10 catchments from around Australia and concluded that the two-parameter Gamma distribution was best. Given these different outcomes, further work is required before a preferred distribution for the continuing loss is recommended.
It is likely, however, that the observed variation in continuing loss between one event and the next is due more to the propagation of data errors in the analysis than to real differences in the continuing loss.
The correlation between loss parameter values requires further investigation. In addition to
the correlation of loss values for individual events it would be useful to analyse the
distribution of total loss.
This conclusion is not surprising because any potential variation of loss with rainfall severity
is difficult to infer from the empirical analysis of data due to the lack of severe rainfall events
in the recorded data. This is compounded where the storm severity is characterised as the
AEP of the rainfall burst, whereas the loss values relate to the complete storm, and this
discrepancy further hinders the identification of any trend with storm severity.
The Australian studies that present loss values that vary with AEP tend to be those where
the loss values are derived by verification against flood frequency quantiles. In such studies
it is difficult to ascertain whether any variation in loss is meaningful or simply a reflection of
the uncertainty in the flood frequency quantiles and the link to the adopted design inputs.
The conclusion that there is no evidence to vary loss with event magnitude is supported by the analysis of rainfall antecedent to extreme storms recorded over Southeast Australia, which showed that the antecedent rainfall was not significantly greater than normal for the location and time of the year (Minty and Meighen, 1999). The implication of this is that the storm initial losses for large and extreme storms should be similar to those of smaller, more frequent storms.
The recommendation is therefore to keep the ILs and CL values the same for all AEPs unless there is specific evidence of a systematic variation of loss with AEP.
It should be noted that the stores in a storage capacity loss model such as SWMOD fill up during an event and hence the proportion of the catchment contributing to runoff increases throughout the event. The net effect of this is an initial loss (which represents the initial filling of the smallest store) followed by a variable proportional loss. This proportional loss decreases throughout the event and also decreases for larger rainfall events.
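This behaviour can be illustrated with a simple distributed-storage sketch; this is a generic illustration of the concept rather than an implementation of SWMOD, and the store capacities are arbitrary. The smallest store must fill before any excess is generated (an initial loss), after which the proportion of the catchment producing excess grows as further stores fill:

```python
import numpy as np

def storage_capacity_excess(rainfall_mm, capacities_mm):
    """Return rainfall excess (mm) per timestep for a simple distributed-storage model."""
    levels = np.zeros_like(capacities_mm)
    excess = []
    for rain in rainfall_mm:
        levels = levels + rain                       # every store receives the rainfall
        spill = np.maximum(levels - capacities_mm, 0.0)
        levels = np.minimum(levels, capacities_mm)   # full stores stay full
        excess.append(float(spill.mean()))           # spill averaged over the catchment
    return excess

# Illustrative store capacities (mm) distributed across the catchment
capacities = np.linspace(5.0, 120.0, 50)
print(storage_capacity_excess([5, 10, 20, 30, 10], capacities))
```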
In considering how loss varies with event magnitude it is worth noting that extreme rainfalls may be associated with changed runoff behaviour relative to that observed for more frequent events, for example through the stripping of vegetation cover.
Book 8 discusses how continuing loss and proportional loss would be expected to vary with
event magnitude. The interpretation of proportional loss as the unsaturated proportion of the
catchment implies that with larger storm events the unsaturated proportion of the catchment
is reducing and the proportional loss reduces. It is however difficult to extrapolate the rate of
this reduction to extreme events and hence how proportional loss varies with event
magnitude. However, continuing loss is expected to approach a limiting value for saturated
catchment conditions which makes it more suitable for application in extreme flood
estimation.
For urban catchments, Phillips et al (2014) undertook an analysis of the correlation between
the peak 1 hour intensity after the commencement of indirectly connected area runoff with
the continuing loss that was estimated. That study found that for almost all catchments, there
was no clear relationship between the two. The one exception to this was the Giralang (ACT)
catchment, which showed a strong correlation with the 1 hour peak rainfall intensity after ICA
runoff and the continuing loss estimate. The reasoning for this exception was not clear,
although it was thought it could be due to the types of storms that fell on the Giralang
catchment, which tended to be high intensity in the first hour after the Indirectly Connected
Area runoff occurred.
Figure 5.3.27. Correlation between 1 hr Peak Rainfall Intensity and Continuing Loss (left
Giralang (ACT) and right Powells Creek (NSW)) (from Phillips et al (2014))
A reduction of continuing loss with increasing storm duration has also been noted by studies which have analysed a large number of events for rural catchments (e.g. Ilahee and Imteaz (2009); Ishak and Rahman (2006)).
Although potentially important for real-time applications, the potential decrease of CL with duration is not significant for most design applications because the critical duration is typically shorter than a day. For very large rural catchments, where the critical duration can be multiple days, it would be reasonable to reduce the CL. Ideally this relationship with duration should be based upon analysis of at-site data, but it can also be informed by analyses from similar catchments.
For storage capacity loss models such as SWMOD, the moisture content is continuously
updated throughout the event which results in a variable proportional loss. This reduction in
proportional loss throughout the event may have advantages for modelling of large rural
catchments.
However, the definition of Continuing Loss as the threshold rate above which rainfall excess is generated means that it is dependent upon the timestep. This is because, as the timestep reduces, there is an increased likelihood that there will be some timesteps in which the rainfall depth is less than the Continuing Loss rate threshold. Thus, to achieve the same volume of rainfall excess, the Continuing Loss will typically need to be increased for shorter timesteps.
This is demonstrated in Figure 5.3.28 for an event at Currambene Creek at Falls Creek in
NSW. If the modelling timestep is reduced from 1 hour to 5 minutes the continuing loss rate
needs to increase from 4.5 to 7.2 mm/h to maintain the same volume of rainfall excess. This
is because for 5 minutes there is a higher proportion of timesteps for which the rainfall depth
is less than the threshold value.
Figure 5.3.28. Example of Continuing Loss Varying with Modelling Timestep (February 1977
event at Currambene Creek at Falls Creek in NSW)
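The effect can be demonstrated with a short sketch; the example below uses a synthetic 5 minute hyetograph (not the Currambene Creek data) and compares the rainfall excess produced by the same CL rate at 5 minute and 1 hour timesteps:

```python
import numpy as np

def excess_volume(depths_mm, dt_hours, cl_mm_per_h):
    """Total rainfall excess (mm) for an IL/CL model once the IL has been satisfied."""
    threshold = cl_mm_per_h * dt_hours            # CL expressed as a depth per timestep
    return float(np.sum(np.maximum(np.asarray(depths_mm) - threshold, 0.0)))

rng = np.random.default_rng(1)
rain_5min = rng.gamma(shape=0.6, scale=1.2, size=72)      # synthetic 6 h burst at 5 min steps
rain_1h = rain_5min.reshape(6, 12).sum(axis=1)            # the same burst aggregated to 1 h

cl = 4.5  # mm/h
print("excess at 1 h steps  :", round(excess_volume(rain_1h, 1.0, cl), 1), "mm")
print("excess at 5 min steps:", round(excess_volume(rain_5min, 5 / 60, cl), 1), "mm")
# The 5 minute analysis yields more excess for the same CL rate, so the CL value needs to
# be increased at the shorter timestep to maintain the same excess volume (as in the
# 4.5 to 7.2 mm/h example above).
```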
This adjustment of CL is important if the timestep used in the derivation of the loss values is different from that used in design. It should therefore be noted that the timestep used in the derivation of the regional loss information presented in Book 5, Chapter 3, Section 5 was 1 hour for the rural catchments and up to 5 minutes for the urban catchments. If a different timestep is to be adopted in design then the continuing loss should be adjusted accordingly.
The adjustment factor is a function of the rainfall depth, increasing for smaller rainfall depths. For larger depths it is more likely that the full CL value can be satisfied in each timestep, which reduces the adjustment factor.
The line of best fit (Figure 5.3.29) can be used to relate continuing losses modelled at sub-hourly timesteps to hourly values and vice versa. For example, if the average storm depth is approximately 200 mm and the timestep is reduced from 1 hour to 15 minutes, then the continuing loss needs to be increased by 30%.
Figure 5.3.29. Variation of Continuing Loss with Modelling Timestep for Rural Catchments
The urban catchment data derived in Phillips et al (2014) were based on shorter timestep pluviometer data than the rural catchments. These data were re-analysed at a 6 minute time interval, which is consistent with the large majority of the pluviometer data available in Australia and is typical of the intervals used when analysing urban catchments. This re-analysis suggests that there are minimal changes to the CL estimates, well within the error margins of the estimates.
There are insufficient storms in the loss estimates to develop a graph similar to that for the rural catchments (i.e. Figure 5.3.29). Instead, the estimates for the different catchments were plotted against timestep, as shown in Figure 5.3.30. This graph shows the percentage change relative to the median CL estimate for a 6 minute timestep. A recommended curve is also provided, based on the average of the values. The equation of this curve can be broadly approximated by:

CL_t / CL_{6min} = 1.35 − 0.2 ln(t)    (5.3.15)

where t is the timestep in minutes.
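A short sketch of applying this adjustment (assuming t in Equation (5.3.15) is the timestep in minutes and using an illustrative CL value) is:

```python
import math

def urban_cl_adjustment(timestep_minutes):
    """Ratio CL_t / CL_6min from Equation (5.3.15), timestep in minutes."""
    return 1.35 - 0.2 * math.log(timestep_minutes)

cl_6min = 2.5  # median CL derived at a 6 minute timestep (mm/h), illustrative value
for t in (6, 15, 30, 60):
    print(f"{t:>2} min timestep: CL = {cl_6min * urban_cl_adjustment(t):.2f} mm/h")
```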
To avoid the arbitrary selection of the period over which to define the antecedent rainfall, the Antecedent Precipitation Index (API) can be used as a measure of the initial wetness of a catchment. API is calculated by discounting the time series of daily rainfall prior to the event using an empirical decay factor, and the basic equation is (Cordery, 1970):

API = \sum_{d=1}^{n} k^d P_d    (5.3.16)

where k is an empirical decay factor less than unity and P_d is the rainfall on day d prior to the event. The value of k typically varies in the range of 0.85 to 0.98. Linsley et al. (1982) and Cordery (1970) found that the average value for Australian catchments was 0.92. The value of k is considered to vary seasonally and has been linked to the variation in potential evapotranspiration (Mein et al., 1995).
Cordery (1970) then related the ILs to the API using a relationship of the form:

IL_s = IL_{max} k^{API}    (5.3.17)

where IL_max is the maximum initial loss (the value for a completely dry catchment, i.e. API = 0) and k is an empirical factor less than unity.
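A minimal sketch of these two calculations is shown below; the seven day window, the k value and the IL_max value are illustrative assumptions only:

```python
def antecedent_precipitation_index(daily_rain_mm, k=0.92):
    """Equation (5.3.16): API from daily rainfalls ordered from the most recent
    day before the event backwards in time."""
    return sum((k ** d) * p for d, p in enumerate(daily_rain_mm, start=1))

def storm_initial_loss(api, il_max_mm, k=0.92):
    """Equation (5.3.17): initial loss reduced from IL_max as the catchment wets up."""
    return il_max_mm * (k ** api)

# Rainfall (mm) on the 7 days before the event, most recent day first (illustrative)
rain_before_event = [12.0, 0.0, 3.5, 0.0, 0.0, 8.0, 1.0]
api = antecedent_precipitation_index(rain_before_event)
print("API =", round(api, 1), "mm")
print("ILs =", round(storm_initial_loss(api, il_max_mm=40.0), 1), "mm")
```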
An alternative to indices based only upon antecedent rainfall is to use modelled soil moisture; one such model that can assist in explaining the variability of loss is AWRA-L (refer Book 5, Chapter 3, Section 5 and Frost et al. (2015)).
There are only a limited number of studies that have investigated the relationship between loss values and these estimates of soil moisture, but it is expected that soil moisture will provide a more useful estimate of loss than indices based only upon rainfall.
The results for IL are shown in Figure 5.3.31 and demonstrate that soil moisture has a higher
correlation with IL than API for the majority of the catchments. There are still, however, some
catchments for which the initial loss is relatively independent of both the API and AWRA-L
soil moisture.
Figure 5.3.31. Proportion of Variance Explained (r2) between Storm Initial Loss and API (Hill
et al., 2014a)
Studies by the Bureau of Meteorology (T. Pagano, pers. comm.) and Seqwater (D. Pokarier, pers. comm.) have also found that the storm initial loss is more highly correlated with soil moisture estimated by AWRA-L than with API.
Where such relationships can be established, they can help inform the absolute and
seasonal distribution of ILs.
For urban catchments, simple measures of the preceding days' rainfall (such as API) may not reflect the true antecedent conditions of the catchment.
Phillips et al (2014) undertook an analysis of the antecedent rainfall in the days leading up to
the storms used to estimate the initial loss. They compared the rainfall in 1, 3 and 7 days
prior to the event with the initial loss and continuing loss estimates, and found no clear
correlation between the two. Samples of this analysis are provided in Figure 5.3.32.
Figure 5.3.32. Plot of Antecedent Rainfall Versus Initial Loss for Indirectly Connected Area
(Phillips et al, 2014)
3.7.5. Seasonality
The discussion of losses in the preceding sections has concentrated on the median annual values. However, due to the seasonal variation in rainfall and evapotranspiration (and, to a lesser extent, vegetation), many regions in Australia are characterised by distinct seasonality in hydrology. The estimation of seasonal design inputs, including loss values, is required in cases where:
• there is a strong seasonal variation in the flood producing mechanisms which need to be
accounted for in order to estimate the annual risk; or
• the risk is required to be assessed for a particular period within the year such as the flood
risk during construction or upgrade of major infrastructure.
The loss parameters (both initial and continuing loss) can be influenced by antecedent
moisture and therefore may display significant seasonal variation. This is likely to primarily
reflect changes in antecedent moisture but vegetation change may also contribute in some
locations.
The different seasonality across Australia is demonstrated in Figure 5.3.33. This shows the
seasonal distribution of the 803 events analysed by Hill et al. (2014a) which were selected
on the basis of rainfall. It is clear that in south-eastern Australia the events are reasonably
distributed throughout the year, whereas the majority of events in Northern Australia occur in
summer and south-west WA is dominated by the winter months.
It is important to consider this seasonal variation when selecting losses for design flood
estimation. In some cases the loss values may need to be adjusted to account for a bias in
the sample and for some locations it may be necessary to explicitly incorporate the
seasonality in the adopted losses and design flood estimation framework.
Figure 5.3.34. Seasonality of Standardised Storm Initial Loss Values for Different Regions in
Australia
In south-west WA, two different rainfall mechanisms have been identified which result in a distinctly seasonal nature of rainfall and losses. The majority of events have been recorded in the winter season, which is typically associated with lower losses; however, the rarer events are more likely to occur in summer when losses are typically higher. It is therefore important to recognise this in the selection of losses.
This is highlighted by Pearce (2011), who developed seasonal runoff coefficients for south-west WA by generalising the results of the application of SWMOD to a number of catchments (refer to Figure 5.3.35). The results show the important influence of the season and the proportion of the catchment cleared of vegetation (refer to Book 5, Chapter 3, Section 7).
Figure 5.3.35. Regional Runoff Coefficient Curves for South-West Western Australia
(Pearce, 2011)
Another approach to deriving seasonal loss values is to verify the rainfall-based estimates
from a rainfall runoff model to seasonal flood frequency quantiles. That is, adopt a similar
approach as outlined in Book 5, Chapter 3, Section 3 but on a seasonal, rather than annual,
basis.
One exception to this is south-west WA where the losses have been directly linked to the
proportion of the catchment cleared of native vegetation (Pearce, 2011).
The rainfall excess obtained from loss models where the excess is a proportion of the total rainfall in the timestep (such as IL/PL or SWMOD) will tend to be more temporally uniform (less peaky) than that from the IL/CL loss model, in which a constant rate of loss is applied. This is because a greater volume of loss is extracted in the timesteps of greatest rainfall.
This means that more attenuation is required for the rainfall excess resulting from the IL/CL loss model than for the IL/PL or SWMOD models.
For example, Australian Rainfall and Runoff Revision Project 6 (Hill et al., 2014a) considered the routing parameters for IL/CL and SWMOD for 38 catchments from across Australia and demonstrated that the adopted loss model affected the selection of the C0.8 value (the non-dimensional routing parameter in RORB), with the C0.8 value for SWMOD typically being 75% of that for the IL/CL model (refer Figure 5.3.36).
Therefore, the routing parameters derived using one conceptual loss model are not necessarily applicable to an alternate loss model. This is important if the loss model applied in design flood modelling differs from that used in calibration.
Figure 5.3.36. Comparison of Adopted Routing Parameters for IL/CL and SWMOD
A number of studies have shown significant dependence of annual maximum floods in Eastern Australia on the Interdecadal Pacific Oscillation (IPO). However, the annual maximum precipitation does not exhibit a similar level of dependence on the IPO. Pui et al (2009) hypothesise that the difference in flood characteristics as a function of the IPO is a result of catchment antecedent conditions prior to the rainfall event. From the analysis of 88 daily rainfall stations in Eastern Australia they found that the antecedent conditions prior to storm events varied significantly across the two IPO phases.
The influence of these key climate drivers on loss rates warrants further research.
3.8. Appendix
Table 5.3.14. Median Loss Values for Rural Catchments

Region | State | Gauge | River | Name | Area (km²) | N | Median ILs (mm) | Median CL (mm/h) | Study
East Coast | QLD | 141001 | South Maroochy | Kiamba | 33 | 22 | 38 | 2.7 | Hill et al. (2014a)
East Coast | QLD | 141009 | North Maroochy | Eumundi | 41 | 23 | 20 | 2.2 | Hill et al. (2014a)
East Coast | NSW | 201001 | Oxley River | Eungella | 218 | 53 | 50 | 2.6 | Loveridge (Unpublished)
East Coast | NSW | 203010 | Leycester River | Rock Valley | 179 | 48 | 65 | 0.3 | Loveridge (Unpublished)
East Coast | NSW | 204017 | Bielsdown Creek | Dorrigo No. 2 & No. 3 | 82 | 57 | 50 | 1.4 | Loveridge (Unpublished)
East Coast | NSW | 204025 | Orara River | Karangi | 135 | 37 | 71 | 4 | Loveridge (Unpublished)
East Coast | NSW | 208007 | Nowendoc River | Nowendoc | 218 | 37 | 50 | 2.3 | Loveridge (Unpublished)
East Coast | NSW | 210068 | Pokolbin Creek | Pokolbin Site 3 | 25 | 36 | 40 | 2.0 | Loveridge (Unpublished)
East Coast | NSW | 211013 | Ourimbah Creek | Upstream Weir | 83 | 25 | 40 | 3.7 | Hill et al. (2014a)
East Coast | NSW | 213200 | O'Hares Creek | Wedderburn | 73 | 22 | 60 | 1.6 | Hill et al. (2014a)
East Coast | QLD | 136108A | Monal Creek | Upper Monal | 92 | 12 | 13 | – | Ilahee (2005)
East Coast | QLD | 141009A | N. Maroochy River | Eumundi | 38 | 22 | 42 | – | Ilahee (2005)
East Coast | QLD | 142001A | Caboolture | Upper Caboolture | 94 | 20 | 50 | 1.4 | Hill et al. (2014a)
East Coast | QLD | 143110A | Bremer River | Adams Bridge | 125 | 37 | 39 | – | Ilahee (2005)
East Coast | QLD | 145003B | Logan River | Forest Home | 175 | 42 | 31 | – | Ilahee (2005)
3.9. References
Alley, W.M. and Veenhuis, J.E. (1983), 'Effective impervious area in urban runoff modeling', J. Hydraul. Eng., 109(2), 313-319.
Babister, M., Trim, A., Testoni, I. and Retallick, M. (2016), The Australian Rainfall and Runoff Datahub, 37th Hydrology and Water Resources Symposium, Queenstown, NZ.
Ball, J.E. and Powell, M. (1998), Inference of catchment modelling system control
parameters, Proc. UDM '98: Developments in Urban Drainage Modelling, London, UK, 1,
313-320.
Ball, J.E. and Zaman, S. (1994) 'Simulation of small events in an urban catchment', National
Conference Publication - Institution of Engineers, Australia, pp: 353-358.
Beard, L.R. and Shin Chang. (1979), Urbanization impact on streamflow, Journal of the
Hydraulics Division ASCE, 105: 647-59.
Black, D.C. and Aitken, A.P. (1977), Simulation of the Urban Runoff Process, Australian
Government Publishing Service, 75/82.
Board, R.J., Barlow, F.T.H. and Lawrence, J.R. (1989), Flood and Yield Estimation in the Arid
Zone of the Northern Territory. Hydrology and Water Resources Symposium 1989.
Christchurch, New Zealand, pp: 367-371
Boughton, W.C. (2015), Master recession analysis of transmission loss in some Australian
streams [online]. Australian Journal of Water Resources, 19(1), 43-51. Available at http://
search.informit.com.au/
Boughton, W.C. (1989), A Review of the USDA SCS Curve Number Method. Australian
Journal of Soil Research, 27: 511-523
Boughton, W.C. and Hill, P.I. (1997), A Design Flood Estimation Procedure using Data
Generation and a Daily Water Balance Model. CRC for Catchment Hydrology. Report 97/8.
Boughton, W.C., Muncaster, S.H., Srikanthan, R., Weinmann, P.E., Mein, R.G. (1999)
Continuous Simulation for Design Flood Estimation - a Workable Approach. Water 99: Joint
Congress; 25th Hydrology & Water Resources Symposium, 2nd International Conference on
Water Resources & Environment Research; Handbook and Proceedings; pp: 178-183.
Boyd, M.J., Bufill, M.C. and Knee, R.M. (1993), Pervious and Impervious Runoff in urban
catchments, Hydrological Sciences Journal, 38(6), 463-478.
Boyd, M.J., Bufill, M.C. and Knee, R.M. (1994), Predicting Pervious and Impervious Storm
Runoff from Urban Drainage Basins, Hydrological Sciences Journal, 39(4), 321-332.
Bufill, M.C. and Boyd, M.J. (1992), A simple flood hydrograph model for urban catchments, Proceedings of the International Symposium on Urban Management, 98: 98-103, Sydney.
Carroll, D.G. (2012), URBS (Unified River Basin Simulator) A Rainfall Runoff Routing Model
for Flood Forecasting and Design. Version 5.00 Dec, 2012.
Chapman, T.G. (1970), Optimisation of a rainfall-runoff model for an arid zone catchment, in
Symposium on the results of research on representative and experimental basins, pp.
126-144, IASH Publ. No. 96, Wellington, New Zealand.
Chiew, F.H.S. and McMahon, T.A. (1999), Modelling Runoff and diffuse pollution loads in
urban areas. Water Science Technology, 39(12), 241-248.
Chu, S.T. (1978), Infiltration during an Unsteady Rain. Water Resources Research. 14(3),
461-466.
Cordery, I. (1970), Antecedent Wetness for Design Flood Estimation, Civil Engineering Transactions, Institution of Engineers, Australia, CE12(2), 181-184.
Cordery, I. and Pilgrim, D.H. (1983), On the Lack of Dependence of Losses from Flood
Runoff on Soil and Cover Characteristics, IAHS Pub., (140), 187-195.
Cordery, I., Pilgrim, D.H. and Doran, D.G., (1983), Some Hydrological Characteristics of Arid
Western New South Wales. Hydrology and Resources Symposium. I.E.Aust. Nat. Conf.
Pub., (83/13), 287-292.
Dayaratne, S.T. (2000), Modelling of Urban Stormwater Drainage Systems using ILSAX,
PhD Thesis, Victoria University of Technology, August.
Dyer, B.G., Nathan, R.J., McMahon, T.A. and O'Neill, I.C. (1994), Development of Regional
Prediction Equations for the RORB Runoff Routing Model. CRC for Catchment Hydrology
Report 94/1. March 1994.
Eastgate, W.I., Swartz, G.L. and Briggs, H.S. (1979), Estimation of Runoff Rates for Small
Queensland Catchments, Department of Primary Industries, Queensland Technical Bulletin
No.15.
El-Kafagee, M. and Rahman, A. (2011), A study on initial and continuing losses for design
flood estimation in New South Wales. In Chan, F., Marinova, D. and Anderssen, R.S. (eds)
MODSIM2011, 19th International Congress on Modelling and Simulation, (pp: 3782-3787),
Perth, Australia.
Flavell, D.J. and Belstead, B.S. (1986), Losses for Design Flood Estimation in Western
Australia, Hydrology and Water Resources Symposium, Griffith University, pp: 203-208.
Fleming, P.M. and Smiles, D.E. (1975), Infiltration of Water into Soil. In Chapman, T.G. and
Dunin, F.X. eds. Prediction in Catchment Hydrology. Canberra. Australian Academy of
Science. Pp: 83-110.
Frost, A., Ramchurn, A., Hafeez, M., Zhao, F., Haverd, V., Beringer, J., and Briggs, P. (2015),
Evaluation of AWRA-L: the Australian Water Resource Assessment model. Modsim 2015 -
21st International Congress on Modelling and Simulation, 29 Nov - 4 Dec 2015, Gold Coast,
0, 2047–2053.
Goyen, A.G. (2000), Spatial and Temporal Effects on Urban Rainfall / Runoff Modelling. PhD
Thesis. University of Technology, Sydney. Faculty of Engineering.
Goyen, A.G. (1983), A Model to Statistically Derive Design Rainfall Losses Hydrology and
Water Resources Symposium 1983: Preprints of Papers; pages: 220-225. Institution of
Engineers, Australia, National conference publication, No.83/13.
Goyen, A.G., O'Loughlin, G.G. (1999a), The Effects of Infiltration Spatial and Temporal
Patterns on Urban Runoff. Water 99: Joint Congress; 25th Hydrology & Water Resources
Symposium, 2nd International Conference on Water Resources & Environment Research;
Handbook and Proceedings, pp: 819-822.
Goyen, A.G., O'Loughlin, G.G. (1999b), Examining the basic building blocks of urban runoff.
Urban Storm Drainage - 8th International Proceedings. Sydney, 30 Aug.-3 Sept. 1999, 3:
1382-1390.
Green, R.E. and Ampt, G.A. (1911), Studies on Soil Physics: 1. Flow of Air and Water
through Soils. Journal of Agricultural Science, 4: 1-24
Heneker, T.M, Lambert, M.F. and Kuczera, G. (2003), Overcoming the Joint Probability
Problem Associated with Initial Loss Estimation in Design Flood Estimation. Australian
Journal of Water Resources, 7(2), 101-109.
Hill, P.I., Graszkiewicz, Z., Sih, K. and Rahman, A. (2013), Revision project 6: Loss models
for catchment simulation. Stage 2 Report P6/S2/016A.
Hill, P.I., Maheepala, U. and Mein, R.G. (1996), Empirical Analysis of Data to Derive Losses:
Methodology, Programs and Results. CRC for Catchment Hydrology Working Document
96/5.
Hill, P.I., Graszkiewicz, Z., Loveridge, M., Nathan, R.J. and Scorah, M. (2015), Analysis of
loss values for Australian rural catchments to underpin ARR guidance. Hydrology and Water
Resources Symposium 2015, Hobart, 9-10 December 2015.
Hill, P.I., Graszkiewicz, Z., Taylor, M. and Nathan, R.J. (2014a), ARR Revision Project 6 Loss
models for catchment simulation. Stage 4 Analysis of rural catchments. May 2014.
Hill, P.I., Graszkiewicz, Z., Nathan, R.J., Stephens, D.A. and Pearce, L. (2014b), Testing the
Suitability of a probability distributed storage capacity loss model for design flood estimation.
2014 Hydrology and Water Resources Symposium, Perth.
Ilahee, M. (2005), Modelling Losses in Flood Estimation, A thesis submitted to the School of
Urban Development Queensland University of Technology in partial fulfilment of the
requirements for the Degree of Doctor of Philosophy, March 2005.
Ilahee, M. and Imteaz, M.A. (2009), Improved Continuing Losses Estimation Using Initial
Loss-Continuing Loss Model for Medium Sized Rural Catchments. American J. of
Engineering and Applied Sciences, 2(4), 796-803.
Ilahee, M. and Rahman, A. (2003), Investigation of Continuing Losses for Design Flood
Estimation: A Case Study in Queensland. 28th International Hydrology and Water
Resources Symposium, 10-14 November 2003,Wollongong, NSW, 1: 191-197.
Ishak, E., and Rahman, A. (2006) Investigation into Probabilistic Nature of Continuing Loss
in Four Catchments in Victoria. In: 30th Hydrology & Water Resources Symposium: Past,
Present & Future; pages: 432-437.
Jensen, M. (1990), Rain-runoff parameters for six small gauged urban catchments, Nordic Hydrology, 21: 165-184.
Kemp, D.J. and Lipp, W.R. (1999), Predicting Stormwater Runoff in Adelaide - What do we Know?, Living with Water Seminar, The Hydrological Society of South Australia, Adelaide.
Kemp, D.J. and Wright, C. (2014), Flood Hydrology in an Arid Area - Findings from the
Gammon Ranges Project. 2014 Hydrology and Water Resources Symposium, Perth. 24-27
February 2014, pp: 109-116.
Kjeldsen, T.R., Stewart, E.J., Packman, J.C., Folwell, S.S. and Bayliss, A.C. (2005),
Revitalisation of the FSR/FEH rainfall-runoff method. Joint Defra/EA Flood and Coastal
Erosion Risk Management R&D Programme. R&D Technical Report FD1913/TR.
Knighton, A.D. and Nanson, G.C. (1994), Flow transmission along an arid zone
anastomosing river, Cooper Creek, Australia. Hydrological Processes, 8: 137-154.
Kuczera, G., Lambert, M., Heneker, T., Jennings, S., Frost, A. and Coombes, P. (2006), Joint
probability and design storms at the crossroads, Australian Journal of Water Resources,
10(1), 63-79.
Lang, S.M., Hill, P.I. Scorah, M. and Stephens, D.A. (2015), Defining and calculating
continuing loss for flood estimation, Proceedings of the 2015 Hydrology and Water
Resources Symposium, Hobart.
Lee, J. and Lim, W.H. (1995), Green-Ampt Equation and the Soil Water Storage Depth in a
Watershed Model, Proceedings of the Second International Symposium on Urban
Stormwater Management, pp: 157 -161, 11-13 July, Melbourne.
Linsley, R.K., Kohler, M.A. and Paulhus, J.L. (1982), Hydrology for Engineers, 3rd edition, New York: McGraw-Hill.
Loveridge, M. (unpublished), Evaluation of Monte Carlo simulation for flood frequency curve
derivation using an event-based rainfall-runoff model, Doctoral Thesis, University of Western
Sydney.
Mag, V.S. and Mein, R.G. (1994), A flood forecasting procedure which combines the RORB
and TOPOG models, Proceedings of the Hydrology and Water Resources Symposium, I.E.
Nat. Conf. 94/15: 217-222.
Manley, R.E. (1974), Catchment models for river management, MSc Thesis, University of
Birmingham.
Mein, R.G. and Goyen, A.G. (1988), Urban Runoff. Transactions of the Institution of
Engineers, Australia: Civil Engineering, CE30(4), 225-238.
Mein, R.G. and Larson, C.L. (1973), Modeling infiltration during a steady rain, Water Resources Research, 9: 384-394.
Mein, R. G. and McMahon, T. A. (1982), Review of the role of process modelling in the
Australian Representative Basins Program, In Review of the Australian Representative
Basin Program. Rep. Basin Ser. Rep.4.
Mein, R.G. and O'Loughlin, E.M. (1991), A new approach to flood forecasting, Proceedings
of the Hydrology and Water Resources Symposium, I.E.Aust. Nat. Conf. Pub. No. 91/12, pp:
219-224, Perth.
Mein, R.G., Nandakumar, N. and Siriwardena, L. (1995), Estimation of initial loss from soil
moisture indices (Pilot Study), Working Document 95/1, Cooperative Research Centre for
Catchment Hydrology.
Melanen, M. and Laukkanen, R. (1981), Dependence of runoff coefficient on area type and
hydrological factors, Proceedings of the 2nd International Conference on Urban Storm
Drainage, Water Resources Publications, pp: 404-410, Littleton, Colorado, USA.
Moore, R.J. (1985), The probability-distributed principle and runoff production at point and
basin scales, Hydrological Sciences Journal, 30(2), 273-297.
Muncaster, S.H., Weinmann, P.E. and Boughton, W.C. (1999), The representation of loss in
continuous simulation models for design flood estimation, Proceedings of Water 99 Joint
Congress, Institution of Engineers, Australia, pp: 184-189, 6-8 July, Brisbane.
Nathan, R.J., Weinmann, P.E. and Hill, P.I. (2003), Use of a Monte-Carlo Simulation to
estimate the Expected Probability of large to extreme floods, Proceedings of the 28th
International Hydrology and Water Resources Symposium, pp: 1105-1112, 10-14 November,
Wollongong.
O'Loughlin, G. and Stack, R. (2014), DRAINS User Manual - a manual on the DRAINS
program for urban stormwater drainage system design and analysis, Available at: <http://
www.watercom.com.au/DRAINS%20Manual.pdf>.
Pearce, L.J. (2011), Regional runoff coefficients for summer and winter design flood events
in south-west Western Australia, Proceedings of the 24th IAHR World Congress - Balance
and Uncertainty, 26 June - 1 July, Brisbane, Australia.
Pearcey, M., Pettett, S., Cheng, S., and Knoesen, D. (2014), Estimation of RORB Kc
parameter for ungauged catchments in the Pilbara Region of Western Australia,
Proceedings of the 2014 Hydrology and Water Resources Symposium, 24 - 27 February,
Perth.
Phillips, B., Goyen, A., Thomson, R., Pathiraja, S. and Pomeroy, L. (2014), Australian
Rainfall and Runoff Revision Project 6: Loss models for catchment simulation - Urban
Losses Stage 2 Report, February.
Pui, A., Lall, A. and Sharma, A. (2009), How does the interdecadal pacific oscillation affect
design floods in Eastern Australia, Proceedings of H2009 - 32nd Hydrology and Water
Resources Symposium, November, pp: 266-274.
Rahman, A., Weinmann, P.E. and Mein, R.G. (2002), The use of probability-distributed initial
losses in design flood estimation, Australian Journal of Water Resources, 6(1), 17-29.
Rajendran, R., Mein, R.G. and Laurenson, E.M. (1982), A spatial model for the prediction of
losses on small rural catchments, Department of Civil Engineering, Monash University,
Australian Water Resources Council Technical Paper, No.75, Canberra.
Ren-Jun, Z. (1992), The Xinanjiang model applied in China, Journal of Hydrology, 135: 371-381.
Rigby, E.H. and Bannigan, D.J. (1996), The embedded design storm concept - a Critical
Review, Proceedings of the Hydrology and Water Resources Symposium 1996: Water and
the Environment, Preprints of Papers, National Conference Publication, (96/05), 453-459.
Riley, S.J. and Fanning, P.C. (1997), Spatial patterns of infiltration in suburban developments in the Hawkesbury-Nepean catchment, Science and Technology in the Environmental Management of the Hawkesbury-Nepean Catchment, Institution of Engineers, Australia, NCP 97/01, 10-11 July, Nepean.
Ruprecht, J.K. and Schofield, N.J. (1993), Infiltration characteristics of a complex lateritic soil
profile, Hydrological Processes, 7(1), 87-97.
Shuster, W.D., Bonta, J., Thurston, H., Warnemuende, E. and Smith, D.R. (2005), Impacts of
impervious surface on watershed hydrology: A review, Urban Water Journal, 2(4), 263-275.
Siriwardene, N.R., Cheung, B.P.M. and Perera, B.J.C. (2003), Estimation of soil infiltration
rates of urban catchments, Proceedings of the 28th International Hydrology and Water
Resources Symposium, Institution of Engineers, Australia, Wollongong.
Shukla, M.K., Lal, R. and Unkefer, P. (2003), Experimental validation of infiltration models for different land use and soil management systems, Journal of Soil Science, 168(3), 178-191.
Smith, A., Rahman, J., Baron-hay, S. and Shipman, D. (2016), A new web based water information service leveraging the Australian Water Resources Assessment Modelling System, January.
Stewart, B.J. and Boughton, W.C. (1983), Transmission losses in natural streambeds - a
Review, Proceedings of the Hydrology and Water Resources Symposium 1983, Institution of
Engineers, Australia, preprints of papers, pp: 226-230.
Stokes, R.A. (1989), Calculation file for Soil Water Model - Concept and theoretical basis of
soil water model for the south west of Western Australia, Unpublished Report, Water
Authority of W.A. Water Resources Directorate.
Tularam, G.A. and Ilahee, M. (2007), Initial loss estimates for tropical catchments of
Australia, Environmental Impact Assessment Review, 27(6), 493-504.
United States Army Corps of Engineers (USACE) (1998), Runoff from snowmelt,
engineering manual, 31 March, Washington: USACE.
Van Mullem, J.A., Woodward, D.E., Hawkins, R.H., and Hjelmfelt, A.T. Jr. (2002), Runoff
curve number method: beyond the handbook. In: Hydrologic Modeling for the 21st Century,
Proceedings of the Second Federal Interagency Hydrologic Modeling Conference, July 28 -
August 1, Las Vegas.
Walsh, M.A., Pilgrim, D.H. and Cordery, I. (1991), Initial losses for design flood estimation in
New South Wales, Proceedings of the International Hydrology and Water Resources
Symposium, I.E.Aust. Nat. Conf. Pub. (91/19), 283-288.
Ward, T.J., Hawkins, R.H., Van Mullem, J.A., Woodward. D.E. (2009), Curve number
hydrology: state of the practice, College Park: American Society of Civil Engineers.
Water and Rivers Commission (2003), SWMOD A rainfall loss model for calculating rainfall
excess - user manual (version 2.11), Prepared by Hydrology and Water Resources Branch
Resource Science Division, September.
Waugh, A.S. (1991), Design losses in flood estimation, Proceedings of the International
Hydrology and Water Resources Symposium, I.E.Aust. Nat. Conf. Pub. No.91/19, pp:
629-630, Perth.
William, J. (1994), Preliminary report on the use of the Green Ampt Infiltration Model for
measuring surface runoff on rural catchments, unpublished report 18 February 1994, CRC for
Catchment Hydrology.
Woodward, D.E., Hawkins, R.H. and Quan, Q.D. (2002), Curve number method: origins,
applications and limitations, Proceedings of Hydrologic Modeling for the 21st Century,
Second Federal Interagency Hydrologic Modeling Conference, July 28 - August 1, Las
Vegas.
Woolmington, E. and Burgess, J.S. (1983), Hedonistic water use and low-flow runoff in
Australia's national capital, Urban Ecology, 7(3), 215-227.
Zaman, S. and Ball, J.E. (1994), Simulation of small events in an urban catchment,
Proceedings of the 1994 Hydrology and Water Resources Conference - Water Down Under
'94, I.E.Aust. Nat. Conf. Pub. No.94/15, pp: 353 - 358, Adelaide.
Chapter 4. Baseflow Models
Peter Hill, Rachel Brown, Rory Nathan, Zuzanna Graszkiewicz
Chapter Status Final
Date last updated 14/5/2019
4.1. Introduction
Following a rainfall event, streamflow consists of two components distinguished by their
response timing. Water that enters a stream rapidly is termed quickflow and is sourced from
rainfall excess, once the loss (representing a range of processes such as interception,
infiltration and depression storage) has been satisfied. In contrast, water that takes longer to
reach the river is termed baseflow and is sourced primarily from groundwater discharge into
the river. The contribution of baseflow to streamflow varies between locations, depending on
regional hydrogeological conditions.
According to Nathan and McMahon (1990) and Brodie and Hostetler (2005), the baseflow
hydrograph has the following features:
• The low flow before the start of a flood event is assumed to consist entirely of baseflow;
• The rapid rise of the river during a rainfall event increases the volume of water held as bank
storage, which returns to the main streamflow after a delay and creates a baseflow peak
after the main flood peak;
• The recession of the baseflow peak continues after the recession of the streamflow peak;
• The baseflow hydrograph rejoins the total hydrograph as the quickflow ceases.
The majority of design flood estimation in Australia utilises flood event models that focus on
surface runoff processes. In such models the baseflow component is either ignored or
incorporated after the surface runoff has been estimated. Some continuous simulation
models and fully integrated surface water-groundwater models explicitly incorporate the
estimation of the baseflow component in their conceptualisation. However, given the
prevalence of flood event models for design flood estimation, the following guidance
concentrates on estimating a baseflow hydrograph to combine with an existing estimate of
surface runoff.
This document utilises the method developed in ARR Revision Project 7 to provide guidance
on how to estimate baseflow for design flood estimation.
• Book 5, Chapter 4, Section 2 describes those characteristics of the baseflow that need to
be estimated;
• Book 5, Chapter 4, Section 4 outlines the different approaches to estimating the baseflow
contribution to design hydrographs; and
Users should consider the characteristics of the particular catchment with respect to the
underlying assumptions that form the basis of the method outlined in this chapter. The
following approach is applicable across the vast majority of Australian catchments. However,
users should draw upon their understanding of the particular catchment of interest to make
an informed decision regarding the relevance of each step, considering the following issues:
• Significant farm dam development or other flow regulations in the catchment, which can
mask the contribution of baseflow.
• Design flood estimation for Rare to Extreme events. The method outlined below is only
relevant to events up to approximately the 1% AEP and guidance for baseflow contribution
to very rare and extreme events is provided in Book 8.
• Small catchments away from the main stem of the river network. The regional
estimates relate to a location on the main stem of the river and reflect the characteristics
of all the contributing catchments. The baseflow characteristics of individual tributaries
may be different from those in the larger contributing catchment.
• Estimation of historic events. The approach in this chapter has been developed for
application in design flood estimation, but not for the estimation of the baseflow
component of streamflow for individual historic events.
• Estimation of baseflow for extended periods. This guide is relevant for design events
only. Users should refer to the technical documents supporting ARR Revision Project 7 for
more general information on baseflow estimation for longer sequences.
If any of the above factors are deemed to be important for the catchment of interest, it is
recommended that the user considers the suitability of this approach for that catchment. It is
also important to interpret the outcomes in light of the underlying assumptions, and to draw
on local data to supplement the approach, or to consider alternative methods, where these
assumptions are not fully met.
Users should refer to the technical reports and analyses undertaken for ARR Revision
Project 7 (Murphy et al., 2009; Murphy et al., 2011b) for full details of the data analysis and
assumptions that form the basis of the method outlined in this chapter. Users may also like
to refer to these supporting documents and data to draw further local conclusions from the
significant body of work undertaken through the study.
For instance, for a 10% AEP, about two-thirds of Australian unregulated rural catchments
have baseflow contributions that are estimated to be between approximately 5% and 30% of
the peak flow. There are only about 5% of catchments that have a higher proportion of
baseflow; these tend to be located in south-west WA, south-east SA and some areas
in the tropics. In just less than a third of unregulated rural catchments, the baseflow is
estimated to be less than 5% of the peak (for a 10% AEP). Baseflow is typically insignificant
in urban catchments due to the degree of channel modifications and extent of impervious
areas.
In the context of design flood estimation, a surface runoff hydrograph will typically be
generated using a flood event model that excludes baseflow. It is therefore necessary to
estimate the baseflow contribution in order to represent the total event peak and volume,
and to generate a total streamflow hydrograph. This concept is represented in Figure 5.4.1,
which depicts the following features of an event hydrograph:
• Surface runoff peak - the maximum flow associated with the surface runoff event.
• Time to the surface runoff peak - measured from the start of the event to the surface runoff
peak.
• Volume of surface runoff for the event - event volume, represented by the area under the
hydrograph.
• Time to the baseflow peak - measured from the start of the event to the baseflow peak.
• Baseflow under the surface runoff peak - baseflow that occurs at the time of the surface
runoff peak.
Figure 5.4.2. Decision Tree for Method to Estimate Baseflow Contribution to Design Flood
• Figure 5.4.3 is provided to readily identify the relative magnitude of baseflow compared to
surface runoff for catchments across Australia for a 10% AEP event. Practitioners can
identify their catchment of interest within this data set and note the value of Baseflow Peak
Factor. This map allows catchments to be matched against the screening criteria identified below,
while Figure 5.4.5 provides a more detailed estimate of the baseflow. The data used to
generate these figures is available in Geographic Information System (GIS) format, to
assist locating catchments/points of interest.
• If available, streamflow data for the catchment should be reviewed. The magnitude of
flows between flood events relative to peaks can be used to determine whether baseflow
is likely to be an important component of the design flood hydrograph.
Where baseflow is expected to be a small component compared to the surface runoff, the
design flood peak can be adjusted up to approximately 5% to make an allowance for
baseflow. Catchments with a Baseflow Peak Factor less than 0.05 are considered suitable
for this approach. This reflects approximately 30% of the catchments mapped in
Figure 5.4.3.
It is suggested that catchments that have a baseflow contribution greater than approximately
5% of the surface runoff should explicitly incorporate the baseflow component into the
design flood, using a more rigorous approach.
Where baseflow is a very large component of the event (a contribution of more than
approximately 30% is suggested, reflected by a Baseflow Peak Factor greater than 0.3), the
baseflow contributions should be estimated using techniques that are suited to the nature
and availability of local data (e.g. Brodie and Hostetler (2005), Chapman and Maxwell
(1996), Nathan and McMahon (1990), and Ladson et al. (2013)). Approximately 5% of
Australian catchments, generally the areas of tropical north Australia, south-west Western
Australia and the south-east coastline of South Australia, fall into this category. The specific
approach of relevance will depend on local conditions and the user is guided to the above
references to determine the most appropriate baseflow estimation technique.
Where baseflow is between approximately 5% and 30% of the surface runoff (Baseflow
Peak Factor between 0.05 and 0.3), the approach outlined below is recommended. This
relates to approximately 65% of the catchments mapped in Figure 5.4.3.
Additionally, there are various activities that can impact upon the flow characteristics
associated with the baseflow. These activities include:
• Flow regulation from upstream reservoirs – reservoirs that release outflows that are
different to inflows will produce a low flow response that can be misinterpreted as
baseflow at downstream flow gauges.
• Catchment farm dams – a high concentration of farm dams can influence baseflow, but only
where the dams are located such that they intercept and store the flows arising from
long-term depletion of catchment storage.
• Major diversions – diversions for consumptive use such as irrigation channels, urban
diversions, etc. These diversions decrease low flows and hence appear to reduce
estimates of baseflow. Allowances can be made for those diversions where they are
metered.
• Urbanisation – in urban areas, features such as excess garden or sports field watering can
increase low flows during summer that appear similar to baseflow in streamflow data.
• Return flows – water can be returned to rivers from sewage treatment plants or from
industry, increasing low flows and appearing similar to baseflow.
Where present, these activities will influence the observed flow characteristics making it
difficult to identify and quantify baseflow. While it is possible to estimate baseflow in these
locations using the regional approach, the baseflow estimate will reflect the unregulated
baseflow conditions.
Figure 5.4.3. Preliminary Assessment of Baseflow Peak Factor for a 10% AEP Event
If the available streamflow data does not meet these criteria, this approach is not relevant and practitioners
are directed to the regional approach outlined in Book 5, Chapter 4, Section 5.
To estimate baseflow directly from available streamflow data, the following steps should be
followed:
Review the streamflow data and eliminate any poor quality data, as determined using the
quality codes for each time step.
Determine the resulting period of record available for analysis. If less than 10 years of
data is available, the regional approach in Book 5, Chapter 4, Section 5 may be more
appropriate for application. If more than approximately 10 years of data is available, the
following steps can be applied.
Extract a series of peak flows from the recorded streamflow data and undertake a flood
frequency analysis as described in Book 3, Chapter 2. It is recommended that as a
minimum, the 10% AEP event should be identified. If the streamflow data record is
suitable, identify events of similar magnitude to the design flood of interest.
Estimate the baseflow for flood events identified above. Literature, such as Nathan and
McMahon (1990), Chapman and Maxwell (1996) and Brodie and Hostetler (2005),
provides guidance on key features of the baseflow hydrograph. If the streamflow data is
suitable, the baseflow should be estimated for events of similar magnitude to the design
flood of interest.
If the above step has been applied to events of similar magnitude to the design flood of
interest, the estimated baseflow magnitude and volume can be used directly with the
design flood surface runoff hydrograph to generate the total streamflow estimate. Refer to
Book 5, Chapter 4, Section 5 for a description of how to generate the total streamflow
hydrograph. In this case, the key baseflow features, including timing, should be taken
from baseflow estimated above.
If the design events are outside of the range of recorded events it is necessary to scale
the baseflow contribution to reflect the AEP of interest. The method outlined in Book 5,
Chapter 4, Section 5 should be applied, with the key baseflow characteristics determined
from the recorded streamflow rather than the regional approach, as outlined in the
following section.
catchments within this range. Practitioners should be mindful of these constraints when
applying and interpreting the outcomes from this method.
The following three parameters are defined to characterise the contribution of baseflow to
design flood hydrographs:
1. Baseflow Peak Factor: This factor is applied to the estimated surface runoff peak flow to
give the value of peak baseflow for a 10% AEP event.
2. Baseflow Volume Factor: This factor is applied to the estimated surface runoff volume to
give the volume of the baseflow for a 10% AEP event.
3. Baseflow Under Peak Factor: This factor is applied to the estimated surface runoff peak
flow to give the baseflow at the time of the peak surface runoff and can be determined
from the Baseflow Peak Factor, such that the Baseflow Under Peak Factor = 0.7 x
Baseflow Peak Factor.
The Baseflow Peak Factor and Baseflow Volume Factor are presented in Figure 5.4.5 and
Figure 5.4.6, which cover the whole of Australia. It should be noted that the maps represent
the values for the total area upstream of the main stem of the river, rather than any smaller
sub-catchments. As baseflow characteristics may vary from the main stream, the estimation
of baseflow in subcatchments may require the approach to be supplemented with additional
local data or through an alternative approach, such as transposition from another location.
These factors provide information on baseflow contribution to design flood events for a 10%
AEP event. Table 5.4.1 shows the AEP scaling factors that should be applied to the 10%
AEP Baseflow Peak Factor and Baseflow Volume Factor to scale relevant factors to reflect
events of other AEPs.
Table 5.4.1. AEP Scaling Factors, FAEP, to be applied to the 10% AEP Baseflow Peak Factor
and the Baseflow Volume Factor to determine the corresponding factors for events of
various AEPs
EY AEP (%) Baseflow Peak Factor Baseflow Volume Factor
2 86.47 3.0 2.6
1 63.21 2.2 2.0
0.5 50 1.7 1.6
0.2 18.13 1.2 1.2
0.11 10 1.0 1.0
0.05 5 0.8 0.8
0.02 2 0.7 0.7
0.01 1 0.6 0.6
For events of AEPs not shown in Table 5.4.1, Figure 5.4.4 can be used to determine an
appropriate AEP factor. This is to be multiplied by the 10% AEP Baseflow Peak Factor or the
Baseflow Volume Factor as relevant, to determine the factor for other event magnitudes.
Guidance for baseflow contribution to Rare and Extreme Events is provided in Book 8.
Figure 5.4.4. AEP Factors, FAEP, to be applied to the 10% AEP Baseflow Volume Factor to
determine the Baseflow Volume Factor for events of various AEPs
This information is applied to the design flood event using the procedure outlined in the
relationships below that relate to the typical flood hydrograph in Figure 5.4.1.
1. Determine the Baseflow Peak Factor for a 10% AEP (RBPF,10%AEP) from Figure 5.4.5.
2. Determine the AEP factor, corresponding to the event AEP using Table 5.4.1 or
Figure 5.4.4. Scale the 10% AEP Baseflow Peak Factor appropriately to determine the
Baseflow Peak Factor for the event severity of interest.
3. Apply the Baseflow Peak Factor to the calculated peak surface runoff as in Equation
(5.4.1).
4. Calculate the timing of the baseflow peak using Equation (5.4.2). The time to the peak
surface runoff should be applied in units of hours from the start of the event.
To calculate the baseflow under the peak streamflow (Point B in Figure 5.4.1):
1. The Baseflow Peak Factor (RBPF) calculated for the appropriate AEP event as above
should be used in Equation (5.4.3) to calculate the Baseflow Under Peak Factor (RBUPF).
2. RBUPF should be used as in Equation (5.4.4) to calculate the baseflow under the peak
streamflow.
1. Calculate the baseflow under the streamflow peak for the appropriate AEP as above.
2. Add the baseflow under the streamflow peak calculated using Equation (5.4.4), to the
calculated peak surface runoff as in Equation (5.4.5).
1. Determine the Baseflow Volume Factor for a 10% AEP (RBVF,10%AEP) from Figure 5.4.5.
2. Determine the AEP factor corresponding to the AEP event using Table 5.4.1 or
Figure 5.4.4. Scale the 10% AEP Baseflow Volume Factor appropriately to determine the
Baseflow Volume Factor (RBVF) for the event.
3. Apply the Baseflow Volume Factor to the calculated surface runoff volume as in Equation
(5.4.6).
1. Calculate the baseflow volume for the event using the appropriate AEP factors.
2. The baseflow volume calculated using Equation (5.4.6) should be added to the calculated
surface runoff as in Equation (5.4.7).
This approach can be directly applied to estimate the baseflow contribution to any event
between a 2 EY and a 1% AEP.
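The steps above translate directly into a short calculation. The following Python sketch is illustrative only: the function and variable names are not from ARR, the factor values are assumed to have already been read from Figure 5.4.5, Figure 5.4.6 and Table 5.4.1 (or Figure 5.4.4), and the comments map each line to the numbered relationships described in the text (the timing relationship of Equation (5.4.2) is not reproduced here).

```python
def apply_regional_baseflow(q_peak_surface, v_surface, bpf_10pct, bvf_10pct,
                            f_aep_peak, f_aep_volume):
    """Illustrative sketch of the regional baseflow adjustment (names are not from ARR)."""
    # Scale the 10% AEP factors to the AEP of interest (Table 5.4.1 / Figure 5.4.4)
    r_bpf = bpf_10pct * f_aep_peak       # Baseflow Peak Factor for the event
    r_bvf = bvf_10pct * f_aep_volume     # Baseflow Volume Factor for the event
    r_bupf = 0.7 * r_bpf                 # Baseflow Under Peak Factor (Equation (5.4.3))

    q_baseflow_peak = r_bpf * q_peak_surface          # peak baseflow (Equation (5.4.1))
    q_base_under_peak = r_bupf * q_peak_surface       # baseflow at the surface runoff peak (Equation (5.4.4))
    q_peak_total = q_peak_surface + q_base_under_peak # total streamflow peak (Equation (5.4.5))
    v_baseflow = r_bvf * v_surface                    # baseflow volume (Equation (5.4.6))
    v_total = v_surface + v_baseflow                  # total event volume (Equation (5.4.7))
    return q_baseflow_peak, q_base_under_peak, q_peak_total, v_baseflow, v_total
```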
Figure 5.4.7. Total flow hydrograph generation approach, where (a) the data values
calculated through the baseflow estimation process are plotted; (b) linear interpolation
between the baseflow data points and matching the area under the curve to the baseflow
event volume is used to estimate the baseflow time series, which is plotted on the
hydrograph in green; and (c) the total streamflow time series is generated by summing the
surface runoff and baseflow time series values, with the streamflow hydrograph plotted in
dark blue.
4.6. Example
The process described in Book 5, Chapter 4, Section 5 is worked through in a number of
different case study examples.
The 10% AEP event is of interest for this case study. The reviewed flow data was used to
identify flood peaks, in particular the 10% AEP event. A comparable event was identified in
the record in February 1999. The event hydrograph was plotted and key characteristics of
the baseflow were identified manually (Figure 5.4.8; manually identified baseflow features
shown by green points). Straight lines were used to join the key baseflow features, to
estimate the baseflow time series. In this instance, the baseflow peak occurs 18 hours after
the peak of the streamflow.
Figure 5.4.8. Streamflow Hydrograph Approximating the 10% AEP Event for the North
Maroochy River at Eumundi
The surface runoff hydrograph for the design flood event was generated using a flood event
model with a critical duration of 18 hours (Figure 5.4.9, and details in Table 5.4.2).
Figure 5.4.9. Surface Runoff Hydrograph for the 10% AEP Design Flood event at Eumundi
Table 5.4.2. Key Surface Runoff Characteristics for the 10% AEP Design Flood Event at
Eumundi
The baseflow series estimated above was used directly to approximate the baseflow for the
10% AEP design flood event. The baseflow at the time of the streamflow peak (from
Figure 5.4.8) was aligned with the Surface Runoff Peak in the design hydrograph
(Figure 5.4.9), with the rest of the baseflow hydrograph used to guide the behaviour
through the duration of the design event.
Figure 5.4.10. Surface Runoff, Baseflow and Total Streamflow Hydrographs for the 10% AEP
Event at Eumundi
Figure 5.4.11. Surface Runoff Hydrograph for the 1% AEP Design Flood Event at Dirk Brook
Table 5.4.3. Key Surface Runoff Characteristics for the 1% AEP Design Flood Event at Dirk
Brook
Aligning the catchment boundary shape file with the spatial data from Figure 5.4.5 allows the
Baseflow Peak Factor and Baseflow Volume Factor for the 10% event to be extracted for the
catchment area directly:
The scaling factor for the 1% AEP event is sourced from Table 5.4.1, with a value of 0.6 for
both the peak and volume calculations. Using the relationships described earlier, the final
Baseflow Peak Factor, Baseflow Volume Factor and Baseflow Under Peak Factor for
application are outlined in Table 5.4.4. These values are applied to calculate the baseflow
and total streamflow characteristics in Table 5.4.5, and plotted in Figure 5.4.12.
Table 5.4.4. Calculation of Baseflow Factors for the 1% AEP Design Event for the Dirk Brook
Table 5.4.5. Calculation of Baseflow and Total Streamflow Characteristics for the 1% AEP
Event for the Dirk Brook Catchment
Baseflow under peak streamflow (Equation (5.4.4)):
Q_baseflow under peak streamflow = R_BUPF × Q_peak surface runoff = 1.9 m³/s
Total streamflow peak (Equation (5.4.5)):
Q_peak streamflow = Q_peak surface runoff + Q_baseflow under peak streamflow = 23.9 + 1.9 = 25.8 m³/s
Baseflow volume (Equation (5.4.6)):
V_baseflow = R_BVF × V_surface runoff = 0.66 × 1.25 × 10⁶ ≈ 0.83 × 10⁶
Figure 5.4.12. Surface Runoff, Baseflow and Total Streamflow Hydrographs for the 1% AEP
Event at Dirk Brook
More than 230 suitable catchments across Australia were identified for analysis for ARR
Revision Project 7, and hourly streamflow data was collated for each of these locations. A
baseflow time series was generated from each flow record using the Lyne and Hollick digital
filter, modified to suit hourly streamflow data. The top 4N flood events were identified in the
hourly time series data for each catchment, generating a data set of more than 30,000 flood
events across the 236 catchments. For each of these events, the magnitude and timing of
the total streamflow peak and baseflow peak were identified. The time to these peaks was
calculated from the start of the event. At each location, the average time to the streamflow
and baseflow peaks were then calculated based on the 4N events.
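For reference, the basic form of the Lyne and Hollick filter is simple to code. The sketch below is a simplified, illustrative version only: the filter parameter, the number of passes and the treatment of the start of the record are assumptions of this example, and the specific modification used in Project 7 for hourly data is not reproduced here. The standard approach of Ladson et al. (2013) should be consulted before applying the filter in practice.

```python
def lyne_hollick_baseflow(flow, alpha=0.98, passes=3):
    """Simplified Lyne and Hollick baseflow separation (illustrative only).

    flow   : streamflow values at a fixed time step
    alpha  : filter parameter (assumed default; see Ladson et al. (2013))
    passes : number of passes, applied in alternating directions
    """
    series = list(flow)
    for p in range(passes):
        quickflow = 0.0
        baseflow = [series[0]]                    # assume the record starts as baseflow
        for k in range(1, len(series)):
            # recursive high-pass filter applied to the flow signal
            quickflow = alpha * quickflow + 0.5 * (1.0 + alpha) * (series[k] - series[k - 1])
            qb = series[k] - max(quickflow, 0.0)  # the remainder is treated as baseflow
            baseflow.append(min(max(qb, 0.0), series[k]))
        # reverse between passes so that the next pass runs backwards through the record
        series = baseflow[::-1] if p < passes - 1 else baseflow
    return series
```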
For the purposes of this assessment, it was assumed that the total streamflow and surface
runoff peaks would coincide. This is considered a reasonable assumption since surface
runoff is generally the main component of the total streamflow hydrograph.
Analysis of the average time to peak data identified a strong relationship between the time to
the surface runoff (and streamflow) and baseflow peaks, as presented in Figure 5.4.13. This
relationship provides a direct calculation from which to estimate the timing of the baseflow
peak, based upon knowledge of the surface runoff event generated using a flood event
model. That is, the time to the baseflow peak (in hours from the start of the event) can be
calculated as:
Figure 5.4.13. Comparison of Average Time to Surface Runoff Peak and Time to Baseflow
Peak, Based on Analysis of more than 30,000 Flood Events from Catchments across
Australia.
4.8. References
Babister, M., Trim, A., Testoni, I. and Retallick, M. (2016), The Australian Rainfall and Runoff
Datahub, Proceedings of the 37th Hydrology and Water Resources Symposium, Queenstown, NZ.
Brodie, R.S. and Hostetler, S. (2005), A review of techniques for analysing baseflow from
stream hydrographs, Proceedings of the NZHS-IAH-NZSSS 2005 conference, 28 November
- 2 December, Auckland, New Zealand.
Chapman, T.G. and Maxwell, A.I. (1996), Baseflow separation - Comparison of numerical
methods with tracer experiments, Proceedings of the 23rd Hydrology and Water Resources
Symposium, 21-24 May, Hobart, Australia.
Graszkiewicz, Z., Murphy, R.E., Hill, P.I., Nathan, R.J. (2011), Review of techniques for
estimating the contribution of baseflow to flood hydrographs, Proceedings of the 34th IAHR
World Congress, 26 June - 1 July Brisbane, Australia.
Kinkela, K., Pearce, L.J. (2014), Assessment of baseflow seasonality and application to
design flood events in southwest Western Australia, Australian Journal of Water Resources,
18(1), 27-38.
Ladson, A.R., Brown, R., Neal, B. and Nathan, R. (2013), A standard approach to baseflow
separation using the Lyne and Hollick filter. Aus J Water Resour., 17(1), 25-33.
Murphy, R., Graszkiewicz, Z., Hill, P., Neal, B., Nathan, R., and Ladson, T. (2009), Project 7:
Baseflow for catchment simulation, Phase 1 - selection of baseflow separation approach.
Report prepared for the Australian Rainfall and Runoff Technical Committee by Sinclair
Knight Merz, AR&R Report Number P7/S1/004, ISBN 978-085825-9218.
Murphy, R., Graszkiewicz, Z., Hill, P. and Nathan R. (2010), Project 7: Baseflow for
catchment simulation, Data collection and catchment characteristics, Report prepared for
Engineers Australia. Available at http://arr.ga.gov.au/downloads-and-software/revision-
project-reports
Murphy, R.E., Graszkiewicz, Z., Hill, P.I., Neal, B.P., Nathan, R.J. (2011a), Predicting
baseflow contributions to design flood events in Australia, Proceedings of the 34th IAHR
World Congress, 26 June - 1 July , Brisbane, Australia.
Murphy, R., Graszkiewicz, Z., Hill, P., Neal, B., and Nathan, R. (2011b), Project 7: Baseflow
for catchment simulation, Phase 2 - development of baseflow estimation approach. Report
prepared for the Australian Rainfall and Runoff Technical Committee by Sinclair Knight Merz,
ARR Report Number P7/S2/017, ISBN 978-0-85825-916-4.
Nathan, R.J. and McMahon, T.A. (1990), Evaluation of automated techniques for base flow
and recession analyses, Water Resources Research, 26(7), 1465-1473.
Chapter 5. Flood Routing Principles
James Ball, Erwin Weinmann, Michael Boyd
5.1. Introduction
This chapter deals with the modelling of how the direct runoff and baseflow contributions
from different parts of the catchment (derived from the models discussed in Book 5, Chapter
3 and Book 5, Chapter 4) are combined and modified on their movement through the
catchment to form a hydrograph at points of interest, both at the catchment outlet and inside
the catchment. In Book 5, Chapter 5, Section 2 a number of fundamental concepts relevant
to flood routing are introduced. Book 5, Chapter 5, Section 3 then deals with the hydrologic
principles and methods of storage routing applied in the most widely used flood hydrograph
estimation models. In Book 5, Chapter 5, Section 4 the storage routing principles are
expanded from linear to non-linear models. Finally, Book 5, Chapter 5, Section 5 introduces
a range of hydraulic flood routing approaches that are based on various forms of the
unsteady flow equations.
The focus of the descriptions of flood routing approaches and methods in this chapter is on
explaining the background, merits and limitations of the different methods employed in flood
hydrograph estimation models. The details of how these flood routing principles and
methods are applied in flood hydrograph estimation models are covered in Book 5, Chapter
6. For more details on hydraulic analysis and modelling approaches refer to Book 7.
This chapter focuses on rural catchments. Similar routing approaches also apply for urban
catchments, but specific issues in urban hydrology are described in detail in Book 9.
• Stream channels;
• Stream banks;
• Floodplains; and
These forms of storage are distributed in nature – the storage is spread along these
catchment elements. In flood hydrograph estimation modelling the different forms of storage
do not need to be represented separately but can be modelled as combined (conceptual)
storage elements.
In addition to the distributed forms of storage, a catchment may also contain lakes,
reservoirs or flood detention basins where the storage occurs in a more concentrated form
and is represented in models by concentrated storage elements. For these concentrated
storage elements a more direct relationship exists between inflow and outflow than for
distributed forms of storage, as is explained further in Book 5, Chapter 5, Section 3. It is also
possible to use concentrated storage elements as a simplified representation of distributed
forms of storage (Book 5, Chapter 5, Section 4).
The effects of the different forms of catchment storage on the transformation of flow inputs
are twofold (Figure 5.5.1):
i. Translation of the hydrograph peak and other ordinates forward in time or, expressed
differently, delaying the arrival of the hydrograph peak at a downstream location; and
ii. Attenuation or flattening of the hydrograph as it moves along the stream network; this
results in a reduction of the peak flow but also in diffusion (spreading out) of the
hydrograph, thus extending its duration.
The effects of storage can be modelled through the formulation of the continuity equation for
a specific catchment element and over a time interval Δt:
\[ I_v = O_v + \Delta S \tag{5.5.1} \]
where Iv is the volume of inflow to the catchment element, Ov is the volume of outflow from
the element, and ΔS is the change in the storage during the time interval. The inflow volume
(Iv) may represent runoff and baseflow inputs or outflow from an upstream element. When
ΔS is positive, the inflow volume to the element is greater than the outflow volume and
therefore the storage within the element will increase. Conversely, when ΔS is negative, the
outflow volume is greater than the inflow volume and the storage in the element will
decrease.
Due to the principle of mass conservation, the total volumes of inflow to and outflow from the
catchment element must be equal over the full duration of the event. In many situations, particularly in the arid and semi-
arid regions of the country, flow in a channel may infiltrate into the banks or bed of the
channel; in other words, transmission losses will occur. In these situations, the principle of
mass conservation remains, with the volume of the inflow being equal to the volume of the
outflow hydrograph plus the volume of the transmission loss.
The application of the continuity equation in the form above refers to temporary storage or
detention storage in different catchment elements. In contrast to this form of storage, where
all water is released again during the flood event, there may also be catchment elements
with retention storage (e.g. reservoirs with a flood storage compartment), where water is
retained more permanently and released from the storage mostly after the flood event by
controlled releases or, more gradually, through evapotranspiration and seepage losses.
These fundamental flood routing concepts form the basis of the runoff-routing approaches to
flood hydrograph estimation discussed in Book 5, Chapter 6, Section 4. Different flood
hydrograph estimation modelling systems use flood routing approaches of different
complexity, with correspondingly different data requirements. The following Book 5, Chapter
5, Section 3 to Book 5, Chapter 5, Section 5 explain in more detail the theoretical basis and
practical application of these different flood routing approaches.
In piped drainage systems the travel time through a pipe segment can be directly determined
from the flow velocity through the pipe.
In channels or natural streams the travel time T of a flood hydrograph through a routing
reach of length Δx is related to the kinematic wave speed ck:
\[ T = \frac{\Delta x}{c_k} \tag{5.5.2} \]
The travel time or lag is thus directly proportional to the length of the channel reach. For a
wide rectangular channel and constant Manning’s n, the kinematic wave speed can be
approximated as 1.67 times the average flow velocity through the routing reach.
For practical flood routing applications, estimates of the lag time are generally based on
systematic analysis of observed flood peak travel times and their variation with flood
magnitude. Wong and Laurenson (1983) have examined the variation of the wave speed
(reach length divided by travel time of flood peak) with flow magnitude in a number of
Australian river reaches. They found that for in-bank flow the wave speed typically increases
with flow magnitude but reaches a maximum before bank-full flow and then reduces rapidly,
most likely because the effects of bank vegetation become more pronounced. With fully
developed floodplain flow the wave speed increases again. This means that travel time
estimates from smaller floods cannot be directly applied to estimate travel times for larger
floods and vice versa.
A variation of the simple hydrograph translation approach that also takes into account
attenuation effects is the ‘Lag and Route’ method in which the hydrograph is first translated
by the appropriate lag time and then routed through a concentrated linear storage (Fread,
1985).
Following some basic background on storage routing principles introduced in Section 5.4.1,
the main methods in practical use are described in Section 5.4.2 to Section 5.4.4. All the
storage routing methods described in these sections are based on a linear relationship
between storage and discharge. Some important limitations of the storage routing methods
are discussed in Book 5, Chapter 5, Section 4. The non-linear storage routing methods
described in Book 5, Chapter 5, Section 5 can overcome some of these limitations.
The basis of all storage routing methods is the continuity equation:

\[ I - O = \frac{dS}{dt} \tag{5.5.3} \]

where I and O respectively are the average rates of inflow and outflow and dS is the change
in storage during the time interval dt. Multiplication of Equation (5.5.3) by the time interval dt
yields the continuity equation expressed in terms of volumes:
INFLOW ‐ OUTFLOW = CHANGE OF STORAGE (5.5.4)
It is important to note that only the change in storage is considered, rather than the total
storage volume; this means that the datum used for the determination of storage volumes is
not important as it does not influence the routing calculations.
Storage routing methods do not use a momentum equation (see Book 6, Chapter 2, Section
8) but can reflect the conservation of momentum (dynamic effects) through appropriate
selection of their parameters (Koussis, 2009).
For a concentrated storage, the relationship between storage (or water surface elevation)
and outflow is assumed to be a single-valued, time-invariant function (ie. a function not
subject to hysteresis). These relationships imply that for
a given stage (water surface elevation) the outflow is unique and independent of how that
stage is developed. Reservoirs or systems with horizontal water surfaces have relationships
of this type. For these concentrated storage systems, the peak outflow from the reservoir
occurs when the outflow hydrograph intersects the recession limb of the inflow hydrograph,
as illustrated in Figure 5.5.2.
The suitability of the assumption of a horizontal water surface during a flood event should be
considered when level pool storage routing techniques are applied. If backwater effects
create a ‘wedge storage’ effect (similar to the wedge storage discussed in Book 5, Chapter
5, Section 4) then it might be necessary to develop a storage-discharge relationship for the
reservoir where storage depends not only on outflow but also on inflow.
\[ \frac{1}{2}\left(I_1 + I_2\right)\Delta t - \frac{1}{2}\left(O_1 + O_2\right)\Delta t = S_2 - S_1 \tag{5.5.5} \]
where Δt is the time increment used for the calculations, and subscripts 1 and 2 refer to the
start and end, respectively, of the time period being considered. All variables with the
subscript 1 are known either from the initial conditions or from previous calculations. In
addition, the inflow at the end of the time period (I2) is known. Hence, only S2 and O2 (ie. the
storage and the outflow at the end of the time period) are unknown.
The relationship between storage in a reservoir or detention pond and discharge from it
through spillways and outlets is generally highly nonlinear. This means that Equation (5.5.5)
cannot be solved analytically but requires a numerical solution method (or traditionally a
graphical solution technique).
There are a large number of alternative numerical and graphical techniques for solving
Equation (5.5.5); some of these alternatives are presented by Henderson (1966) and
Bedient et al. (2008). Possibly, the most commonly used method is the Modified Puls
method. The basis of this method is Equation (5.5.5) and the storage indication curve given
by:
\[ \frac{2S}{\Delta t} + O \quad \text{vs.} \quad O \tag{5.5.6} \]

\[ \frac{2S_2}{\Delta t} + O_2 = I_1 + I_2 + \frac{2S_1}{\Delta t} - O_1 \tag{5.5.7} \]
In this equation, all of the known parameters are on the right hand side of the equation while
all of the unknown parameters are on the left hand side of the equation. As the value of the
right hand side of Equation (5.5.7) is known, Equation (5.5.6) can be used to determine
values for S2 and O2. Calculations then proceed to the next time step.
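As an illustration, the Modified Puls procedure might be coded as below once the reservoir's storage-discharge relationship has been tabulated. The interpolation method, the initial conditions and the variable names are assumptions of this sketch rather than part of the method description above.

```python
import numpy as np

def modified_puls(inflow, dt, storages, outflows):
    """Level pool (Modified Puls) routing sketch.

    inflow   : inflow hydrograph ordinates at spacing dt (m^3/s)
    dt       : time step (s)
    storages : tabulated reservoir storage values S (m^3)
    outflows : outflow O (m^3/s) corresponding to each tabulated storage
    """
    storages = np.asarray(storages, dtype=float)
    outflows = np.asarray(outflows, dtype=float)
    indication = 2.0 * storages / dt + outflows        # Equation (5.5.6): 2S/dt + O versus O

    O = np.zeros(len(inflow))
    S = np.zeros(len(inflow))
    O[0], S[0] = outflows[0], storages[0]              # assumed initial condition
    for n in range(len(inflow) - 1):
        # right-hand side of Equation (5.5.7), fully known from the previous step
        rhs = inflow[n] + inflow[n + 1] + 2.0 * S[n] / dt - O[n]
        O[n + 1] = np.interp(rhs, indication, outflows)   # read O2 off the indication curve
        S[n + 1] = 0.5 * dt * (rhs - O[n + 1])            # back-calculate S2 from 2S2/dt + O2
    return O
```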
An alternative approach was presented by Henderson (1966) which has the advantage of
being self-correcting; in other words, an error in estimating flows at one time period will not
flow into subsequent time periods. The approach is based on using a variable N defined by:
\[ N = \frac{S}{\Delta t} + \frac{O}{2} \tag{5.5.8} \]
Substituting this variable into Equation (5.5.5), after rearranging, results in:
\[ N_2 = N_1 + \frac{1}{2}\left(I_1 + I_2\right) - O_1 \tag{5.5.9} \]
In this form, all the unknown variables in Equation (5.5.9) are located on the left-hand side of
the equation and all the known variables are found on the right-hand side. Equation (5.5.9),
therefore, can be solved incrementally for values of N2 which, with Equation (5.5.8), enables
prediction of the unknown outflow rate (O2).
\[ S_2 = S_1 + \Delta t \left( I_1 - O_1 \right) \tag{5.5.10} \]

where the outflow O is a well-defined function of the storage content S. This explicit
numerical scheme is stable and accurate if the computational time step Δt is chosen
sufficiently small (significantly smaller than the time steps used to define the inflow
hydrograph).
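A minimal sketch of this explicit scheme is shown below, assuming the outflow is available as a function of storage; the function name, the starting storage and the non-negativity guard are choices of the example.

```python
def euler_storage_routing(inflow, dt, outflow_from_storage, s0=0.0):
    """Explicit (Euler) storage routing per Equation (5.5.10): S2 = S1 + dt*(I1 - O1)."""
    S = s0
    O = outflow_from_storage(S)
    outflow = [O]
    for I in inflow[:-1]:
        S = S + dt * (I - O)                     # update storage over the time step
        O = outflow_from_storage(max(S, 0.0))    # outflow from the (assumed) S-O relation
        outflow.append(O)
    return outflow
```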
The general purpose runoff routing modelling systems described in Book 5, Chapter 6,
Section 4 incorporate options for routing through reservoirs and detention basins as special
cases. Generally the routing routines applied allow for a range of different non-linear
formulations of the storage-discharge relationship.
An alternative form of the governing equation for storage routing is given by Fenton (1992),
based on expressing both storage content and outflow as a function of h, the water surface
level in the reservoir (or the head above the spillway crest). The storage increment ΔS can
be expressed as the product of the reservoir surface area A and level increment Δh. The
outflow O is then also defined as a function of h, and in cases where the outflow depends on
operational decisions (e.g. for gated spillways), also as a function of time. Numerical solution
methods discussed by Fenton (1992) range from a first order approximation by Euler’s
method to second order and higher order Runge-Kutta methods.
Boyd et al. (1989) and Zoppou (1999) showed that the following centred explicit finite
difference scheme produces stable estimates of the inflow hydrograph without the need for
any filtering or smoothing of the calculated hydrograph:
\[ I(t) = O(t) + \frac{S(t + \Delta t) - S(t - \Delta t)}{2\Delta t} \tag{5.5.11} \]
However, as demonstrated by Boyd et al. (1989), some oscillations may still be introduced if
the time step selected is too small, requiring the application of a simple smoothing algorithm
to the calculated hydrograph ordinates.
Despite this apparent simplicity, the Muskingum Method, with appropriate selection of its
parameters, can be shown to be equivalent to the solution of the convective diffusion
equation, the simplest physically-grounded flood routing model (Koussis, 2009). There are
many papers in the technical literature discussing its strengths and limitations, as well as
proposed enhancements to the classical Muskingum Method. Among these is the classical
paper by Cunge (1969) which led to the development of the now widely used Muskingum-
Cunge flood routing procedure (Book 5, Chapter 5, Section 4).
Using this concept, it is common to subdivide the storage within the control volume into
prism and wedge storage. These two conceptual storages are schematically illustrated in
Figure 5.5.3. The prism storage is formed by a volume of constant cross-section along the
length of the prismatic section which is dependent only on the outflow. Wedge storage is
dependent on the difference between the inflow and the outflow from the control volume.
During the rising limb of the hydrograph, inflow will exceed the outflow and the wedge
storage will be positive. Similarly, during the falling limb of the hydrograph outflow will
exceed the inflow and the wedge storage will be negative.
Figure 5.5.3. Prism Storage and Wedge Storage in a River Reach on the Rising Limb of the
Hydrograph
The concepts of prism storage and wedge storage can also be applied to non-prismatic
natural channels (rivers, streams and floodplains) with prism storage representing uniform
flow conditions in the irregular channel.
Assuming a linear relationship between the storage and outflow from the control volume, the
prism storage can be shown to be equal to KO while the wedge storage will be KX(I − O).
Adding the two components gives the total storage:

\[ S = KO + KX\left(I - O\right) \tag{5.5.12} \]

which can be rearranged as:

\[ S = K\left[XI + \left(1 - X\right)O\right] \tag{5.5.13} \]
This is the standard form of the storage-discharge relationship used with the Muskingum
Method.
The two coefficients, X and K, can be related to physical characteristics of the routing
element. The ‘weighting coefficient’ X depends on the shape of the wedge storage and K
reflects the travel time of the flood wave through the routing element (given by the time lag
between the centroids of the inflow and the outflow hydrographs). The value of X varies from
0 for a reservoir type storage to 0.5 for a full wedge (or fully distributed storage). When X is
equal to 0, there is no wedge and, hence, the inflow has no influence on the storage volume;
this being the implicit assumption made with level pool routing. In this case the Muskingum
equation reduces to S = KQ, the storage-discharge (S-Q) relationship for a fully
concentrated storage. In most natural streams, X is approximately 0.2 but can vary from 0 to
0.3. A value of X equal to 0.5 corresponds to fully distributed storage, where the hydrograph
is translated with little attenuation. Great accuracy in determining the value of X is not
necessary as the predicted hydrograph is relatively insensitive to the value of this parameter.
The storage coefficient (K) has dimensions of time and represents the average travel time of
the flood wave through the reach; this time can be estimated by considering the centroids of
the inflow and outflow hydrographs. The relationship of K with the physical characteristics of
the routing element is further discussed in Book 5, Chapter 5, Section 4.
The Muskingum routing equation is derived by combining the finite difference form of the
continuity equation:

\[ \frac{I_n + I_{n+1}}{2} - \frac{O_n + O_{n+1}}{2} = \frac{S_{n+1} - S_n}{\Delta t} \tag{5.5.14} \]

together with Equation (5.5.13) expressed for times t_n and t_{n+1} respectively as:

\[ S_n = K\left[X I_n + \left(1 - X\right) O_n\right] \tag{5.5.15} \]

\[ S_{n+1} = K\left[X I_{n+1} + \left(1 - X\right) O_{n+1}\right] \tag{5.5.16} \]

where Δt is the time increment between times t_n and t_{n+1}. Subtracting Equation (5.5.15)
from Equation (5.5.16) gives the change in storage over time Δt as:

\[ S_{n+1} - S_n = K\left[X\left(I_{n+1} - I_n\right) + \left(1 - X\right)\left(O_{n+1} - O_n\right)\right] \tag{5.5.17} \]

Combining Equation (5.5.17) with Equation (5.5.14) results in the routing equation for the
Muskingum Method, which is usually expressed as:

\[ O_{n+1} = C_1 I_{n+1} + C_2 I_n + C_3 O_n \tag{5.5.18} \]

where the Muskingum coefficients (C1, C2 and C3) are given by:
\[ C_1 = \frac{\Delta t - 2KX}{2K\left(1 - X\right) + \Delta t}, \qquad C_2 = \frac{\Delta t + 2KX}{2K\left(1 - X\right) + \Delta t}, \qquad C_3 = \frac{2K\left(1 - X\right) - \Delta t}{2K\left(1 - X\right) + \Delta t} \tag{5.5.19} \]
It should be noted that summation of the Muskingum coefficients should give a value of unity
(C1 + C2 + C3 = 1). This provides an easy and quick check that the coefficient values have
been calculated correctly.
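For illustration, the routing equation and coefficients above (Equations (5.5.18) and (5.5.19)) might be coded as follows; the assumption of an initial outflow equal to the first inflow ordinate is a choice of the example.

```python
def muskingum_route(inflow, K, X, dt):
    """Classical Muskingum channel routing (Equations (5.5.18) and (5.5.19))."""
    denom = 2.0 * K * (1.0 - X) + dt
    c1 = (dt - 2.0 * K * X) / denom
    c2 = (dt + 2.0 * K * X) / denom
    c3 = (2.0 * K * (1.0 - X) - dt) / denom
    assert abs(c1 + c2 + c3 - 1.0) < 1e-9        # coefficients must sum to unity

    outflow = [inflow[0]]                        # assume an initial steady state
    for n in range(len(inflow) - 1):
        outflow.append(c1 * inflow[n + 1] + c2 * inflow[n] + c3 * outflow[n])
    return outflow
```

For parameters like those in the Werribee River example below (K = 4.64 hours, X = 0.25), keeping the routing time step at or above 2KX ≈ 2.3 hours helps avoid the 'initial dip' noted in the example and in the limitations discussed later in this chapter.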
Figure 5.5.4 shows the inflow hydrograph and the observed and calculated outflow
hydrographs. The inflow hydrograph at Melton Reservoir (Column 1) has a peak flow of
420 m3/s at 2.00 am on Day 2, while the calculated outflow peak at the end of the 20
km reach (Column 7) is 357 m3/s occurring at 4.00 am on the same day. This is close
to the actual (observed) peak flow of 356 m3/s (Column 6).
The calculated hydrograph has a small ‘initial dip’ (negative flow values) at 16.00 hours
on Day 1. Such a dip results for values of X > 0 if the time step is shorter than the
travel time through the reach (in other words, the outflow is calculated before the
change in inflow has travelled through the reach).
Figure 5.5.4. Werribee River Example – Inflow Hydrograph and Observed and
Calculated Outflow Hydrographs
Figure 5.5.5 illustrates the impact of changing the routing parameter X from the
optimum value of X = 0.25 to X = 0 (concentrated or reservoir-type storage) and X =
0.5 (fully distributed storage). It can be seen that a concentrated storage results in
greater attenuation, with the peak of the outflow hydrograph on the falling limb of the
inflow hydrograph, while fully distributed storage results in almost pure translation of
the inflow hydrograph. With X = 0.5, there is a very noticeable initial dip in the outflow
hydrograph.
An alternative set of routing coefficients, based on the differential form of the continuity
equation, was proposed by Nash (1959):

\[ C_1 = 1 - \frac{K\left(1 - \alpha\right)}{\Delta t}, \qquad C_2 = \frac{K\left(1 - \alpha\right)}{\Delta t} - \alpha, \qquad C_3 = \alpha, \qquad \alpha = e^{-\Delta t / \left[K\left(1 - X\right)\right]} \tag{5.5.20} \]
Pilgrim (1987) suggests that these coefficients are more accurate than the classical
coefficients. The basis of this suggestion is that the coefficients proposed by Nash (1959) do
not require the ratio Δt/K to be small and, furthermore, the coefficients are not based on the
finite difference approximation to the continuity equation (Equation (5.5.14)) but rather on the
differential form of the continuity equation.
When Δt/K is small, the two alternative estimates of the routing coefficients should converge.
Given that current approaches to implementation of any flood routing approach are based on
computer applications, the historical need for large Δt and hence large ratios of Δt/K due to
the use of hand calculations is no longer relevant. Therefore, those applying Muskingum
techniques within computerised applications should not notice any difference between use of
the classical and Nash formulations of the coefficients, so long as an appropriately short time
step is adopted for the simulations.
For the Werribee River example, replacing the standard Muskingum coefficients with Nash
coefficients results in an outflow peak of 353 m3/s, a difference of only about 1% from the
original result.
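The convergence of the two formulations is easy to check numerically. The sketch below evaluates both the classical coefficients (Equation (5.5.19)) and the Nash coefficients (Equation (5.5.20)) for a Werribee-like parameter set; the time steps trialled are arbitrary choices of this example.

```python
import math

def classical_coefficients(K, X, dt):
    """Classical Muskingum coefficients, Equation (5.5.19); K and dt in the same time units."""
    denom = 2.0 * K * (1.0 - X) + dt
    return ((dt - 2.0 * K * X) / denom,
            (dt + 2.0 * K * X) / denom,
            (2.0 * K * (1.0 - X) - dt) / denom)

def nash_coefficients(K, X, dt):
    """Nash (1959) coefficients, Equation (5.5.20)."""
    alpha = math.exp(-dt / (K * (1.0 - X)))
    c1 = 1.0 - K * (1.0 - alpha) / dt
    c2 = K * (1.0 - alpha) / dt - alpha
    return c1, c2, alpha            # c3 = alpha

# K = 4.64 h, X = 0.25; compare the two sets for a short and a long time step
for dt in (1.0, 6.0):
    print(dt, classical_coefficients(4.64, 0.25, dt), nash_coefficients(4.64, 0.25, dt))
```

As the text notes, the two sets differ noticeably only when the ratio Δt/K is large.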
There are a number of methods by which the recorded flood information can be used to
derive values for K and X. These vary from graphical approaches as outlined below to
optimisation approaches as presented by Stephenson (1979) and Chang et al. (1983).
A classical graphical method is based on combining Equation (5.5.14) and Equation (5.5.17)
which, after rearrangement, results in:
\[ K = \frac{0.5\,\Delta t\left[\left(I_{n+1} + I_n\right) - \left(O_{n+1} + O_n\right)\right]}{X\left(I_{n+1} - I_n\right) + \left(1 - X\right)\left(O_{n+1} - O_n\right)} \tag{5.5.21} \]
The numerator represents the change in storage during the time interval Δt and the
denominator is the weighted discharge for a selected value of X. The computed values of the
cumulative storage values are plotted against the weighted discharge for each time interval,
with the usual result being a graph in the form of a loop. The value of X that produces a loop
closest to a straight line is adopted as the value for X. The value for K is given by the slope
of the line.
Figure 5.5.6 illustrates typical results obtained with this technique for the Werribee River
example introduced in Book 5, Chapter 5, Section 4 (Laurenson, 1998). In this example a
value of X = 0.25 produced the narrowest loop. The K value is computed as the slope of the
fitted line:
\[ K = \frac{1}{3600} \times \frac{5\,750\,000 - \left(-100\,000\right)}{350 - 0} = 4.64 \text{ hours} \tag{5.5.22} \]
The following points should be noted when applying this technique:
• Failure of the plotted points to collapse to a straight line indicates that the length of the
reach being considered is too long;
• Inflow and outflow hydrograph peaks of similar magnitude indicate that X will be close to
0.5;
• A peak of the outflow hydrograph much smaller than the peak inflow indicates that X will
be close to zero;
• An inconsistent slope of the line after evaluation of X indicates a change in the storage
characteristics. This change may be due to, for example, inundation of the floodplains
adjacent to the river channel. In these circumstances, the practitioner needs to use the
slope of the line most appropriate for the problem being investigated to select the value of
K; and
• As discussed in Book 5, Chapter 5, Section 3, flood travel times vary substantially with
flood magnitude. If floods of different magnitudes are to be routed, the storage analysis
needs to be carried out for a range of floods and the flood routing parameters varied
accordingly.
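The loop-narrowing procedure can also be automated by trialling a range of X values and measuring how far the storage versus weighted-discharge points depart from a straight line. The sketch below uses a least-squares line fit and the residual spread as the 'loop width'; both of these choices, and the trial range of X, are assumptions of the example rather than part of the classical graphical method.

```python
import numpy as np

def fit_muskingum_K_X(inflow, outflow, dt, x_trials=np.arange(0.0, 0.51, 0.05)):
    """Estimate K and X from recorded inflow/outflow hydrographs (Equation (5.5.21) logic)."""
    I = np.asarray(inflow, dtype=float)
    O = np.asarray(outflow, dtype=float)
    # cumulative storage from the continuity equation (numerator of Equation (5.5.21))
    dS = 0.5 * dt * ((I[1:] + I[:-1]) - (O[1:] + O[:-1]))
    S = np.concatenate(([0.0], np.cumsum(dS)))

    best = None
    for X in x_trials:
        w = X * I + (1.0 - X) * O                  # weighted discharge
        K, intercept = np.polyfit(w, S, 1)         # slope of the S vs weighted-Q line
        score = (S - (K * w + intercept)).std()    # smaller spread = narrower loop
        if best is None or score < best[0]:
            best = (score, X, K)
    return best[1], best[2]                        # X, and K in the same time units as dt
```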
Stephenson (1979) presents one such application where linear programming was used to
minimise the difference, or error, between a predicted and recorded hydrograph for a
recorded inflow hydrograph. The error function used in this application was the sum of the
absolute values of the differences between the recorded and predicted hydrographs; values
of the routing coefficients that minimised this error function were assumed to be the
appropriate values for the coefficients. It was noted, however, that use of alternative error
functions would result in different values for the routing coefficients.
\[ I_{n-1} = \frac{1}{C_2} O_n - \frac{C_3}{C_2} O_{n-1} - \frac{C_1}{C_2} I_n \tag{5.5.23} \]
However, application of the Muskingum technique results in attenuation of the flood wave as
it moves downstream. This contradiction between the analytical and the numerical
applications required investigation. It is worthwhile noting that other methods, such as
numerical solutions of the kinematic wave equation, also demonstrate similar characteristics,
ie. application of the method results in attenuation of flood waves despite theoretical
considerations indicating that no flood wave attenuation should occur.
Since its proposal by Cunge (1969), the Muskingum-Cunge technique has achieved
widespread usage; for example, it is an option available in several flood hydrograph
modelling systems for routing of flows along channels. One advantage of the Muskingum-
Cunge technique is that its application does not require the use of historical flood events for
estimation of the lag parameter or the weighting coefficient.
\[ \frac{\partial Q}{\partial t} + c_k \frac{\partial Q}{\partial x} = D \frac{\partial^2 Q}{\partial x^2} \tag{5.5.24} \]

with

\[ c_k = \frac{\partial Q}{\partial A} \tag{5.5.25} \]

\[ D = \frac{Q}{2\,B\,S_o} \tag{5.5.26} \]
where Q is the discharge, x the distance along the channel, A the cross-sectional area, B the
water surface width and So the channel slope.
Through the diffusion term, the convective diffusion equation allows for the diffusive effects
on the flood wave (attenuation of the flood peak) as it moves downstream. It
should be noted that, through inclusion of the ‘pressure term’ from the complete momentum
equation, the diffusion wave equation allows for backwater effects to be reflected in the flood
routing. However, this feature is lost through the second order approximation in the
Muskingum-Cunge method.
However, the direct link to the hydraulically-based convective diffusion or kinematic wave
equation now allows the coefficients given by Equation (5.5.19) or Equation (5.5.20) to be
determined using the hydraulic characteristics of the channel reach.
The Muskingum lag parameter K (which has the dimensions of time) is directly linked to the
kinematic wave celerity Ck:
\[ K = \frac{\Delta x}{c_k} \tag{5.5.28} \]
where Δx is the length of the routing reach and Ck is as defined by Equation (5.5.25). When
Q is calculated from the Manning Equation, and the cross-sectional area, wetted perimeter
and the roughness parameter are known functions of depth or stage, the derivative dQ/dA in
Equation (5.5.25) can be evaluated. For a wide rectangular channel and Manning’s n
constant with changing flow depth, the kinematic wave celerity can be approximated as 1.67
times the average flow velocity through the routing reach.
To avoid confusion with the distance (x), the Muskingum weighting coefficient (X) is now
labelled θ and evaluated as:
\[ \theta = \frac{1}{2}\left(1 - \frac{Q}{B\,S_o\,c_k\,\Delta x}\right) \tag{5.5.29} \]
where Q is a representative discharge for the hydrograph being routed and the other terms
are as defined before (corresponding to the same representative discharge).
For the Werribee River example from Example: Muskingum Flood Routing – Werribee
River (after Laurenson (1998)) the relevant routing reach characteristics are as follows:
Δx = 20 km
Ck = 1.2 m/s
Q = 210 m3/s
B = 35 m and
S0 = 0.0005.
Applying Equation (5.5.28) gives K = 20 000 m / 1.2 m/s ≈ 16 700 s (about 4.6 hours), and
Equation (5.5.29) gives X = 0.5 × [1 − 210/(35 × 0.0005 × 1.2 × 20 000)] = 0.25. As these
values are almost identical to the values used in the original calculations, there is little
difference in the results obtained with the Muskingum-Cunge Method.
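The check calculation for the Werribee reach can be scripted directly from Equations (5.5.28) and (5.5.29); the function below is purely illustrative.

```python
def muskingum_cunge_parameters(dx, ck, Q, B, So):
    """Muskingum-Cunge routing parameters from reach hydraulics.

    dx : routing reach length (m), ck : kinematic wave celerity (m/s),
    Q  : representative discharge (m^3/s), B : surface width (m), So : channel slope.
    """
    K = dx / ck                                     # Equation (5.5.28), in seconds
    theta = 0.5 * (1.0 - Q / (B * So * ck * dx))    # Equation (5.5.29)
    return K, theta

K, theta = muskingum_cunge_parameters(dx=20000.0, ck=1.2, Q=210.0, B=35.0, So=0.0005)
print(K / 3600.0, theta)   # approximately 4.6 hours and 0.25 for the Werribee reach
```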
\[ \Delta x^{*} = \frac{Q}{B\,S_o\,c_k} \tag{5.5.30} \]
where Δx* is the characteristic reach length proposed by Kalinin and Miljukov (1958). The
optimum number of sub-reaches represented by a concentrated storage (N*) can then be
calculated as L/Δx*, where L is the total length of channel to be routed through.
This means that the steeper the channel and the faster the flood wave travels for a given
discharge per unit width, the shorter the routing reaches and thus the larger the number of
routing reaches required. For very flat channels and slow moving flood waves, the number of
sub-reaches required approaches one; the whole river reach can thus be expected to act like
a concentrated storage. Using a number of storages less than N* will tend to overestimate
the degree of attenuation compared to translation, while using a larger number of storages
will have the opposite effect.
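Continuing with the same illustrative Werribee reach properties, Equation (5.5.30) can be evaluated to estimate how many concentrated storages the routing reach should be broken into; the short sketch below is a check calculation only.

```python
def kalinin_miljukov_subreaches(L, Q, B, So, ck):
    """Characteristic reach length (Equation (5.5.30)) and optimum number of storages."""
    dx_star = Q / (B * So * ck)       # characteristic reach length (m)
    n_star = L / dx_star              # optimum number of concentrated storages
    return dx_star, n_star

print(kalinin_miljukov_subreaches(L=20000.0, Q=210.0, B=35.0, So=0.0005, ck=1.2))
# (10000.0, 2.0): two 10 km sub-reaches, consistent with the example that follows
```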
Wong (1985) confirmed that using too few concentrated storages resulted in underestimation
of both the peak flow and the travel time. Conversely, using a greater number of
concentrated storages enhances the translation effects and increases the peak flow.
However, beyond a certain number of storages the lag time does not increase any further
but the peak flow will still increase.
These findings have implications for the application of runoff-routing models using a series of
linear or nonlinear storages, as discussed in Book 5, Chapter 6, Section 4.
For this example the total routing reach of 20 km is divided into two sub-reaches of 10
km length, each being represented by a concentrated storage (X = 0). If the same wave
speed of ck = 1.2 m/s is used as for the Muskingum-Cunge Method, the routing
parameter is calculated as K = Δx/ck = 2.31 hours.
Figure 5.5.7 shows the outflow hydrographs for the two sub-reaches; it indicates that
routing through a cascade of two linear storages (ie. the application of the Kalinin-
Miljukov Method) results in slightly greater attenuation of the peak flow than obtained
with the single reach Muskingum Method. The calculated peak flow is 338 m3/s, which
is 5% less than the actual observed flow at Werribee Weir.
ii. For the classical Muskingum Method the evaluation of X and K requires the use of historic
flood data and therefore is based on the channel geometry within the limited range of that
flood data. Extrapolation for higher flood levels may require modification of the values for
X and K to reflect any significant changes in channel characteristics. The Muskingum-
Cunge Method can at least partly overcome this limitation;
iii. The need for the volume of the inflow hydrograph to equal the volume of the outflow
hydrograph, which means that lateral inflows have to be added at either end of the routing
reach;
iv. The method has an inherent problem in that it may produce physically unrealistic negative
outflows (an ‘initial outflow dip’) when the inflow hydrograph rises steeply. This can be
overcome by specifying a minimum routing time step Δt of 2KX;
v. The inability of the method to consider downstream disturbances that propagate upstream
(backwater effects). This places limitations on the application of the method in relatively
flat stream reaches; and
vi. The limited ability to deal with fast rising hydrographs, due to the neglect of the
acceleration terms in the momentum equation.
Limitations (ii) to (vi) also apply to the non-linear storage routing methods described in Book
5, Chapter 5, Section 5.
S = K·Q (5.5.31)
The constant coefficient K represents the lag time between inflow and outflow (or the
average travel time through the routing element).
As shown in the Book 5, Chapter 5, Section 5, hydraulic analysis of various routing elements
indicates that their S-Q relations are typically non-linear and that they can be approximated
by a power function relationship of the following form:
S = k·Q^m (5.5.32)
K = dS/dQ = k·m·Q^(m−1) (5.5.33)
Non-linear storage routing methods require an iterative numerical procedure for their
solution, such as the Regula Falsi (False Position) method or the Newton-Raphson method
(e.g. Chapra and Canale (2010)). A numerical method for non-linear flood routing has been
developed by Laurenson (1986) and is summarised in Pilgrim (1987).
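As a minimal illustration of such an iterative solution (a sketch only, not the variable time-step procedure of Laurenson (1986)), the Python function below routes an inflow hydrograph through a single non-linear storage S = k·Q^m using the trapezoidal form of the continuity equation and Newton-Raphson iteration for the end-of-step outflow; consistent SI units are assumed (Q in m3/s, S in m3, Δt in seconds), and all names are illustrative.

```python
def route_nonlinear_storage(inflow, dt, k, m, q0=0.0, tol=1e-8, max_iter=100):
    """Route an inflow hydrograph (m3/s) through a single non-linear storage
    S = k * Q**m (Equation (5.5.32)) using trapezoidal continuity and a
    Newton-Raphson solution for the outflow at the end of each time step."""
    outflow = [q0]
    for i in range(1, len(inflow)):
        q1 = outflow[-1]
        # continuity: k*q2**m + dt/2*q2 = k*q1**m - dt/2*q1 + dt/2*(I1 + I2)
        rhs = k * q1 ** m - 0.5 * dt * q1 + 0.5 * dt * (inflow[i - 1] + inflow[i])
        if rhs <= 0.0:                       # storage effectively empty this step
            outflow.append(0.0)
            continue
        q2 = max(q1, 0.5 * (inflow[i - 1] + inflow[i]))   # initial guess
        for _ in range(max_iter):
            f = k * q2 ** m + 0.5 * dt * q2 - rhs
            dfdq = k * m * q2 ** (m - 1) + 0.5 * dt
            step = f / dfdq
            q2 = 0.5 * q2 if q2 - step <= 0.0 else q2 - step   # keep Q positive
            if abs(step) < tol:
                break
        outflow.append(q2)
    return outflow
```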
This relationship (Equation (5.5.32)) plots as a straight line on log-log paper, and the exponent
m represents the slope of the line. For any two points on the line:
m = Δ(log S) / Δ(log Q) (5.5.35)
The exponent m can thus be interpreted as indicating the relative efficiency of storage and
discharge with increasing water level (or increasing flood magnitude). Furthermore, Equation
(5.5.33) indicates how the lag time changes for different values of m. Three different cases
can be distinguished:
i. m = 1 (equivalent to the linear S-Q relationship) means that storage and discharge
increase at a similar rate – the lag time remains constant;
ii. m < 1 represents relatively efficient flow and storage increasing slowly – the lag time
decreases with increasing flood magnitude; and
iii. m > 1 indicates that flow is relatively inefficient and storage increases rapidly – the lag
time increases with increasing flow.
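These three cases can be checked numerically with Equation (5.5.33); in the short sketch below the coefficient k is back-calculated (hypothetically) so that each case gives the same 2 hour lag at a reference discharge of 30 m3/s, and the lag is then evaluated at higher discharges.

```python
# Lag time K = dS/dQ = k*m*Q**(m-1), Equation (5.5.33).
# k is chosen (hypothetically) so each case gives K = 2 h at Q = 30 m3/s.
for m in (0.6, 1.0, 1.2):
    k = 2.0 * 3600 / (m * 30.0 ** (m - 1))
    for Q in (30.0, 60.0, 120.0):
        K_hours = k * m * Q ** (m - 1) / 3600
        print(f"m = {m}: Q = {Q:6.1f} m3/s -> K = {K_hours:.2f} h")
# m = 0.6 -> lag decreases with Q; m = 1.0 -> constant; m = 1.2 -> lag increases
```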
An example of case (ii) is discharge and storage in a wide rectangular channel of length L and
width B with water depth y (Mein et al., 1975):
S = B·L·y (5.5.36)
Q = (B·S0^(1/2) / n)·y^(5/3) (5.5.37)

S = (n^0.6·B^0.4·L / S0^0.3)·Q^0.6 (5.5.38)
In this case the exponent m is 0.6 (efficient flow compared to storage) and the coefficient k is
represented by the fraction before Q in Equation (5.5.38). A similar analysis for a triangular
cross-section will yield an exponent m = 0.75.
In contrast to this, the analysis of storage and discharge for a rectangular channel being
blocked by an embankment with a culvert, where discharge occurs as flow through an orifice
of fixed size (inefficient flow), will yield a value of the exponent m substantially greater than
1.0.
Examples of S-Q curves with different values of the exponent m are illustrated in
Figure 5.5.8, where values of S are plotted against Q on logarithmic scale axes. The
different curves are plotted so that they cross at a representative discharge of about 30 m3/s
(representing the middle of the range of flood magnitudes used in model calibration).
It follows from the examples plotted in Figure 5.5.8 that different combinations of k and m
can give similar values of storage for a given discharge. Calibration of a runoff-routing model
over a limited range of flood magnitudes can thus only give a broad indication of the
appropriate degree of non-linearity when the model is applied for the flow conditions of a
different flood magnitude, and application in the extrapolated range needs to be guided by
consideration of the physical characteristics of the routing reach.
The non-linear nature of catchment storage and values of the exponent m for application in
runoff-routing applications are further discussed in Book 5, Chapter 6. Application of runoff-
routing models for the range of Very Rare to Extreme floods is discussed in Book 8.
S = k·[X·I + (1 − X)·Q]^m (5.5.39)
or more simply:
S = k·Qw^m (5.5.40)

where Qw = X·I + (1 − X)·Q is the weighted discharge.
An example of the application of the non-linear Muskingum Method for routing hydrographs
through river reaches is in the URBS runoff routing model (Carroll, 2012).
The effect of different non-linearity assumptions used for routing hydrographs through non-
linear distributed storage (or a series of concentrated non-linear storages) is to produce
different degrees of attenuation when the calibrated k and m parameters are applied to
routing floods of different magnitude. This is illustrated in Figure 5.5.9 for the case where the
non-linear routing model is applied to a flood hydrograph twice the magnitude of the
observed flood used for calibrating the model. It is shown that a lower value of m (with a
correspondingly higher value of k) produces a higher peak that occurs earlier than if a k and
m parameter combination for a linear model had been used.
For application in flood hydrograph estimation models it is most useful to present the one
dimensional unsteady flow equations in terms of the discharge Q and stage z (Weinmann,
1977; Fread, 1985):
Continuity:
∂Q/∂x + ∂A/∂t − q = 0 (5.5.41)
Momentum:
∂Q/∂t + ∂(Q²/A)/∂x + g·A·∂z/∂x + g·A·Sf = 0 (5.5.42)
where q is the rate of lateral inflow to the routing reach, g the gravitational acceleration, z the
water level or stage and Sf the average friction slope of the routing reach. The friction slope
can be determined from a uniform flow resistance formula (Book 6, Chapter 2, Section 5) as
Sf = Q²/C², where C is the conveyance of the cross-section.
This system of equations can be applied in flood hydrograph estimation models to track the
movement of a flood hydrograph through river and floodplain reaches. The equations have
no analytical solution and flood routing methods based on the full dynamic equations thus
need to apply one of the numerical solution procedures described in Book 6, Chapter 4,
Section 7. Explicit numerical solution schemes provide a more direct and more
computationally efficient solution than implicit schemes but, to avoid computational stability
problems, they require the time and space steps to be selected in accordance with the
Courant stability criterion.
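As a simple illustration of this constraint (a sketch only; individual modelling packages apply their own form of the criterion), the maximum allowable time step for an explicit scheme can be estimated from the computational spacing and the dynamic wave speed v + √(g·y):

```python
import math

def courant_time_step(dx, velocity, depth, g=9.81):
    """Largest time step satisfying the Courant condition
    dt <= dx / (v + sqrt(g*y)) for an explicit dynamic wave scheme."""
    return dx / (velocity + math.sqrt(g * depth))

# Illustrative values only: 500 m computational spacing, 1 m/s velocity, 4 m depth
dt_max = courant_time_step(dx=500.0, velocity=1.0, depth=4.0)
print(f"dt_max = {dt_max:.0f} s")   # about 70 s
```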
The particular advantage of the application of the full dynamic wave equations is in their
ability to allow for backwater effects or tidal influences and to deal more accurately with
rapidly rising or falling flood hydrographs. The flood routing models based on the full
dynamic equations can also produce flood level hydrographs and rating curves at points of
interest.
The application of hydraulic routing approaches requires the geometry of the channel and
floodplain system to be defined by cross-sectional information obtained from river surveys or
Digital Elevation Models. The representation of the actual river and floodplain system in the
model is highly conceptualised, as the computational cross-sections are generally quite
widely spaced and a smooth variation of the hydraulic characteristics over the model reach
is assumed.
Traditionally the application of the full dynamic wave equations has been limited by the fact
that their numerical solution is more demanding on computer resources but this is no longer
an important factor.
Two dimensional forms of the unsteady flow equations are introduced in Book 6, Chapter 2,
Section 9 and Book 6, Chapter 4, Section 7. This form of the dynamic wave equations (or a
simplified form of the equations) is applied in the rainfall-on-grid models discussed in Book 5,
Chapter 6, Section 5.
Sf = S0 − ∂y/∂x (5.5.43)
The inclusion of the pressure term ∂z/∂x allows for the effects of a downstream boundary
condition (backwater, tidal influences) to be included in the routing computations.
By combining this simplified momentum equation with the continuity equation (Equation
(5.5.41)) the form of the convective-diffusion (diffusion wave) equation (Equation (5.5.44))
used in hydrologic flood routing models is obtained:
∂Q/∂t + ck·∂Q/∂x = D·∂²Q/∂x² + ck·q (5.5.44)

with ck = ∂Q/∂A the kinematic wave celerity

and D = Q / (2·B·S0) the diffusion coefficient

where Q is the discharge, q the rate of lateral inflow, x the distance along the channel, A the
cross-sectional area, B the water surface width and S0 the channel slope.
The diffusion term in Equation (5.5.44) allows explicitly for the diffusion and peak attenuation
effects observed in the movement of flood waves through river and floodplain reaches. This
is in contrast to the Muskingum Method where the diffusion effects are only introduced
through judicious choice of the numerical solution scheme and determination of parameter
values.
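Using the definitions given with Equation (5.5.44), the diffusion coefficient and kinematic wave celerity can be evaluated directly from reach properties; the sketch below does this for the reach values used in the earlier examples, and approximates ck as (5/3) times the mean velocity, which assumes a wide rectangular channel with Manning friction.

```python
def diffusion_coefficient(Q, B, S0):
    """D = Q / (2*B*S0), as defined with Equation (5.5.44)."""
    return Q / (2.0 * B * S0)

def kinematic_celerity_wide_rectangular(mean_velocity):
    """ck = dQ/dA, approximately (5/3) times the mean velocity for a wide
    rectangular channel with Manning friction (an assumption of this sketch)."""
    return 5.0 / 3.0 * mean_velocity

Q, B, S0 = 210.0, 35.0, 0.0005                   # reach values used in earlier examples
D = diffusion_coefficient(Q, B, S0)              # 6000 m2/s
ck = kinematic_celerity_wide_rectangular(0.72)   # ~1.2 m/s, the wave speed used above
print(D, ck)
```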
∂Q/∂t + ck·∂Q/∂x = ck·q (5.5.45)
The term ‘kinematic wave’ was introduced by Lighthill and Whitham (1955) to describe the
motion of waves in time and space without considering mass and force. Equation (5.5.45)
can be obtained from the full unsteady flow equations by replacing the momentum equation
(Equation (5.5.42)) by a uniform flow relationship.
Kinematic waves are theoretically not dispersive (ie. they travel without attenuation) but the
variation of the travel speed Ck with Q produces a change of wave form, resulting in a
gradual steepening of the wave front as it travels downstream, eventually leading to a
'kinematic shock' (Henderson, 1966). Analytical solutions for the kinematic wave equations
exist only for a few idealised situations (Miller, 1984). Numerical solution schemes for the
kinematic wave equation introduce some degree of dispersion/attenuation of the flood wave,
and thus match more closely the behaviour of actual flood waves.
5.7. References
Bedient, P.B., Huber, W.C. and Vieux, B.E. (2008), Hydrology and floodplain analysis, ed. 4,
Upper Saddle River: Prentice Hall.
Boyd, M.J., Pilgrim, D.H., Knee, R.M. and Budiawan, D. (1989), Reverse Routing to Obtain
Upstream Hydrographs. Hydrology and Water Resources Symposium. Christchurch, NZ,
23-30 November. Preprints of Papers (p: 372). Institution of Engineers, Australia.
Carroll, D.G. (2012), URBS (Unified River Basin Simulator) A Rainfall Runoff Routing Model
for Flood Forecasting and Design. Version 5.00 Dec, 2012.
Chang, C.N., da Motta Singer, E. and Koussis, A.D. (1983), On the mathematics of storage
routing, Journal of Hydrology, 61(4), 357-370.
Chapra, S.C. and Canale, R.P. (2010), Numerical methods for engineers, New York:
McGraw Hill.
Cunge, J.A. (1969), On the subject of a flood propagation method, Journal of Hydraulic
Research, 7(2), 205-230.
Fenton, J.D. (1992), Reservoir routing, Hydrological Sciences Journal, 37(3), 233-246.
Fread, D.L. (1985), Channel routing. In 'Hydrological Forecasting', Chichester: John Wiley
and Sons Ltd.
HEC (1993). Introduction and application of kinematic wave routing techniques using HEC-1.
US Army Corps of Engineers, Hydrologic Engineering Centre, http://www.hec.usace.army.mil/
publications/TrainingDocuments/TD-10.pdf.
Henderson, F.M. (1966), Open Channel Flow, Macmillan Publishing Co., New York
Kalinin, G. P. and Miljukov, P. I. (1958), Approximate methods for computing unsteady flow
movement of water masses (in Russian), Transactions Central Forecasting Institute, 66 p.
Koussis, A.D. (2009), Assessment and review of the hydraulics of storage routing 70 years
after the presentation of the Muskingum method. Hydrological Sciences Journal, 54(1),
43-61, available at <http://dx.doi.org/10.1623/hysj.54.1.43>.
Laurenson, E.M. (1998), Hydrology CIV3263 Course Notes, Monash University, Department
of Civil Engineering.
Laurenson, E.M. (1962), Hydrograph synthesis by runoff routing, Report No. 66, Sydney:
Water Research Laboratory, University of New South Wales.
Laurenson, E.M. (1986), Variable time-step nonlinear flood routing, In Hydrosoft '86;
Hydraulic Engineering Software, Proceedings of the 2nd International Conference,
Computational Mechanics Publns, Springer-Verlag, 9-12 September, pp: 61-72, Berlin.
Laurenson, E.M., Mein, R.G. and Nathan, R.J. (2010). RORB Version 6 Runoff Routing
Program - User Manual.
Lighthill, M.J. and Whitham, G.B. (1955), On kinematic waves I, Proceedings of the Royal
Society of London, 229: 281-316.
McCarthy, G.T. (1938), The unit hydrograph and flood routing, Manuscript presented at a
conference of the North Atlantic Division, United States Army Corps of Engineers, 24 June
(unpublished).
Mein, R.G., Laurenson, E.M. and McMahon, T.A. (1975), Simple nonlinear model for flood
estimation, Journal of the Hydraulics Division, ASCE, 100(HY11), 1507-1518.
Miller, J.E. (1984), Basic concepts in kinematic wave models. U.S. Geological Survey
Professional Paper 1302, Washington: United States Government Printing Office.
Nash, J.E. (1959), A note on the Muskingum flood-routing method, Journal of Geophysical
Research, 64(8), 1053-1056.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers Australia, Barton, ACT.
Weinmann, P.E. (1977), Comparison of flood routing methods for natural rivers, Civil
Engineering Research Report No.2/1977, Clayton: Monash University.
Wong, T.H.F. (1985), Reach subdivision for storage routing in rivers, Proceedings of the 21st
IAHR Congress, 19-23 August, Melbourne.
Wong, T.H.F. and Laurenson E.M. (1983), Wave speed-discharge relations in natural
channels, Water Resources Res., 19: 701-706.
Zoppou, C. (1999), Reverse routing of flood hydrographs using level pool routing, Journal of
Hydrologic Engineering, 4(2), 184-188.
Chapter 6. Flood Hydrograph
Modelling Approaches
James Ball, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
6.1. Introduction
This chapter deals with a range of approaches available to calculate design flood
hydrographs at the catchment outlet and other points of interest. It therefore integrates the
previous chapters in Book 5 and also links to Book 7, where practical applications are
discussed.
The time-area approaches (Book 5, Chapter 6, Section 2) and unit hydrograph approaches
(Book 5, Chapter 6, Section 3) allow a relatively simple transformation of rainfall excess
inputs to flood hydrograph outputs and are directly applicable to a lumped representation of
the flood formation process in a catchment, where the inputs and processes can be
assumed to be spatially uniform (or at least spatially consistent between different events).
These “traditional” approaches have generally been replaced by more flexible approaches.
However, they also find application to represent the overland flow phase of hydrograph
formation in some of the node-link type models discussed in subsequent sections.
The most widely used flood hydrograph estimation models are based on the runoff routing
approach, in which both the runoff production and hydrograph formation phases can be
represented in a distributed fashion, reflecting the spatial variation of rainfall inputs and flood
processes in a catchment. The two principal groups of models are the network (node-link
type) models described in Book 5, Chapter 6, Section 4 and the rainfall-on-grid (or direct
rainfall) models described in Book 5, Chapter 6, Section 5.
The routing methods incorporated in these models have their foundations in the open
channel hydraulics introduced in Book 6 and apply the flood routing principles outlined in
Book 5, Chapter 5 of this book. The descriptions in Book 5, Chapter 6, Section 4 and Book
5, Chapter 6, Section 5 focus on the specific way these principles are applied to represent
different parts of the flood hydrograph formation process.
The discussion of flood hydrograph modelling in this chapter is intended to introduce readers
to the different approaches used and the assumptions made in different modelling
approaches and different runoff routing modelling systems. Guidance on the practical
application of flood hydrograph models to different flood estimation problems, including
estimation of model parameters, is provided in Book 7.
As with other chapters of Book 5, this chapter deals primarily with rural catchments, and
while the principles apply also for urban catchments, urban catchment hydrology is covered
in detail in Book 9.
The basic principle of time-area approaches, as illustrated in Figure 5.6.1, is that rainfall
excess occurring at time t − tt after the start of the storm at a point on the catchment with a
travel time of tt to the catchment outlet will influence the flow at the catchment outlet at time t.
A fundamental assumption involved in this is that flow at the catchment outlet is influenced
only when the runoff reaches the catchment outlet, i.e. when the individual water particles
reach the catchment outlet.
Points on the catchment which have equal travel times to the outlet can be joined to form
isochrones. Application of the time-area method requires isochrones to be drawn for the
catchment being considered; note that many computerised applications assume that the
area increases in a linear manner on small subcatchments to avoid the need for delineation
of isochrones.
When construction of isochrones is required, a common assumption is that the travel time is
related to the travel length (L) and slope (S) of the catchment by the following relationship:
tt = L / S^0.5 (5.6.1)
Equation (5.6.1) follows directly from Manning’s equation, in which flow velocity is related to
the square root of stream slope which is assumed to be the same as the energy gradient.
However, as the flow moves downstream to the catchment outlet, the flatter stream slopes
are accompanied by greater water depths, which may compensate for the decreasing
stream slope in the Manning equation. Studies by Leopold et al. (1964) and Pilgrim (1977)
indicate that stream velocities essentially remain constant along the length of the stream and
may even increase in a downstream direction. If velocities remain constant along the stream,
the travel time would be directly related to travel distance by:
tt ∝ L (5.6.2)
A plot of the area between adjacent isochrones against the travel time produces a time-area
curve. An example of a time-area curve is presented in Figure 5.6.1.
Since there are many points on the catchment with travel times tt and corresponding times t-
tt after the start of the storm, the flowrate at the outlet at time t is the sum of all possible
combinations. As a simple example, if the catchment is divided into five segments using
isochrones (Ai), and a storm has three periods of rainfall excess (Pj in mm/h) with both the
isochrones and rainfall having the same time step, then the total time of the hydrograph is
eight time steps. Also, the discharges from the catchment at successive time steps (equal to
the isochrone interval) for this example are given by:
Q0 = 0
Q1 = k·A1·P1
Q2 = k·(A1·P2 + A2·P1)
Q3 = k·(A1·P3 + A2·P2 + A3·P1)
Q4 = k·(A2·P3 + A3·P2 + A4·P1) (5.6.3)
Q5 = k·(A3·P3 + A4·P2 + A5·P1)
Q6 = k·(A4·P3 + A5·P2)
Q7 = k·A5·P3
Q8 = 0
where k is an appropriate unit conversion factor (k varies with the units of Q and A).
Qt = k · Σ_(i=1..n) Ai·P(t−i+1) (5.6.4)
where Ai is the area between the i-1 and i isochrones, Pj is the rainfall excess depth in the
jth period of the storm event, and k is a conversion factor. As the conversion factor (k) will
vary with the isochrone interval, it is recommended that the intensity of rainfall excess
(mm/h) be used; details of the conversion factors for different combinations of discharge,
area units and rainfall excess intensity in mm/h are given in Table 5.6.1.
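A minimal Python sketch of Equation (5.6.4) is given below; subareas are taken in hectares and rainfall excess intensities in mm/h, so that the conversion factor is k = 1/360 to obtain discharge in m3/s. The subarea values are those of the worked example below, while the rainfall excess intensities are hypothetical.

```python
def time_area_hydrograph(areas_ha, excess_mm_per_h, k=1.0 / 360.0):
    """Discharge ordinates from Equation (5.6.4): Qt = k * sum_i Ai * P(t-i+1).
    areas_ha: subareas between successive isochrones (ha), ordered from the outlet;
    excess_mm_per_h: rainfall excess intensity in each storm period (mm/h);
    k = 1/360 converts ha x mm/h to m3/s."""
    n, j = len(areas_ha), len(excess_mm_per_h)
    q = [0.0] * (n + j)
    for t in range(1, n + j):
        q[t] = k * sum(areas_ha[i - 1] * excess_mm_per_h[t - i]
                       for i in range(1, n + 1) if 0 <= t - i < j)
    return q

# Subarea values from the worked example below; rainfall excess intensities are hypothetical
print(time_area_hydrograph([0.33, 0.67, 1.00, 1.33, 1.67], [12.0, 20.0, 8.0]))
```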
• Isochrones of travel time are usually not known, except in a few experimental studies, and
must be estimated. These experimental catchments include those monitored by Pilgrim
(1966a; 1966b) using tracers to ascertain travel times. To overcome this
disadvantage, many applications adopt a simplified time-area relationship. A common
simplified relationship is based on a linear growth in area with time (in essence, an
assumption of a rectangular shape with a length given by the response time and a width
defined by the catchment area), and thus there is a need only to estimate a representative
travel time for the conceptualised catchment.
• The time-area curve cannot be easily derived from recorded rainfall and streamflow data.
• Construction of the direct runoff hydrograph assumes that flow is translated to the outlet
with a lag but without attenuation. In other words, a kinematic response is assumed. As a
result of this, time area methods are more likely to be applicable to estimation of flows
from small catchments and particularly to estimation of surface flows in urban catchments.
• The method is linear, i.e. a doubling of rainfall excess results in a doubling of predicted
discharges, whereas data from many catchments, particularly the larger rural catchments,
demonstrates a nonlinear response to changes in rainfall excess.
Since the storm event has 7 periods, each of 3 minutes duration, the catchment will be
divided into 5 subareas by isochrones spaced at 3 minute intervals. The hydrograph
base length is given by the catchment time of concentration plus the storm duration, ie
15 minutes + 21 minutes = 36 minutes. The total depth of rainfall excess is 8.1 mm
(see Column 3 in Figure 5.6.2). Using this depth of rainfall excess and the catchment
area, the volume of direct runoff from the catchment is 405 m3.
The subarea sizes between each adjacent pair of 3 minute isochrones, proceeding
from the outlet of the catchment to the top of the catchment, are 0.33, 0.67, 1.00, 1.33
and 1.67 ha. The resultant time-area relationship is shown in Figure 5.6.2.
As a check, the volume of the direct runoff hydrograph can be computed from Column 5 in
Table 5.6.2; it equals the volume of rainfall excess over the catchment (405 m3).
In contrast to the time-area approach, the unit hydrograph is based on the actual flood
response of the catchment to rainfall, and can be directly determined or
estimated from recorded rainfall and streamflow data. As a consequence, the resulting unit
hydrograph incorporates the effects of both translation and attenuation, and so reduces the
assumptions needed in the time-area approaches.
Direct runoff hydrographs result from the interaction between two important factors: the
temporal pattern of rainfall excess over the catchment, and the delaying and attenuating
effects of the catchment as this rainfall excess travels to the outlet.
In the special case of the unit hydrograph, a standardised rainfall excess (1 mm) is used,
and the unit hydrograph, therefore, represents the effects of the catchment in delaying and
attenuating rainfall excess as it flows from all points on the catchment to the catchment
outlet. Use of this standardised rainfall excess provides the opportunity to relate the size and
shape of the unit hydrograph to the catchment’s geophysical properties such as area, stream
length and slope, and thus enables synthetic unit hydrographs to be estimated for
catchments where no recorded streamflow data exists.
Each unit hydrograph reflects the unique geophysical properties of the catchment and,
hence, each catchment should have its own unique unit hydrograph. Bernard (1935)
pioneered this concept by developing a dimensionless unit hydrograph which reflected the
geophysical properties of the catchment.
The basic principle of unit hydrograph theory is that the catchment responds in a linear
manner to rainfall excess and, hence, superposition is feasible. As unit hydrographs form a
linear system, the ordinates of the direct runoff hydrograph are linearly proportional to the
depth of rainfall excess. For example, if the rainfall excess is doubled, each ordinate of the
direct runoff hydrograph will be doubled. As another example, if a sequence of several
periods of rainfall excess occurs, one after another, the resulting direct runoff hydrograph is
equal to the sum of the runoff from each individual period of rain (see Figure 5.6.4).
For a given catchment the unit hydrograph for a particular specified period will be different
from those with different specified time periods. For example, a 1 hour unit hydrograph
results from 1mm of rainfall excess falling over 1 hour at a rate of 1mm/h, whereas a 2 hour
unit hydrograph results from 1mm of rainfall excess falling over 2 hours at a rate of 0.5
mm/h. In general, the peak discharges of unit hydrographs with longer specified periods will
be lower and will occur later in time. This decrease in peak discharge arises as a result of
the lower intensity of rainfall excess as the specified period of the unit hydrograph increases.
The 1 hour unit hydrograph will be applied to a rainfall hyetograph with 1 hour rain periods,
and will produce a direct runoff hydrograph with the ordinate values predicted every hour.
The 2 hour unit hydrograph will be applied to a rainfall hyetograph with 2 hour rain periods.
Calculation of a unit hydrograph of longer time period from one of a shorter period is
accomplished by the addition of several short period unit hydrographs with each sequential
unit hydrograph delayed by the specified period. Thus, the sum of four 15 minute unit
hydrographs, each of which is delayed in time by 15 minutes, will produce a direct runoff
hydrograph resulting from 4 mm of rainfall excess during a 1 hour period. Since a unit
hydrograph is the result of 1 mm of rainfall excess during the specified time period, the 1
hour unit hydrograph is derived from this runoff hydrograph by dividing all ordinates in the
runoff hydrograph by four.
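The lag-and-add procedure described above can be written out directly; the sketch below (with hypothetical ordinates) sums four copies of a 15 minute unit hydrograph, each delayed by one 15 minute step, and divides the result by four to obtain the 1 hour unit hydrograph.

```python
def lengthen_unit_hydrograph(uh, n_lags):
    """Sum n_lags copies of a short-period unit hydrograph, each delayed by one
    time step, and divide by n_lags to obtain the longer-period unit hydrograph."""
    out = [0.0] * (len(uh) + n_lags - 1)
    for lag in range(n_lags):
        for i, ordinate in enumerate(uh):
            out[i + lag] += ordinate / n_lags
    return out

# Hypothetical 15 minute unit hydrograph ordinates (m3/s per mm) at 15 minute steps
uh_15min = [0.0, 0.8, 2.0, 1.4, 0.6, 0.2, 0.0]
uh_1hour = lengthen_unit_hydrograph(uh_15min, 4)   # 1 hour UH at 15 minute steps
```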
Another situation in which the specified time period needs to be changed is where an
instantaneous unit hydrograph has been obtained, as occurs in some synthetic unit
hydrograph methods. The instantaneous unit hydrograph (IUH) represents the direct runoff
hydrograph produced by 1 mm of rainfall excess which occurs at an instant in time. The
ordinates of a specified period unit hydrograph at each time t are obtained by integrating the
ordinates of the IUH over an interval from t-T to t, where T is the specified time period, then
dividing this result by T. Practically, this is achieved by averaging the IUH ordinates at times t
and t-T. For example, each ordinate of a 1 hour unit hydrograph is the average of the IUH
ordinates at this time and 1 hour before this time (Boyd, 1982).
The derivation of a unit hydrograph of shorter specified time period from one of a longer
period is less direct, but can be attempted using an S-curve. Data errors will often produce
oscillations in the S curve and it is often difficult to obtain good results when this method is
applied to real data. (Details of the S-curve method are given in many textbooks, e.g.
Bedient et al. (2008))
If the unit hydrograph has k ordinates and there are j periods of rainfall excess, the number
of direct runoff ordinates will be given by n = j + k - 1. Shown in Equation (5.6.6) is the
determination of the direct runoff hydrograph for the case where there are 3 periods of rainfall
excess and 5 unit hydrograph ordinates.
Q0 = 0
Q1 = P1·U1
Q2 = P1·U2 + P2·U1
Q3 = P1·U3 + P2·U2 + P3·U1
Q4 = P1·U4 + P2·U3 + P3·U2 (5.6.6)
Q5 = P1·U5 + P2·U4 + P3·U3
Q6 = P2·U5 + P3·U4
Q7 = P3·U5
Q8 = 0
The first column on the right hand side represents the direct runoff hydrograph from P1 mm
of rainfall excess in the first period of the storm, while the second and third columns
represent the direct runoff from the subsequent periods in the storm. Note that each direct
runoff hydrograph is delayed by one time period, because the rainfall excesses P1, P2 and
P3 occur in successive periods of the storm. Note also that the rainfall excess period, the
period of the unit hydrograph, and the time step at which the unit hydrograph ordinates are
listed, are all equal.
Figure 5.6.4. Unit Hydrograph Calculation – example with 3 periods of rainfall excess and
unit hydrograph with 5 ordinates (after Laurenson (1998)).
Qt = Σ_(i=1..n) Pi·U(t−i+1) (5.6.7)
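Because Equation (5.6.6) and Equation (5.6.7) describe a discrete convolution, the direct runoff ordinates can also be obtained with a standard convolution routine; the short sketch below uses numpy with hypothetical rainfall excess and unit hydrograph ordinates.

```python
import numpy as np

# Hypothetical rainfall excess depths (mm) and unit hydrograph ordinates (m3/s per mm)
P = np.array([5.0, 12.0, 3.0])              # three periods of rainfall excess
U = np.array([0.5, 2.0, 3.5, 1.5, 0.5])     # five unit hydrograph ordinates

Q = np.convolve(P, U)   # direct runoff ordinates Q1 ... Q7 of Equation (5.6.6)
print(Q)                # length = j + k - 1 = 7, as noted in the text
```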
1. Select a range of significant flood events for which the direct runoff hydrograph and
suitable rain gauge data are available. Only floods with a 50% AEP or less should be
used in order to ensure that the data properly reflects the processes which occur during
significant flood events. Furthermore, the rainfall during the selected events should be
spatially and temporally uniform over the catchment. If an insufficient number of large
events is available then it may be necessary to use smaller floods. In this case it must be
borne in mind that small floods will tend to produce unit hydrographs which have lower
peaks and longer times of rise than are appropriate for use in the estimation of large
floods.
4. Calculate a spatial average rainfall hyetograph for the storm, using hyetographs from all
available rainfall stations on or near to the catchment.
5. Calculate the rainfall excess hyetograph, by subtracting losses so that the depth of rainfall
excess equals the depth of direct runoff. Rainfall losses can be assumed either as an
initial loss-continuing loss model, or an initial loss-proportional loss model (see Book 5,
Chapter 3).
Once the recorded streamflow and rainfall data have been analysed to extract the direct
runoff hydrograph and rainfall excess hyetograph, derivation of the unit hydrograph can
proceed. Unit hydrographs can be derived from single period storms, or from multi-period
storms.
It is important, therefore, to derive several unit hydrographs for a catchment, selecting the
larger storms. All derived unit hydrographs should be compared for consistency, and
inconsistent ones rechecked or deleted.
A plot of unit hydrograph peak discharge against the peak discharge of the recorded direct
runoff hydrograph from which it was derived may reveal a trend for unit hydrograph peaks to
increase for the larger floods. Any such trend is an indication that the catchment is not
behaving in a linear manner. In these circumstances, it may be more appropriate to use an
alternative technique for estimation of the direct runoff hydrograph. Catchments displaying a
nonlinear response to storm events can still use the unit hydrograph approach, but it may be
desirable to derive several unit hydrographs for the catchment, each one derived from, and
being applicable to a particular range of flood sizes, as discussed by Body (1962).
The unit hydrographs which are considered to be acceptable can be averaged to produce a
more representative unit hydrograph for the catchment. Averaging unit hydrographs in this
way also has the benefit of reducing any oscillations on the recessions of unit hydrographs
derived from multi-period storms.
The recommended approach to calculate the average unit hydrograph is to align the peaks
of all unit hydrographs, then average their ordinates at each successive time step (Titmarsh
and Cordery, 1991). This method produces a unit hydrograph whose time to peak is the
average of all times to peak, and peak discharge which is the average of all peak
discharges. A simple average of all unit hydrographs, without regard for the occurrence of
the peak is not recommended, as this can produce an average unit hydrograph which is
quite different from the individual unit hydrographs (see Figure 5.6.5).
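A minimal sketch of this peak-alignment averaging is given below; the ordinates are hypothetical, and positioning of the averaged unit hydrograph at the mean time to peak is left to the user of the result.

```python
import numpy as np

def average_unit_hydrographs(uhs):
    """Average several unit hydrographs by aligning their peaks and then
    averaging the ordinates at each step; returns the averaged ordinates and
    the index at which the aligned peak sits."""
    uhs = [np.asarray(u, dtype=float) for u in uhs]
    peaks = [int(np.argmax(u)) for u in uhs]
    lead = max(peaks)                                   # steps before the peak
    tail = max(len(u) - p for u, p in zip(uhs, peaks))  # steps from the peak onward
    aligned = np.zeros((len(uhs), lead + tail))
    for row, (u, p) in enumerate(zip(uhs, peaks)):
        aligned[row, lead - p: lead - p + len(u)] = u
    return aligned.mean(axis=0), lead

# Hypothetical ordinates from three derived unit hydrographs (m3/s per mm)
uh_a = [0.0, 1.0, 3.0, 2.0, 0.5]
uh_b = [0.0, 0.5, 1.5, 3.2, 1.8, 0.4]
uh_c = [0.0, 2.0, 3.4, 1.2, 0.3]
avg_uh, peak_index = average_unit_hydrographs([uh_a, uh_b, uh_c])
```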
Of the various synthetic unit hydrograph approaches available, the only ones to have found
widespread use in Australia are those based on the model of Clark (1945), commonly
referred to as the Clark-Johnstone model. The Clark-Johnstone method has been simplified
by Cordery and Webb (1974) to produce a model which is suitable for some limited
applications.
The Clark-Johnstone model involves the translation of rainfall excess to the outlet and
routing this translated flow through a lumped concentrated storage at that location. It has
been used quite widely for synthetic unit hydrograph derivation and has been shown to be
applicable to most of the east coast of Australia (Cordery et al., 1981). The basic assumption
is that the shape of the unit hydrograph may be determined from two parameters, namely
the base length of the time-area curve (C) and the catchment storage factor (K).
For more detailed description of these synthetic unit hydrograph approaches and examples
of their application the reader is referred to Cordery (1987).
However, the simplifying assumptions made in the unit hydrograph approach impose the
following limitations for its application in many practical situations:
• Particularly in larger catchments the spatial distribution of rainfall and rainfall excess is
generally non-uniform, and in very large catchments heavy rainfall may only occur over
part of the catchment. In such catchments the unit hydrograph approach would be likely to
produce flood hydrographs that are biased both in terms of their peak flow and in their
time to peak.
• In common with other lumped modelling approaches, the unit hydrograph approach
produces only hydrographs at the catchment outlet and not at internal points of interest.
• The unit hydrograph approach and other lumped hydrograph estimation approaches are
unsuitable for application in catchments where significant differences in flood response of
different parts of the catchment require a more distributed modelling approach (e.g. urban
catchments).
• Being based on a range of observed hydrographs for a particular catchment condition, the
unit hydrograph approach is not suited to determine flood behaviour for changed
catchment conditions (e.g. storage development, significant urbanisation or other major
land use changes).
On the basis of these recognised limitations and the ready availability of more flexible runoff
routing approaches (Book 5, Chapter 6, Section 4), the unit hydrograph approaches are not
recommended for practical applications.
The actual flood formation processes in a catchment are complex and highly distributed in
nature. Direct runoff generated from storm rainfall in the upper parts of the catchment initially
moves downhill as shallow overland flow and is modified by the effects of various forms of
detention storage as it moves over the catchment surface. It is then gradually concentrated
into minor drainage pathways and successively combined with baseflows and flows from
other pathways. These flows eventually reach well defined water courses, creeks or rivers
and move downstream, being combined with other tributary flows on their way to the
catchment outlet.
The different groups of models developed in Australia and in other countries have adopted
different conceptualisations of the actual flood formation processes, with different levels of
simplifications and assumptions in terms of:
• areal variability of runoff inputs over the catchment – lumped, semi-distributed and fully
distributed models (see Book 5, Chapter 2)
• variation of routing processes from hillslopes to channel and floodplain reaches (Book 5,
Chapter 2, Section 3)
• flood routing techniques (Book 5, Chapter 2, Section 3 and Book 5, Chapter 6, Section 5)
However, all models are only approximations of reality and require care and expertise in their
application and interpretation.
The major types of runoff routing models are described firstly in terms of how they deal with
the distributed nature of the flood formation and the variation of routing processes along the
flow path from the top to the bottom of the catchment (Book 5, Chapter 6, Section 4). The
different model representations of the hillslope or overland flow phase of the hydrograph
formation are introduced in Book 5, Chapter 6, Section 4, while Book 5, Chapter 6, Section 4
deals with flood routing in the various forms of channel, natural stream and floodplain
segments of the catchment. Finally, Book 5, Chapter 6, Section 4 describes how areas of
significant extra flood storage, such as natural lakes or swamps, extensive floodplain areas,
reservoirs or flood retention/detention basins are modelled.
A number of investigators (e.g. Robinson et al. (1995)) have examined the relative roles of
‘hillslope’ (overland flow) processes and channel routing in the modelling of hydrologic
response. Their conclusions indicate that in relatively small catchments the emphasis should
be on appropriate modelling of the hillslope response to rainfall inputs, while the spatial
variation of rainfall inputs and hillslope responses is less important. Lumped models (Book 5,
Chapter 6, Section 4) or semi-distributed models with relatively simplistic representation of
spatial variations (Book 5, Chapter 6, Section 4) can thus produce acceptable flood
hydrograph estimates.
In contrast, in large (>1000 km2) to very large (>10,000 km2) catchments the flood response
is governed primarily by the network geomorphology and the spatial distribution of runoff
inputs. In these larger catchments, and in catchments with significant storage development,
it is thus important to model the distributed nature of runoff inputs and to give a realistic
representation of the actual drainage network in node-link type models (Book 5, Chapter 6,
Section 4).
In catchments of intermediate size, the overland flow and channel routing phases may be of
similar importance in the overall catchment response to rainfall inputs, and it is thus
desirable to model the flow routing in the overland flow segments separately from the flow
routing in the drainage network segments. Similar considerations apply in catchments with
significantly different land uses (e.g. urban or partly urbanised catchments), where runoff
from subareas of different type is routed separately before routing in the pipe or channel
network.
Apart from the Time-Area Method (Book 5, Chapter 6, Section 2) and the Unit Hydrograph
Method (Book 5, Chapter 6, Section 3), the best known lumped runoff routing models are the
Clark model (Clark, 1945) and the Nash model (Nash, 1960) illustrated in Figure 5.6.6. The
Clark model represents the translation and attenuation through a linear storage placed at the
catchment outlet. By placing a number of linear storages in series and routing the rainfall
excess input successively through this cascade of storages, the Nash model provides more
flexibility in matching the routing response of the model to both the hydrograph translation
and attenuation characteristics of the catchment.
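As a simple sketch of the Nash conceptualisation, rainfall excess input can be passed successively through a cascade of identical linear storages, each routed with the discretised form of S = K·Q; the storage constant, number of storages and input ordinates below are hypothetical.

```python
def route_linear_storage(inflow, dt, K):
    """Route through one linear storage S = K*Q using trapezoidal continuity:
    Q2 = (2*K - dt)/(2*K + dt) * Q1 + dt/(2*K + dt) * (I1 + I2)."""
    c0 = (2.0 * K - dt) / (2.0 * K + dt)
    c1 = dt / (2.0 * K + dt)
    q = [0.0]
    for i in range(1, len(inflow)):
        q.append(c0 * q[-1] + c1 * (inflow[i - 1] + inflow[i]))
    return q

def nash_cascade(rainfall_excess, dt, K, n_storages):
    """Nash model: pass the input successively through n identical linear storages."""
    flow = list(rainfall_excess)
    for _ in range(n_storages):
        flow = route_linear_storage(flow, dt, K)
    return flow

# Hypothetical inputs: rainfall excess inflow ordinates (m3/s), 1 h step, K = 3 h, 3 storages
outflow = nash_cascade([0, 40, 90, 60, 20, 0, 0, 0, 0, 0, 0, 0], 3600.0, 3.0 * 3600, 3)
```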
Figure 5.6.6. (a) Clark and (b) Nash models of runoff routing
The example below illustrates the application of a simple runoff routing model to estimate
design flood hydrographs for an ungauged catchment of medium size. In most practical
applications the simple Clark model would be replaced by a node-link type runoff routing
model, implemented through one of the readily available runoff routing modelling systems
referred to in Table 5.6.3 and Table 5.6.4. However, the steps of estimating model
parameters and model inputs for the design application remain similar. The example also
illustrates the application of the critical rainfall duration concept.
The coefficient K of the linear storage can be calculated as a function of the main stream length to the
catchment boundary (L) using the equation proposed by Cordery et al. (1981):
K = 0.70·L^0.57 (5.6.9)
The critical rainfall duration for this catchment is not known a priori, so a range of
rainfall durations from 3 to 24 hours are trialled to find the duration that produces the
highest peak flow. The design rainfall depths and temporal patterns have been selected
in accordance with Book 2.
Based on experience with neighbouring catchments, the following design loss values
have been adopted: IL = 15 mm, CL = 2.5 mm/h.
The resulting hydrographs for the five different durations are shown in the Figure
below. This indicates that the critical rainfall duration for this catchment is about 9
hours. The estimated peak flow for the 1% AEP design flood event is 155 m3/s.
Figure 5.6.7. Clark Runoff Routing Example- resulting hydrographs for the five different
durations
The model uses nonlinear storages, with the relationship between discharge and storage
represented by a power function, as discussed in Book 5, Chapter 5, Section 5 and
expressed by Equation (5.5.32).
The catchment representation in the LRRM can be regarded as a linear network of ten
rainfall input nodes, ten routing links (each with a nonlinear concentrated storage) and one
output node. While the LRRM was originally conceptualised as a runoff routing model for the
whole catchment, it is now more typically used as model to represent the routing of overland
flow in a hillslope segment to a channel network node, e.g. in the XPRAFTS model
(xpsolutions, 2016).
Figure 5.6.8. Original Laurenson Runoff Routing Model (South Creek catchment) (a)
Isochrones of storage delay time (b) Time-area diagram
Figure 5.6.9. Node-link type representation of a catchment in runoff routing models: map
view and schematic representation of node-link network in RORB and WBNM
As explained in Book 5, Chapter 6, Section 4, there are two distinctly different ways to
convert the distributed rainfall excess inputs over a subarea into a runoff hydrograph at the
subarea node (placed near the centroid of the subarea). These subarea runoff hydrographs
are then routed progressively from one node through a routing link to the next node in the
drainage network and eventually to the catchment outlet (Book 5, Chapter 6, Section 4). The
concentrated or distributed storages used in these routing links are shown in Book 5,
Chapter 6, Section 4 as small black triangles.
Some of the routing links receive the outflow hydrograph from an upstream link as well as a
subarea runoff hydrograph. In these links the two hydrographs are combined before they are
routed through to the next node in the network. At stream junction nodes the two (or more)
tributary hydrographs are combined by simple addition.
The drainage network may also have a branched structure, where flows are diverted by
natural or artificial features into a system of distributary or diversion channels, and these
flows may or may not join up again with flows in the main channel. Most runoff routing
modelling systems have the capability to represent the features controlling diversion and
return flows.
If the catchment includes areas with significant extra flood storage, such as natural lakes or
swamps, extensive floodplain areas, reservoirs or flood retention/detention basins, these can
also be included in the drainage network as ‘special storage’ nodes or links, with separately
defined storage and discharge characteristics (Book 5, Chapter 6, Section 4).
The baseflow contributions to the total flood runoff hydrograph (Book 5, Chapter 4) are
typically modelled in a lumped fashion and added to the routed hydrograph at the catchment
outlet. However, in more complex catchment situations with significant baseflow
contributions it is desirable to model the distributed nature of baseflow contributions, by
adding them at each runoff input node and routing the combined runoff hydrograph through
the drainage network.
Figure 5.6.10 shows schematically how the different modelling components can be
combined to convert the distributed rainfall inputs to a hydrograph at the catchment outlet.
The Book 5 chapters and sections providing more detail of the individual components are
also indicated in the figure.
Book 7 provides practical guidance on how these different model elements can best be used
to represent the important features of a specific catchment.
As discussed in more detail in Book 4, Chapter 2, the runoff routing models routinely used in
Australia employ very simplified representations of the processes involved in runoff
production, generally dividing the rainfall inputs into a loss and rainfall excess component,
and dealing only indirectly with baseflow contributions. The main distinction in the modelling
of runoff from contributing areas is in how the routing of runoff within the subareas is treated.
This subarea runoff hydrograph is input at a stream network node near the centroid of the
subarea and then routed successively along the stream network to the catchment outlet. The
modification of the hydrograph as it travels through the stream or channel network is
generally modelled by a linear or nonlinear storage for each routing link (Book 5, Chapter 5,
Section 4 and Book 5, Chapter 5, Section 5).
Examples of the application of the combined subarea and stream network routing approach
include the RORB Runoff Routing Model (Laurenson et al., 2010) and the Basic Model
version of URBS (Carroll, 2012).
The key feature of this runoff routing conceptualisation is that all the translation and
attenuation effects experienced by the runoff inputs on their way to the catchment outlet
have to be represented in the routing through the channel links.
The justification for the combined treatment of overland flow and stream/channel network
routing is firstly that the separation of these two phases is somewhat arbitrary, in that the
change from shallow distributed flow over hillslope surfaces to concentrated flow in water
courses and streams is very gradual. Secondly, if the interest is only on hydrographs at the
catchment outlet, the internal separation into different processes is of secondary importance,
as long as the overall routing and delay characteristics of the catchment and their variation
with flood magnitude can be adequately represented. As discussed in Book 5, Chapter 6,
Section 4, this condition is likely to be satisfied in large catchments with relatively uniform
land use.
The main limitation of this modelling approach is that it is designed to produce only an
integrated catchment response hydrograph at or near the catchment outlet. As demonstrated
by Yu and Ford (1989), this modelling approach does not satisfy the principles of ‘self-
consistency’, as the storage parameter of an individual routing element depends on the size
of the total catchment being modelled. While the model can output hydrographs at any
internal node, hydrographs produced for the upper parts of the catchment are likely to show
positive bias, as they tend to underestimate the degree of attenuation in the routing process.
Table 5.6.3. Methods used in different runoff routing modelling systems to derive the overland flow hydrograph

Method | Example | Reference | ARR Section
Time-area method | ILSAX, DRAINS | O'Loughlin and Slack (2014) | Book 5, Chapter 6, Section 2
Unit hydrograph convolution | HEC-HMS | HEC (2000) | Book 5, Chapter 6, Section 3
Cascade of non-linear storages | XPRAFTS | xpsolutions (2016) | Book 5, Chapter 5, Section 5; Book 5, Chapter 6, Section 4
Nonlinear storage routing | SWMM; URBS; WBNM2000 | EPA (2016); Carroll (2012); Boyd et al. (2002) | Book 5, Chapter 5, Section 5; Book 5, Chapter 6, Section 4
Kinematic wave routing | HEC-HMS (option) | HEC (2000) | Book 5, Chapter 5, Section 6; Book 5, Chapter 6, Section 4
The hydrograph formation methods used by the first three groups of modelling systems have
been described in previous chapters, as indicated in the last column of Table 5.6.3, and the
methods used to estimate their parameters are described in the relevant user manuals. The
catchment conceptualisation used in nonlinear storage routing models and kinematic wave
routing models warrants some additional discussion (Book 5, Chapter 6, Section 4 and Book
5, Chapter 6, Section 4 respectively).
The main advantage of modelling the overland flow phase separately is that this modelling
approach can deal with different land uses in different parts of the catchment and changes to
these land uses, such as substantial urbanisation. If the variation of the routing response
with flood magnitude is quite different for the overland flow and channel routing phases, then
a separate modelling approach lends itself better to extrapolation to Very Rare to Extreme
flood events.
The disadvantage of the separate overland and channel flow routing approach is that it
requires additional parameters to model the contributing area or overland flow component of
the overall catchment routing process. If appropriate information is not available to allow
separate calibration of the parameters to the overland and channel routing processes, then it
may be more appropriate to use a combined routing approach, as described above.
More detailed approaches for modelling runoff from the overland flow segment have also
been proposed and applied. The RRR model (Kemp and Daniell, 1996) allows for the
generation of the runoff hydrograph by two or more different processes, with different losses
and subarea routing delays being applied to each runoff component. Kemp (2002)
postulated three different conceptual processes that can contribute to runoff at the subarea
scale: baseflow, ‘slow flow’ and ‘fast flow’.
For urban catchments, further sub-division of the overland component on a spatial basis to
allotment-size units and subsequent scaling up to the subareas has been proposed by
Goyen (2005).
S = k·Q^m (5.6.11)
where the coefficient k is a delay or lag time parameter related to the lag parameter K in
linear storage routing models (Book 5, Chapter 5, Section 4) by the following equation
k = K·Q^(1−m) (5.6.12)
and the exponent m expresses the degree of nonlinearity of the routing response. Exponent
values in the range of 0.6 to 0.8 are typically used.
k = C·A^b (5.6.13)

where C is a lag parameter and A the area of the subarea of the catchment. Equation (5.6.13) with an
exponent value b = 0.57 is the basis of the subarea routing elements used in the
WBNM2000 model (Boyd et al., 2002).
A similar expression for k is used in the URBS model, with an exponent value b = 0.5
(Carroll, 2012). URBS also allows adjustments of k for the degree of forestation of the
subarea (increasing the value of k) and for the fraction of the subarea being urbanised
(reducing the value of k).
The detailed form of the equations used and the adopted numerical solution method are
described in the manuals of the respective runoff routing modelling systems.
The overland flow discharging from the hillslope segment into the channel (at a node) is
computed as flow in a wide rectangular channel, giving the simplified expression
q = (S^(1/2) / n)·y^(5/3) (5.6.14)
where q is the discharge per unit width of the hillslope, S is the slope of the hillslope plane, n
a roughness coefficient and y the average flow depth over the plane. It should be noted that
the roughness coefficient n for overland flow over a particular type of surface and ground
cover is typically higher than for channels (Bedient et al., 2008).
Figure 5.6.11. Hillslope representation in kinematic wave routing models (a) actual
catchment (b) model representation (from HEC-HMS Manual)
Equation (5.6.14) is substituted into the appropriate form of the kinematic wave equation
∂y/∂t + ∂q/∂x = i (5.6.15)
where x is the distance in the direction of the overland flow and i the rate of rainfall excess
(mm/h) on the hillslope plane.
While analytical solutions are available to solve the overland flow equations, runoff routing
modelling systems such as HEC-HMS (HEC, 2000) use a numerical solution scheme to
solve the kinematic wave equation for y at each time step. This value is then substituted into
Equation (5.6.14) and the flow rate q per unit width (from one of the planes) is multiplied by
twice the width of the hillslope (measured parallel to the channel) to determine the total
overland flow hydrograph from the subarea.
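A minimal explicit finite difference sketch of this procedure is given below; it is not the HEC-HMS scheme itself, the plane geometry, roughness and rainfall excess are hypothetical, and the time step is assumed small enough to satisfy the kinematic form of the Courant condition.

```python
import math

def overland_flow_hydrograph(excess_mm_per_h, dt, plane_length, plane_width,
                             slope, n_rough, n_cells=20):
    """Explicit upwind solution of dy/dt + dq/dx = i with q = (sqrt(S)/n) * y**(5/3)
    (Equations (5.6.14) and (5.6.15)) on a rectangular overland flow plane.
    Returns the total outflow (m3/s) from the two identical planes, i.e. the
    unit-width outflow at the toe multiplied by twice the hillslope width."""
    alpha = math.sqrt(slope) / n_rough
    dx = plane_length / n_cells
    y = [0.0] * n_cells                          # flow depth in each cell (m)
    hydrograph = []
    for i_mm in excess_mm_per_h:                 # one rainfall excess value per step
        i_rate = i_mm / 1000.0 / 3600.0          # mm/h -> m/s
        q = [alpha * d ** (5.0 / 3.0) for d in y]
        for j in range(n_cells - 1, -1, -1):     # upwind flux difference per cell
            q_upstream = q[j - 1] if j > 0 else 0.0
            y[j] = max(y[j] + dt * (i_rate - (q[j] - q_upstream) / dx), 0.0)
        q_toe = alpha * y[-1] ** (5.0 / 3.0)     # unit-width outflow at the toe (m2/s)
        hydrograph.append(q_toe * 2.0 * plane_width)
    return hydrograph

# Hypothetical plane: 100 m long, 400 m wide, 2% slope, n = 0.2; 50 mm/h excess for
# 30 minutes followed by 30 minutes of no excess, at a 1 second time step
flows = overland_flow_hydrograph([50.0] * 1800 + [0.0] * 1800, dt=1.0,
                                 plane_length=100.0, plane_width=400.0,
                                 slope=0.02, n_rough=0.2)
```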
A more detailed description of kinematic wave routing techniques and their application in a
flood hydrograph estimation model (HEC-1 or HEC-HMS) is given in HEC (1993).
Table 5.6.4. Routing models used in different runoff routing modelling systems to route flows
through channel, stream and floodplain reaches
The parameters of the simple lag and storage routing models are generally estimated by
analysis of or calibration to observed hydrographs in the catchment being modelled, or by
transfer of information from gauged catchments in regions with similar streamflow
characteristics. In these methods it is generally assumed that the same parameter set
applies to the different routing links, except for an adjustment to reflect the different time lag
associated with routing reaches of different lengths.
In the Muskingum-Cunge Method, the kinematic wave and the dynamic routing methods the
routing parameters can be determined from direct links with stream survey data and
hydraulic flow characteristics, e.g. channel slope and hydraulic roughness (Equation (5.5.25)
and Equation (5.5.26)). These parameters will thus vary naturally with the topographic and
hydraulic characteristics of the routing reaches.
In the modelling systems using the nonlinear storage routing methods described in Book 5,
Chapter 5, Section 5, the value of the exponent m in the nonlinear storage-discharge
relationship found from calibration to observed hydrographs typically varies in the range from
0.6 to 0.8. These values imply that with increasing flood magnitude discharge increases
more rapidly than storage. A value of m = 1.0 (linear storage) would imply that discharge and
storage increase at the same rate.
The expected variation of the exponent with flood magnitude is particularly important when
appropriate routing parameter values for the estimation of Very Rare to Extreme floods need
to be selected. This question is discussed in more detail in Book 8, Chapter 5, Section 4.
Figure 5.6.12. Routing of rainfall excess hydrograph through a series of nonlinear storages
(after Laurenson et al. (2010))
The characteristic reach length criterion expressed by Equation (5.5.28) means that the
degree of subdivision of the drainage network into sub-reaches represented by concentrated
storages cannot be chosen arbitrarily. Too few or too many sub-reaches will make it difficult
to accurately reflect both the translation and attenuation effects experienced by flood waves
as they move through the drainage network of the actual catchment. Boyd (1985) has shown
empirically that the optimum degree of subdivision of a catchment into subareas and routing
reaches increases approximately with the square root of catchment area.
The different methods available to estimate the routing parameters for different runoff routing
modelling systems are discussed in Book 7, Chapter 5.
The approaches used to represent the routing effects through a special storage range from
linear reservoir routing methods (Book 5, Chapter 5, Section 4) to non-linear storage routing
methods (Book 5, Chapter 5, Section 5), with different methods being applied to define the
S-Q relationship for the storage:
• a nonlinear S-Q relationship for a special channel or floodplain routing link determined
from calculations of storage volumes in the link and corresponding flow through the link for
different flood magnitudes, or from the analysis of hydraulic modelling results
• a set of S-Q relationships for regulated storages with information on the triggers for the
application of the individual S-Q relationships
Details of these different options are provided in the user manuals of the different runoff
routing modelling systems.
The catchment is represented by a large number of grid cells, each with its individual rainfall
input and runoff output – in other words, the cells act as the equivalent to subareas in node-
link type models. However, the different scales of these basic catchment elements have
important implications for the modelling of flood runoff, as they require different
representations of the typical properties of the catchment elements. The non-linear nature of
runoff generation means that the average response from many different small scale
catchment elements cannot be expected to correspond to the lumped response of a subarea
with average properties (e.g. a hillslope in a kinematic wave model).
• a ‘hybrid approach’, where only the flatter parts of the catchment with more complex
floodplain topography are represented by a 2D grid; they receive inflow hydrographs from
the rest of the catchment produced by a traditional runoff routing approach.
The computational basis of the rainfall-on-grid approaches are the two-dimensional unsteady
flow equations introduced in Book 6, Chapter 4, Section 7, or simplified forms of these
equations.
Figure 5.6.13 shows how the generation of runoff from a grid cell is conceptualised. Different
direct rainfall models vary in the degree of detail adopted in modelling the runoff generated
on a grid element, as discussed in Book 5, Chapter 6, Section 5.
The successive routing of the runoff through other grid cells to the catchment outlet then
uses the hydraulic routing approaches incorporated into the modelling package.
However, the amount and direction of outflow from the cell also depend on additional
factors:
In principle the model can accept a detailed space-time distribution of the rainfall inputs but
in practice limited data availability means that more discretised rainfall inputs need to be
used.
The traditional loss models described in Book 5, Chapter 3 can be used but the way these
losses are applied may vary in different models, and the traditional design loss values may
thus not be directly applicable. Loss models that have a more direct physical basis could
also be applied to reflect the varying infiltration, depression storage and transmission losses
that reduce the volume of rainfall inputs but the required data and parameter estimates are
not readily available. Finally, at least in theory, a groundwater model could be integrated to
allow a consistent estimation of losses and baseflow contributions when modelling over an
extended time period.
The topographic information included in the model means that the model can include
relatively large depression storage areas which interact with losses. A process of ‘pre-
wetting’ these storage areas (priming the model with an artificial rainfall burst to fill
depression storages) may have to be used to prevent low bias in modelled flood
hydrographs.
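A minimal sketch of such a pre-wetting step is shown below; the burst depth, duration and recovery gap are placeholders only and would need to be chosen to suit the particular model and catchment:

```python
import numpy as np

def prepend_prewetting_burst(rainfall, burst_depth_mm, timestep_min,
                             burst_duration_min, gap_min):
    """Prepend an artificial 'pre-wetting' burst and a dry gap to a design rainfall series.

    The burst is intended only to fill depression storage before the design event
    so that the modelled flood hydrograph is not biased low.
    """
    burst_steps = int(burst_duration_min / timestep_min)
    gap_steps = int(gap_min / timestep_min)
    burst = np.full(burst_steps, burst_depth_mm / burst_steps)  # uniform artificial burst
    gap = np.zeros(gap_steps)                                   # time for the burst runoff to settle
    return np.concatenate([burst, gap, np.asarray(rainfall, dtype=float)])

design_event = [2.0, 6.0, 12.0, 8.0, 3.0]    # mm per 15 minute step, illustrative only
series = prepend_prewetting_burst(design_event, burst_depth_mm=10.0,
                                  timestep_min=15, burst_duration_min=60, gap_min=120)
print(series)
```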
The model outputs are quite sensitive to the selection of the hydraulic roughness
parameters. Because of the differences in conceptualisation and scale, these grid cell
roughness parameters will generally be different from the values used in traditional channel
hydraulics. Use of depth varying roughness parameters rather than constant ones may be
necessary to reflect the changing hydraulic characteristics of catchment surfaces with flow
depth.
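One common way of implementing depth varying roughness is to interpolate Manning's 'n' between a high shallow-flow value and a conventional deeper-flow value; the sketch below illustrates the idea with placeholder parameter values, not recommended design values:

```python
import numpy as np

def depth_varying_n(depth, n_shallow=0.10, n_deep=0.03, transition_depth=0.10):
    """Illustrative depth-varying Manning's n for direct-rainfall (rain-on-grid) cells.

    Very shallow sheet flow is assigned a high roughness which relaxes linearly
    towards a conventional value once the depth exceeds the transition depth (m).
    """
    depth = np.asarray(depth, dtype=float)
    weight = np.clip(depth / transition_depth, 0.0, 1.0)  # 0 = very shallow, 1 = deep
    return n_shallow + weight * (n_deep - n_shallow)

print(depth_varying_n([0.005, 0.05, 0.5]))   # high n when shallow, low n when deep
```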
The modelling of urban areas requires consideration of the impacts of a mix of different
catchment surfaces, buildings, drainage systems and other infrastructure. The model
resolution will generally not allow these features to be represented in detail, thus
representative cell characteristics need to be adopted, using some form of averaging of the
detailed urban area characteristics.
• Where a 2D model extends over the whole catchment, there is no need to develop and
calibrate a separate hydraulic model, allowing seamless simulation of flood level outputs
from rainfall inputs.
• Drainage paths and flow direction do not need to be predefined, which makes the
approaches particularly useful for runoff routing in flat areas and where catchment
boundaries are not well defined.
• Potentially significant increases in model run times. Hydrological models on their own generate peak flows significantly faster than direct rainfall models, which facilitates their use in
simulation frameworks that aim to ensure probability neutrality in the transformation of
rainfall into floods (as discussed in Book 4, Chapter 3).
• Require digital terrain information. Depending on the accuracy of the results required,
there may be a need for extensive survey data, such as aerial survey data.
• Insufficient resolution of smaller flowpaths may impact upon timing of routed flows. The
smaller flowpaths higher up in the catchment may not be as well-represented by the 2D
model as they may exist on a sub-grid scale. This may affect timing of runoff routing.
• The shallow flows generated in the direct rainfall approach may be outside the typical
range of application of Manning's 'n' roughness parameters and will thus require special
consideration.
ARR (2012) discusses these challenges/limitations and possible ways to deal with them in
more detail.
It is important for both users of rainfall-on-grid models and their clients to realise that greater
detail in the representation of catchment physiography can only be expected to translate to
greater accuracy of flood estimation results if this is accompanied by appropriate
representation of hydrologic flood formation processes at the adopted spatial scale. Given
the present simple representation of such processes and the difficulties of realistically
representing shallow overland flows, it is considered that at present the main value of rain-
on-grid models is their ability to accommodate the influences of hydraulic controls on flow
conditions.
At the current stage of development of these models and with the limited level of experience
gained with their practical application, it is considered premature to recommend the general
use of rainfall-on-grid models in these guidelines.
However, it is expected that further development and testing will allow rainfall-on-grid models
to be more widely applied.
6.6. References
ARR (2012). Australian Rainfall and Runoff Revision Project 15: Two-dimensional modelling
in urban and rural floodplains, Stage 1 and Stage 2 Report.
Babister, M. and Barton, C. (eds) (2016). Australian Rainfall and Runoff Support Document:
Two dimensional modelling in urban and rural floodplains. Project 15
Bedient, P.B., Huber, W.C. and Vieux, B.E. (2008), Hydrology and floodplain analysis, ed. 4,
Upper Saddle River: Prentice Hall.
Body, D.N. (1962), Significance of peak runoff intensity in the application of the unit graph
method to flood estimation, Journal of the Institution of Engineers, Australia, 34: 25-31.
Boyd, M.J. (1982). A note on the Cordery-Webb synthetic unit hydrograph, IEAust. Civil
Engineering Transactions, CE24(1), 107-108
Boyd, M.J. (1975), Hydrograph synthesis using mathematical models, Proceedings of the
1975 Hydrology Symposium, Institution of Engineers, Australia, NCP 75/3, pp: 117-121.
Boyd, M.J. (1985), Effect of catchment subdivision on runoff routing models, Civil
Engineering Transactions, Institution of Engineers, Australia, 27: 403-410.
Boyd, M.J., Rigby, E.H., VanDrie, R. and Schymitzek, I. (2002), WBNM2000 for flood
studies on natural and urban catchments, In 'Mathematical Models of Small Watershed
Hydrology and Applications', pp: 225-258.
Carroll, D.G. (2012), URBS (Unified River Basin Simulator) A Rainfall Runoff Routing Model
for Flood Forecasting and Design. Version 5.00 Dec, 2012.
Clark, C.O. (1945), Storage and the unit hydrograph, Transactions of the American Society
of Civil Engineers, volume 110, pp.1419-1488.
Cordery, I. (1987), The unit hydrograph method of flood estimation, Chapter 8 in Australian
Rainfall & Runoff - A Guide to Flood Estimation, Barton: Engineers Australia.
Cordery, I. and Webb, S.N. (1974), Flood estimation in eastern New South Wales - a design
method, Civil Engineering Transactions, Institution of Engineers, Australia, 16: 87-93.
Cordery, I., Pilgrim, D.H. and Baron, B.C. (1981), Validity of use of small catchment research
results for large basins, Civil Engineering Transactions, Institution of Engineers, Australia,
CE23: 131-137.
United States Environmental Protection Agency (2016), Storm water management model -
reference manual, volume 1 - Hydrology (Revised), Washington: United States
Environmental Protection Agency, available at: <http://nepis.epa.gov/Exe/ZyPDF.cgi?
Dockey=P100NYRA.txt>.
United States Environmental Protection Agency (2015), Storm water management model -
user manual version 5.1, Washington: United States Environmental Protection Agency,
available at: <http://nepis.epa.gov/Exe/ZyPDF.cgi?Dockey=P100N3J6.TXT>.
HEC (2000), Hydrologic Modelling System HEC-HMS Technical Reference Manual, United
States Army Corps of Engineers, Hydrologic Engineering Centre, available at: <http://
www.hec.usace.army.mil/software/hec-hms/documentation/HEC-HMS_Technical
%20Reference%20Manual_%28CPD-74B%29.pdf>.
HEC (1993). Introduction and application of kinematic wave routing techniques using HEC-1.
US Army Corps of Engineers, Hydrologic Engineering Centre, http://www.hec.usace.army.mil/
publications/TrainingDocuments/TD-10.pdf.
Hawken, R.W.H. (1921), An analysis of maximum run-off and rainfall intensity, Transactions
of the Institution of Engineers, Australia, 2: 193-215.
Kemp, D.J. and Daniell, T.M. (1996), A proposal for a rainfall runoff routing (RRR) model.
I.E.Aust. Proceedings of the 23rd Hydrology and Water Resources Symposium, May,
Hobart, pp: 15-20.
Laurenson, E.M. (1998). Hydrology CIV3263 Course Notes, Monash University, Department
of Civil Engineering.
Laurenson, E.M. (1962), Hydrograph synthesis by runoff routing, Research Report 66,
Sydney: Water Research Laboratory, University of New South Wales.
Laurenson, E.M. (1964), A catchment storage model for runoff routing, Journal of
Hydrology, 2: 141-163.
Laurenson, E.M., Mein, R.G. and Nathan, R.J. (2010). RORB Version 6 Runoff Routing
Program - User Manual.
Leopold, L.B., Wolman, M.G. and Miller, J.P. (1964), Fluvial processes in geomorphology,
San Francisco: W.H. Freeman.
Nash, J.E. (1960), A unit hydrograph study with particular reference to British catchments,
Proceedings of the Institution of Civil Engineers, 17: 249-282.
O'Loughlin, G. and Stack, R. (2014), DRAINS User Manual - a manual on the DRAINS
program for urban stormwater drainage system design and analysis, Available at: <http://
www.watercom.com.au/DRAINS%20Manual.pdf>.
Pilgrim, D.H. (1977), Isochrones of travel time and distribution of flood storage from a tracer
study on a small watershed, Water Resources Research, 13(3), 587-596.
Robinson, J. S., Sivapalan, M. and Snell J. D. (1995), On the relative roles of hillslope
processes, channel routing, and network morphology in the hydrologic response of natural
catchments, Water Resources Research, 30(12), 3089-3101.
Ross, C.N. (1921), The calculation of flood discharges by the use of a time contour plan,
Transactions of the Institution of Engineers, Australia, 2: 85-92.
Sherman, L.K. (1932), Streamflow from rainfall by unitgraph method, Engineering News
Record, 108: 501-505.
Titmarsh, G.W. and Cordery, I. (1991), An examination of linear and non linear flood
estimation methods for small agricultural catchments, Australian Civil Engineering
Transactions, 33(4), 233-242.
Yu, B. and Ford, B.R. (1989). Self-Consistency in Runoff Routing Models. IEAust Civil Engineering Transactions, CE31(1), 47-53.
xpsolutions (2016), Rafts: Urban and rural runoff routing application, Belconnen, available at:
<http://xpsolutions.com/assets/dms/XPRAFTS-tech-desc.pdf>.
BOOK 6
Flood Hydraulics
Table of Contents
1. Introduction ........................................................................................................................ 1
1.1. Objectives and Scope ............................................................................................. 1
1.2. References ............................................................................................................. 1
2. Open Channel Hydraulics ................................................................................................. 3
2.1. Introduction ............................................................................................................. 3
2.2. General Characteristics of Open Channels ............................................................ 4
2.2.1. The one dimensional energy equations ..................................................... 10
2.3. Classification of Free Surface Flows .................................................................... 11
2.4. Uniform Flow and Critical Flow ............................................................................. 12
2.4.1. Uniform Flow .............................................................................................. 12
2.4.2. Critical Flow ............................................................................................... 13
2.4.3. Bed Shear Stress ....................................................................................... 17
2.5. Uniform Flow Resistance Formulas ...................................................................... 18
2.5.1. Manning Formula ....................................................................................... 18
2.5.2. Application of the Manning Equation ......................................................... 19
2.5.3. Factors Affecting the Manning Roughness Parameter .............................. 25
2.5.4. Chezy Formula .......................................................................................... 25
2.5.5. Application of the Chezy Equation ............................................................. 28
2.5.6. The link between the Manning roughness coefficient and the
roughness height ................................................................................................. 28
2.5.7. Uniform Flow in Channels of Compound Cross-Section ........................... 30
2.6. Classification of the 1D Backwater and Drawdown Water Surface Profiles ......... 35
2.7. Methods for Calculating Steady State Backwater and Drawdown Curves ........... 38
2.7.1. Direct Step Method for Calculating Backwater and Drawdown Curves ..... 39
2.7.2. Standard Step Method for Calculating Backwater and Drawdown
Curves ................................................................................................................. 40
2.7.3. Averaging Required in Water Surface Profile Calculations ........................ 41
2.8. One Dimensional Unsteady Flow Equations ........................................................ 42
2.8.1. Derivation of the Continuity Equation for Gradually Varied Unsteady
Flow in an Open Channel .................................................................................... 43
2.8.2. Derivation of the Momentum Equation for Gradually Varied Unsteady
Flow in an Open Channel .................................................................................... 43
2.8.3. Why is the time step so important? ............................................................ 45
2.8.4. Steady Flow Equations .............................................................................. 47
2.8.5. Simplifying from Gradually Varied to Steady Uniform Flow ....................... 49
2.9. Numerical Modelling - Two Dimensional Models of Flood Flows ......................... 49
2.9.1. The Mass Equation .................................................................................... 50
2.9.2. The Momentum Equations ......................................................................... 50
2.9.3. Assumptions .............................................................................................. 51
2.10. Extension of the Equations for Modelling Applications ....................................... 52
2.10.1. Extension of the Mass Equation .............................................................. 52
2.10.2. Extension of the Momentum Equations ................................................... 52
2.10.3. Final Forms of the Equations ................................................................... 54
2.10.4. Modelling Requirements and Simplifications ........................................... 54
2.11. Three Dimensional Flow Equations .................................................................... 56
2.12. Physical Modelling .............................................................................................. 59
2.12.1. The Basis for Physical Model Design ...................................................... 59
2.12.2. Model Scales ........................................................................................... 62
2.12.3. Distorted Models ...................................................................................... 62
2.12.4. Model Scales in Froude Models .............................................................. 63
List of Figures
6.2.1. Illustration of the river main channel and floodplains .................................................. 4
6.2.2. The River Murray and its floodplain near Waikerie in South Australia, Courtesy of
Martin Lambert .............................................................................................................. 4
6.2.3. River bend showing areas of deposition and erosion and characteristic
cross-section. ................................................................................................................ 5
6.2.4. Parameters characterising flows in open channels ..................................................... 6
6.2.5. Variation of
cross-sectional properties in natural channel ................................................................ 7
6.2.6. Cross-section velocity distribution in a small straight laboratory channel (velocities
shown as a ratio of the mean) ....................................................................................... 8
6.2.7. Typical velocity distribution in a natural channel cross-section ................................... 8
6.2.8. Typical vertical velocity distribution ............................................................................. 9
6.2.9. Variation of the total energy grade line across the compound channel section assuming a
horizontal water surface. ............................................................................................. 10
6.2.10. Opposing forces acting on a fluid element down the channel
cancel thereby producing uniform flow. ....................................................................... 13
6.2.11. Roughness coefficient data for Esopus Creek with n = 0.030 (Page 34 in
Barnes (1967)). ....................................................................................................... 22
6.2.12. Esopus Creek .......................................................................................................... 23
6.2.13. Relative height of the roughness projection elements and the
thickness of the viscous sublayer. ............................................................................... 28
6.2.14. Comparison between Equation (6.2.31) and the
power law approximation presented in Equation (6.2.32) ........................................... 29
6.2.15. Typical compound channel with floodplains of greater roughness
than the main channel ................................................................................................. 30
6.2.16. Imaginary division of a compound channel assumed by Horton (1933)
to give the same average velocity on the floodplains and in the main channel. ........ 31
6.2.17. Vertical division of a compound channel into floodplain and
main channel subsections ........................................................................................... 31
6.2.18. Alternative approaches to subdividing a compound channel
cross‑section. .............................................................................................................. 32
6.2.19. Drawdown Water Surfaces ...................................................................................... 37
6.2.20. Lateral inflow ........................................................................................................... 40
6.2.21. Control volume used to derive the gradually varying unsteady flow
equations ....................................................................................................................... 43
6.2.22. Definition of symbols ............................................................................................... 49
6.2.23. Process Function Diagram for Friction .................................................................... 68
6.2.24. Data for Cut-throat flumes (a) Uncorrected, (b) Corrected for Scale Effects (after
Keller (1984b)) ......................................................................................................... 71
6.3.1. Schematic of Meander, Floodway, and weir .............................................................. 80
6.3.2. Types of Underflow Gates (a) Vertical (b) Radial (c) Drum ....................................... 82
6.5.16. Water Levels at Olga Bay (left) and Spencer (right) Corresponding to Cases of Complete
Dependence, Complete Independence and the Best Estimate when α=0.9 ............. 212
6.5.17. Longitudinal Comparison of 1% AEP and 10% AEP Water Levels ....................... 213
6.5.18. Water Levels at Macksville Corresponding to Cases of Complete Dependence, Complete
Independence and the Best Estimate when α=0.95 .................................................. 215
6.5.19. Comparison of Observed Water Levels at Macksville with Range of Estimates from Design
Variable Method from Complete Dependence to Complete Independence ............ 216
6.5.20. Comparison of Observed Water Levels to 90% Confidence Limits from Generalised Extreme
Value Distribution and Design Variable Method .................................................... 217
6.5.21. Comparison of Observed Water Levels at Macksville to Best Fit Estimates from Design
Variable Method Assuming Correction to Frequent Rainfall AEPs .......................... 218
6.6.1. Series of Floodplain Culverts .................................................................................. 242
6.6.2. Floodplain Culvert ................................................................................................... 242
6.6.3. Debris Deflector Walls ............................................................................................. 242
6.6.4. Post Flood Collection of Debris on Top of Deflector
Walls .......................................................................................................................... 243
6.6.5. Sediment Training Walls Incorporated with Debris Deflector
Walls (Catchments & Creeks Pty Ltd) ....................................................................... 243
6.6.6. Multi-Cell Culvert with Different Invert Levels .......................................................... 244
6.6.7. Debris Deflector Walls and Sediment Training Wall Added to
Existing Culvert ......................................................................................................... 244
6.7.1. Example of a Flood Study Depth Map (Smith and Wasko (2012)) ........................... 249
6.7.2. Comparison of Provisional Flood Hazard Estimates from Numerical
Models at Differing Grid Resolutions (after Smith and Wasko,
2012) ......................................................................................................................... 250
6.7.3. Typical Modes of Human Instability in Floods (Cox et al., 2010) ............................. 251
6.7.4. Safety Criteria for People in Variable Flow Conditions
Cox et al. (2010) .................................................................................................... 253
6.7.5. Typical Modes of Vehicle Instability (Shand et al., 2011) ......................................... 254
6.7.6. Interim Safety Criteria for Vehicles in Variable Flow Conditions
(Shand et al., 2011) ............................................................................................. 256
6.7.7. Comparison of Building Stability Curves ................................................................. 257
6.7.8. Comparison of Updated Hazard Curves (after Smith et al. 2014) ........................... 259
6.7.9. Combined Flood Hazard Curves (Smith et al., 2014) .............................................. 260
6.7.10. Flood Depth Map From Numerical Model Output (Courtesy WMAwater Pty Ltd) . 263
6.7.11. Flood Hazard Classification From Numerical Model Output (Courtesy WMAwater Pty
Ltd) .......................................................................................................................... 264
6.7.12. Schematic of Proposed Warehouse Development ................................................ 265
6.7.13. 1% AEP Flood Depth Map – Existing Site ............................................................. 266
6.7.14. 1% AEP Provisional Flood Hazard Map – Existing Car Park ................................ 267
6.7.15. Comparison of Car Park Cross Sections (A-A) ..................................................... 268
6.7.16. 1% AEP Flood Depth Map – Revised Car Park .................................................... 269
6.7.17. 1% AEP Provisional Flood Hazard Map – Revised Car Park ................................ 270
6.7.18. 1% AEP Flood Depth - Proposed Flood Detention Basin ..................................... 270
6.7.19. 1% AEP Provisional Flood Hazard Map - Proposed Flood Detention
Basin ......................................................................................................................... 271
6.7.20. Basin Overflow Spillway – Flood Depth ................................................................ 271
6.7.21. Provisional Flood Hazard Map – Basin Overflow Spillway .................................... 272
List of Tables
6.2.1. Values of Roughness Coefficient n for different channel conditions (Sellin
1961) ......................................................................................................................... 19
6.2.2. Valid Manning ‘n’ Ranges for Different Land Use Types ........................................... 24
6.2.3. Values of the roughness projection height ks and Manning n
for straight, clean pipes concentrically jointed. ....................................................... 27
6.2.4. Gradually varied flow classification system (modified from table on p35 of
Fenton (2007)) ......................................................................................................... 36
6.2.5. Comparison of the direct step and standard step methods ....................................... 38
6.3.1. Culvert Flow Characteristics ...................................................................................... 96
6.5.1. Comparison of design flood estimation methods in the joint probability zone ............. 185
6.5.2. Advantages and Disadvantages of Alternative Representations of Joint Extremes (based
on Zheng et al. (2014b)) ...................................................... 191
6.5.3. Worked Examples of the Probability of Two Independent Events Z = X>x or
Y>y .......................................................................................................................... 198
6.5.4. Worked Examples of the Probability of Two Independent Events Z = X>x and
Y>y .......................................................................................................................... 199
6.5.5. Calculating the Probability of Two Independent Events Z = X>x and Y>y, When
Adding the Constraint that Both Events Must Occur on the Same Day .................. 200
6.5.6. Flood Levels of Different Combinations of Rainfall and Storm Tide in Terms of Annual
Exceedance Probability, for a Particular Storm Burst Duration. Only the highlighted cells
need to be evaluated. ................................................................................................ 203
6.5.7. Flood Levels of Different Combinations of Rainfall and Storm Tide in Terms of Annual
Exceedance Probability with a Particular Storm Burst Duration ................................ 205
6.5.8. Model-Derived Water Levels (mAHD) for Given Pairs of Tide and Rainfall Boundary Input
Conditions for a Cross-section Located at Liverpool (Chainage–80 300) ................ 209
6.5.9. Pre-Screen Analysis Pairs at Olga Bay (Chainage–20
400) ......................................................................................................................... 210
6.5.10. Pre-Screen Analysis Pairs at Spencer (Chainage–34
700) ........................................................................................................................... 210
6.5.11. Model-Derived Water Levels (mAHD) for Given Pairs of Storm Tide and Rainfall Boundary
Input Conditions for a Cross-section Located at Olga Bay (Chainage–20 400). ....... 210
6.5.12. Model-Derived Water Levels (mAHD) for Given Pairs of Storm Tide and Rainfall Boundary
Input Conditions for a Cross-section Located at Spencer (Chainage–34 700) ......... 211
6.5.13. Pre-Screening Analysis Pairs for Macksville ......................................................... 213
6.5.14. Flood Levels for Various Combinations of Rainfall and Tide Levels at Macksville
(Pacific Highway Bridge) Nambucca River ............................................................... 214
6.5.15. Hydraulic Response Table for cell (294, 200) ....................................................... 219
6.6.1. Debris Availability - in Source Area of a Particular
Type/Size of Debris ................................................................................................. 232
6.6.2. Debris Mobility - Ability of a Particular Type/Size of
Debris to be Moved into Streams ............................................................................ 232
Chapter 1. Introduction
Martin Lambert
1.1. Objectives and Scope
The primary objective of this book is to provide background information to assist practitioners in carrying out calculations or hydraulic investigations related to free surface flows. The need for such calculations or investigations may be related to floods (inundation levels, concentration of flows to a degree that endangers life, the power of flood flows to threaten or impair structural integrity or even wash away structures, and bed level scour), stormwater disposal, water supply distribution systems and sewerage collection systems (the installation of new systems or augmentation of existing systems).
The present document concentrates on free surface flows. Textbooks are available which
cover the basics of open channel flow. The traditional texts in this area are Henderson
(1966) and Chow (1959) but more modern books include Chaudhry (2007) and Sturm
(2009). Pipe flow is covered in fluid mechanics textbooks like Streeter and Wylie (1981) and
Elger et al. (2014). There are numerous journals which publish research dealing with open
channel, pipe flow and river flood flows and these include the ASCE - Journal of Hydraulic
Engineering, IAHR - Journal of Hydraulic Research and the Journal of Flood Risk
Management. Those practicing and working in this area are encouraged to keep abreast of
new developments as this is an area that is evolving with increases in computational power.
1.2. References
Chaudhry, M.H. (2007), Open Channel Flow, Springer Publishing Company.
Chow, V.T. (1959), Open-Channel Hydraulics. McGraw-Hill Book Company, New York.
Elger, D.F., Williams, B.C., Crowe, C.T. and Roberson, J.A. (2014), Engineering Fluid Mechanics, 10th Edition.
Henderson, F.M. (1966), Open Channel Flow, Macmillan Publishing Co., New York.
Streeter, V.L. and Wylie E.B. (1981), Fluid Mechanics. McGraw-Hill Ryerson Ltd., Singapore.
Sturm, T. (2009), Open Channel Hydraulics, McGraw-Hill Water Resources and Environmental Engineering Series, New York.
Chapter 2. Open Channel Hydraulics
Martin Lambert, Bruce Cathers, Robert Keller
2.1. Introduction
All hydraulic flows are three dimensional in nature and involve complex turbulent flow
motion. This is not unlike hydrology where the three dimensional and turbulent nature of
atmospheric flows create temporal and spatial variability in rainfall and runoff which must be
dealt with. The type of hydraulic computations to be undertaken will depend on the problem
to be examined and on the data that is available. As a result, some thought is needed to
determine the appropriate analysis techniques. For example, an unsteady flow analysis will
not be possible if only peak discharges are known.
Free surface flows are driven by gravity and resisted by shear forces on the channel bed and
drag forces on objects such as vegetation and obstructions. The free surface that exists in
open channel flow means that the flow depth and flow area will most likely change with
distance and/or time. In contrast, closed conduits that are operating under pressure have a
fixed cross-sectional area and are driven by the pressure gradient. A combination of free
surface flow in conjunction with closed conduit flow is not uncommon. Hydraulic modelling
for the purposes of flood estimation typically assumes that the flow is composed of water
that is incompressible, but in reality it may be sediment or debris laden and in some
cases multi-phase.
Hydraulic computations are usually carried out to determine flood characteristics such as:
• flow depths or water levels (e.g. for recorded floods or synthesised floods of a particular magnitude),
• energy losses, which may be either:
i. friction losses, which are cumulative and can be significant over long distances, or
ii. local losses, such as those which occur due to flow constrictions and expansions imposed by hydraulic structures or flow around bends.
Figure 6.2.2. The River Murray and its floodplain near Waikerie in South Australia, Courtesy
of Martin Lambert
At bends in the water course, natural cross-sections are asymmetrical; they tend to be
deeper on the outside of the bend due to the effect of helicoidal secondary currents which
tend to scour the outside of the bend and deposit sediment at the inside of the bend as
illustrated in Figure 6.2.3.
Figure 6.2.3. River bend showing areas of deposition and erosion and characteristic cross-
section.
Since the channel roughness affects the flow, it is important to be able to quantify the
roughness. The roughness in channels is determined by the materials from which the
channel is cut or made, including any vegetation which is growing (or lodged) in the channel.
In man-made channels, the roughness must also include the effects of any jointing between
panels, slabs or pipes. In channels with significant sediment transport, the roughness may change with flow as the bedforms change in dimension. The challenge of determining the appropriate roughness to use in flood level computations should not be under-estimated; estimates are often based on experience and should be validated or calibrated where possible.
Apart from channel roughness, the main parameters associated with the channels are:
i. the cross-sectional area A (measured in a cross-section at right angles to the flow direction),
iii. the top width B (sometimes also called the storage width),
Depth is usually measured vertically up from the bottom point in the cross-section (rather
than at right angles to the bed), while the stage is measured vertically from a datum to the
water surface. See Figure 6.2.4.
In natural channels the cross-sectional parameters vary with distance along the channel as
shown in Figure 6.2.5.
Open Channel Flows are three dimensional but often treated as one dimensional
Open channel flows are three dimensional in nature and must satisfy the fundamental
equations of fluid motion governed by the Navier-Stokes equations. A solution of these
three-dimensional equations using direct numerical simulation, however, is not feasible for
the spatial scales dealt with in flood estimation. Three-dimensional simulations using models
which approximate the turbulence behaviour are feasible for small sections of a river reach
and are seeing some use. Advances in computational power and numerical methods have
allowed common usage of unsteady two-dimensional (2D) depth averaged models in flood
hydraulics simulation. These approaches have largely replaced traditional one-dimensional
(1D steady and unsteady) flow analysis for most flood studies, even where the flood behaviour is essentially 1D, due to the ease with which they produce flood inundation maps. While the
treatment of river flows as a one-dimensional flow does make considerable assumptions
about the flow field, this approach has served the engineering community well for over a
century and is still a powerful tool provided the assumptions utilised are appropriate. The
extent of some rivers is so large that even two dimensional models are infeasible, and
methods have been developed which allow the 1D and 2D approaches to work together.
When a fluid with viscosity, such as water, flows in an open channel, boundary shear
stresses resist the flow and prevent the unchecked acceleration of the water in the downhill
direction. These resistive forces are transmitted throughout the main body of the flow by
either viscous or turbulent shear stresses generated by velocity gradients over the cross-
section. As a result, a uniform flow cannot have a uniform velocity distribution.
The velocity distribution is shown in Figure 6.2.8 where (a) shows the vertical velocity
distribution on the centreline of a rectangular channel in which the depth is equal to one half
of the breadth. In the same figure, curve (b) shows the vertical distribution of mean velocity;
each point on this curve represents the average velocity in a horizontal line across the
section at that level. Secondary currents in the plane of the cross-section produce circulation
which accounts for both the depression of the maximum velocity filament and the observed
movement of floating material towards the centre of the channel surface. This is another
example of the three dimensional nature of the flows.
A traditional one dimensional approach to open channel flow uses a single value to express
the velocity at a cross-section. This is normally the average velocity V defined as the
discharge divided by the cross-section area, and forms the basis of the continuity equation:

$V = \frac{Q}{A}$   (6.2.1)
This simplification leads to an error in any calculations of kinetic energy head since the mean
of the squares of individual values is always larger than the square of the mean value. To
make allowance for this effect an energy coefficient α is normally introduced so that the kinetic energy head at a cross-section is then $\alpha \frac{V^2}{2g}$. For complex cross-sections, such as compound channels or close to constrictions like bridge piers and weirs, the value of α can be significant. Figure 6.2.9 shows the variation in the total energy line between the main channel and the floodplain if the water surface is horizontal in the channel cross-section.
Also shown is the average total energy line given by the energy correction coefficient. For a
detailed discussion of this and other velocity coefficients see Chow (1959).
Figure 6.2.9. Variation of the total energy grade line across the compound channel section
assuming a horizontal water surface.
The total head at a cross-section is given by:

$H = z + y + \alpha \frac{u^2}{2g}$   (6.2.2)

where z = vertical distance from the datum to the channel invert, or potential energy per unit weight of fluid (m); y = depth of flow (m); u = cross-sectionally averaged flow velocity (m/s); g = acceleration due to gravity = 9.806 m/s²; α = dimensionless kinetic energy coefficient. The term

$\alpha \frac{u^2}{2g}$   (6.2.3)

is the kinetic energy (velocity) head per unit weight of fluid.
The need for the kinetic energy coefficient (α) in Equation (6.2.2) and Equation (6.2.3) arises whenever the velocity is non-uniform over the cross-section. In the case of a uniform velocity over a cross-section, α = 1. Departures from a uniform velocity over a cross-section result in the cube of the mean velocity over the cross-section having a different value from the cube of the velocity (at each point in the cross-section) when averaged over the cross-section.

Because the cube of the cross-sectionally averaged velocity will be less than the average of the local velocity cubed, the momentum (β) and energy (α) correction coefficients are introduced as:

$\beta = \frac{\sum V_i^2 A_i}{V^2 A}$   (6.2.4)

$\alpha = \frac{\sum V_i^3 A_i}{V^3 A}$   (6.2.5)

where $V_i$ = velocity through cross-sectional area $A_i$, V = cross-section averaged velocity and A = total flow area.
In reality, the flow velocity varies over the cross-section and where measurements are available it is possible to evaluate the parameter α. In practice, for turbulent flow in pipes, α = 1.03 − 1.06 and is normally set to unity, but for open channels, particularly compound channels, the value of α can depart significantly further from unity. However, it is not an uncommon practice to also adopt α = 1 in computations for simple prismatic open channel flows.
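Where gauged velocity and area data are available for the subsections of a cross-section, α (and the companion momentum coefficient β) can be evaluated directly from the definitions in Equation (6.2.4) and Equation (6.2.5); the sketch below uses illustrative subsection values only:

```python
import numpy as np

def velocity_coefficients(velocities, areas):
    """Energy (alpha) and momentum (beta) coefficients from subsection velocities and areas."""
    v = np.asarray(velocities, dtype=float)
    a = np.asarray(areas, dtype=float)
    area_total = a.sum()
    v_mean = (v * a).sum() / area_total               # discharge divided by total area
    beta = (v**2 * a).sum() / (v_mean**2 * area_total)
    alpha = (v**3 * a).sum() / (v_mean**3 * area_total)
    return alpha, beta

# Illustrative compound section: fast main channel, two slow floodplains
alpha, beta = velocity_coefficients(velocities=[2.0, 0.5, 0.4], areas=[60.0, 40.0, 30.0])
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")      # alpha departs well above unity
```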
The specific head or specific energy (E) is the total head with respect to the channel invert
and is given by:
$E = y + \alpha \frac{u^2}{2g}$   (6.2.6)
Specific energy is a concept which is useful for determining the water surface profile through smooth transitions such as a channel narrowing, a smooth hump or smooth step in the channel, or a combination of these channel transitions. This equation is often differentiated to give the minimum specific energy (critical flow); however, the fact that α varies with depth is a complication. In a compound channel α can be significantly greater than 1 and must be considered.
The equations defining total head (Equation (6.2.2)) and specific energy (Equation (6.2.6)) have two assumptions built into them:
i. the pressure distribution is hydrostatic since the streamlines are straight and parallel, and
ii. the depth measured vertically (y) is a good approximation to the actual pressure head (y cos²θ) in an open channel flow with a bed inclined at θ to the horizontal. For example, a bedslope (S = tanθ) which is as steep as 1V:10H (or θ = 5.7°) only gives rise to an error which is close to 1% of the depth, i.e. y·cos²(5.7°) = 0.99y.
• in natural channels (as in-bank and overbank flows of rivers, streams, and creeks),
• in manmade channels (in stormwater and wastewater treatment plants, in urban drainage
systems),
• in pipes and closed conduits (as in sewerage channels, pipelines and culverts).
• according to their spatial variation (as 1D, 2D or Three Dimensional (3D) - even though
the small scale turbulent motions are always in 3D),
• as gradually varying flow (as in backwater and drawdown longitudinal water surface
profiles) or rapidly varying flows (as in hydraulic jumps). There is no sharp line of
demarcation separating gradually varying flow and rapidly varying flow but the distinction
between them can be put in terms of:
i. a comparison between the radius of curvature of the streamlines and the depth of flow, or
ii. whether the pressure distribution through the vertical can be approximated as being
hydrostatic or not.
• critical flow.
From a theoretical perspective, these form important bounding conditions to water surface
profiles and allow the classification of the flow profiles. This water surface profile
classification allows for a greater understanding of the flow, what controls it and how it should behave. For example, is the channel hydraulically mild or steep, and where are the controls? In addition, it also gives insight into where a hydraulic jump might occur if the flow is
constricted or controlled in some way downstream. A detailed discussion of flow profile
classification is given in Henderson (1966). Detailed knowledge and understanding of these
flow classifications and their implications is essential when interpreting outputs of water
surface profile calculation numerical models. It is also good practice to verify and validate
complex models with simpler water surface profile calculations before application to the
more complex topologic problems often encountered in real flood studies. For example, can
the model maintain a uniform flow, is the water surface profile correct in a variety of gradually
varied flow situations, how does it handle controls, can it compute both sub-critical and
super-critical flows in simple channels and can it locate hydraulic jumps correctly? These are
good checks to ensure that 1) the model is working correctly and 2) that the user is using the
model correctly.
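As a simple illustration of such checks, the sketch below (assumed unit discharge and roughness, and a wide rectangular channel so that R ≈ y) compares normal and critical depths to decide whether a channel is hydraulically mild or steep; it uses the Manning and critical flow relations introduced later in this chapter (Equation (6.2.26) and Equation (6.2.9)):

```python
def normal_depth_wide(q, slope, n):
    """Normal depth (m) for a wide rectangular channel, from Manning's formula in
    closed form: q = y**(5/3) * sqrt(S) / n, hence y = (q * n / sqrt(S)) ** (3/5)."""
    return (q * n / slope ** 0.5) ** 0.6

def critical_depth_wide(q, g=9.806):
    """Critical depth (m) for a rectangular channel: yc = (q**2 / g) ** (1/3)."""
    return (q ** 2 / g) ** (1.0 / 3.0)

q, n = 3.0, 0.035                      # unit discharge (m^2/s) and roughness, illustrative
for slope in (0.0005, 0.02):           # a flat slope and a steep slope
    yn, yc = normal_depth_wide(q, slope, n), critical_depth_wide(q)
    regime = "mild (yn > yc)" if yn > yc else "steep (yn < yc)"
    print(f"S = {slope}: yn = {yn:.2f} m, yc = {yc:.2f} m -> hydraulically {regime}")
```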
• the weight or gravity force resolved down the channel (W sinθ ) which tends to accelerate
the flow, and
• the frictional shear force ($\tau_0 P L$) which opposes the motion of the water and acts in the longitudinal direction, tangential to the flow boundary of the channel in contact with the water (see Figure 6.2.10). The shear force acts around the wetted perimeter of the channel cross-section.
Figure 6.2.10 depicts the dominant forces acting on an elemental volume of water. For the flow condition of uniform flow, all the forces cancel out and there is no net force acting on the fluid element since $W \sin\theta = \tau_0 P L$.
Because the hydrostatic pressure forces on the two ends of the control volume cancel each
other out, they do not enter the picture here. In uniform flow, the bed slope So, the slope of
the water surface Sws and the slope of the total head line or friction slope Sf are all equal.
While uniform flow rarely (if ever) occurs in nature, it may be a reasonable approximation for
the flow down a long, prismatic, manmade channel or as a first pass for estimating a flow
depth in a natural channel given a discharge.
Figure 6.2.10. Opposing forces acting on a fluid element down the channel cancel thereby
producing uniform flow.
ii. channel with a rectangular cross-section which is a particular but useful case.
$Fr = \sqrt{\frac{Q^2 T}{g A^3}} = 1$   (6.2.7)
At critical flow, the value of the Froude number is unity and the resulting expressions are:

$Fr = \frac{q}{\sqrt{g y_c^3}} = \frac{V}{\sqrt{g y_c}} = 1$   (6.2.8)

$y_c = \left(\frac{q^2}{g}\right)^{1/3}$   (6.2.9)

where q = flow per unit width in a channel with rectangular cross-section (m²/s), $y_c$ = critical flow depth in a channel with rectangular cross-section (m), and V = flow velocity (m/s) when the depth is $y_c$.
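A small worked example (with an assumed discharge and channel width) applying Equation (6.2.9), and confirming that the Froude number is unity at the critical depth:

```python
# Critical depth in a rectangular channel, Equation (6.2.9); values are illustrative.
g = 9.806          # acceleration due to gravity (m/s^2)
Q = 12.0           # discharge (m^3/s), assumed
b = 4.0            # channel width (m), assumed

q = Q / b                          # flow per unit width (m^2/s)
yc = (q**2 / g) ** (1.0 / 3.0)     # critical depth (m)
V = q / yc                         # velocity at critical depth (m/s)
Fr = V / (g * yc) ** 0.5           # Froude number, Equation (6.2.8)

print(f"q = {q:.2f} m^2/s, yc = {yc:.3f} m, Fr at yc = {Fr:.3f}")   # Fr -> 1.0
```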
Cross-sections at which the flow is critical are special in that there is a unique relation between velocity (or rate of flow) and depth, irrespective of the channel roughness or bedslope. Such cross-sections are technically known as controls and examples of locations where they occur include:
vii. a hydraulic structure such as a broad crested weir. Some hydraulic structures that consist of control gates (such as sluice or radial gates) also provide a unique relationship between depth and discharge, but critical depth is not present.
In an open channel with various features such as those listed above, it is possible that if the
flow changes, so can the control(s). For a given flow, a channel can have more than one
control. The actual controls can only be identified through a trial and error process in which
channel feature(s) is/are assumed to be control(s), and the computations on the backwater
and drawdown curves completed. If the resulting water surface profile is compatible with the
known boundary conditions at both ends of the channel, then the assumed control(s) are
valid. If, on the other hand, an incompatibility is reached, it means that one or more of the
assumptions regarding the controls needs to be changed and the solution process of
calculating the water surface profiles is repeated until compatibility with all imposed
boundary conditions is achieved. In a channel with some potential controls, the above
solution process can be lengthy and require a number of trials.
Petryk and Grant (1978) were the first to propose an alternative Froude number for
compound channels. This Froude number was based on a discharge weighted average of
each subsection's Froude number calculated from simple channel procedures. While this
method was simple in its application, it was shown to have limitations (Blalock and Sturm,
1981).
Blalock and Sturm (1981) developed a method of calculating the critical depth in a
compound channel based on the definition of minimum specific energy, including the kinetic energy correction coefficient.
$E = y + \alpha \frac{u^2}{2g}$   (6.2.10)

where α is the kinetic energy correction factor, u is the average velocity within the cross-section and y is the depth of flow.
The minimum specific energy is found by setting the derivative of Equation (6.2.10) with respect to depth to zero. This yields the following general Froude number for open channel flow:

$F_{rc} = \left[\frac{\alpha Q^2 T}{g A^3} - \frac{Q^2}{2 g A^2}\frac{d\alpha}{dy}\right]^{1/2}$   (6.2.11)

where Q is the total discharge, A is the total cross-sectional area and T is the channel top width.
Blalock and Sturm (1981) stated that early works on compound channel sections assumed a value of unity for the kinetic energy correction factor, α, which gives:

$F_{rc} = \left[\frac{Q^2 T}{g A^3}\right]^{1/2}$   (6.2.12)
This is the equation that would be used in a prismatic channel where the average velocity is
a good representation of the flow velocity distribution. Equation (6.2.12) considers the
compound channel as a single unit. There are some similarities here to the Single Channel
Method (SCM) used to calculate uniform flow in compound channels (Lambert and Myers,
1998). Equation (6.2.12) would be appropriate in a compound channel if the velocities in the
different sub-areas were of a similar magnitude (Lee et al., 2002).
Blalock and Sturm (1981) found that the value of α is not a constant value, but varies as a
function of depth in compound channels. Blalock and Sturm (1981) evaluated the kinetic
energy correction factor using the traditional divided channel method with vertical divisions
(Chow, 1959). The channel was split into three independent sections, the main channel, and
the two adjacent floodplains. The assumption was that the non-uniformity in the velocity
profile occurs predominantly as a result of the difference in velocities between sections, and
the variation within each section was seen to be negligible. The Manning formula was used
to calculate the conveyance of each section and the boundary between the sub-sections
was not included in the wetted perimeter of the sub-sections. Using this method, the value of
α in a compound channel was derived to be:

$\alpha = \frac{\sum_{i=1}^{3} K_i^3 / A_i^2}{K_T^3 / A_T^2}$   (6.2.13)

where $K_i$ is the conveyance of the ith subsection, given by $K_i = \frac{1}{n_i} A_i R_i^{2/3}$, $A_i$ is the cross-sectional area, $n_i$ is the Manning surface roughness coefficient, $R_i$ is the hydraulic radius of the cross-section, and $K_T$, $A_T$ are the total conveyance and cross-sectional area for the entire channel respectively.
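The sketch below (illustrative subsection geometry and roughness only) evaluates Equation (6.2.13) for a compound section, with the subsection conveyances computed from the Manning formula and the vertical divisions excluded from the wetted perimeters:

```python
import numpy as np

def dcm_alpha(areas, perimeters, mannings_n):
    """Kinetic energy correction factor alpha for a compound section, Equation (6.2.13)."""
    a = np.asarray(areas, dtype=float)
    p = np.asarray(perimeters, dtype=float)
    n = np.asarray(mannings_n, dtype=float)
    r = a / p                                   # hydraulic radius of each subsection
    k = a * r ** (2.0 / 3.0) / n                # subsection conveyances K_i
    k_total, a_total = k.sum(), a.sum()
    return (k**3 / a**2).sum() * a_total**2 / k_total**3

# Illustrative section: main channel plus two rougher floodplains
alpha = dcm_alpha(areas=[60.0, 40.0, 30.0],
                  perimeters=[22.0, 41.0, 31.0],
                  mannings_n=[0.03, 0.08, 0.08])
print(f"alpha = {alpha:.2f}")
```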
Differentiating Equation (6.2.13) with respect to the flow depth gives:

$\frac{d\alpha}{dy} = \frac{A_T^2}{K_T^3}\left(\sigma_1 + \frac{2\sigma_2 T}{A_T} - \frac{\sigma_2 \sigma_3}{K_T}\right)$   (6.2.14)

where $\sigma_N$ is the Nth section property defined by Blalock and Sturm (1981) as:

$\sigma_1 = \sum_{i=1}^{3}\frac{K_i^3}{A_i^3}\left(3T_i - 2R_i\frac{\partial P_i}{\partial y}\right)$   (6.2.15)

$\sigma_2 = \sum_{i=1}^{3}\frac{K_i^3}{A_i^2}$   (6.2.16)

$\sigma_3 = \sum_{i=1}^{3}\frac{K_i}{A_i}\left(5T_i - 2R_i\frac{\partial P_i}{\partial y}\right)$   (6.2.17)
and, Ti is the top width of flow and Pi is the wetted perimeter for the ith subsection.
The Froude number for a compound channel, based on the definition of critical flow as the
point of minimum specific energy is given by substituting Equation (6.2.13) and Equation
(6.2.14) into Equation (6.2.11):
$F_{rc} = \left[\frac{Q^2}{2 g K_T^3}\left(\frac{\sigma_2 \sigma_3}{K_T} - \sigma_1\right)\right]^{1/2}$   (6.2.18)
This approach can result in multiple values for critical depth in a compound channel.
Typically, one critical depth is within the main channel and the other is above. Exactly which
value should be used is somewhat unclear and is often found by a numerical minimisation
method.
For any given overbank depth a discharge and velocity can be computed using the divided
channel method with vertical divisions, and a subsection Froude number could be computed
based on these values. This is consistent with treating these sub-sections as independent
channels. Further discussion of this is given in Lee et al. (2002) who suggested that there
may be a transition zone between subcritical and supercritical flow where a mixed flow
regime is possible and that the concept of a specific value of critical depth given above may
not be meaningful.
i. the viscous skin drag between the moving fluid and the flow boundary, and
ii. the form drag on the roughness projections of the flow boundary.
While a shear force is also developed at the air-water interface, it is generally small (except for
strong winds over large surface areas) in comparison to the shear force around the interface
of the water and channel (or water and pipe wall).
An expression for the bed shear stress for uniform flow ($\tau_0$) can be determined by equating the two forces acting on the fluid element of length L in Figure 6.2.10:

$W \sin\theta = \tau_0 P L$   (6.2.19)

where the weight of the water in the element is

$W = \rho g A L$   (6.2.20)

so that

$\tau_0 = \frac{W}{P L}\sin\theta$   (6.2.21)

$\;\;\; \approx \rho g y_h \tan\theta$   (6.2.22)

$\;\;\; = \rho g S y_h$   (6.2.23)

where the hydraulic mean depth is

$y_h = \frac{A}{P}$   (6.2.24)

The approximation $\sin\theta \approx \tan\theta$ is only in error by 1% when the channel slope is as steep as about 8° (or 7H:1V), since $\sin 8°/\tan 8° = 0.99$.
Equation (6.2.21) applies to uniform flow, but it can be generalised to include gradually varying flow by replacing the slope S with the friction slope $S_f$. For gradually varying flow, the bed shear stress is given by:

$\tau_0 = \rho g y_h S_f$   (6.2.25)
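A short worked calculation (with assumed flow area, wetted perimeter and friction slope) of the bed shear stress using Equation (6.2.24) and Equation (6.2.25):

```python
# Bed shear stress for gradually varied flow; all input values are illustrative.
rho = 1000.0     # water density (kg/m^3)
g = 9.806        # acceleration due to gravity (m/s^2)
A = 25.0         # flow area (m^2), assumed
P = 14.0         # wetted perimeter (m), assumed
Sf = 0.001       # friction slope (-), assumed

yh = A / P                      # hydraulic mean depth, Equation (6.2.24)
tau0 = rho * g * yh * Sf        # bed shear stress (N/m^2), Equation (6.2.25)
print(f"yh = {yh:.2f} m, tau0 = {tau0:.1f} N/m^2")
```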
The bed shear stress is important when considering the flow velocities necessary for scour
and deposition including:
• Manning equation
While the Chezy and Manning formulas are more commonly applied to open channel flows,
the Darcy-Weisbach and Colebrook-White equations are more commonly applied to pipe
flows.
Both the Chezy and Manning formulas relate the cross-sectional averaged velocity to the
channel slope, the hydraulic radius and an empirical parameter which is used to encapsulate
the effects of the resistance to flow. The two formulas have a different form to each other. A
discussion of the resistance equations is given in the American Society of Civil Engineers
Task Force on Friction Factors in Open Channels (ASCE Task Force, 1963).
$V = \frac{y_h^{2/3} S^{1/2}}{n}$   (6.2.26)
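A minimal sketch of applying the Manning formula (Equation (6.2.26)) to an assumed rectangular channel; the geometry, slope and roughness values are illustrative only:

```python
def manning_velocity(yh, slope, n):
    """Cross-section averaged velocity (m/s) from the Manning formula, Equation (6.2.26).

    yh is the hydraulic mean depth A/P (m), slope is the friction/bed slope (-),
    and n is the Manning roughness; SI units are assumed throughout.
    """
    return yh ** (2.0 / 3.0) * slope ** 0.5 / n

# Illustrative rectangular channel: width 8 m, depth 1.5 m, slope 0.0005, n = 0.035
b, y, S, n = 8.0, 1.5, 0.0005, 0.035
A, P = b * y, b + 2.0 * y
V = manning_velocity(A / P, S, n)
print(f"V = {V:.2f} m/s, Q = {V * A:.1f} m^3/s")
```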
The Manning equation (Equation (6.2.26)) is not dimensionally homogeneous and this has led to confusion about the dimensions of the Manning roughness parameter n. There are three interpretations of the dimensions of Manning's n (see Section 5-6 in Chow (1959)):
• $L^{-1/3}T$ – these dimensions result directly from the Manning equation (Equation (6.2.26));
• $L^0 T^0$ – i.e. dimensionless. Consequently, the unstated coefficient of unity at the front of the right hand side of Equation (6.2.26) must have the dimensions of $L^{1/3}T^{-1}$;
• $L^{1/6}$ – in this case, the dimensions of n are independent of time, as would be expected of a roughness parameter, and only involve length, since n is a measure of the absolute roughness of the channel surface. In this case, it is assumed that there is a coefficient of unity, equal to $\sqrt{g/9.81}$, attached to the right hand side of the Manning equation (Equation (6.2.26)). In engineering practice, this is perhaps the most commonly adopted version for the units of Manning n.
1. With Manning n being held constant for a particular channel, irrespective of the flows
down the channel. This case corresponds to a direct relationship between the square of
the flow velocity and the slope (i.e. $S \sim V^2$) and is a characteristic of a fully rough turbulent
flow.
2. With Manning n varying in some prescribed fashion according to the depth of flow (or
stage or hydraulic radius). The actual variation of Manning n will normally have been
determined from measurements and back-calculated through the Manning equation. When
used in numerical modelling of flood studies, the variation may be stored as a table of
values for varying depth or stage.
Estimates for the values of Manning n can be found from various sources. An example of a
table of Manning n is given in the table below.
Table 6.2.1. Values of Roughness Coefficient n for different channel conditions (Sellin 1961)
1. Tables of Manning n can be found in numerous references for different surfaces such as
asbestos cement, concrete (centrifugally spun), ductile iron and steel (e.g. Table 2 in AS
2200 (2006) and Table 5-6 in Chow (1959)). References often give a range of values for
minimum and maximum values corresponding to pipes in good to poor condition.
2. Twenty four black and white photographs of manmade and natural river channels can be
found in Figure 5-5 of reference Chow (1959) with corresponding values of Manning
roughness parameter ranging from n = 0.012 to n = 0.150. No information on the stream
or river geometry (i.e. plan view or cross-sections) is given in this reference. The main
difficulty with acquiring estimates of n from these photos is that one is relying on the brief
caption for each photo and the view of the exposed part of the bank or channel to gauge
the roughness (and hence the Manning’s n) of the submerged part of the channel or of
the channel under flood conditions.
3. Colour photos of fifty natural rivers, two to seven (but typically three or four) cross-
sections of each river channel and a plan view can be found in Barnes (1967). This
information was assembled by the U.S. Geological Survey over a 15 year period.
Although this reference is old with its quantitative information in imperial units, it is
perhaps the best known and complete compendium on Manning n for natural streams and
rivers. The Manning n values range from n = 0.024 through to n = 0.075 and all
measurements in this reference were made during the peaks of documented floods. The
locations of the cross-sections are indicated on an accompanying plan view of the stream
or river. It is clear from the plan views that the channel reaches to which the Manning n values refer are straight or gently curving. Other information which can be found in this reference includes the river name, geographical location, date of the flood, the flow, a
description of the bed material and condition of both river banks, and a table of cross-
sectional flow area, top width, mean depth, hydraulic radius, mean flow velocity, length
between cross-sections, and the slope between sections.
Interestingly, rivers and streams with sandy beds were not included in this reference
because Manning n values for streams and rivers of this type, which depend upon the size of
the bed material and bedforms, can be found elsewhere. The beds of the streams and rivers
included in this reference ranged from boulder strewn mountain streams to heavily vegetated
streams and rivers.
4. Other references with pictures of channels of varying roughness can be found on various web sites (Phillips and Ingersoll, 1998) or in other references from the U.S. Geological Survey.
Figure 6.2.11. Roughness coefficient data for Esopus Creek with n = 0.030 (Page 34 in
Barnes (1967)).
As two dimensional models account for some non-boundary energy losses they often use
slightly lower Manning roughness values than the one dimensional models listed above. In a 1D
model roughness is typically assigned according to the cross-sections or along particular
branches. For 2D models, roughness is generally specified as a spatially varying grid/mesh
over the 2D model domain. It is important to note that the loss processes embedded in
hydraulic roughness parameters for 1D and 2D models, whilst closely related, are not
exactly the same.
In a 2D model, some of the above losses are to some degree accounted for by the numerical
scheme. For example, some aspects of bend losses due to change in directional momentum
are explicitly modelled in a full 2D solution. Similarly, part of the form loss due to variations in the channel shape along a reach is also captured directly by the 2D solution.
In urban areas, the way in which buildings are represented in the model has a significant
bearing on the specification of roughness. In areas where buildings are explicitly represented
as obstructions in the model topography, roughness in the surrounding areas should only
account for the nature of the land-use (such as grass, paved, or vegetated areas).
Alternatively, where buildings or other major obstructions are not explicitly modelled, the
impact of these features on losses can be incorporated into the roughness parameter, using
a significantly higher value than would otherwise be the case. Book 6, Chapter 4 contains
further discussion on incorporating buildings, fences and other urban features within the 2D
domain.
Applicable ranges for hydraulic roughness in 1D models have been well established and
defined in numerous references over the last 50+ years, such as Chow et al. (1988). Values
that represent average conditions within and between cross-sections are applied either at
the cross-section or along a part or all of the branch.
Roughness maps have also been generated from automated image or LiDAR processing in some areas. However, this is not a commonly adopted technique at present.
Table 6.2.2. Valid Manning ‘n’ Ranges for Different Land Use Types
• flow expansions and contractions which may cause the flow to separate from the flow
boundary and form a recirculation bubble in which energy is continuously dissipated in the
eddy,
• channel sinuosity which gives rise to secondary currents. (No natural channel runs straight for more than ten times its width.)
The Chezy equation is:

V = C\sqrt{R_h S}     (6.2.27)
The Chezy coefficient is a measure of the smoothness of the channel since the smoother the
channel, the greater is the value of the Chezy coefficient. However, in general, the
resistance to flow depends on the viscosity of the water, as well as the roughness of the
surface of the channel. The general semi-empirical expression for the Chezy coefficient is:
C = \sqrt{32g}\,\log\left(\frac{12 R_h}{\frac{2\delta_s}{7} + k_s}\right)     (6.2.28)

= 18\log\left(\frac{12 R_h}{\frac{2\delta_s}{7} + k_s}\right)     (6.2.29)

where:
δs = viscous sublayer thickness (m) = 11.6 ν/v*
v* = shear velocity (m/s) = \sqrt{g R_h S}
ν = kinematic viscosity of water (m²/s) ≈ 10⁻⁶ m²/s for water at 20°C
ks = boundary roughness projection height (m)
A few more words on ks as it relates to pipe wall roughness and to bedforms are in order.
For commercial pipes (with their various pipe-to-pipe joints), the wall roughness will have a
different geometry and distribution of roughness projection heights compared to Nikuradse’s
sand coated pipes.
A commercial pipe is said to have an equivalent sand grain roughness ks when a pipe of the same size and length, coated with sand of that grain size, yields the same head loss. Data on commercial pipes (or other manmade surfaces) will usually quote the equivalent sand grain roughness.
As pipes in service age, their walls become rougher through deterioration of the pipe wall
and/or build-up of algal slimes. The equivalent sand grain roughness of pipes tends to
increase with time and only through cleaning the pipes can the roughness (ks) be reduced.
Retardation of the flow in an open channel is due to two forces which both oppose the flow:
1. the viscous force between the fluid and the boundary, and
2. the drag force due to the protuberances of the channel (or wall) roughness projections.
The effect of the roughness of the boundary surface on the value of the Chezy coefficient is
quantified by the (wall) roughness (projection height) parameter (ks) while the effect of the
viscosity of the water is incorporated in the thickness of the viscous sublayer (δs). The viscous sublayer is the very thin region of flow, dominated by the viscous force, which separates the stationary fluid immediately in contact with the stationary flow boundary from the overlying turbulent boundary layer.
It is evident from the denominator in Equation (6.2.28) that there are two extremes of
turbulent flow:
• the viscous force dominates, i.e. δs >> ks; the roughness projection elements are fully immersed in the viscous sublayer (see Figure 6.2.13) and the value of the Chezy coefficient is independent of the wall roughness projection height ks. This extreme flow condition is known as hydraulically smooth turbulent flow and is unlikely in most practical situations.
• the drag forces on the roughness projection protuberances dominate, i.e. ks >> δs; the roughness projection elements are sufficiently long to protrude through and rupture the viscous sublayer, and the value of the Chezy coefficient (and the uniform flow velocity) is independent of the fluid viscosity. This extreme flow condition is known as hydraulically rough turbulent flow; it is normally the case when the Reynolds number Re > 10⁶ (approximately) and would commonly occur in earthen or natural channels. For hydraulically rough flow, Equation (6.2.29) can be simplified and the value of the Chezy coefficient can then be determined from:

C = 18\log\left(\frac{12 R_h}{k_s}\right)     (6.2.30)
Values of the boundary roughness (ks) vary according to the nature of the surface of the
channel boundary. Three examples are earthen channels, brickwork, and concrete. Amongst
other references, useful tabulations of the values for ks for various channel surfaces may be
found in Table 5-6 of Chow (1959) or Table 4-1 in Henderson (1966). The values in Table 6.2.3 were extracted from Table 3 of Hydraulics Research Ltd. (1990), Charts for the Hydraulic Design of Channels and Pipes.
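To show how Equation (6.2.28) to Equation (6.2.30) might be evaluated in practice, the sketch below computes the Chezy coefficient from the hydraulic radius, slope and roughness height, including the viscous sublayer term; the chosen roughness height and channel properties are illustrative assumptions only.

```python
import math

def chezy_coefficient(r_h, slope, k_s, nu=1.0e-6, g=9.81):
    """Chezy C from Equation (6.2.28)/(6.2.29), with the viscous sublayer
    thickness estimated from the shear velocity (delta_s = 11.6 nu / v_star)."""
    v_star = math.sqrt(g * r_h * slope)          # shear velocity (m/s)
    delta_s = 11.6 * nu / v_star                 # viscous sublayer thickness (m)
    c = 18.0 * math.log10(12.0 * r_h / (2.0 * delta_s / 7.0 + k_s))
    # Fully rough limit (Equation (6.2.30)) for comparison
    c_rough = 18.0 * math.log10(12.0 * r_h / k_s)
    return c, c_rough

# Example: natural channel, Rh = 1.5 m, slope = 0.0005, ks = 0.05 m (assumed)
print(chezy_coefficient(1.5, 0.0005, 0.05))
```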
Table 6.2.3. Values of the roughness projection height ks and Manning n for straight, clean
pipes concentrically jointed.
In practice, the Chezy coefficient may be treated in one of two ways:

1. with the Chezy coefficient being held constant for a particular channel, irrespective of the flows down the channel. This case corresponds to a direct relationship between the square of the velocity and the slope (i.e. S ~ V²) and is a characteristic of a fully rough turbulent flow.
Figure 6.2.13. Relative height of the roughness projection elements and the thickness of the
viscous sublayer.
2. with the Chezy coefficient varying according to Equation (6.2.28). In this case, the Chezy coefficient depends on the flow down the channel. Consequently, the flows capable of being simulated can range from hydraulically smooth turbulent flow to hydraulically rough turbulent flow. In effect, Equation (6.2.28) is closely related to the Colebrook-White equation of pipe flow.
\frac{1}{\sqrt{f}} = 2\log\left(\frac{R_h}{k_s}\right) + 2.34     (6.2.31)

\frac{1}{\sqrt{f}} = 2.9\left(\frac{R_h}{k_s}\right)^{1/6}     (6.2.32)

The Manning n can be related to the Darcy-Weisbach friction factor f by n = \sqrt{\frac{f}{8g}}\,R_h^{1/6}. Substitution of Equation (6.2.32) into this expression produces:

n = 0.039\,k_s^{1/6}     (6.2.33)
where ks is expressed in metres. Equation (6.2.33) relates the Manning n to the typical roughness height (ks) used in Equation (6.2.31). The roughness height ks is often taken to be the D75 value (the sieve size that 75% of the bed material passes) of the gravel bed
material. The Manning equation has also been applied to open channels which are not
hydraulically rough with some success. Ackers (1991) discusses reasons for this by
comparing the Manning equation to the Blasius equation (Streeter and Wylie, 1981) for the
friction factor in smooth wall turbulent flow.
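As a simple illustration of Equation (6.2.33), the lines below convert an assumed roughness height (taken here as the D75 of a gravel bed) to a Manning n; the grain size is an illustrative assumption only.

```python
d75 = 0.060          # assumed D75 of the gravel bed material (m)
k_s = d75            # roughness height taken as D75 (m), per the text above
n = 0.039 * k_s ** (1.0 / 6.0)   # Equation (6.2.33), ks in metres
print(f"Manning n ≈ {n:.3f}")    # ≈ 0.024 for 60 mm gravel
```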
Figure 6.2.14. Comparison between Equation (6.2.31) and the power law approximation
presented in Equation (6.2.32)
Figure 6.2.15. Typical compound channel with floodplains of greater roughness than the
main channel
Typically the floodplains may be rougher than the main channel presenting the additional
problem that the channel may have composite roughness as well as a complex or compound
geometry. These features tend to set compound channels apart from simple prismatic
channels for which the uniform flow depth can be predicted with some degree of accuracy.
Historically, compound channels were treated in the same manner as simple prismatic
channels in which the overall hydraulic characteristics were used to calculate the discharge.
This approach was formalised by Horton (1933) who gave the following relationship to
calculate an equivalent Manning n (ne) in simple channels where the roughness varied along
the wetted perimeter. The same approach can be used for compound channels by
employing Equation (6.2.34).
n_e = \left[\frac{\sum_{j=1}^{N} P_j\, n_j^{1.5}}{P}\right]^{2/3}     (6.2.34)
where P is the total wetted perimeter, Pj is the length of wetted perimeter associated with nj and N is the number of different roughnesses. Both Horton (1933) and Einstein (1934) assumed that the water area is divided into N imaginary parts (see Figure 6.2.16), one for each different roughness. They then assumed that each part has the same velocity, which is also equal to the average velocity of the whole section (i.e. u1 = u2 = u3 = u).
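A minimal sketch of Equation (6.2.34) is given below; the wetted perimeter segments and roughness values are assumed purely for illustration.

```python
def horton_equivalent_n(perimeters, roughnesses):
    """Equivalent Manning n for a section with roughness varying along the
    wetted perimeter (Equation (6.2.34), Horton (1933))."""
    p_total = sum(perimeters)
    return (sum(p * n ** 1.5 for p, n in zip(perimeters, roughnesses)) / p_total) ** (2.0 / 3.0)

# Example: grassed left bank, gravel bed, concrete right bank (assumed values)
print(horton_equivalent_n([4.0, 10.0, 4.0], [0.035, 0.025, 0.014]))
```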
Figure 6.2.16. Imaginary division of a compound channel assumed by Horton (1933) to give
the same average velocity on the floodplains and in the main channel.
While Horton's assumption is obviously invalid at low overbank depths when there is a large
difference between the velocities of the two areas, work by Myers (1987) and Ackers (1991)
shows that this approach produces satisfactory results for high overbank stages where the
compound channel is once again tending to act as a single unit. This approach will be
referred to as the Single Channel Method.
Lotter (1933) assumed that the total discharge is equal to the sum of the discharges in each
sub-area. Lotter's approach, like Horton's, was developed to predict the discharge in a
simple prismatic cross-section with varying roughness around the channel perimeter. In
simple prismatic channels, but not in compound channels, it may be assumed that the
hydraulic radius of each sub-area is equal. This however, does not restrict its application to
compound channel flows. When the method suggested by Lotter (1933) is applied to
compound channels some decisions need to be made regarding the subdivision of the
channel cross-section. Typically for compound channels, a vertical division is used to
separate the floodplain from the main channel as shown by the dashed lines in
Figure 6.2.17.
Figure 6.2.17. Vertical division of a compound channel into floodplain and main channel
subsections
In compound channels, this leads to the assumption that the different sub-areas act
independently of each other. As a result, the flood plain subsections and the main channel
are treated as individual simple prismatic channels, and the discharge is obtained for each
subsection by applying an appropriate resistance law, such as the Manning equation, to
each subsection in turn. An expression for an equivalent Manning roughness coefficient (ne)
can also be obtained by this approach.
n_e = \frac{P\, R_h^{5/3}}{\sum_{j=1}^{N} \frac{P_j\, R_j^{5/3}}{n_j}}     (6.2.35)
where P and R are the overall wetted perimeter and hydraulic radius respectively and Pj, Rj and nj are the wetted perimeter, hydraulic radius and Manning roughness coefficient of the jth sub-area. This approach, along with the use of vertical divisions at the edge of the main channel, has become the most popular method of dealing with compound channels. The method used to divide the channel into the individual subsections is, however, somewhat arbitrary. While it does seem logical to separate the floodplains (which typically have a lower average velocity) from the main channel, doing so assumes that they act independently of each other. As a result of the need to subdivide a compound channel into different sub-areas, the following horizontal and diagonal divisions as illustrated below are equally valid.
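The divided channel calculation described above can be sketched as follows: each sub-area is treated as an independent simple channel, the Manning equation is applied to each, and the discharges are summed. The sub-area geometry and roughness values are assumptions for illustration only, and the internal (vertical) division lines are excluded from the wetted perimeters, as is common practice.

```python
import math

def subsection_discharge(area, perimeter, n, slope):
    """Discharge of one sub-area from the Manning equation (Equation (6.2.26))."""
    r_h = area / perimeter
    return area * r_h ** (2.0 / 3.0) * math.sqrt(slope) / n

# Main channel plus left and right floodplains (assumed properties), slope 0.0008
subsections = [
    # (flow area m2, wetted perimeter m, Manning n)
    (30.0, 14.0, 0.030),   # main channel
    (20.0, 40.0, 0.060),   # left floodplain
    (12.0, 25.0, 0.060),   # right floodplain
]
q_total = sum(subsection_discharge(a, p, n, 0.0008) for a, p, n in subsections)
print(f"Total discharge ≈ {q_total:.1f} m3/s")
```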
Since 1964 evidence has been presented which demonstrates that flows in the different sub-
areas do not act independently. The interaction between the faster moving water in the main
channel and the slower moving water on the floodplain has the effect of reducing the overall
discharge in the compound channel below the value that would be calculated assuming that
they act independently. The first papers describing this phenomenon were Sellin (1964) and
Zheleznyakov (1965).
Sellin (1964) provided photographic evidence of vortices which were believed to be the
source of the interaction between the main channel and the floodplain. Sellin also provided
experimental results which showed that the mean velocity in a channel with floodplains was
approximately 30 % less than the same channel with the flood plains removed. Additionally,
Sellin showed that the discharge in the channel was over-predicted by approximately 10 -
12 %. Even though these experiments were carried out at a relatively small scale they did
serve to illustrate that the prediction of the discharge capacity of a compound channel was
not as straightforward as originally thought.
In the time since 1964, a great deal of experimental research has been carried out on this
phenomenon in straight compound channels. A lot of the work has concentrated on
discharge assessment, boundary shear stress distribution, velocity distribution, momentum
transfer and apparent shear stress, as well as the structure of the turbulent flow.
Several other investigators have conducted field tests on compound channels including
Bhowmik and Demissie (1982), Sellin and Giles (1988), Myers and Lyness (1989) and
Martin and Myers (1991). Bhowmik and Demissie (1982) showed that above the bankfull
level the floodplain velocity increased with stage, but the main channel velocity first reduces
with increasing stage and later increases. Work carried out by Sellin and Giles (1988) on the
River Roding in the United Kingdom and (Myers and Lyness, 1989) along with (Martin and
Myers, 1991) on the River Main in Northern Ireland found similar reductions in discharge
capacity above the bankfull level.
Many of the above authors have attempted to quantify, by various means, the effect of the
interaction between the main channel and the floodplain on the overall discharge,
component discharges and boundary shear stress distribution. The methods that have been
used to date can be broadly classified as follows:
1. Using the single channel method with modified resistance coefficients or interaction
factors.
2. Adjusting the subdivision boundaries between the main channel and the floodplain,
sometimes coupled with the inclusion of the internal subdivision boundaries in the wetted
perimeter.
3. Applying correction factors to the discharge which are determined from experimental
research.
4. Using experimental research to assess the apparent shear force on the assumed
subdivision boundary. The discharge is then estimated by incorporating the apparent shear
stress in the external force balance required for uniform flow in the main channel and
floodplain sub-areas.
5. Using turbulence models to predict the lateral spread of the interaction zone in the
compound channel resulting in the determination of the lateral velocity profile.
In the absence of other information, vertical divisions at the edge of the main channel are still the favoured technique because they are easy to apply and calculate and divide the zones in a practical way. They are used in many water surface profile calculation packages. Lambert and Sellin (1996a) illustrate the use of the fifth approach listed above and describe a method for determining the interactions between the different regions.
Similar experiments by Toebes and Sooky (1966) concluded that energy losses in the model
depended on both the Reynolds number and Froude number and that energy losses per unit
length for the meandering channel were up to 2.5 times as large as those for a uniform
channel of the same hydraulic radius and discharge. The work was extended by James and
Brown (1977) who also found that with increasing sinuosity, the resistance to flow increases
and the velocity profiles become more distorted.
Smith (1978) carried out an experimental investigation into the effect of channel meanders
on flood stage. In this investigation, he compared the stage-discharge relationship of a
meandering compound channel to the stage-discharge relationship of the floodplain alone
where the main channel was filled in and sealed with cement mortar. In doing this Smith
found that at high stages the floodplain without the main channel had a larger discharge
capacity than the combined meandering main channel and floodplain system. This
demonstrated, for this channel geometry, that at high stages (Dr > 0.41) the addition of a
main channel did not contribute to the discharge capacity, on the contrary, it decreased the
discharge capacity. At lower relative depths this was not the case but the meandering
compound channel still showed evidence of the interaction between the main channel and
the floodplain.
1. The main channel was not exclusively the location of the highest velocities in the section.
2. The maximum velocity filament (also observed by Toebes and Sooky (1966)) tended to
roughly follow the inner side walls of the main channel.
3. For the floodplain flow the velocity varied continuously with distance above the floodplain
whereas the main channel velocity remained almost constant with distance above the
floodplain level.
More recently, interest in meandering compound channels has provided the impetus for
more detailed experimental studies. One of these studies followed the Series A experiments
(Knight and Sellin, 1987) on straight compound channels at the SERC Flood Channel Facility.
Subsequently, a more detailed record of this study was published by James and Wark (1992)
which considered both in-bank and overbank flows. For in-bank flow conditions, a
modification of an existing method (U.S. Department of Agriculture, 1963) was found to give
satisfactory results. For overbank flow, a new approach was adopted which quantified the
loss mechanisms which occur in meandering compound channels. The new method splits
the flow into four flow zones:
• The floodplains either side of the inner channel and outside the meander belt (3-4).
and then adopts an empirical approach but using, where possible, parameter groups to
represent the known flow mechanisms in each zone. The discharge is then calculated as the
sum of the zonal discharges. It should be noted however that this approach is similar to that
suggested by Ervine and Ellis (1987).
The increased use of 2D flood models now provides much more flexibility to capture the
complex nature of these flows and how they vary across the cross-section than previously
existed with 1D approaches. However, it should be remembered that flow in meandering compound channels can be highly three-dimensional in nature, particularly at the cross-over where the floodplain switches sides of the main channel and water must flow across the main channel to get to the downstream floodplain, as shown by Lambert and Sellin (1996b). The assumptions that form the basis of the depth-averaged 2D approaches break down in this case and these assumptions need to be checked.
The classification of gradually varied flow water surface profiles is based on:

• 5 conditions which compare the normal depth (yo) with the critical depth (yc). This results in the classification of 5 bed slopes.
• 3 conditions which compare the actual depth (y) with the normal depth (yo) and the critical depth (yc). This results in 3 zones for the depth.
Table 6.2.4. Gradually varied flow classification system (modified from table on p35 of Fenton (2007))

Bedslope classification:
  S - steep: yo < yc
  C - critical: yo = yc
  M - mild: yo > yc
  H - horizontal (So = 0): yo is infinite
  A - adverse (So < 0): yo does not exist

Depth classification:
  Zone 1: y > yo and y > yc
  Zone 2: y is between yo and yc
  Zone 3: y < yo and y < yc
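A small sketch of this classification logic is given below; it simply compares the computed normal and critical depths and the actual depth, following Table 6.2.4. The depths used in the example are assumed values.

```python
def classify_profile(y, y_normal, y_critical, bed_slope):
    """Classify a gradually varied flow profile per Table 6.2.4 (e.g. 'M1', 'S2').
    For H and A slopes the normal depth does not exist; a large stand-in value
    can be passed for y_normal (an assumption of this sketch)."""
    if bed_slope == 0.0:
        slope = "H"
    elif bed_slope < 0.0:
        slope = "A"
    elif y_normal > y_critical:
        slope = "M"
    elif y_normal < y_critical:
        slope = "S"
    else:
        slope = "C"
    hi, lo = max(y_normal, y_critical), min(y_normal, y_critical)
    zone = 1 if y > hi else (3 if y < lo else 2)
    return f"{slope}{zone}"

print(classify_profile(y=2.5, y_normal=2.0, y_critical=1.1, bed_slope=0.001))  # 'M1'
```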
The 12 generic profiles are curves of increasing or decreasing curvature in either the downstream or upstream direction. When some simplifying assumptions are made (Fr² << 1 and a linearisation about the normal flow depth using a Taylor series), it has been shown that the departure of the actual depth (y) from the normal flow depth (yo) follows an exponential variation in space (Samuels, 1989; Fenton, 2007); this is of the form y − yo ~ exp(−(dSf/dx)·x), provided Fr² << 1, where Sf can be found from the Manning or Chezy equation.

A consequence of the nature of the water surface profiles is that any errors introduced during a backwater computation tend to be systematic. As the computations proceed in one direction (be it downstream or upstream), the flow curvature continuously increases or decreases.
All the water surface profiles schematised in Figure 6.2.19 can be classified as a backwater
curve or a drawdown curve:
• backwater curve - the flow depths continuously increase in the downstream direction; the
flow is one of deceleration.
• drawdown curve - the flow depths continuously decrease in the downstream direction; the
flow is one of acceleration.
The two main numerical techniques for calculating steady water surface profiles are the direct step method and the standard step method, which are compared in Table 6.2.5.

Table 6.2.5. Comparison of the direct step and standard step methods

  Direct Step Method - governing equation: \frac{\partial E}{\partial x} = S_o - S_f
  Standard Step Method - governing equation: \frac{\partial y}{\partial x} = \frac{S_o - S_f}{1 - Fr^2}
Since the direct step and other direct integration methods are based on a first order
differential equation, a single boundary condition is required to initiate the computations.
Suitable boundary conditions could be near (but not at) a control where the water level is
known or can be approximated, or any location, reasonably well removed from any regions
of rapidly varying flow, where the water level is known.
If the flow is subcritical, the flow is controlled from the downstream end and computations
should advance in the upstream direction. On the other hand, for supercritical flows, the flow
is controlled from the upstream end and computations should proceed in the downstream
direction (McBean and Perkins, 1975; McBean and Perkins, 1970). If these guidelines
regarding the direction of the solution procedure are not observed, it has been stated
(McBean and Perkins, 1975; McBean and Perkins, 1970) that the calculations will eventually
depart from the true solution. This rule-of-thumb is not without contention. There is some
evidence which suggests that if an implicit finite difference method is employed to solve the
governing equations, then the direction of computation is immaterial (Samuels and
Chawdhary, 1992).
The direct step method of calculating gradually varied flow profiles is only applicable to
prismatic channels (which are more commonly manmade than natural). In this method, the
flow depths at those sections where computations are to be carried out along the waterway
are known (or specified) in advance. The (specified) depth increments or decrements
between these sections need not be constant. With the depths (y) along the waterway
known, the direct step method enables the distances between these sections (Δx) to be
calculated directly without the need for iteration or trial and error. The governing equation is
the first order ordinary differential equation:
\frac{\partial E}{\partial x} = S_o - S_f     (6.2.36)

where the specific energy E is

E = y + \frac{V^2}{2g}     (6.2.37)

Writing Equation (6.2.36) in finite difference form between two sections 1 and 2 gives

\frac{E_2 - E_1}{\Delta x} = S_o - \bar{S}_f     (6.2.38)

\Delta x = \frac{E_2 - E_1}{S_o - \bar{S}_f}     (6.2.39)

where

\Delta x = x_2 - x_1     (6.2.40)

(bed slope)     S_o = -\frac{\partial z}{\partial x}     (6.2.41)

(friction slope)     S_f = \frac{(nV)^2}{R_h^{4/3}}     (6.2.42)

(average friction slope)     \bar{S}_f = \frac{S_{f1} + S_{f2}}{2}     (6.2.43)
The reason that the direct step method is only applicable to prismatic channels is that the
unknown in Equation (6.2.39) is Δx. In the case of a subcritical flow, the computations
proceed in the upstream direction; while x2 will be known (or specified), x1 will be unknown.
Unless the section properties at x1 are known, the computation of the specific energy and
friction slope at this location cannot (in principle) proceed. The requirement imposed by the
direct step method is therefore that the channel be prismatic.
Since the direct step method is explicit, no iteration is needed. The solution of Equation
(6.2.39) is straightforward and easily executed in tabular form on a spreadsheet.
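To make the tabular nature of the direct step method concrete, the sketch below steps a subcritical profile upstream from a known downstream depth for a wide rectangular channel, using Equation (6.2.39) with the averaged friction slope of Equation (6.2.43). The channel properties and depths are assumed for illustration only.

```python
import math

def direct_step_wide_rect(q_unit, n, s0, y_start, y_end, steps=10):
    """Direct step method (Equations (6.2.36) to (6.2.43)) for a wide
    rectangular channel carrying a unit discharge q_unit (m2/s).
    Depths are stepped from y_start towards y_end; each distance increment
    is computed directly (no iteration)."""
    g = 9.81
    def specific_energy(y):
        return y + q_unit**2 / (2.0 * g * y**2)
    def friction_slope(y):
        return (n * q_unit / y) ** 2 / y ** (4.0 / 3.0)   # Rh ≈ y for a wide channel

    depths = [y_start + (y_end - y_start) * i / steps for i in range(steps + 1)]
    x, profile = 0.0, [(0.0, y_start)]
    for y1, y2 in zip(depths, depths[1:]):
        sf_bar = 0.5 * (friction_slope(y1) + friction_slope(y2))
        dx = (specific_energy(y2) - specific_energy(y1)) / (s0 - sf_bar)
        x += dx        # dx is negative here: the computation proceeds upstream for subcritical flow
        profile.append((x, y2))
    return profile

# Example: q = 2 m2/s, n = 0.03, slope 0.0005, downstream control depth 3.0 m (assumed)
for x, y in direct_step_wide_rect(2.0, 0.03, 0.0005, y_start=3.0, y_end=2.2):
    print(f"x = {x:8.1f} m, depth = {y:.2f} m")
```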
Figure 6.2.20 contains a definition diagram for calculating the gradually varying flow water
surface profile for a subcritical flow.
In the standard step method, the aim is to satisfy the two expressions below (Equation (6.2.44) and Equation (6.2.45)), and this will only happen when the correct unknown depth y1 has been determined. The depth y2 is known for subcritical flow (the roles are reversed for supercritical flow). Because the unknown depth y1 is needed for the flow area A1 and the friction slope Sf1, a trial and error process or some other iterative technique is needed to arrive at a solution.
H_1 = z_1 + y_1 + \frac{Q^2}{2gA_1^2}     (6.2.44)

H_1 = H_2 + \Delta x\left(\frac{S_{f1} + S_{f2}}{2}\right)     (6.2.45)

where     S_f = \frac{(nQ)^2}{A^2 R_h^{4/3}}     (6.2.46)
The trial depth can be updated with a Newton-Raphson correction of the form

y_1 \leftarrow y_1 - \frac{H_E}{1 - Fr_1^2 + \frac{\Delta x\,S_{f1}}{2}\left(\frac{2B_1}{A_1} + \frac{4}{3R_{h1}}\frac{dR_{h1}}{dy_1}\right)}     (6.2.47)

\approx y_1 - \frac{H_E}{1 - Fr_1^2 + \frac{5\,S_{f1}\,\Delta x}{3\,y_1}}     (for a wide channel with R_{h1} \approx y_1)     (6.2.48)

where HE is the error or difference between the two values of H1 in Equation (6.2.44) and Equation (6.2.45):

H_E = z_1 + y_1 + \frac{Q^2}{2gA_1^2} - \left(H_2 + \Delta x\,\frac{S_{f1} + S_{f2}}{2}\right)     (6.2.49)
In the case of a supercritical flow, the equations above would need to be modified with the
unknown being y2.
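The iterative character of the standard step method can be illustrated with the short sketch below, which solves Equation (6.2.44) and Equation (6.2.45) for the unknown upstream depth of a wide rectangular channel using simple bisection on the head error rather than a Newton-type correction; the geometry, roughness and downstream conditions are assumed values.

```python
import math

def standard_step_upstream_depth(q_unit, n, dx, z1, z2, y2, g=9.81):
    """Solve for the upstream depth y1 in the standard step method
    (Equations (6.2.44), (6.2.45) and (6.2.49)) for a wide rectangular channel."""
    def friction_slope(y):
        return (n * q_unit / y) ** 2 / y ** (4.0 / 3.0)    # Rh ≈ y
    def head_error(y1):
        h1_energy = z1 + y1 + q_unit**2 / (2.0 * g * y1**2)                       # Eq (6.2.44)
        h2 = z2 + y2 + q_unit**2 / (2.0 * g * y2**2)
        h1_friction = h2 + dx * 0.5 * (friction_slope(y1) + friction_slope(y2))   # Eq (6.2.45)
        return h1_energy - h1_friction                                            # Eq (6.2.49)

    lo, hi = 1.0, 10.0          # assumed search bracket on the subcritical branch (m)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if head_error(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: q = 2 m2/s, n = 0.03, 200 m reach, bed falls 0.1 m, downstream depth 3.0 m (assumed)
print(standard_step_upstream_depth(2.0, 0.03, dx=200.0, z1=0.1, z2=0.0, y2=3.0))
```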
The average friction slope between the two sections can be evaluated in several ways:

(average conveyance)     \bar{S}_{f_k} = \left(\frac{Q}{\bar{K}}\right)^2     (6.2.50)
(arithmetic mean)     \bar{S}_{f_a} = \frac{1}{2}\left(S_{f1} + S_{f2}\right)     (6.2.51)

(geometric mean)     \bar{S}_{f_g} = \sqrt{S_{f1}\,S_{f2}}     (6.2.52)

(harmonic mean)     \bar{S}_{f_h} = \frac{2}{\frac{1}{S_{f1}} + \frac{1}{S_{f2}}}     (6.2.53)

where     \bar{K} = \frac{1}{2}\left(K_1 + K_2\right)     (6.2.54)

and K is the conveyance, defined by Q = K S_f^{1/2}.

The four averages listed above vary systematically (Laurenson, 1986) so that:

\bar{S}_{f_a} > \bar{S}_{f_g} > \bar{S}_{f_k} > \bar{S}_{f_h}

In addition to the above averages, there are many other equations which average the reach-end parameters A, P and Rh by arithmetic, geometric or harmonic methods, but the averaged friction slope values have all been found to lie between the two extremes given by \bar{S}_{f_a} and \bar{S}_{f_h} (Chadderton and Miller, 1980).
The effect of using the various estimates for the average friction slope has been explored by various investigators. Laurenson's conclusion (Laurenson, 1986) was that the best single method of averaging appears to be the arithmetic average of the reach-end friction slopes, especially if this method is used in concert with the selection of representative cross-sections of the channel.
It should be noted that the various forms of the equations below are not equivalent and some forms may be preferred over others because:

• the choice of variables in some forms may be more accurate than in others (Cunge et al., 1980), or
The governing equations can be derived by considering unsteady flow in an open channel through the control volume shown in Figure 6.2.21 below.
Figure 6.2.21. Control volume used to derive the gradually varying unsteady flow equations
In dealing with the control volume it has been assumed that the flow is incompressible and one dimensional, and that the streamlines are straight and parallel. It has also been assumed that the slope of the channel is small, so that sin θ ≈ tan θ = S0, and that there is no lateral inflow. Additionally, it has been assumed that the geometry of the channel does not change with time.
Applying conservation of mass to the control volume gives:

\frac{dM}{dt} = \frac{\partial}{\partial t}\int_{cv} \rho\, d\forall + \int_{cs} \rho\, \mathbf{v} \cdot d\mathbf{A}     (6.2.55)

\frac{dM}{dt} = \frac{\partial}{\partial t}\left(\rho A\, dx\right) + \rho\left(A + \frac{\partial A}{\partial x}dx\right)\left(V + \frac{\partial V}{\partial x}dx\right) - \rho A V     (6.2.56)

Expanding Equation (6.2.56), setting dM/dt = 0, ignoring second order terms and dividing by ρ dx produces the Unsteady Continuity Equation:

\frac{\partial A}{\partial t} + A\frac{\partial V}{\partial x} + V\frac{\partial A}{\partial x} = 0     (6.2.57)

where A is the cross-sectional area, V is the average velocity, t is time and x is the longitudinal distance along the channel.
\sum F_x = \frac{\partial}{\partial t}\int_{cv} v_x\, \rho\, d\forall + \int_{cs} v_x\, \rho\, \mathbf{v} \cdot d\mathbf{A}     (6.2.58)

where ΣFx is the sum of the external forces acting in the x-direction on the control volume in Figure 6.2.21 and vx is the velocity in the x-direction. Considering all the external forces acting on the system and evaluating the volume and surface integral terms on the right side of Equation (6.2.58) yields:

\rho g A\, dx\, S_0 - \rho g A \frac{\partial y}{\partial x} dx - \rho g A\, S_f\, dx = \frac{\partial}{\partial t}\left(\rho A V\right) dx + \frac{\partial}{\partial x}\left(\beta \rho A V^2\right) dx     (6.2.59)

where β is the momentum correction coefficient. Dividing by ρgA dx and expanding the derivative terms on the right-hand side, Equation (6.2.59) produces:

S_0 - \frac{\partial y}{\partial x} - S_f = \frac{1}{gA}\left[A\frac{\partial V}{\partial t} + V\frac{\partial A}{\partial t} + \beta\frac{\partial}{\partial x}\left(A V^2\right) + A V^2 \frac{\partial \beta}{\partial x}\right]     (6.2.60)
However, many of the individual terms in Equation (6.2.60) can be replaced by more
convenient forms. For example:
\frac{\partial y}{\partial x} = \frac{1}{T}\frac{\partial A}{\partial x}     (6.2.61)

\frac{\partial y}{\partial t} = \frac{\partial y}{\partial A}\frac{\partial A}{\partial t} = \frac{1}{T}\frac{\partial A}{\partial t}     (6.2.62)

and

\frac{\partial}{\partial x}\left(A V^2\right) = V^2\frac{\partial A}{\partial x} + 2AV\frac{\partial V}{\partial x} = V\left(V\frac{\partial A}{\partial x} + A\frac{\partial V}{\partial x}\right) + AV\frac{\partial V}{\partial x}     (6.2.63)

V\frac{\partial A}{\partial t} = \beta V\frac{\partial A}{\partial t} + (1 - \beta)V\frac{\partial A}{\partial t}     (6.2.64)

Substituting these into Equation (6.2.60) gives:

S_0 - \frac{1}{T}\frac{\partial A}{\partial x} - S_f = \frac{1}{gA}\left[A\frac{\partial V}{\partial t} + (1 - \beta)V\frac{\partial A}{\partial t} + \beta V\left(\frac{\partial A}{\partial t} + V\frac{\partial A}{\partial x} + A\frac{\partial V}{\partial x}\right) + \beta A V\frac{\partial V}{\partial x} + A V^2\frac{\partial \beta}{\partial x}\right]     (6.2.65)
The continuity equation (Equation (6.2.57)) can now be used to eliminate the third term on
the right-hand side giving the Unsteady Momentum Equation.
S_0 - \frac{1}{T}\frac{\partial A}{\partial x} - S_f = \frac{1}{g}\frac{\partial V}{\partial t} + \frac{(1 - \beta)V}{gA}\frac{\partial A}{\partial t} + \frac{\beta V}{g}\frac{\partial V}{\partial x} + \frac{V^2}{g}\frac{\partial \beta}{\partial x}     (6.2.66)

where Sf is the friction slope, So is the longitudinal bed slope, V is the mean cross-sectional velocity, A is the cross-sectional area, T is the channel top width, g is the gravitational acceleration, t is time and x is the distance in the direction of flow. This partial differential equation is the unsteady momentum equation for flow in open channels and includes the momentum correction factor (β) to account for a non-uniform distribution of velocity in the cross-section. Equation (6.2.66) is a more general form of the Saint-Venant equation. It was
Boussinesq in 1877 who first incorporated correction coefficients for the velocity distribution in the momentum equation. While he originally proposed three coefficients, only one (β) is used in modern literature and is given by Chow (1959) as:

\beta = \frac{\int_A u^2\, dA}{V^2 A}     (6.2.67)
where u is the local velocity through the elemental area dA, A is the cross-sectional area,
and V is the mean velocity.
The unsteady momentum equation can be converted into a total differential equation using
the method of characteristics. While the steady form of the momentum equation can be
obtained from Equation (6.2.66) the transformation to a system of total differential equations
will be carried out for later use.
The method of characteristics allows two partial differential equations to be combined using an unknown multiplier (λ) as shown below in Equation (6.2.68). For any two real and distinct values of λ, two equations in V and A are obtained that contain the properties of the original two equations L1 and L2 and may replace them in any solution.

L = L_1 + \lambda L_2 = 0     (6.2.68)

where L1 and L2 are equal to the unsteady momentum equation (Equation (6.2.66)) and the continuity equation (Equation (6.2.57)) respectively, as shown below:

L_1 = \frac{1}{g}\frac{\partial V}{\partial t} + \frac{(1 - \beta)V}{gA}\frac{\partial A}{\partial t} + \frac{\beta V}{g}\frac{\partial V}{\partial x} + \left(\frac{1}{T} + \frac{V^2}{g}\frac{\partial \beta}{\partial A}\right)\frac{\partial A}{\partial x} + S_f - S_0     (6.2.69)

L_2 = \frac{\partial A}{\partial t} + A\frac{\partial V}{\partial x} + V\frac{\partial A}{\partial x}     (6.2.70)
Substituting Equation (6.2.69) and Equation (6.2.70) into Equation (6.2.68) yields:

\frac{1}{g}\frac{\partial V}{\partial t} + \frac{(1-\beta)V}{gA}\frac{\partial A}{\partial t} + \frac{\beta V}{g}\frac{\partial V}{\partial x} + \left(\frac{1}{T} + \frac{V^2}{g}\frac{\partial \beta}{\partial A}\right)\frac{\partial A}{\partial x} + S_f - S_0 + \lambda\left(\frac{\partial A}{\partial t} + V\frac{\partial A}{\partial x} + A\frac{\partial V}{\partial x}\right) = 0     (6.2.71)

If Equation (6.2.71) is rearranged by collecting separately the derivatives of velocity and area, then:

\frac{1}{g}\left[\frac{\partial V}{\partial t} + \left(\beta V + \lambda g A\right)\frac{\partial V}{\partial x}\right] + \left[\lambda + \frac{(1-\beta)V}{gA}\right]\left[\frac{\partial A}{\partial t} + \frac{\frac{1}{T} + \frac{V^2}{g}\frac{\partial \beta}{\partial A} + \lambda V}{\lambda + \frac{(1-\beta)V}{gA}}\frac{\partial A}{\partial x}\right] + S_f - S_0 = 0     (6.2.72)

The bracketed terms become total derivatives along a characteristic direction dx/dt, since

\frac{dV}{dt} = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}\frac{dx}{dt} \qquad \frac{dA}{dt} = \frac{\partial A}{\partial t} + \frac{\partial A}{\partial x}\frac{dx}{dt}     (6.2.73)

\frac{1}{g}\left[\frac{\partial V}{\partial t} + \left(\beta V + \lambda g A\right)\frac{\partial V}{\partial x}\right] + \left[\lambda + \frac{(1-\beta)V}{gA}\right]\left[\frac{\partial A}{\partial t} + \frac{\frac{gA}{T}\left(1 + \frac{T V^2}{g}\frac{\partial \beta}{\partial A}\right) + \lambda g A V}{\lambda g A + (1-\beta)V}\frac{\partial A}{\partial x}\right] + S_f - S_0 = 0     (6.2.74)

This leads to the following total differential equations by equating Equation (6.2.73) with Equation (6.2.74):

\frac{1}{g}\frac{dV}{dt} + \left[\lambda + \frac{(1-\beta)V}{gA}\right]\frac{dA}{dt} + S_f - S_0 = 0     (6.2.75)

\frac{dx}{dt} = \beta V + \lambda g A = \frac{\frac{gA}{T}\left(1 + \frac{T V^2}{g}\frac{\partial \beta}{\partial A}\right) + \lambda g A V}{\lambda g A + (1-\beta)V}     (6.2.76)

Now solving for λ in Equation (6.2.76), so that the terms associated with both the velocity and area derivatives take equal values, produces:

\left(\lambda g A + (1-\beta)V\right)\left(\beta V + \lambda g A\right) = \frac{gA}{T} + V^2 A\frac{\partial \beta}{\partial A} + \lambda g A V     (6.2.77)

\lambda g A \beta V + \lambda^2 g^2 A^2 + \beta(1-\beta)V^2 + \lambda g A(1-\beta)V = \lambda g A V + \frac{gA}{T} + V^2 A\frac{\partial \beta}{\partial A}     (6.2.78)

Collecting and cancelling terms in Equation (6.2.78) results in the following solution for λ given in Equation (6.2.79):

\left(\lambda g A\right)^2 = \frac{gA}{T} - \beta(1-\beta)V^2 + V^2 A\frac{\partial \beta}{\partial A}     (6.2.79)

\lambda g A = \pm\sqrt{\frac{gA}{T} - \beta(1-\beta)V^2 + V^2 A\frac{\partial \beta}{\partial A}}     (6.2.80)

\lambda g A = \pm\sqrt{\frac{gA}{T} + \beta(\beta - 1)V^2 + V^2 A\frac{\partial \beta}{\partial A}}     (6.2.81)

\lambda g A = \pm\sqrt{\frac{gA}{T} + \beta^2 V^2 - \beta V^2 + V^2 A\frac{\partial \beta}{\partial A}}     (6.2.82)
The method of characteristics, when applied to Equation (6.2.57) and Equation (6.2.66), transforms these two partial differential equations into the following system of total differential equations:

\frac{1}{g}\frac{dV}{dt} + \frac{1}{gA}\left[(1-\beta)V \pm \sqrt{\frac{gA}{T} + \beta^2 V^2 - \beta V^2 + V^2 A\frac{\partial \beta}{\partial A}}\right]\frac{dA}{dt} + S_f - S_0 = 0     (6.2.83)

and

\frac{dx}{dt} = \beta V \pm \sqrt{\frac{gA}{T} + \beta^2 V^2 - \beta V^2 + V^2 A\frac{\partial \beta}{\partial A}} = \beta V \pm c_m     (6.2.84)

or

\frac{1}{g}\frac{dV}{dt} + \frac{(1-\beta)V \pm c_m}{gA}\frac{dA}{dt} + S_f - S_0 = 0     (6.2.85)

\frac{dx}{dt} = \beta V \pm c_m     (6.2.86)
where cm is the celerity of a small disturbance. When β is set equal to unity (a common assumption) then cm = \sqrt{gA/T} for a non-rectangular section, or cm = \sqrt{gy} for a rectangular section. The characteristic direction represents the direction along which a small disturbance travels and is equal to the absolute wave velocity of the disturbance. The positive sign is used for the downstream direction, and the negative sign is used for the upstream direction. Referring to Equation (6.2.84), dx/dt is positive for both alternatives of Equation (6.2.84) when the first term (βV) on the right hand side of Equation (6.2.84) is greater than the square root term. This represents supercritical flow and the disturbances can only travel in the downstream direction. Similarly, dx/dt is negative for the upstream direction and positive for the downstream direction when the first term (βV) on the right hand side of Equation (6.2.84) is smaller than the square root term. This case represents subcritical flow and the disturbances travel in both upstream and downstream directions. For critical flow, both terms of Equation (6.2.84) are equal. This physically based criterion may be used to determine the occurrence of critical flow, since it shows whether the flow control is located at an upstream or downstream location. The Froude number, which is so important in determining the flow type (subcritical, supercritical or critical), is the ratio of the first term on the right hand side of Equation (6.2.84) to the second term on the right hand side. In a rectangular open channel flow (where β is usually set to unity) the Froude number becomes:

Fr = \frac{V}{\sqrt{gy}}     (6.2.87)
The relationship between the space step (Δx) and the time step (Δt) that results is expressed as the Courant number, and it seeks to ensure that the various disturbances or changes at one time level are captured appropriately at the next advanced time level. While some numerical schemes do not require this for stability, they will require it for accuracy. This is made even more important in 2D and 3D flow computations where the disturbances now need to be captured moving in the plane of the channel cross-section and not just along the channel.
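As a simple illustration of how the wave celerity and Courant number guide the choice of time step, the sketch below computes cm, the Froude number and the Courant number Cr = (|V| + cm)·Δt/Δx for an assumed grid spacing and time step; all input values are illustrative assumptions.

```python
import math

def courant_number(velocity, depth, dx, dt, g=9.81):
    """Courant number for a rectangular section, using the small-disturbance
    celerity cm = sqrt(g * y) (beta taken as 1)."""
    celerity = math.sqrt(g * depth)
    froude = velocity / celerity
    cr = (abs(velocity) + celerity) * dt / dx
    return celerity, froude, cr

# Example: V = 1.5 m/s, depth 2.0 m, 10 m grid cells, 2 s time step (assumed)
cm, fr, cr = courant_number(1.5, 2.0, dx=10.0, dt=2.0)
print(f"celerity = {cm:.2f} m/s, Froude = {fr:.2f}, Courant = {cr:.2f}")
```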
Writing the unsteady momentum equation (Equation (6.2.66)) with all terms on one side gives:

\frac{1}{g}\frac{\partial V}{\partial t} + \frac{(1-\beta)V}{gA}\frac{\partial A}{\partial t} + \frac{\beta V}{g}\frac{\partial V}{\partial x} + \left(\frac{1}{T} + \frac{V^2}{g}\frac{d\beta}{dA}\right)\frac{\partial A}{\partial x} + S_f - S_0 = 0     (6.2.88)

For steady flow the time derivative terms vanish, leaving:

\frac{\beta V}{g}\frac{dV}{dx} + \left(\frac{1}{T} + \frac{V^2}{g}\frac{d\beta}{dA}\right)\frac{dA}{dx} + S_f - S_0 = 0     (6.2.89)

Substituting for dV/dx using the equation of continuity for steady flow:

\frac{dV}{dx} = -\frac{V}{A}\frac{dA}{dx}     (6.2.90)

and by letting:

\frac{dA}{dx} = \frac{dA}{dy}\frac{dy}{dx} = T\frac{dy}{dx}     (6.2.91)

an equation determining dy/dx for steady gradually varied flow using the momentum approach (Equation (6.2.93)) is obtained by the substitution of Equation (6.2.90) and Equation (6.2.91) into Equation (6.2.89) and rearranging to give:

-\frac{\beta V^2 T}{gA}\frac{dy}{dx} + \left(1 + \frac{V^2 T}{g}\frac{d\beta}{dA}\right)\frac{dy}{dx} + S_f - S_0 = 0     (6.2.92)

\frac{dy}{dx} = \frac{S_0 - S_f}{1 - \frac{V^2 T}{gA}\left(\beta - A\frac{d\beta}{dA}\right)}     (6.2.93)

When β is treated as constant and equal to unity this reduces to:

\frac{dy}{dx} = \frac{S_0 - S_f}{1 - \beta Fr^2} = \frac{S_0 - S_f}{1 - Fr^2}     (6.2.94)

Note Equation (6.2.93) includes the derivative of the momentum flux correction coefficient.

Similarly, for the unsteady energy equation (Equation (6.2.95)), eliminating the time derivative terms and substituting Equation (6.2.90) and Equation (6.2.91) provides an equation determining dy/dx for steady gradually varied flow (Equation (6.2.98)) after some rearrangement, as shown in Equation (6.2.96) and Equation (6.2.97):

\frac{\alpha V}{g}\frac{dV}{dx} + \frac{V^2}{2g}\frac{d\alpha}{dA}\frac{dA}{dx} + \frac{dy}{dx} + S_f - S_0 = 0     (6.2.96)

\frac{dy}{dx}\left[1 - \frac{V^2 T}{gA}\left(\alpha - \frac{A}{2}\frac{d\alpha}{dA}\right)\right] = S_0 - S_f     (6.2.97)

\frac{dy}{dx} = \frac{S_0 - S_f}{1 - \frac{V^2 T}{gA}\left(\alpha - \frac{A}{2}\frac{d\alpha}{dA}\right)}     (6.2.98)

\frac{dy}{dx} = \frac{S_0 - S_f}{1 - \alpha Fr^2} = \frac{S_0 - S_f}{1 - Fr^2}     (6.2.99)
It should be noted that Equation (6.2.98) includes an additional term for the derivative of the
kinetic energy correction coefficient, which is commonly neglected.
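The simplified relationship in Equation (6.2.94) and Equation (6.2.99) can be integrated numerically in a few lines; the sketch below marches dy/dx = (S0 − Sf)/(1 − Fr²) upstream from a downstream control for a wide rectangular channel using a simple Euler step. The flow, roughness and starting depth are assumed values, and no checks are made for approaching critical depth.

```python
import math

def gvf_profile_wide_rect(q_unit, n, s0, y_downstream, dx=50.0, nsteps=40, g=9.81):
    """Euler integration of dy/dx = (S0 - Sf) / (1 - Fr^2) (Equation (6.2.99)),
    marching upstream (negative x direction) from a downstream control."""
    y, profile = y_downstream, [(0.0, y_downstream)]
    for i in range(1, nsteps + 1):
        v = q_unit / y
        sf = (n * v) ** 2 / y ** (4.0 / 3.0)     # Rh ≈ y for a wide channel
        fr2 = v**2 / (g * y)
        dydx = (s0 - sf) / (1.0 - fr2)
        y -= dydx * dx                            # stepping upstream: x decreases
        profile.append((-i * dx, y))
    return profile

# Example: q = 2 m2/s, n = 0.03, S0 = 0.0005, downstream depth 3.0 m (assumed)
for x, y in gvf_profile_wide_rect(2.0, 0.03, 0.0005, 3.0)[::10]:
    print(f"x = {x:7.0f} m, depth = {y:.2f} m")
```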
Additionally, the time varying water depth at any location, d(x,y), can be expressed as:

d = \varsigma - z     (6.2.100)

where ς is the water surface elevation and z is the bed elevation.

\frac{\partial \varsigma}{\partial t} + \frac{\partial (d\,u)}{\partial x} + \frac{\partial (d\,v)}{\partial y} = 0     (6.2.101)

where:

∂ς/∂t is the rate of increase (or decrease) in water level, which for a fixed cell size is representative of the rate of change of volume of water contained in the cell, and

∂(d·u)/∂x + ∂(d·v)/∂y is the spatial variation in inflow (or outflow) across the cell in the x and y directions.

Simply put, any increase (or decrease) in volume must be balanced by a net inflow (or outflow) of water.

\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = 0     (6.2.102)

\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + g\frac{\partial \varsigma}{\partial y} = 0     (6.2.103)

where u and v are the depth-averaged velocities in the x and y directions and g is the gravitational acceleration.
The equations presented above are in the primitive, Eulerian form. The same equations can
exist in other forms; e.g. the conservation law form (Abbott, 1979) and the conservative-
integral form (LeVeque, 2002).
Due to the symmetry between the two x and y momentum equations, further discussion will
be focused on the x-momentum equation only.
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = 0     (6.2.104)

where:

∂u/∂t + u ∂u/∂x + v ∂u/∂y is the partial differential form of the flow acceleration du/dt, and

g ∂ς/∂x is the hydrostatic pressure gradient.
2.9.3. Assumptions
In the derivation of these equations, it has been assumed that:
• The pressure is hydrostatic (i.e. vertical accelerations can be neglected and the local
pressure is dependent only on the local depth),
• The flow can be described by continuous (differentiable) functions of ς, u and v (that is, it does not include step changes in ς, u and v),
• The flow is two-dimensional (that is, the effects of vertical variations in the flow velocity can be neglected),
• The flow is nearly horizontal (that is, the average channel bed slope is small), and
• The effects of bed friction can be included through resistance laws (e.g., Manning
equation) that have been derived for steady flow conditions.
Problems associated with accurate modelling of transport equations have been highlighted
by Leonard (1979a). Simple first order schemes are inaccurate and diffusive, while second
order schemes (that are good for solving wave propagation) tend to be oscillatory and
unstable. This has led to the use of more innovative approaches to modelling the
convective momentum terms (Abbott and Rasmussen, 1977), and the use of higher third
order solution schemes (Leonard, 1979b; Stelling, 1984).
For practical modelling applications, these equations need to be expanded to include the
additional effects of other phenomena of interest. The most important of these is probably
the inclusion of the dissipative effects of bed-friction in the momentum equation. The
inclusion of additional terms to form extended modelling equations is considered below.
\frac{\partial \varsigma}{\partial t} + \frac{\partial (d\,u)}{\partial x} + \frac{\partial (d\,v)}{\partial y} = \text{Sources} - \text{Sinks}     (6.2.105)

where the Source terms can represent localised inflows such as may occur at stormwater or pump outlets, or distributed inflows associated with rainfall, and the Sink terms can represent localised outflows at drainage pits or pump intakes, or distributed losses due to infiltration or, in long-term simulations, evaporation.
Bed Friction
For flood modelling applications, the momentum equation must be coupled with a suitable
friction formulation. This is typically achieved by adding a Chezy-type friction term to the
momentum equation, which then becomes:
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = -\frac{g\,u\sqrt{u^2 + v^2}}{C^2 d}     (6.2.106)

where C is the Chezy friction coefficient and d is the water depth.

For practical modelling applications, the Chezy coefficient can be related to the more usual (for Australian applications) Manning 'n' roughness coefficient by the Strickler relation, where:

n = \frac{d^{1/6}}{C}     (6.2.107)
In some European models, the friction coefficient is sometimes specified in terms of the Manning 'M', where:

M = \frac{1}{n}     (6.2.108)
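The three roughness conventions mentioned above are easy to convert between; the sketch below does so for an assumed Manning n and depth, using Equation (6.2.107) and Equation (6.2.108).

```python
def roughness_conversions(n, depth):
    """Convert a Manning n to the equivalent Chezy C (via the Strickler
    relation, Equation (6.2.107)) and Manning M (Equation (6.2.108))."""
    chezy_c = depth ** (1.0 / 6.0) / n     # C = d^(1/6) / n
    manning_m = 1.0 / n                    # M = 1 / n
    return chezy_c, manning_m

# Example: n = 0.030 and a flow depth of 2.0 m (assumed values)
c, m = roughness_conversions(0.030, 2.0)
print(f"Chezy C ≈ {c:.1f} m^0.5/s, Manning M ≈ {m:.1f}")
```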
Eddy Viscosity
Most commercially available 2D models also include an “eddy viscosity” type term to allow
for the effects of sub-grid scale mixing processes. This can be important when modelling
flow separations and eddies, or in situations where it is necessary to model channel/
overbank interactions.
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = -\frac{g\,u\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)     (6.2.109)

where E is the eddy viscosity coefficient.

If, for illustration purposes only, the hydrostatic pressure and friction terms are neglected, the x-momentum equation can be rearranged to the form:

\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} - E\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) = 0     (6.2.110)

Eddy viscosity and its application to 2D flood models is discussed in more detail in Book 6, Chapter 4. It is noted, however, that for eddy viscosity calculations to be meaningful, the u ∂u/∂x and v ∂u/∂y convective momentum terms must be modelled with sufficient accuracy.
Other Terms
The early 2D flood models were originally derived from 2D coastal and estuarine models.
These models typically included additional terms to represent wind shear and Coriolis
effects. When these terms are included, along with additional source/sink terms to allow for
the addition or loss of momentum associated with any sources or sinks of mass, discussed
above, the x momentum equation becomes:
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = -\frac{g\,u\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + \text{wind shear term} - \text{Coriolis term} + \text{Source/Sink}     (6.2.111)
The wind and Coriolis terms are only likely to become important in wide open floodplains or
in lake or estuarine systems, and are not considered further in the present discussion.
Mass
\frac{\partial \varsigma}{\partial t} + \frac{\partial (d\,u)}{\partial x} + \frac{\partial (d\,v)}{\partial y} = \text{Sources} - \text{Sinks}     (6.2.112)

x-Momentum

\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = -\frac{g\,u\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + \text{Source/Sink}     (6.2.113)

y-Momentum

\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + g\frac{\partial \varsigma}{\partial y} = -\frac{g\,v\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) + \text{Source/Sink}     (6.2.114)
This coupled system of equations provides the three equations necessary to solve for the
three dependent variables; ς the free surface elevation, u the velocity in the x direction and v
the velocity in the y direction.
With the staggered grids used by most finite difference models, there can be issues with
achieving time and space centring of the non-linear spatial derivative terms. In this respect, it
is noted that Stelling et al. (1998) presented a numerical scheme that conserves mass and
maintains non-negative water levels. Nevertheless, modellers should be aware that any
errors in the mass equation, however small, can accumulate with time as the computation
progresses. If the mass equation is not modelled correctly, the error accumulation can
continue to the extent that the final solution may be compromised.
Ignoring wind, Coriolis and source/sink terms, the x-momentum equation developed above
can be expressed as:
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = -\frac{g\,u\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)     (6.2.115)
Using this as a base, some of the more commonly used approximations to the momentum
equation are discussed below.
With this approximation, the convective momentum (momentum transport) terms are
neglected. When these terms are neglected, the eddy viscosity (momentum dispersion)
terms have little physical meaning and can also be neglected. With this approach, the x-
momentum equation reduces to:
\frac{\partial u}{\partial t} + g\frac{\partial \varsigma}{\partial x} = -\frac{g\,u\sqrt{u^2 + v^2}}{C^2 d}     (6.2.116)
This approach should only be used in areas where the velocities are small, and the wave
propagation properties of the flow are dominant. This rarely happens in most practical flood
flow simulations. However, it is noted that the linearised momentum equation is sometimes
used for numerical expediency in order to maintain stability in high velocity flow areas,
including regions of supercritical flow. Although this approximation maintains the wave
propagation properties of the full momentum equation, it cannot model momentum
dominated effects, including flow separations and eddies, and main channel/overbank
momentum transfers.
With this approximation, the local acceleration term ∂u/∂t is neglected and the x-momentum equation reduces to:

u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \varsigma}{\partial x} = -\frac{g\,u\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)     (6.2.117)
This approximation neglects the wave propagation properties of the momentum equation. It
can be used in reaches with moderate to steep slopes, where the flow is dominated by
friction. However, it should not be used for rapidly varying flows, such as in dam-breaks, or in
reaches with flat slopes and/or deep water where the local acceleration term (and wave
propagation properties of the equation) becomes more important.
With this approximation, the convective momentum and eddy viscosity terms are also
neglected and the x-momentum equation reduces to:
\frac{\partial \varsigma}{\partial x} = -\frac{u\sqrt{u^2 + v^2}}{C^2 d}     (6.2.118)
That is, the water surface slope is balanced by the friction slope.
As for the steady state momentum equation, this approximation can be used to describe
gradually varied flows in reaches with moderate to steep slopes. It includes backwater
effects, but has the added limitation that it cannot be used to simulate flow separations and
eddies, or main channel/overbank momentum transfers.
With this approximation, the surface slope of the water is assumed to be the same as the bed slope, and the x-momentum equation further reduces to:

\frac{\partial z}{\partial x} = -\frac{u\sqrt{u^2 + v^2}}{C^2 d}     (6.2.119)
This approximation is effectively the same as solving for the flow properties using a steady
state friction law (such as the Manning equation). Backwater effects are not included, and
water can only flow downstream. As such, the kinematic wave approximation can only be
used to describe gradually varied flows in reaches with moderate to steep slopes where
backwater effects can be neglected.
2. Newton's law of motion, which must hold for every particle at every instant.
3. Boundary conditions, for example a real fluid has zero velocity relative to an adjacent
boundary.
Other relations such as Newton's law of viscosity or the Boussinesq eddy viscosity concept
are also necessary so that solutions can be obtained for the equations developed from these
relations.
An approach to 3D modelling of flows is achieved by integrating the point form (or more
precisely, infinitesimal unit volume) equations of continuity, momentum and energy over the
cross-section. This leads to the continuity equation and Navier-Stokes equations expressed
in tensor notation below (Rodi, 1980).
\frac{\partial u_i}{\partial x_i} = 0     (6.2.120)

\frac{\partial u_i}{\partial t} + u_j\frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial P}{\partial x_i} + \nu\frac{\partial^2 u_i}{\partial x_j \partial x_j}     (6.2.121)
where ui is the instantaneous velocity component in the direction xi, P is the instantaneous static pressure and ν is the molecular kinematic viscosity. The Navier-Stokes equations are exact equations describing the turbulent motion, and numerical procedures are available to solve these equations. However, the storage capacity and speed of present-day computers are still not sufficient to allow a solution for practically relevant turbulent flows. The reason for this is that turbulent motion contains elements which are much smaller than the extent of the flow domain (typically of the order of 10⁻³ times smaller). To resolve the motion of these elements in a numerical procedure, at least 10⁹ grid points would be necessary to cover the flow domain in three dimensions.
A statistical approach to turbulence suggested by Reynolds (1894) may be used to solve the
Navier-Stokes equations for mean values of velocity and pressure when the turbulence
correlations u'v' which result can be determined in some way. The determination of these
correlations is the main problem in calculating turbulent flows and a turbulence model must
be introduced which approximates the correlations and simulates the average character of
real turbulence. The instantaneous values of velocity ui and pressure P are separated into
mean and fluctuating components, as shown in Equation (6.2.123).
mean and fluctuating components, as shown in Equation (6.2.122):

u_i = \bar{u}_i + u_i' \quad \text{and} \quad P = \bar{P} + P'     (6.2.122)

\bar{u}_i = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} u_i\,dt \quad \text{and} \quad \bar{P} = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} P\,dt     (6.2.123)
and the averaging time t2-t1 is long compared with the time scale of the turbulent
fluctuations. This results in the following equations:
Continuity equation:
\frac{\partial \bar{u}_i}{\partial x_i} = 0     (6.2.124)
Written in terms of Cartesian coordinates for the x-direction, the momentum equation (Equation (6.2.121)) becomes:

\frac{\partial \bar{u}}{\partial t} + \bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} + \bar{w}\frac{\partial \bar{u}}{\partial z} = -\frac{1}{\rho}\frac{\partial \bar{P}}{\partial x} + \nu\nabla^2\bar{u} - \frac{\partial \overline{u'u'}}{\partial x} - \frac{\partial \overline{u'v'}}{\partial y} - \frac{\partial \overline{u'w'}}{\partial z}     (6.2.125)

where \bar{u}, \bar{v} and \bar{w} are the local time-averaged velocity values in the x, y and z directions and u', v' and w' are their fluctuating components. The terms on the left-hand side of the
equation represent the momentum flux through an element dx, dy, dz. The three terms on
the right-hand side are the external forces acting on the element.
2. The resulting turbulent shear forces on all surfaces (resulting from Reynolds stress
distributions).
Physically, the correlations, when multiplied by the density, represent the transport of momentum due to the fluctuating motion, as shown below:

\tau_t = -\rho\,\overline{u'v'}     (6.2.126)

The equation above represents the transport of x-direction momentum in the y-direction and may be considered as a shear stress on the fluid, called the turbulent or Reynolds stress.
Strelkoff (1969) integrated the equation of continuity and the Navier-Stokes equation to
obtain the one-dimensional open channel flow equations for an incompressible
homogeneous fluid. Further work was presented by Yen (1973) providing a more detailed
and unified view of the general open channel flow equations.
Often 3D models are applied to steady flows. One approach was described by Olsen (2003)
and Olsen (2004). He solves the three-dimensional Reynolds Averaged Navier-Stokes
equations (RANS) for each cell. The equations can be written in Cartesian form as:
\frac{\partial V_i}{\partial x_i} = 0     (6.2.127)

for continuity and

\frac{\partial V_i}{\partial t} + V_j\frac{\partial V_i}{\partial x_j} = \frac{1}{\rho}\frac{\partial}{\partial x_j}\left(-p\,\delta_{ij} - \rho\,\overline{u_i u_j}\right)     (6.2.128)

for momentum, where i and j represent standard tensor notation indicating the x, y and z coordinate directions, Vi is the mean velocity component in the xi direction, p is the pressure, ρ is the fluid density, δij is the Kronecker delta and −ρ\overline{u_i u_j} is the turbulent Reynolds stress, where ui and uj are fluctuating velocities and \overline{u_i u_j} is the Reynolds averaged value of ui uj. The first term is the transient term, which is neglected, and the second term is the convective term. The third term is the pressure term and the final term is the Reynolds stress term, which requires a turbulence model to be evaluated. The standard k-ε model was used for turbulence closure.
The model calculates the eddy-viscosity as:
\nu_t = c_\mu \frac{k^2}{\varepsilon}     (6.2.129)

where cμ is a constant, k = \frac{1}{2}\overline{u_i u_i} is the turbulent kinetic energy and ε is the dissipation rate of turbulent kinetic energy. The turbulent kinetic energy k is modelled as:
\frac{\partial k}{\partial t} + V_j\frac{\partial k}{\partial x_j} = \frac{\partial}{\partial x_j}\left(\frac{\nu_t}{\sigma_k}\frac{\partial k}{\partial x_j}\right) + P_k - \varepsilon     (6.2.130)

where σk is a constant and P_k = \nu_t\left(\frac{\partial V_i}{\partial x_j} + \frac{\partial V_j}{\partial x_i}\right)\frac{\partial V_i}{\partial x_j} is a term for the production of turbulent kinetic energy. The dissipation rate ε is modelled as:

\frac{\partial \varepsilon}{\partial t} + V_j\frac{\partial \varepsilon}{\partial x_j} = \frac{\partial}{\partial x_j}\left(\frac{\nu_t}{\sigma_\varepsilon}\frac{\partial \varepsilon}{\partial x_j}\right) + C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k}     (6.2.131)

where Cε1, Cε2 and σε are constants. Recommended values for the five constants in the k-ε model are given by Rodi (1980). The SIMPLE method (Patankar, 1980) can be used for the pressure and velocity coupling, and an implicit solver was used to produce the velocity field across the geometry. Model convergence was assumed when all residuals of the RANS and turbulence equations between consecutive iterations were of the order of 10⁻⁴. An approach for the application of these equations in compound open channels is given in Conway et al. (2013).
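As a small numerical illustration of Equation (6.2.129), the lines below evaluate the eddy viscosity from assumed values of k and ε using the commonly quoted model constant cμ = 0.09; the k and ε values are illustrative only.

```python
c_mu = 0.09                    # standard k-epsilon model constant
k = 0.02                       # turbulent kinetic energy (m2/s2), assumed
epsilon = 0.005                # dissipation rate (m2/s3), assumed
nu_t = c_mu * k**2 / epsilon   # eddy viscosity, Equation (6.2.129)
print(f"eddy viscosity ≈ {nu_t:.2e} m2/s")   # ≈ 7.2e-3 m2/s
```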
Since the advent of numerical models, the domain of application of physical models has
been shrinking. However, certain problems remain which are still more appropriately
investigated through the use of physical models, and this is likely to remain the case for
some time yet.
Physical models have been around for hundreds of years; however, it was only in 1885 that the first physical model study based on scientific principles was undertaken. This model study was conducted by Osborne Reynolds to investigate the tidal currents in the Mersey
Estuary near Liverpool, England. Reynolds is reputedly the first person to introduce the time
scale into physical modelling (Lawson and O'Neill, 1980; Allen, 1970). His first model was
distorted with the vertical and horizontal length scales differing by a factor of 33.1. The
model sides were vertical and initially, the bed of sand was flat. After a period of model
operation however, Reynolds observed that the bed was reshaped with the principal features
of the natural estuary. This early success provided the impetus for Reynolds to follow up on
this work with another bigger model, again of the Mersey Estuary.
• fixed bed - often made out of cement mortar. Such models yield information about the flow
patterns and velocity field in, for example, estuaries, river channels or tidal inlets.
• mobile bed - with the model bed typically consisting of one of the following: sand, particles
of coal or a granulated plastic. Mobile bed models yield (qualitative) information about the
sediment movement as well as the water motion; they are of interest for investigating
scour holes, or regions of sediment accretion or erosion.
• natural - where the floodplains and river channels of fixed bed models are constructed by
first locating a series of templates made out of metal or plywood into position. Vertically,
these sections are positioned using a theodolite. The channel bed and land form between
the templates is interpolated. Sand is used as a fill material between the templates, and
then a cement mortar capping is applied and frequently painted.
• manmade - in which case, physical models often incorporate a hydraulic structure such as
a culvert, pipe, drainage channel, basin, levee, spillway or outlet works. In the model,
hydraulic structures could be made out of (painted) timber, marine plywood, PVC pipes
and also Perspex when visibility is a consideration e.g. for tracing the flow patterns
through a structure with the use of a dye.
Accurate simulation of the flows in a physical model requires three kinds of similitude (or
similarity) and it will be noticed that the word geometry enters the description of each kind of
similitude:
• Geometric similitude - requires the geometry or shapes of the flow boundaries to be similar
in model and prototype. Lengths in the model are scaled versions of the corresponding
prototype lengths. Geometric similitude is secured by ensuring that the model is a scaled
reproduction of the prototype.
• Dynamic similitude - requires that all forces (be they pressure forces, weight, boundary
friction forces, drag forces, surface tension forces, centripetal forces) at each point in the
flow domain are each scaled by the same factor between model and prototype. If this were
to be achieved, the force polygon acting on each elemental fluid parcel in the flow field,
would have the same geometry in model and prototype; this is referred to as complete
similitude. Complete similitude is the ideal situation, but in practice, is impossible to
achieve. The reasons for this are twofold.
Firstly, the scaling requirements of the various forces are incompatible because some forces
act through volumes (e.g. gravity and centripetal forces), other forces act over areas (e.g.
pressure, drag and viscous forces) and another force acts over lengths (the surface tension
force). The scales associated with volumes, areas and lengths are different. Consequently,
the forces associated with volumes, areas and lengths will, in general, scale differently. If a
model was built full scale, all forces would scale correctly. However, the smaller the model
compared to the prototype, the larger the length scale and the greater the discrepancy
between the scalings of the various forces which act through volumes or over areas or
lengths.
Secondly, the limited fluids available for use in models (in terms of their fluid properties of
density and viscosity, and cost and safety) restrict the range of length scales which can be
used in physical models. In nearly all cases in engineering practice, the fluids used in
hydraulic models are water (and occasionally air). Consequently, a compromise has to be
reached in which only the dominant forces are correctly scaled by the same factor, and the
incorrect scalings of the smaller forces are of negligible consequence in a well designed
model. This is termed incomplete similitude.
The task of the modeller is to identify the dominant forces, ensure that these forces are
scaled correctly, and disregard any insignificant forces. The effects (i.e. errors) due to those
incorrectly scaled, insignificant forces are known as scale effects. In current engineering
practice, most hydraulic investigations involve free surface flows and to a much lesser
extent, pressurised or closed conduit flows. These two types of problems require different
scaling criteria:
In free surface flows, the dominant forces are usually gravity, the associated pressure force,
and boundary friction. By asserting equality of the Froude number between model and
prototype at all points in the flow field, the gravity and pressure forces are scaled by the
same (desired) factor. By adjusting the model boundary roughness, consistent scaling of the
boundary friction force is then achieved. Dynamic similitude is achieved provided the scale
effects are negligible. A model which is based on point-to-point equality of the Froude
number between model and prototype is known as a Froude model. Effectively, what the
modeller is doing here is to ensure that gravity, pressure and boundary friction forces are
scaled consistently. All other forces are insignificant and so the momentum equation (or
equations if in 2D or 3D) for any elemental parcel of fluid in the model mimics (i.e. is a scaled
version of) the momentum equation(s) for the corresponding elemental parcel of fluid in the
prototype. Dynamic similitude means that the corresponding terms in the model and
prototype momentum equations for each of the dominant forces are all related by a constant
factor. In models of closed conduit or pressurised flows, such as flows through pipes, the
dynamic scaling requirement is that there is point-to-point equality of the Reynolds number in
model and prototype. The dominant forces are the viscous and pressure forces which are
correctly scaled in a Reynolds model.
Kinematic similitude requires the flow patterns in the model and prototype to be
geometrically similar. In other words, the velocities at all corresponding points bear the same
ratio between model and prototype. If this is achieved, then the model is in similitude with the
prototype.
If any two of the above three kinds of similitude are satisfied, then the remaining similitude is
inferred. In normal modelling practice, models are designed to satisfy both geometric and
dynamic similitude. It then follows that kinematic similitude is also satisfied. The usual
approach to model design is to satisfy geometric similitude through careful model
construction and dynamic similitude by adopting the appropriate modelling criterion. In
general, there are various modelling criteria involving various dimensionless numbers, but in
practice, the most common criterion which has been mentioned above, is based on the
Froude number (for free surface flows). The next most common criterion is based on the
Reynolds number (for closed conduit or pressurised flows).
The Froude number may be regarded as the ratio of the inertial (or resultant) force to the
gravity force, and the Reynolds number as the ratio of the inertial force to the viscous force.
In models based on the Froude criterion or the Reynolds criterion, the correct scaling of the
ubiquitous pressure forces is also achieved and the question arises as to how this comes
about. The reason is that if the dominant forces (including the resultant force) are all
correctly scaled, then the force polygon in model and prototype will also be correctly scaled.
Consequently, one of these forces is a dependent force and this is taken to be the pressure
force in Froude and Reynolds type models (Warnock, 1949).
Considering any small parcel of fluid in the prototype, what is required of the model is that
the corresponding parcel of fluid in the model moves along the corresponding path at a
scaled velocity of the prototype velocity. For this to happen, all forces (weight, pressure,
drag, viscous, surface tension, elastic) acting on that parcel of fluid in the prototype must be
in the same proportions to each other in the model. If this is the case, then there would be
complete similitude between model and prototype. However, the various forces scale in
different ways: some forces scale as the length cubed (e.g. weight, centripetal forces), other
forces scale as length squared (e.g. pressure) and one other scales as length (i.e. surface
tension). Therefore, as soon as the length scale departs from unity (i.e. full scale model), it is
impossible or very difficult for all these forces to scale together in the same proportion.
Fortunately, however, it is frequently the case, that there are one or two dominant forces
present in the flow field and it is therefore of no consequence if the minor forces are not
scaled correctly, so long as the dominant forces are. This is termed incomplete similitude.
There are advantages in making the physical model as large as practicable; in particular:
• the results derived from the model will be less sensitive to errors in measurements.
On the other hand, there are a number of factors which tend to limit the size of the physical
model:
• model construction costs. Construction costs depend on the complexity of the bathymetry
and topography, and the detail in any hydraulic structures.
• the extent of the available floor area to accommodate the model. Models are preferably
housed under cover so that the model testing is weather independent i.e. free of wind and
rain and the model itself is sheltered. Moreover, to fit a model into a given area, a river
channel can be ’folded’ into a more compact form. If additional bends are introduced into
the model which do not exist in the prototype, it may be necessary to take the additional
head losses into consideration. Also, channel areas which in the prototype provide
storage, can have their effects simulated by having areas of different shape but equivalent
plan area.
• flows available from the water supply. The bigger the model, the larger the flow needed.
The length scales for most (undistorted) physical models are generally in the range of 1:5 to
1:2000.
The advantage of distorting a free surface flow model is that for the same plan area of
model, the depths of flow will be deeper in the distorted model. In nearly all prototype free
surface flows, the flow is turbulent; and prototype turbulent flows can only be simulated in a
model with turbulent flows. For example, if a model is distorted by a factor of 5, for the same
(available) plan area, the model depths would be deeper by a factor of 5 and the model
velocities would be greater by a factor of √5 when compared to an undistorted model.
Consequently, the Reynolds numbers in the distorted model will be greater by a factor of 5^(3/2)
= 11.2 compared to the corresponding undistorted model (with the same plan area) and
turbulent flow is more likely to be guaranteed.
There are various scales of interest in a physical model and the main ones in a distorted
Froude model are:
• discharge scale for flow through a vertical plane (Qr = Ar vr = (xr zr) zr^(1/2) = xr zr^(3/2)),
• pressure scale (pr = ρr vr^2 = zr if ρr = 1, implying that the fluid in the model is the same as
the fluid in the prototype), and
• velocity and time scales (vr = zr^(1/2) and tr = xr / zr^(1/2)).
If the model is undistorted, the various scales above can be determined by setting the length
scale Lr = xr = zr in the expressions above. For example, the flow scale would be Qr = Lr^(5/2)
and the velocity and time scales both become vr = tr = Lr^(1/2).
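The scale relationships quoted above can be collected into a small utility; the sketch below is illustrative only, assumes all ratios are expressed as prototype value divided by model value, and uses function and variable names chosen here rather than taken from ARR.

```python
# Hedged sketch: scale ratios for a (possibly distorted) Froude model, using the
# relationships quoted above. All ratios are assumed here to be prototype/model.
def froude_scales(xr, zr, rho_r=1.0):
    """Return common Froude-model scale ratios.

    xr    : horizontal length scale ratio
    zr    : vertical length scale ratio
    rho_r : fluid density ratio (1.0 when the same fluid is used in model and prototype)
    """
    vr = zr ** 0.5            # velocity scale from the Froude criterion
    tr = xr / vr              # time scale for horizontal motions
    qr = (xr * zr) * vr       # discharge through a vertical plane, Qr = Ar * vr
    pr = rho_r * vr ** 2      # pressure scale (= zr when rho_r = 1)
    return {"velocity": vr, "time": tr, "discharge": qr, "pressure": pr}

# Undistorted 1:100 model: Qr = 100**2.5 = 1.0e5 and vr = tr = 10.
print(froude_scales(100, 100))
# Distorted model, horizontal 1:250 and vertical 1:50.
print(froude_scales(250, 50))
```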
The roughness requirement can be examined by writing Manning's equation as a scale
relationship between prototype and model:

vr = (Rh)r^(2/3) Sr^(1/2) / nr   (6.2.132)

Applying the Froude criterion vr = zr^(1/2) gives

zr^(1/2) = (Rh)r^(2/3) Sr^(1/2) / nr   (6.2.133)

and, with the slope scale Sr = zr/xr, the roughness scale becomes

nr = (Rh)r^(2/3) / xr^(1/2)   (6.2.134)
It is evident from Equation (6.2.134) that nr depends upon the hydraulic radius scale ((Rh)r)
and for a distorted model, this in turn depends upon both xr and zr. One consequence of
distortion is that the friction forces at the model bed are under-represented. Equation
(6.2.134) indicates that usually, the Manning's n in the model should be greater than the
Manning's n in the prototype. For a model made out of cement mortar, the model roughness
will be too smooth and additional artificial roughness has to be applied to the surface of the
model. This can take the form of coarse sand, pebbles or other roughness elements glued to
the model surface; sometimes a light gauge wire mesh is attached to the model. Model
roughness is adjusted empirically during the calibration phase of the model investigation.
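As an illustration of how Equation (6.2.134) is used when planning a distorted model, the sketch below estimates the required model Manning's n with a wide-channel approximation for the hydraulic radius scale; the scale ratios and the prototype roughness are assumed values.

```python
# Hedged sketch based on the reconstructed Equation (6.2.134): required Manning's n
# scale for a distorted Froude model, approximating the hydraulic radius scale by the
# vertical scale (wide channel). Scale ratios and prototype roughness are assumed values.
def manning_n_scale(xr, zr):
    """Manning's n scale ratio nr = (Rh)r**(2/3) / xr**0.5 with (Rh)r taken as zr."""
    return zr ** (2.0 / 3.0) / xr ** 0.5

nr = manning_n_scale(xr=250, zr=50)   # horizontal 1:250, vertical 1:50
n_prototype = 0.030
print(nr)                  # about 0.86
print(n_prototype / nr)    # required model n, about 0.035 (rougher than the prototype)
```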
Figure 16 is of a distorted physical flood model in which the model roughness was increased
in two ways:
• the over-bank areas were artificially roughened with appropriately spaced and patterned,
vertical dowels or roughness elements. The dowels exert a drag force on the water
moving over the model floodplain; this additional force is designed to make up for the
deficit of boundary friction at the overly smooth model bed.
• the in-bank areas were roughened with small pads of synthetic fibrous material. The
spacing of these pads was adjusted by trial and error until the slope of the water surface
matched the required slope.
If the roughness in the model is too smooth, not only will the magnitude of the velocities be
too large, but also, the flow paths will tend to be too straight. The opposite trends tend to
occur in a model which is too rough.
In the design of the distorted model depicted in Figure 16, it was found that while the model
simulation of floods with a 50 year and 100 year Average Recurrence Interval (2% and 1%
AEP) satisfied the requirement for turbulent flow in the model, the model flow corresponding
to a smaller flood with a return period of 25 years was transitional between laminar and
turbulent and therefore could not be tested in the model.
In an undistorted model with nr = Lr^(1/6), the Manning's n for the model surface is too high.
Model spillways are usually constructed of timber and marine plywood and the usual
approach is to try and achieve as smooth a surface as possible through sanding and
painting, and to tolerate the mismatch. As the boundary friction forces in this flow scenario
are not as important as the dominant gravity force, any scale effect in the model roughness
is not usually significant.
Mobile bed models are considerably more complex than fixed bed models, and only some
general considerations will be included here. With the introduction of a sediment, the
specifications for the sediment particles in the model need to be decided. If the model
sediment grain size was simply scaled down according to the length scale, the resulting
model sediment particles would be so fine that they may well exhibit cohesive behaviour,
and such a sediment could not be used to simulate the behaviour of a non-cohesive,
prototype sediment. This problem is circumvented by using sediment grains in the model
(which are oversized compared to the grain size which would be obtained from the length
scale), but made out of a material which is less dense than sand, for example coal or a
granulated plastic. By balancing sediment density and particle size for the model sediment
particles, their fall velocity can be scaled according to the velocity scale according to the
Froude criterion.
Mobile bed modelling of non-cohesive sediments is far from routine. The bed roughness of a
mobile bed is a combination of grain roughness and bedforms. The size and type of the
bedforms (ripples or dunes) cannot be specified a priori, but rather are determined by
interactions between the near-bed fluid dynamics and morphodynamics; this includes the
turbulence intensity and its distribution, the permeability of the bed and the shape of the
deformable bed.
With the introduction of sediment into a model, there is the need to establish a morphological
time scale which is best based on comparisons between the model and prototype bed
evolution. This may require distortion of the flow scale in the model to achieve.
The motion of the sediment in the prototype is a mixture of suspended sediment and bed
load. While the suspended sediment moves at approximately the flow velocity, the bed load
travels much slower. The simulation of sediment movement which consists of nearly all
suspended load or nearly all bed load is easier than the much more difficult problem of
modelling the total sediment load which consists of comparable proportions of both
suspended and bed load. It is possible also, that the proportions of bed load and suspended
load vary with time, such as during a flood hydrograph. When this happens, the
sedimentation time scale will also vary with time.
Some flow scenarios may involve a combination of free surface flow (which required a
Froude model) and pressurised flow (which requires a Reynolds model). The resulting
scales from applying the Froude and Reynolds criteria are in conflict. Therefore, some other
solution, probably involving some compromises, must be sought in such a situation.
• Model design - the modelling criterion is selected (e.g. Froude number equality, (Fr)r = 1), a decision on the model type is
made (distorted or undistorted model, fixed bed or mobile bed), the length scale/s is/are
selected taking cognisance of available space, flows and funding, drainage system and
model boundaries, which should be well removed from the study area.
• Model verification - the performance of the model is further checked on another event,
which is independent of the data used for calibration. While verification is a desirable
stage of model testing to perform, it is not often carried out due to a lack of appropriate
data or inadequate funding. In selecting the data for both the calibration and verification
phases, it is good practice to use the data from events which are of comparable magnitude
to the scenarios to be tested. For example, it is not sound modelling practice to calibrate a
model on a small and frequent event when the purpose of building the model is to
undertake tests corresponding to rare events. The reason for this is that the reliability of
extrapolating the model performance cannot be taken for granted and should be
questioned.
• Model testing - the model is tested under one or more scenarios which may correspond to
real or synthesised events.
• Reporting of model results - a technical report is prepared which includes the investigation
methodology and the findings of the investigation. All results should be in terms of
prototype values rather than model values.
With the advent of widespread, powerful, and cheap computing facilities, numerical
modelling has advanced significantly. Physical modelling, however, is by no means obsolete.
Indeed, as discussed by Martins (1989), the development of physical
modelling has kept pace with numerical modelling. Often the two are intertwined through the
concept of “hybrid modelling” where a physical model of a complex flow region provides the
boundary conditions for a numerical model covering a much larger area.
This section examines the particular application of physical modelling to the design of
hydraulic structures and identifies outstanding issues that remain to be solved. The field of
physical modelling is vast, both with regard to the range of problems tackled and the breadth
of literature in the field. Excellent reviews and texts include those of Martins (1989), Kobus
(1980), and Novak and Cábelka (1981).
Firstly, general modelling criteria are reviewed. In particular, the need to supplement pure
dimensional analysis with process functions, based on sound analytical concepts, is
emphasised. Attention is then focussed on the modelling of hydraulic structures and the
potential implications of scale effects. Actual model studies are used to illustrate these
issues. Finally some outstanding issues for further development are then identified.
In the context of modelling flow in an open channel, the relevant variables include the channel
width W (dimension L), the flow depth y (L) and the surface roughness k (L), together with the
flow velocity V, the gravitational acceleration g, and the fluid density ρ, dynamic viscosity μ and
surface tension σ. Dimensional analysis of these variables yields:

f( V^2/(g y), ρ V^2 y/σ, ρ V y/μ, k/y, W/y ) = 0   (6.2.135)
Strict similitude would only be possible if all five groups are identical in model and prototype.
It can quickly be established, however, that this is not possible, especially if the same fluid is
used in model and prototype. Using physical understanding and a process function, the first
term of Equation (6.2.135) represents the ratio of inertial forces to gravitational forces. Since
open channel flow phenomena in general, and most hydraulic structure flows in particular,
are gravity driven, this parameter must be retained. Requiring equality of the first term at
homologous points in the model and the prototype leads to the well-known Froude law of
modelling, appropriate to open channel flows, of:
lV = ly^(1/2)   (6.2.136)
The second term of Equation (6.2.135) is a Weber number, representing the ratio of inertial
forces to surface tension forces. This ratio increases with model size because the inertial
forces act on a volume whereas the surface tension forces act on an area. Thus the surface
tension forces become negligible, provided the model is reasonably large, and the second
term can be disregarded.
Turning now to the third term of Equation (6.2.135), we identify a Reynolds number, Re,
representing the ratio of inertia forces to viscous forces. In the context of an open channel
flow, viscous forces affect the surface resistance, apparently requiring Reynolds number
equality between model and prototype for full similarity.
If the same fluid is used in model and prototype, Reynolds number equality at homologous
points would require that:
lV = 1/ly   (6.2.137)
and this condition is clearly incompatible with Equation (6.2.136). Indeed, it is readily shown
that, if the velocity scale is based on Equation (6.2.136):
lRe = ly^(3/2)   (6.2.138)
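The incompatibility can be seen numerically: the hedged sketch below tabulates, for a few assumed model/prototype length ratios, the velocity scale demanded by the Froude criterion against that demanded by Reynolds number equality, together with the resulting Reynolds number scale of Equation (6.2.138).

```python
# Hedged illustration: with the same fluid in model and prototype, a Froude-scaled
# velocity (lV = ly**0.5) makes the Reynolds number scale as lRe = ly**1.5
# (Equation 6.2.138), whereas Reynolds number equality would demand lV = 1/ly
# (Equation 6.2.137). The ratios below are assumed model/prototype values.
for ly in (1.0, 1 / 10, 1 / 25, 1 / 100):
    lv_froude = ly ** 0.5
    lv_reynolds = 1.0 / ly
    print(ly, lv_froude, lv_reynolds, ly ** 1.5)
# Only ly = 1 (a full-scale model) satisfies both requirements simultaneously.
```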
This can be resolved by making use of a process function for flow resistance which links the
friction factor, Reynolds number, and relative roughness through the well-known Colebrook-
White equation. This function is conveniently plotted as a Moody diagram and is reproduced
in Figure 6.2.23.
The equation for friction factor, the ordinate of Figure 6.2.23, shows that the Froude criterion
of Equation (6.2.136) can only be satisfied if the friction factor is the same in model and
prototype. Superimposed on Figure 6.2.23 is a hypothetical range of prototype Reynolds
numbers and a corresponding range of model operation, assuming a model scale of 1:25.
For the example given, it is evident that equality of friction factor between model and
prototype can only be obtained if the model is relatively smoother than the prototype. It is
noted, further, that this equality is only possible for one particular operating condition
(characterised by Reynolds number). For other operating conditions, the model friction factor
will be different from that in the prototype, introducing a friction scale effect. The scale effect
can be calculated, however, and model results adjusted when scaling up to prototype
values.
Figure 6.2.23 also demonstrates that, if the prototype is relatively smooth, it may not be
possible to build a model with a low enough friction factor to match that of the prototype. In
this situation, the higher model friction factor may be accepted as at least conservative with
respect to predicted flow depths, or, again, the scale effect can be calculated and used to
adjust the predicted prototype values.
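A minimal sketch of the friction scale effect is given below. It solves the Colebrook-White equation by simple fixed-point iteration for an assumed prototype Reynolds number and relative roughness, and again at the model Reynolds number implied by a 1:25 Froudian scale; all numerical values are illustrative assumptions.

```python
# Hedged sketch of the friction scale effect. The Colebrook-White equation (with 4R as
# the characteristic length) is solved by fixed-point iteration for an assumed prototype
# condition and for the model condition implied by a 1:25 Froudian scale. All numbers
# are illustrative assumptions, not design values.
import math

def colebrook_white(rel_roughness, reynolds, f0=0.02, iterations=50):
    """Solve 1/sqrt(f) = -2*log10(ks/(3.7*4R) + 2.51/(Re*sqrt(f))); rel_roughness = ks/(4R)."""
    f = f0
    for _ in range(iterations):
        f = (-2.0 * math.log10(rel_roughness / 3.7
                               + 2.51 / (reynolds * math.sqrt(f)))) ** -2
    return f

f_prototype = colebrook_white(1e-4, 5.0e6)
f_model = colebrook_white(1e-4, 5.0e6 / 25 ** 1.5)   # Reynolds number smaller by 25**1.5
print(f_prototype, f_model)
# The model friction factor is larger at the same relative roughness, so matching the
# prototype friction factor requires a relatively smoother model surface.
```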
The discussion above has demonstrated that dimensional analysis is insufficient on its own
to provide a basis for the modelling of open channel flow. Indeed, relying solely on
dimensional analysis, it would be concluded that accurate modelling is not possible. It is only
by using knowledge of flow resistance and its corresponding process function to dimensional
analysis that an appropriate modelling procedure is possible.
Other examples of the necessity for process functions, in addition to dimensional analysis,
for physical modelling of weir flows and vortex drop shafts have been discussed by Ackers
(1987).
The discussion above applies to undistorted models only - ie those for which the horizontal
and vertical scales are identical. Undistorted models are common in hydraulic structure
investigations, but are often impractical for large rivers because of their typically large
width:depth ratios. A typical river may have a width of 500 m and a depth of perhaps 2 m.
The corresponding undistorted model of a scale of, say, 1:250 would be 2 m wide and 8 mm
deep. The model flow is then likely to be totally different in character to the prototype flow
due to surface tension effects and the likelihood of laminar model flow.
This situation is resolved by utilising a vertical scale that is larger than the horizontal. The
Froude relationship is still expressed in the form of Equation (6.2.136), where, however, ly
represents the vertical scale because it is vertical, rather than horizontal distances which
measure the effect of gravity on velocity.
The corresponding roughness requirement can be examined through the Darcy-Weisbach
equation for the friction head loss:

hf = f (L/(4R)) (V^2/(2g))   (6.2.139)

Expressing each quantity as a scale ratio, with the head loss scaling as the vertical scale ly
and the channel length as the horizontal scale lx, the friction slope scale becomes

ly/lx = lf (ly/lR)   (6.2.140)

or

lf = lR/lx = ly/lx   (6.2.141)

the second equality applying to a wide channel, for which the hydraulic radius scales with the
vertical scale.
Because ly is always greater than lx, Equation (6.2.141) shows that the model must be
rougher than the prototype. Indeed, in direct contrast to undistorted models, significant effort
is often required to make the model rough enough!
Mobile bed models introduce an additional degree of complexity because the roughness of
the bed is largely dependent on the form losses associated with the bed features. Because
the bed features are formed by the flow conditions, and hence cannot be directly established
by the modeller, it is important that the model flow conditions are such that the model bed
simulates closely the bed of the prototype.
Resolution of these complexities is beyond the scope of this chapter. Further details are
provided in Keller (1998).
An undistorted geometric scale model is normally built and operated under conditions of
Froudian similarity. The model head (stage)-discharge data are then simply scaled up to
prototype values, utilizing the equations:
lh = ly   (6.2.142)

lQ = ly^(5/2)   (6.2.143)
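Applying Equation (6.2.142) and Equation (6.2.143) is a one-line conversion, as the hedged sketch below indicates; the 1:25 scale and the model readings are assumed values used only for illustration.

```python
# Hedged sketch: converting model stage-discharge readings to prototype values for an
# undistorted Froudian model, per Equations (6.2.142) and (6.2.143). The scale and the
# model readings are assumed values used only for illustration.
def to_prototype(head_model, discharge_model, N):
    """N is the geometric scale ratio (prototype dimension / model dimension)."""
    return head_model * N, discharge_model * N ** 2.5

# A 1:25 model reading of 0.080 m head and 0.012 m^3/s corresponds to
# 2.0 m and 0.012 * 25**2.5 = 37.5 m^3/s at prototype scale.
print(to_prototype(0.080, 0.012, 25))
```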
The characteristics of boundary layer growth are such that model scaling for reproduction of
free surface effects (Froudian scale) will introduce dissimilarities. We illustrate this by
considering the equations for turbulent boundary layer growth on a flat plate (Streeter and
Wylie, 1979):
δ = 0.38 x / Re^(0.2)   (6.2.144)

δ* = δ / 8   (6.2.145)
Equation (6.2.145) is developed from the one-seventh power law velocity distribution:
u/U = (y/δ)^(1/7)   (6.2.146)
In the above equations, � and �* are boundary layer thickness and boundary layer
displacement thickness respectively, Re is the flow Reynolds number defined with respect to
the length x of boundary layer development, u is the velocity at elevation y above the bed,
and U is the free stream velocity.
Equation (6.2.144) and Equation (6.2.145) indicate that similarity of boundary layer growth
will only be possible if the Reynolds numbers are the same in model and prototype. This
criterion is not met in a Froudian model, for which lRe = ly^(1.5).
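The dissimilarity can be illustrated with Equation (6.2.144) and Equation (6.2.145): the sketch below compares the displacement thickness a Froudian model actually develops with the value that geometric similarity would require. This is not Keller's correction procedure, only an indicative calculation with assumed dimensions and velocities.

```python
# Hedged, indicative sketch (not Keller's correction procedure): displacement thickness
# from Equations (6.2.144) and (6.2.145) for an assumed prototype flume floor and for a
# 1:4 Froudian model of it, compared with the value geometric similarity would require.
NU = 1.0e-6   # kinematic viscosity of water (m^2/s)

def displacement_thickness(x, U):
    """delta* = delta/8 with delta = 0.38*x/Re_x**0.2 (turbulent flat-plate growth)."""
    re_x = U * x / NU
    delta = 0.38 * x / re_x ** 0.2
    return delta / 8.0

N = 4.0                              # geometric scale ratio (prototype/model)
x_p, U_p = 2.0, 1.5                  # assumed prototype development length and velocity
x_m, U_m = x_p / N, U_p / N ** 0.5   # corresponding Froudian model values

print(displacement_thickness(x_p, U_p) / N)   # what strict similarity would require
print(displacement_thickness(x_m, U_m))       # what the model actually develops (larger)
```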
The measured water surface elevation upstream of the flume control is affected by the
boundary layer growth on the flume floor in that the water surface is displaced by a distance
equal to the boundary layer displacement thickness. Accordingly, any dissimilarities in the
modelling of the boundary layer displacement thickness will reflect as a dissimilarity in the
position of the water surface, and a consequent scale effect in the measurement of pressure
head.
Keller (1984a) has developed a procedure for determining the magnitude of the scale effect
and for adjusting the model data to correctly predict the prototype behaviour. For details, the
reader is referred to the original paper. The procedure relies on the use of the process
function for boundary layer growth embodied in Equation (6.2.144) to Equation (6.2.146).
Keller (1984b) has applied the procedure to undrowned cut-throat flumes and typical results
are presented in Figure 6.2.24. These data were obtained from a study involving three
geometrically similar flumes to scale ratios of 1:2:4. Flume 1 (x) is the smallest, Flume 2 (⊙)
is twice as large as Flume 1, and Flume 3 (+) is four times as large as Flume 1. The ordinate
is the piezometric head, ha, normalised with the throat width, BT. The abscissa is the non-
dimensional discharge parameter.
The data in Figure 6.2.24(a) are uncorrected and show a tendency at values of ha/BT below
about 0.8 to plot progressively to the right with flume size - ie for ha to be slightly less for the
large flume than would be predicted from tests on the small flumes. Expressed in terms of
more relevance to the practising engineer, an uncorrected model rating would result in an
under-prediction in the discharge through a four times larger prototype structure by up to
10%. Figure 6.2.24(b) shows the data adjusted for dissimilar boundary layer growth. It is
clear that the small trend with flume size has been completely eliminated.
The point of this example has been to demonstrate that scale effects arise in many model
studies. However, with an understanding of the physical processes which govern the
phenomena and with a knowledge of the appropriate process functions, the scale effects can
be assessed and, in many cases, explicitly determined.
Figure 6.2.24. Data for Cut-throat flumes (a) Uncorrected, (b) Corrected for Scale Effects
(after Keller (1984b))
Developments over the next few years are likely to concentrate on hybrid models – ie model
approaches where physical modelling and numerical modelling are applied in tandem to the
solution of complex hydraulic problems. There is evidence of this already, and two examples
are given in the following.
The mixing processes downstream of a pollutant outfall are often classified as “near field”
and “far field”. Sometimes an additional “mid-field” may be introduced. Rutherford (1994)
provides an excellent review of mixing processes.
In all but the most simple of cases, near-field mixing is extremely difficult to model
numerically. The flow field is very strongly three-dimensional and the mixing processes may
be dominated by vertical mixing, transverse mixing, or both. In the far-field region, full
vertical mixing may well have been achieved, and a numerical two dimensional mixing model
may be adequate to describe the continuing diffusion of the pollutant. In this situation, the
most efficient modelling framework may be to build an undistorted physical model to
simulate the near-field region and to use its measured downstream parameters as the
upstream boundary conditions for the numerical model.
The second example has been described by Ackers (1987) and concerns prototype
phenomena where air entrainment is a primary parameter. On spillways, self-aeration
through floor slots is commonly permitted in order to control cavitation damage on the
spillway. The amount of air that is entrained depends on the length of the trajectory of the
nappe which springs from the upstream edge of the spillway slot. However, the length of the
trajectory depends on the air pressure beneath the nappe. This depends on the rate of air
entrainment, which, in turn, is a function of the resistance of the air supply ducts. Neither of
these features can be properly simulated in an undistorted scale model. The trick is, in fact,
to link the general spillway model with a separate computational study of the performance of
the ducts with a design value of air demand. The aerodynamic resistance of the prototype
supply duct determines the pressure to be expected beneath the nappe and this (sub-
atmospheric) pressure must then be reproduced artificially in the model.
There are many other areas that will keep the physical modeller busy for many years to
come.
2.13. References
AS Committee (2006), PL-045. Australian Standard AS 2200, Design Charts for Water
Supply and Sewerage. Standards Australia, Sydney, January 2006. ISBN 0 7337 7084 3.
ASCE Task Force (1963), Friction Factors in Open Channels. Journal of Hydraulics Division,
ASCE, 89(2), 97-143.
Abbott, M.B. (1979), Computational Hydraulics - Elements of the Theory of Free Surface Flows,
Pitman, London.
Abbott, M.B. and Rasmussen, C.H. (1977), On the Numerical Modelling of Rapid Contraction
and Expansion in Models that are Two-dimensional in Plan, Proc. XVII Congress, IAHR,
Baden-Baden.
Ackers, P. (1992), Hydraulic design of two stage channels. Proc. of the Institution of Civil
Engineers, Water, Maritime and Energy, Dec. 1992, 96: 247-257.
Ackers, P. (1987), 'Scale Models. Examples of how, why, and when - with some ifs and buts',
Proceedings of XXII Congress, International Association for Hydraulic Research, Technical
Session B, Lausanne, Switzerland, pp: 1-15
Allen, J. (1968), The life and work of Osborne Reynolds, chapter 1, pages 1-82. Manchester
University Press and Barnes and Noble, Inc., Manchester and New York, 1970. Osborne
Reynolds and Engineering Science Today, edited by McDowell, D.M. and Jackson, J.D.
Papers presented at the Osborne Reynolds Centenary Symposium, University of
Manchester, September 1968, ISBN 0 7190 0376 8.
Bhowmik, N.G. and Demissie, M. (1982), Carrying capacity of flood plains. Proc. ASCE,
Journal of the Hydraulics Division, Mar. 1982, 108(3), 443-452.
Blalock, M.E. and Sturm, T.W. (1981), Minimum Specific Energy in Compound Open
Channel. Journal of the Hydraulics Division, ASCE, Jun. 1981, 107(6), 699-717.
Chadderton, R.A. and Miller, A.C. (1980), Friction slope models for M2 profiles. Water Resources
Bulletin, American Water Resources Association, 16(2), 235-242, April 1980. Paper No.
79037.
Chow, V.T. (1959), Open-Channel Hydraulics. McGraw-Hill Book Company, New York.
Chow, V., Maidment, D. and Mays, L. (1988), Applied hydrology. New York: McGraw-Hill.
Conway, P., O'Sullivan, J.J., Lambert M.F. (2013), Stage-discharge prediction in straight
compound channels using 3D numerical models, Proceedings of the Institution of Civil
Engineers: Water Management, Vol.166, No.1, January, 3-15.
Cunge, J.A., Holly, F.M. and Verwey, A. (1980), Practical Aspects of Computational River
Hydraulics, Pitman, London (Reprinted by University of Iowa).
Einstein, H.A. (1934), Der hydraulische oder profile-radius [The hydraulic or cross-section
radius]. Schweizerische Bauzeitung, Zurich, 103(8), 89-91 (in German).
Elliot, S.C.A. and Sellin, R.H.J. (1990), SERC flood channel facility: skewed flow
experiments. IAHR, Journal of Hydraulic Research, Feb. 1990, 28(2), 197-214.
Ervine, D.A. and Ellis, J. (1987), Experimental and computational aspects of overbank flood
plain flow. Transactions of the Royal Society of Edinburgh: Earth Sciences, 1987, 78:
315-325.
Henderson, F.M. (1966), Open Channel Flow, Macmillan Publishing Co., New York.
Horton, R.E. (1933), Separate roughness coefficients for channel bottom and sides.
Engineering News-Record, Nov. 1933, III(22), 652-653.
Hydraulics Research Ltd, Wallingford (1990), Charts for the Hydraulic Design of Channels and
Pipes, sixth edition, Thomas Telford, London. ISBN 0 946466 02 5.
James, M. and Brown, R.J. (1977), Geometric parameters that influence flood plain flow.
U.S. Army Engineer Waterways Experiment Station, Vicksburg, Miss., Jun. 1977, Research
Report H-77-1.
James, C.S. and Wark, J.B. (1992), Conveyance Estimation for Meandering Channels,
Report SR329, HR Wallingford, U.K., Dec. 1992.
Jenkins, B.S. (1987), Aspects of Hydraulic Calculations, volume 1 of Australian Rainfall and
Runoff, A Guide to Estimation, D. H. Pilgrim (editor-in-chief ), chapter 4, pp. 53-91. The
Institution of Engineers, Australia, Barton, ACT, revised edition, 1987. ISBN 085825 434 4.
Keller, R.J. (1998), The Continuing Application of Physical Models to Hydraulic Engineering,
Keynote Address to 6th International Conference on Hydraulics in Civil Engineering,
Proceedings of HYDRASTORM '98 combining 3rd International Symposium on Stormwater
Management and 6th International Conference on Hydraulics in Civil Engineering, Adelaide,
Australia, 27-30 September 1998, Institution of Engineers, Australia, pp: 11-21.
Keller, R.J. (1984a),'Boundary Layer Scale Effects in Hydraulic Model Studies of Discharge
Measuring Flumes', in KOBUS, H. (ed),'Symposium on Scale Effects in Modelling Hydraulic
Structures', International Association for Hydraulic Research, Esslingen, Germany, 1984, pp:
2.10-1 to 2.10-4.
Knight, D.W. and Sellin, R.H.J. (1987), The SERC Flood Channel Facility. J. Inst. Water and
Environmental Mgmt, 1(2), 198-204.
Kobus, H. (1980) (ed.),'Hydraulic Modelling', German Association for Water Resources and
Land Improvement, Bonn.
Lambert, M.F. and Myers, W.R.C. (1998),'Estimating the discharge capacity in compound
channels.'Water, Energy and Maritime, Journal of Institution of Civil Engineers, London, 130:
84-94.
Lambert, M.F. and Sellin, R.H.J. (1996b),'Discharge prediction in compound channels using
the mixing length concept.'Journal of Hydraulic Research, International Association of
Hydraulic Research, 34(3), 381-394.
Lawson, J. D. and O'Neill, I. C. (1980), The role of physical and mathematical models in
hydraulic engineering, June 1981. ACADS Publication no. U217, Presentation to the
Association for Computer Aided Design Limited (ACADS), Water Engineering Technical
(WET) Committee, Wednesday 12 November, 1980.
LeVeque, R.J. (2002), Finite Volume Methods for Hyperbolic Problems, Cambridge University
Press, Cambridge.
Lee P., Lambert M.F. and Simpson A.R. (2002),'Critical depth prediction in straight
compound channels.'Water and Maritime Engineering, Journal of Institution of Civil
Engineers, London, 154(4), 317 - 332.
Leonard, B.P. (1979b), A Stable and Accurate Convective Modelling Procedure Based on
Quadratic Upstream Interpolation. Computer Methods in Applied Mechanics and
Engineering, 19(1), 59-98.
Martin, L.A. and Myers, W.R.C. (1991), Measurement of overbank flow in a compound river
channel. Proc. of the Institution of Civil Engineers, London, Dec. 1991, Part 2, 91(9729),
645-657.
McBean, E. and Perkins, F. (1975), Numerical errors in water profile computation. Journal of
the Hydraulics Division, American Society of Civil Engineers, 101(HY11):1389-1403.
McBean, E. A. and Perkins, F. E. (1970), Error criteria in water surface profile computations.
Technical Report 124, Massachusetts Institute of Technology Hydrodynamics Laboratory,
Cambridge, Massachusetts, June 1970.
Myers, W.R.C. (1987). Velocity and discharge in compound channels. Proc. ASCE, Journal
of Hydraulic Engineering, Jun. 1987, 113(6), 753-766.
Myers, W.R.C. and Lyness, J.F. (1989), Flow resistance in rivers with flood plains. Final
Report on Research Grant GR5/D/45437, University of Ulster.
Patankar, S.V. (1980), Numerical heat transfer and fluid flow. CRC Press.
Petryk, S. and Grant, E.U. (1978), Critical Flow in Rivers with Flood Plains, Journal of the
Hydraulics Division, May, 1978, 104(5), 583-594.
Phillips, J.T. and Ingersoll, T.L. (1998) Verification of roughness coefficients for selected
natural and constructed stream channels in Arizona. U.S. Geological Survey Professional
Paper, (1584).
Rajaratnam, N. and Ahmadi, R.M. (1983), Meandering Channels with Flood plains.
Unpublished Paper.
Reynolds, O. (1894), On the Dynamic Theory of Incompressible fluids and the Determination
of Criterion. Phil. Trans., Roy. Soc., 186: 123-164.
Rodi, W. (1980), Turbulence models and their applications in hydraulics - a state of the art
review. Delft: IAHR book publications.
Rutherford, J.C. (1994),'River Mixing', John Wiley and Sons, Inc., New York.
Samuels, P.G. (1989), Backwater lengths in rivers. Proceedings of the Institution of Civil
Engineers, Part 2, 87: 571-582, December 1989. Paper 9479, Water Engineering Group.
Samuels, P.G. and Chawdhary, K.S. (1992), A backwater method for trans-critical flows. In
R. A. Falconer, K. Shiono, and R. G. S. Matthew, editors, Proceedings of the Second
International Conference on Hydraulic and Environmental Modelling of Coastal, Estuarine
and River Waters, volume two, pp.79-89, Aldershot, U.K., 1992. Ashgate Publishing Limited.
Conference organised and sponsored by Hydraulics Research Limited, ISBN 1 85742 085 3.
Sellin, R.H.J. (1964), A laboratory investigation into the interaction between flow in the
channel of a river and that over its flood plain. La Houille Blanche, Nov. 1964, 19(7),
793-801.
Sellin, R.H.J. and Giles, A. (1988), Two Stage Channel Flow, Final Report for the Thames
Water Authority, Department of Civil Engineering, University of Bristol.
Sellin, R.H.J., Ervine, D.A. and Willets, B.B. (1993), Behaviour of meandering two-stage
channels. Proc. of the Institution of Civil Engineers, Water, Maritime and Energy, June 1993,
101(2), 99-111.
Smith, C.D. (1978), Effect of channel meanders on flood stage in valley. Proc. ASCE,
Journal of the Hydraulics Division, Jan. 1978, 104(1), 49-58.
Stelling, G.S. (1984), On the Construction of Computational Methods for Shallow Water Flow
Problems, Rijkswaterstaat Communications, No.35/1984, The Hague.
Stelling, G.S., Kernkamp, H.W.J and Laguzzi, M.M. (1998), Delft Flooding System: A
Powerful Tool for Inundation Assessment based upon a Positive Flow Simulation, in V.
Babovic and L.C. Larsen (eds) Hydroinformatics '98, A.A. Balkema, Brookfield.
Streeter, V.L. and Wylie E.B. (1981), Fluid Mechanics. McGraw-Hill Ryerson Ltd., Singapore.
Toebes, G.H. and Sooky, A.A. (1966), Hydraulics of meandering rivers with flood plains.
ASCE Water Resource Engineering Conference, Denver, Colorado.
U.S. Department of Agriculture. (1963), Guide for selecting roughness coefficient (n) values
for channels, Soil Conservation Service, Washington D.C.
HR Wallingford (1992), SERC Flood Channel Facility Experimental Data - Phase A, Report
SR 314, Wallingford, United Kingdom, May 1992, Vol.1-14.
Warnock, J.E. (1950), Hydraulic Similitude, chapter II, pages 136-176. John Wiley and Sons,
Inc. and Chapman and Hall, Ltd., New York, London, 1950. Engineering Hydraulics, H.
Rouse (editor), Proceedings of the Fourth Hydraulics Conference, Iowa Institute of Hydraulic
Research, June 12-15, 1949.
Yen, B.C. (1973), Open Channel Flow Equation Revisited. Journal of the Engineering
Mechanics Division, ASCE, Oct. 1973, 99(EM5), 979-1009.
Zheleznyakov, G.V. (1965), Relative deficit of mean velocity of unstable river flow, kinematic
effect in river beds with flood plains. Proc. of the 11th International Congress IAHR,
Leningrad, USSR.
Chapter 3. Hydraulic Structures
Robert Keller, William Weeks
3.1. Introduction
Hydraulic structures are used to guide and control water flow velocities, directions and
depths, elevation and slope of the streambed, general configuration of the waterway, and its
stability and maintenance characteristics.
Careful and thorough hydraulic engineering is justified for hydraulic structures and
consideration of environmental, ecological and public safety objectives should be integrated
with hydraulic engineering design. The correct application of hydraulic structures can reduce
maintenance costs by managing the character of the flow to fit the environmental and project
needs.
This review deliberately focuses on a few of the most important structures in urban and rural
settings, as the general field of hydraulic structures is extremely vast. Therefore, structures
such as large dams are not considered. Hydraulic structures covered include flood bypass
channels, control structures (gates, weirs and flumes, and spillways), levees, culverts, bridge
waterways, floodways on roads, scour, rock chutes, rock riprap, and flow measurement
structures.
On large river systems, flood bypass channels may simply comprise adjacent low-lying
areas or old river courses. Typically, control structures may be located at the head of the
diversion channel to divert flows during periods of high water and return flows after the flood
has passed. Flood bypass channels are often used in urban areas where it is not possible to
widen the existing channel due to development.
The principal considerations in the design of a flood bypass channel include:
• Determination of the percentage of the flood flow that should be carried by the bypass
channel;
• Determination of the size of the channel to convey the design discharge; and
For effective reduction in the flood stage, the distance between the point of diversion and
point of return to the main channel must be of sufficient length to prevent backwater effects.
Additionally, it is essential to consider potential morphologic effects on both the main channel
and receiving channel.
Flood bypass channels generally have steeper slopes than the main channel and this may
lead to stability problems such as erosion of the channel bed and banks. The bed of tributary
channels may be higher than that of the floodway channel, and bed degradation may
migrate upstream of the tributary, resulting in excessive sediment transport and deposition in
the floodway. Methods to mitigate channel instability such as grade control, channel lining
and bank stabilisation may be required on diversion projects.
Additionally, diversion flows can adversely impact the main channel. If the flow rate is
reduced in the main channel due to a diversion then, noting that the main channel slope and
particle size remain constant, the sediment transport capacity of the main channel will
decrease. This, in turn, could lead to aggradation in the main channel between the point of
diversion and the point of re-entry. However, if excess bed material is diverted, the sediment
transport capability of the stream may increase with the resultant rise in channel instability.
Flow returning to the main channel from a diversion can also result in accelerated erosion of
the channel and banks around the point of re-entry. Therefore, it is essential to conduct a
detailed geomorphic and sediment transport analysis at the design stage of a diversion
project, accounting for potential problems.
There are many environmental benefits of using a flood bypass channel as an alternative to
modifying the main channel to convey flood flows. The original stream substrates and
meanders are maintained, as well as in-stream cover and riparian vegetation. If designed
only for occasional flood flows, the bypass channel can have multiple social and lifestyle
benefits such as an urban greenbelt or sports and recreation areas.
One major application of flood bypass channels lies in reinstating meandering channels.
Many previously meandering rivers in Australia have been artificially straightened, thereby
increasing the gradient of the river channel. The effect of this is to increase the conveyance
of the river channel, thereby improving the drainage of the land and reducing the frequency
and duration of overbank flooding. The consequences include deepening and widening and
consequent instability of the main channel and a major decline of ecological function.
Where the original meanders are still available, there is a significant focus on redirecting the
river channel flow back into the meanders, thereby renaturalising the river. Normally,
however, the meanders have a significantly reduced flow capacity, since they have filled in
significantly during the years that the river has been straightened. For this reason, the
straight alignment of the river can be treated as a flood bypass channel during the passage
of large floods. This is achieved by introducing a weir into the straight bypass channel that
overtops when the flow exceeds a predetermined value.
Figure 6.3.1 shows a schematic of the arrangement of a meander, floodway and weir.
The design of this system requires careful consideration of the weir under drowned
conditions. Under low flow conditions the weir directs all river flow around the meander.
However, for a given design flood, the floodway and weir must pass all of the flow in excess
of the capacity of the meander.
Keller (1995) has developed the theoretical analysis of the drowned weir. The analysis has
been verified by experimental studies in the laboratory and with limited field data by Keller et
al. (2012). The design process is assisted by the use of a spreadsheet based program
described by Keller et al. (2012).
In summary, the design of a flood bypass channel must be aimed at preventing channel
instability in the main channel and the diversion channel. Channel design must take into
account the design flows and sediment transport to ensure bed and bank stability. The
hydraulic design of flood bypass channels can be accomplished with standard hydrology and
hydraulics analysis techniques, while determinations of sediment transport through the
diversion are much more difficult.
The choice of gate in a particular situation depends on a number of factors. The vertical
sluice gate is the simplest to construct, but has the disadvantage of requiring an expensive
guide system to transmit the hydraulic thrust to the side-walls. The radial gate is better in this
case because the thrust is carried through the radial arms up to the hinge. Drum gates are
hollow gate sections that float on water and are pinned to rotate up or down. Water is
allowed into or out of the flotation chamber to allow the gate to, respectively, fall or rise.
Flow through an underflow gate may be classified as free outflow or submerged outflow and
the analysis for each is different.
The sluice gate is a case of rapidly varied flow in which large variations in depth and velocity
occur over a short length of channel. Furthermore, because the flow contracts smoothly
under the gate with a minimum of turbulence, energy losses are negligible and the energy
level can be assumed to be the same on both sides of the gate.
For a rectangular channel with a horizontal bed and of uniform width, it can be shown
(Henderson, 1966) that:
q = Cd w √(2 g y1)   (6.3.1)

where y2 = Cc w is the depth of the contracted jet downstream of the gate, and

Cd = Cc / √(1 + Cc w/y1) = Cc √(y1/(y1 + y2))
The contraction coefficient, Cc typically has the value of 0.611. However, for increased
values of the ratio w/y1, the value decreases slightly.
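A minimal sketch of free outflow computed from the relationship reconstructed in Equation (6.3.1) is given below; the upstream depth, gate opening and contraction coefficient are assumed values.

```python
# Hedged sketch of free outflow under a vertical sluice gate, using the relationship
# reconstructed in Equation (6.3.1). The depth, opening and coefficient are assumed values.
import math

G = 9.81   # gravitational acceleration (m/s^2)

def sluice_gate_free_discharge(y1, w, cc=0.611):
    """Discharge per unit width q (m^2/s): q = Cd*w*sqrt(2*g*y1), Cd = Cc/sqrt(1+Cc*w/y1)."""
    cd = cc / math.sqrt(1.0 + cc * w / y1)
    return cd * w * math.sqrt(2.0 * G * y1)

# Example: upstream depth 2.5 m, gate opening 0.4 m -> q of about 1.6 m^2/s.
print(sluice_gate_free_discharge(2.5, 0.4))
```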
The same analysis can be undertaken for the radial gate. In this case, however, the
contraction coefficient, Cc, varies significantly depending on the angle that the gate lip
makes with the horizontal. To an accuracy of 5%, the contraction coefficient is given by:

Cc = 1 − 0.75 (θ/90°) + 0.36 (θ/90°)^2   (6.3.3)

Equation (6.3.3) shows that Cc has the value of 0.61 for θ = 90°, which is the value used
for the vertical sluice gate.
An approximate analysis can be made by treating the case as one of divided flow, in which
part of the flow section is occupied by moving water, and part by stagnant water. While there
will be some energy loss between Sections 1 and 2, a much greater proportion of the total
loss will occur in the expanding flow between Sections 2 and 3. The approximation enters
when it is assumed that all of the energy loss occurs between Sections 2 and 3, where the
momentum equation is utilised.
This procedure, developed in Henderson (1966), leads to two independent equations with
two unknowns – q and y.
The solution for drowned gates in non-rectangular channels follows the same basic
methodology although the computations are more complex.
The methodology has been tested experimentally and shown to predict the flow rate within
5%.
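One way of setting up and solving the two simultaneous equations is sketched below: energy is conserved from the upstream section to the contracted jet and momentum through the downstream expansion, and eliminating q² leaves a quadratic in the unknown depth y. The formulation follows the divided-flow approach described above; the function name and the numerical inputs are assumptions for illustration only.

```python
# Hedged sketch of the divided-flow calculation: energy is conserved from the upstream
# section (depth y1) to the contracted jet (thickness y2 = Cc*w, overlain by a pool of
# depth y), and momentum is conserved through the expansion to the tailwater depth y3.
# Eliminating q**2 between the two equations leaves a quadratic in y. The function name
# and the numerical inputs are illustrative assumptions.
import math

G = 9.81

def submerged_gate(y1, y3, w, cc=0.611):
    """Return (q, y): unit discharge and the pool depth just downstream of the gate."""
    y2 = cc * w
    A = 1.0 / y2 - 1.0 / y3            # appears in the momentum equation (jet to tailwater)
    B = 1.0 / y2 ** 2 - 1.0 / y1 ** 2  # appears in the energy equation (upstream to jet)
    # Quadratic in y: y**2 - (4A/B)*y + (4A*y1/B - y3**2) = 0
    b = -4.0 * A / B
    c = 4.0 * A * y1 / B - y3 ** 2
    y = (-b + math.sqrt(b ** 2 - 4.0 * c)) / 2.0
    q = math.sqrt(2.0 * G * (y1 - y) / B)
    return q, y

# Example: y1 = 3.0 m, tailwater y3 = 2.0 m, opening w = 0.5 m -> q of about 1.6 m^2/s,
# noticeably less than the free-outflow discharge for the same upstream depth.
print(submerged_gate(3.0, 2.0, 0.5))
```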
In this context, the structure design follows the same procedure as developed earlier. In
particular, broad-crested weirs are robust structures that span the full width of the channel
and are normally constructed of reinforced concrete. Especially for flow control in relatively
large rivers, they are preferred over sharp-crested weirs, which can be easily damaged.
Furthermore, because it is a critical depth meter, the broad-crested weir has the advantage
that it operates effectively with higher downstream water levels than a sharp-crested weir.
For relatively small open channels, the long-throated flume is a better alternative than a weir
and is capable of measuring relatively large flows. It is basically a width constriction that, in
plan, comprises a rounded converging section, a parallel throat section, and a diverging downstream
section. It is typically constructed of concrete.
The design of a long-throated flume requires some compromise between ensuring that the
throat is narrow enough to control the flow without submergence, but not so narrow that it
creates unacceptable afflux.
As noted in this chapter on Flow Measurement Structures, the analysis of both the broad-
crested weir and the long-throated flume is identical as both rely on the relationship between
the upstream water level (which may be measured) and critical depth within the constricted
section, which is a known function of the flow rate. Thus, a unique relationship between flow
rate and upstream water surface elevation can be determined.
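The idealised form of that relationship, neglecting energy losses and the velocity of approach, is sketched below for a rectangular throat; the head, throat width and unit discharge coefficient are illustrative assumptions, and a real structure would be rated with the appropriate calibrated coefficients.

```python
# Hedged, idealised sketch of that rating for a rectangular throat, neglecting energy
# losses and the velocity of approach; the coefficient, head and width are assumptions.
import math

G = 9.81

def critical_flow_rating(H, b, cd=1.0):
    """Idealised discharge (m^3/s): Q = cd * b * sqrt(g) * (2/3 * H)**1.5."""
    return cd * b * math.sqrt(G) * (2.0 / 3.0 * H) ** 1.5

# 0.30 m of upstream head over a 2.0 m wide throat gives roughly 0.56 m^3/s before any
# calibrated discharge-coefficient or approach-velocity corrections are applied.
print(critical_flow_rating(0.30, 2.0))
```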
3.3.5. Spillways
Flow behaviour on spillways has been investigated extensively by the US Army Corps of
Engineers Waterways Experiment Station since the early 1950s (USACE: Water Ways
Experiment Station, 1952). Hydraulic design charts and a Manual of Practice (USACE, 1995)
have been prepared, enabling the design of a spillway profile and a water surface profile for
a given design flood condition. However, the design charts are only applicable for certain
types of spillway profiles and pier configurations and cover a limited range of flood levels.
In the past, this limitation was overcome by building scaled physical models to investigate
the flow behaviour. These models tended to include both the spillway and any associated
energy dissipation structure. Physical models are considered later in the chapter. More
advanced mathematical models may also be appropriate, which are also discussed later in
this Book.
A spillway is ideally designed so that, when operating at its design head, the pressure at the
spillway surface is atmospheric. Consequentially, when the reservoir level is below or above
the design flood level, the pressure over the spillway will be above or below atmospheric
respectively. In the latter case, the negative pressures may create unstable conditions on the
spillway surface and damage due to cavitation.
Existing dams and spillways in Australia were designed and constructed to handle estimated
design floods. Since their construction, the increase to and reanalysis of, hydrological data
have led in many cases to a revision upwards of the design floods, requiring major upgrades
to spillway capacity.
To select optimum upgrade design, many dam owners have needed to consider the most
cost-effective way to analyse the behaviour of the spillway flow under conditions of
increased maximum flood. In many cases, as in the original design, use has been made of
physical scale models.
With appropriate recognition of scale effects, physical models have been the upgrade design
method of choice. However, the use of numerical methods is attractive in terms of lower cost
and substantially reduced preparation time. Additionally, results can be obtained throughout
the flow domain rather than at selected monitoring locations.
3.3.5.1. Design
A spillway is sized to provide the required capacity, usually the entire spillway design flood,
at a specific reservoir elevation. This elevation is normally at the maximum operating level or
at a surcharge elevation greater than the maximum operating level. Hydraulic design of a
spillway usually involves four conditions of flow, each occurring at a different location as
follows:
1. Subcritical flow in the spillway approach, initially at a low velocity, accelerating, however,
as it approaches the crest.
4. Transitional flow at or near the downstream end of the chute where the flow must
transition back to subcritical, typically with the dissipation of large amounts of energy.
When a relatively large storage capacity can be obtained above the normal maximum
reservoir elevation by increasing the dam height, a portion of the flood volume can be stored
in this reservoir surcharge space and the size of the spillway can be reduced. The use of a
surcharge pool for passing the spillway design flood involves an economic analysis that
considers the added cost of a dam height compared to the cost of a wider and/or deeper
spillway. When a gated spillway is considered, the added cost of higher and/or additional
gates and piers must be compared to the cost of additional dam height.
When an un-gated spillway is considered, the cost of reduced flood-control benefits due to a
reduction in reservoir storage must be compared to the cost of additional dam height.
Chute design and stilling basin design are considered in particular in the following sections.
The flow profile down the spillway is computed from the energy equation, written in terms of
the total energy line elevation at a section:

TEL = z + y1 + V1^2/(2g)   (6.3.4)

where z is the elevation of the spillway surface, y1 the flow depth and V1 the mean velocity.
Strictly speaking the second term on the right hand side should be replaced by y1cosθ where
θ is the slope of the channel bottom. Additionally, the form of Equation (6.3.4) assumes that
the pressure distribution at the point under consideration must be hydrostatic. This is a valid
assumption if vertical accelerations are small and the bed slope is mild. A non-hydrostatic
pressure distribution will occur whenever the value of cos²θ departs materially from unity,
such as on a steep spillway slope. This does not mean that the energy equation cannot be
used on a steep slope. It does mean, however, that the designer must recognise that the
values derived from the energy equation become increasingly inaccurate as the value of
cos²θ departs further from unity. This condition describes one of the basic reasons that
physical model studies may be required when designing a spillway.
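The size of the departure is easily checked, as the short sketch below indicates for a few assumed chute slopes.

```python
# Quick illustrative check of how far cos(theta)**2 departs from unity on assumed slopes.
import math

for slope_deg in (5, 15, 30, 50):
    theta = math.radians(slope_deg)
    print(slope_deg, round(math.cos(theta), 3), round(math.cos(theta) ** 2, 3))
# cos^2 is about 0.99 at 5 degrees but only about 0.41 at 50 degrees, so on a steep
# spillway chute the hydrostatic assumption behind Equation (6.3.4) is clearly violated.
```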
When applying Equation (6.3.4) to spillway design, correct account should be taken of
energy loss on the spillway surface. This has three components - boundary roughness
(friction), turbulence resulting from boundary alignment changes (form loss), and boundary
layer development. Boundary roughness is normally dealt with using a standard friction loss
equation such as Manning’s equation. For information on the other two loss terms, reference
should be made to (USACE, 1995).
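As an indicative illustration of the friction component only, the following Python sketch applies Manning's equation to estimate the friction head loss over a reach of chute. The function name, example values and the wide-channel approximation (hydraulic radius taken as the flow depth) are assumptions for illustration, not a design procedure.

```python
def chute_friction_loss(q_unit, depth, n, reach_length):
    """Friction head loss over a chute reach from Manning's equation (SI units).

    Sketch only: assumes a wide rectangular section, so the hydraulic radius
    is approximated by the flow depth, and applies a single average friction
    slope over the reach.
    """
    velocity = q_unit / depth                               # mean velocity (m/s)
    hydraulic_radius = depth                                # wide-channel approximation
    friction_slope = (n * velocity) ** 2 / hydraulic_radius ** (4.0 / 3.0)
    return friction_slope * reach_length                    # friction head loss (m)

# Example: 10 m^3/s per metre width, 1.5 m deep flow, concrete chute (n ~ 0.014)
print(round(chute_friction_loss(10.0, 1.5, 0.014, 50.0), 3))
```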
Hydraulic jump stilling basins are structures located downstream of chutes, gates and
spillways to dissipate excess kinetic energy. The dimensions of these structures depend on
the length of the hydraulic jump and the conjugate depth of the jump.
Peterka (1978) classified the hydraulic jump into five categories based on the value of the
upstream Froude number. On the basis of extensive experimental studies, he developed four
types of hydraulic jump stilling basins. These are now known as the USBR Type 1, 2, 3, and
4 Basins. A major focus of the development of these basins was to reduce the size of the
structure by forcing the jump to occur using blocks and end sills within the basin. In addition
to localising the hydraulic jump, the Type 4 basin is designed for the special purpose of wave
suppression and is not considered herein.
The Type 1 Basin is a classic hydraulic jump basin without baffle blocks or an end sill. It is a
relatively large structure and is suitable only for small upstream Froude numbers.
When the Froude number is greater than 4.5, Type 2 or Type 3 Stilling Basins are
recommended. The Type 2 basin incorporates a series of chute blocks at the upstream end
of the basin to stabilise the start of the jump and to feather the incoming jet into several jets.
At the downstream end, a continuous or dentated sill is present, designed to force the jump
to occur within the basin and to prevent it from moving downstream. Figure 6.3.5 shows a
schematic of the Type 2 Basin. In this figure, y2 is the required conjugate downstream depth.
The length of the Type 2 basin is less than the length for the Type 1 basin.
The Type 3 stilling basin is similar to the Type 2 basin but baffle blocks are included to
provide additional energy dissipation by direct impact, increased turbulence and consequent
mixing of the high velocity incoming jets into the water body of the basin. This results in a
required basin length that is up to 60% shorter than a Type 1 basin for the same flow
conditions. However, it should be noted that the presence of the baffle blocks can create
conditions of cavitation in their vicinity with consequent severe structural damage. For this
reason the Type 3 basin should not be used in conditions where the incoming velocity
exceeds 16 m/s. Figure 6.3.6 shows a schematic of a Type 3 Basin.
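The upstream Froude number and conjugate (sequent) depth referred to above can be computed directly for a rectangular section. The following Python sketch evaluates them and applies the screening limits quoted in this section (Fr > 4.5 for the Type 2 and Type 3 basins, and the 16 m/s incoming velocity limit on the Type 3 basin); the function names and example values are illustrative only.

```python
import math

def conjugate_depth(y1, v1, g=9.81):
    """Sequent depth of a hydraulic jump in a rectangular channel."""
    fr1 = v1 / math.sqrt(g * y1)                       # upstream Froude number
    y2 = 0.5 * y1 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)
    return y2, fr1

def indicative_basin(fr1, v1):
    """Screening suggestion only, using the limits quoted in the text."""
    if fr1 <= 4.5:
        return "Type 1 basin (suitable only for small upstream Froude numbers)"
    if v1 > 16.0:
        return "Type 2 basin (Type 3 not recommended above 16 m/s: cavitation risk)"
    return "Type 2 or Type 3 basin"

y2, fr1 = conjugate_depth(y1=0.8, v1=12.0)
print(round(y2, 2), round(fr1, 2), indicative_basin(fr1, 12.0))
```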
3.4. Levees
Levees are embankments that are constructed to artificially increase the capacity of a
channel, confining high flows that otherwise would overtop the banks and spread over the
floodplain. Levees are key components of a flood control plan to protect communities and
agricultural areas within the floodplain. Levees are used in conjunction with reservoirs,
floodways, control structures and various channel modification activities to reduce and
control the extent and duration of flooding.
The design elevation of levees is based on containing a design discharge, generally for a
short period of time. The levee cross section is generally designed as a trapezoid, with an
access road running along the levee crown. To control seepage, a long, tapering berm may
be extended on the landside of the levee as subsequently discussed. Fill material for levees
is generally obtained locally from borrow areas adjacent to the riverside of the embankment.
Although the local materials may not be ideally suitable for construction, economic necessity
normally dictates their use.
On streams without levees, flood flows spread out over the floodplain. The floodplain acts as
storage for the additional flows, lowering the peak of the flood hydrograph. The construction
of levees decreases the floodplain storage, resulting in an increase of the peak of the
hydrograph. Furthermore, because levees typically confine river flows to a narrower cross
section, water elevations are higher during flood flows. If levees are not set back from the
main channel, the hydraulic connectivity between the river and the floodplain is lost, thus
confining flows and putting more energy into the flow. The required levee height can therefore be greater than it would be if the levees were set further back on the floodplain.
Channel instabilities may arise from streams with levees because degradation of the bed
and banks may occur. Sometimes, aggradation may occur due to the increased sediment
load in the main channel and the lack of available floodplain sediment storage. The precise
response is complex and is a function of the width of levees, the effects on duration of flows,
and other factors.
Seepage is a major problem with levees during time of high water. When water is contained
on one side with the other side being dry, a head differential exists across the levee. This
tends to force water through the porous soil, eventually seeping out to the landward side of
the levee. This seepage carries both fine and coarse particles through the levee. This
internal erosion of the levees can lead to piping through the levee and catastrophic failure.
To prevent excessive seepage, impervious barrier materials such as clay can be built into
the levee. Flows from tributaries that are cut off from the river system due to levees must be
carefully assessed to prevent flooding on the landward side of the levee. Pumping stations
can be applied to divert tributary flows.
On streams without levees, flows periodically flow onto the floodplain depositing sediment,
flushing riparian aquatic environments, and generally providing valuable habitat for aquatic
organisms and waterfowl. The flora and fauna are adapted to periodic flooding and the
unique environment that it creates. As noted above, levees act as a barrier for overbank
flows. Confining stream flows within a levee system creates a drier environment on the landside of the levee system and a wetter environment on the streamside.
The drier environment results in changes in both flora and fauna that occupy the floodplain.
After a levee system is constructed, upland trees and vegetation colonise the floodplain. The
lands between the levee and the stream bank will experience more prolonged flooding with
more extreme fluctuations in water level. This may inhibit the growth of ground cover, thus
reducing the available habitat for ground-dwelling mammals (Fredrickson, 1979). Frequently,
for reasons of economy, material used to construct the levees is sourced from areas within
the floodplain, resulting in vegetation removal and loss of habitat. The flat slopes used for
levees in rural areas result in large land requirements for the embankments and berms.
To offset changes in riparian habitat, consideration can be given to the habitat provided by
the levees themselves and the adjacent borrow pits. Traditionally, the vegetation on levees is
kept to a minimum. However, with proper maintenance, certain species of shrubs and plants
can be allowed to grow without affecting the integrity of the levee. However, this vegetation
may provide habitat for burrowing animals that must be controlled.
Borrow pits remaining from levee construction can serve as valuable aquatic habitat.
Normally, the pits will fill with rainwater or groundwater after construction. Riverside borrow
pits will exchange water with the river system, thus recharging the pit with fish and other
aquatic organisms. In this way, borrow pits partially compensate for the loss of aquatic
habitat in the floodplain. Additionally, siting levees further from the channel will conserve
wetland environments between the levee and the river.
Levees must be periodically inspected and maintained to provide the designed degree of
flood protection. Conditions affecting the integrity of the levee include erosion of the banks,
seepage, and damage from burrowing animals. Vegetation planted on the levees for
aesthetic reasons should be well maintained. Vegetation that may affect the integrity of the
levee should be removed.
Seepage beneath the levee foundations is one of the principal causes of levee failure.
Without control, this seepage may result in excessive hydrostatic pressures beneath an
impervious top stratum on the landside, sand boils, and/or piping beneath the levee itself.
Seepage problems tend to be most acute in situations where the levee is built above a
pervious substratum, which extends both landward, and riverward of the levee and where a
relatively thin top stratum exists on the landside of the levee.
Among seepage control measures are cutoffs, riverside blankets, and landside seepage
berms.
3.4.1. Cutoffs
A cutoff beneath a levee to block seepage through pervious foundation strata is the most
positive means of eliminating seepage problems. A cutoff may consist of an excavated
trench backfilled with compacted earth or slurry. Trenches are usually located near the
riverside toe.
To be effective, a cutoff must penetrate at least 95 percent of the thickness of the pervious strata. For this reason, cutoffs are rarely economical where they must
penetrate more than about 12 m. Steel sheet piling can significantly reduce the possibility of
piping of sand strata in the foundation, but is not always entirely watertight due to leakage at
the interlocks between individual sheet piles.
Where seepage beneath the levee is expected to be a problem, riverside borrow operations
should be limited in depth to prevent breaching the impervious blanket. If there are limited
areas where the blanket becomes thin or disappears entirely, the blanket can be remediated
by placing impervious materials in these areas. The effectiveness of the blanket depends on
its thickness, length, distance to the levee riverside toe, and permeability and can be
evaluated by flow-net or approximate mathematical solutions (USACE, 2000). Protection of
the riverside blanket against erosion is important.
Berms are relatively simple to construct and require very little maintenance. They frequently
improve and reclaim land as areas requiring remediation treatment for seepage are often low
and wet. Because they require additional fill material and space, they are used primarily with
agricultural levees where land use pressures are less severe than in urban areas.
Subsurface profiles must be carefully studied in selecting berm widths. For example, where
a levee is founded on a thin top stratum and thicker clay deposits lie a short distance
landward, as shown in Figure 6.3.7, the berm should extend far enough landward to lap the
thick clay deposit, regardless of the computed required length. Otherwise, a concentration of
seepage and high exit gradients may occur between the berm toe and the landward edge of
the thick clay deposit.
Figure 6.3.7. Example of incorrect and correct berm length according to existing foundation
conditions (USACE, 2000)
In summary, levees are embankments that artificially increase the capacity of a channel,
confining high flows that otherwise would overtop the banks and spread over the floodplain.
They are key components of a flood control plan to protect communities and agricultural
areas within the floodplain.
Seepage is a major problem with levees during high water and is one of the principal causes
of levee failure. When water is contained on one side with the other side being dry, a head
differential exists across the levee. Without control, this seepage may result in excessive
hydrostatic pressures beneath an impervious top stratum on the landside, sand boils, and/or
piping beneath the levee itself.
Among seepage control measures are cutoffs, riverside blankets, and landside seepage
berms.
3.5. Culverts
3.5.1. Culvert Flow Principles
The term culvert is normally applied in engineering practice to any large underground pipe
especially where used in relatively short lengths to convey streams or flood water under an
embankment. The design of culverts has been the subject of considerable research and
considerable misunderstanding. Despite this, the culvert is such a common structure that
analysis and design have become quite standardised. The hydraulic analysis and
subsequent selection of the proper culvert size is aided by charts and nomographs prepared
for the specific shape and type of culvert. These design procedures incorporate directly such
factors as the entrance loss coefficient for a particular pipe shape and inlet configuration.
The emphasis herein is on the basic analysis of culverts and is, thus, more general than
direct recourse to design charts. It is accepted, of course, that engineers will continue to use
the available charts and nomographs. However, the material presented herein is aimed at
giving a better understanding of culvert flow principles and will make the use of standard
charts and nomographs clearer and more reliable.
Because a culvert is a closed conduit, it has a larger wetted perimeter than a channel.
Accordingly, the average energy gradient through the culvert will be steeper than in the
equivalent length of channel. In general, the only way that the steepening of the hydraulic
gradient through the culvert can occur is by raising the water surface elevation at the
upstream side of the embankment.
It is apparent, then, that the culvert cross-sectional area and hydraulic properties are of great
importance. It may be possible to evaluate economically the consequences of a headwater
rise against the cost of the culvert and embankment height. Under these circumstances the
system with the least total cost of structure and flooding should be selected.
The factor subject to most misunderstanding in culvert design is that arising from the
determination of the point of control – either inlet or outlet control. In some cases, the
operating control is not clear and careful calculations are necessary to determine both the
type of control and the various hydraulic characteristics.
The hydraulic operation of culverts is complex and often difficult to predict. However, once
the type of operation is established, the analysis may proceed according to well-defined
principles. The factors affecting the discharge in a culvert are the following:
b. The combined effect of entrance, length, slope and roughness of the culvert barrel.
The flow characteristics and, hence, the discharge capacity of a culvert are determined by
the location of the control section. In general, the discharge is controlled either at the culvert
entrance or at the outlet and is designated inlet control and outlet control respectively. Inlet
control will exist as long as the ability of the culvert barrel to carry the flow exceeds the ability
of water to enter the culvert through the inlet. Outlet control will exist when the ability of the
culvert barrel to carry water away from the entrance is less than the flow that can enter the
inlet. The location of the control section may shift as the relative capacities of the entrance
and barrel sections change with increasing or decreasing discharge.
Inlet control: With the inlet control operation, the discharge is independent of the pipe
length, slope and roughness of the pipe wall. The discharge depends only upon the
headwater elevation above the invert at the entrance, the inlet size, and the inlet geometry.
Although variations in factors affecting the culvert barrel will affect flow characteristics within the barrel, they will normally have no effect on the total discharge. The only exception occurs if
the variations in barrel design are sufficiently severe to cause the control section to shift to
the outlet.
A culvert operating under inlet control will always flow part full for at least part of the culvert
length. In many cases, particularly at high discharges, the headwater will submerge the
entrance of the culvert. In these cases, flow contraction occurring at the entrance will limit
the discharge. It should be noted that roughness, slope and length are not influential in
determining the discharge capacity of a culvert operating with inlet control, but are important
in determining outlet velocities and the discharge at which the operation mode changes from
inlet control to outlet control.
Outlet control: Under outlet control, the total discharge is affected by all hydraulic factors
upstream of the outlet. These factors include the headwater elevation, entrance geometry,
barrel size, wall roughness, barrel length and slope. The tailwater elevation is a factor as
long as it is above the pipe outlet.
Culverts flowing full throughout their length are always under outlet control. However, as will
be shown, a culvert flowing part full may operate under either inlet control or outlet control.
Hydraulic Analysis: For computational convenience, flow through culverts is divided into
six categories based on the relative heights of the head and tailwater and, for three of the
categories, on barrel slope. The six types of flow are shown schematically in Figure 6.3.8
and their respective characteristics are summarised in Table 6.3.1. In the table, D is the
maximum vertical dimension of the culvert, $y_1$ is the depth of flow in the approach section, $y_c$ is the critical depth of flow, and $y_4$ is the tailwater depth of flow.
The limit for $y_1/D$ of 1.5 recommended in Table 6.3.1 is not universally accepted and several texts suggest a limiting value of 1.2. This lower limit is probably more appropriate since it allows for the effects of factors such as wave motion or transitory debris blockage.
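A simple check of this ratio is shown in the Python sketch below; the function name and the default limit of 1.2 (the lower of the two values discussed above) are illustrative assumptions only.

```python
def headwater_ratio_check(y1, culvert_height, limit=1.2):
    """Compare the approach depth to culvert height ratio (y1/D) with a limit.

    The guide quotes a limit of 1.5 from Table 6.3.1 but notes that several
    texts suggest 1.2 to allow for wave action and transitory debris blockage;
    the lower value is adopted here as the default.
    """
    ratio = y1 / culvert_height
    return ratio, ratio <= limit

print(headwater_ratio_check(y1=1.6, culvert_height=1.2))  # ratio ~1.33, exceeds 1.2
```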
Full details of the analysis of each flow type are presented by French (1985).
There are several practical factors associated with culvert design which may be of equal
importance as the hydraulic analysis. Some of these aspects are discussed in the following.
A variety of methods are available for improving the inlet conditions and these include a
steep throat, a drop inlet, wingwalls, a hood and bevelled edges. The shape of the soffit is
the most important and that of the invert the least important, because the flow at the invert is
horizontal.
Some of these inlet improvements are illustrated schematically in Figure 6.3.9. The simplest
improvement is a vertical headwall above the culvert entrance, thereby eliminating the re-
entrant angle in the case of a battered embankment. The soffit of the inlet can be bevelled
as shown in Figure 6.3.9(a). It is recommended that the bevel be at least 10% of the culvert
height and at between 33° and 45° to the culvert axis (Portland Cement Association, 1964).
This can increase the flow by up to 20%.
Full details and design charts and tables for improved culverts inlets may be found in a
number of publications (e.g. Portland Cement Association (1964), U.S. Department of
Transportation (1972)).
Despite the common practice of making inlet and outlet structures identical, it should be
noted that the two structures serve different purposes and, therefore, logically should be
treated separately.
The simple projecting outlet is sufficient when flow velocities are low and the fill does not
require special protection. The endwall structure alone acts to support the end of the culvert
and as a retaining wall for the embankment. The wingwall helps to transition the culvert flow
smoothly into the downstream channel and protects the endwall so that it may continue to
function in its original capacity. A concrete apron serves to provide protection to the endwall
structure by removing the point of potential erosion well away from the endwall foundation,
thereby ensuring the stability of the structure.
Where wingwalls are used for bank protection and not merely as retaining walls, a concrete
apron should always be provided. The absence of such an apron may encourage
channelling and undercutting along the wingwall.
Where outlet velocities are particularly high, special energy dissipation structures may be
required.
Scour at culvert outlets is not necessarily only due to concentrated flow issuing from the
culvert barrel. Recirculating eddies, associated with a downstream channel which is
significantly wider than the culvert, can cause potentially serious scour damage to the
embankment fill. A further form of scour at culvert outlets is channel degradation which may
occur if the culvert does not permit the passage of sediment from upstream.
Whatever the cause, the process of erosion may be associated with the excavated material
being redeposited in the channel some distance downstream from the point of scour. It is
entirely possible that with time, a shoal will form capable of causing excessively high
tailwater depths during periods of high flow. Such a process should always be considered
because high tailwater depths may not necessarily work to the advantage of culvert
operation.
Erosive velocities vary widely, depending upon the characteristics of the channel material,
the depth of flow in the channel, and the velocity distribution. Erosive velocity limits for
various types of soils are published in a number of texts (e.g. Portland Cement Association
(1964)). Such published values should, however, be treated with caution because of the vast
variations in naturally occurring materials.
It is apparent that the discharging jet will spread beyond the culvert and energy dissipation
additional to bed friction will result from the interaction between the jet and the tailwater. The
latter phenomenon is manifested in the zones of recirculation shown in Figure 6.3.11.
One method of analysis for this case has been proposed by Keller (1986). He drew a
comparison between this phenomenon and that due to the interaction between shallow flood
plain and deep main channel flows where the turbulent shear stresses are of the same order
of magnitude as an equivalent wall shear stress if the interaction region were replaced by a
solid wall. The present case can then be solved by assuming that the culvert outflow is
contained within diverging vertical walls of roughness equal to that of the bed. A divergence
angle of about 20° is indicated by the work of Rouse et al. (1951) and List and Imberger
(1973). The modification of the outlet velocity with distance from the outlet can then be
calculated using a water surface profile program such as HEC-RAS and appropriate invert
protection determined.
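A crude first estimate of the velocity decay can be made without a profile model, as in the Python sketch below. It treats the outflow as spreading between vertical walls at a total included angle of about 20° (taken here as 10° each side, which is an interpretation of the figure quoted above) with the flow depth held at the tailwater depth; the function name and example values are assumptions, and a water surface profile program such as HEC-RAS should be used for design.

```python
import math

def outlet_velocity_with_distance(q, barrel_width, tailwater_depth, x,
                                  total_divergence_deg=20.0):
    """Crude estimate of mean velocity downstream of a culvert outlet.

    Sketch of the diverging-wall idea described above: the jet is assumed to
    spread linearly within the stated divergence angle, and the flow depth is
    held at the tailwater depth.
    """
    half_angle = math.radians(total_divergence_deg / 2.0)
    spread_width = barrel_width + 2.0 * x * math.tan(half_angle)  # width at distance x (m)
    return q / (spread_width * tailwater_depth)                   # mean velocity (m/s)

# Example: 6 m^3/s leaving a 1.2 m wide barrel into 0.6 m of tailwater
for x in (0.0, 5.0, 10.0):
    print(x, round(outlet_velocity_with_distance(6.0, 1.2, 0.6, x), 2))
```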
HEC-RAS incorporates a module for the accurate design of culverts. Although the detail is
outside the scope of this section, some comments are provided in the following.
Data for the culvert structure is simply entered on two templates in HEC-RAS – the Deck/
Roadway Editor for roadway information and the Culvert Data Editor for the physical data
defining the culvert.
Although not required for culvert computations, the modeller may choose to enter
embankment side slopes for the upstream and downstream embankment faces in the Deck/
Roadway Data Editor. The sloping embankment is used for graphical purposes only on the
cross-section plots.
The primary information for inlet and outlet control analyses is entered in the Culvert Data
Editor. For inlet control, these data are the inlet geometry with the corresponding chart and
scale numbers. For outlet control computations, the entrance loss coefficient is required
along with the Manning’s n values for different portions of the culvert cross-section. A table is
available to assist the modeller in choosing an appropriate entrance loss coefficient.
The exit loss coefficient defaults to the value 1, but the modeller has the option to adjust this
parameter. A tail water elevation is not required because it is computed by HEC-RAS as part
of the downstream water surface profile calculations.
Despite the simple appearance of a bridge, its hydraulics is by no means simple. In addition
to the potential for the constricted flow through a bridge site to cause flooding, there is a
second issue of importance in the assessment of bridges and this is the issue of scour.
Bridges continue to fail through scour of piers and/or abutments and it is vital to be able to
determine the magnitude of this scour. Both the energy loss at a bridge site and the potential
for scour are complex topics and special techniques have been developed to cope with their
difficulties.
It should be noted that conservatism in the design of bridge waterways for their flooding
potential requires an under-estimate of the magnitude of scour since this will minimise the
size of the bridge opening, maximise the velocity through the bridge, and maximise the
energy loss across the bridge. Conversely, conservatism in assessing the structural integrity
of the bridge requires an over-estimate of the magnitude of scour. Thus, it is important to
keep in mind the reason for undertaking the hydraulic analysis of the bridge site when
assessing the results of such an analysis.
In this chapter, the hydraulics of flow through bridges is discussed, with special emphasis on
energy losses and scour. A brief discussion is also presented on the bridge analysis routines
contained within the HEC-RAS computer program.
Bridge losses within the expansion reach downstream (sections 1 to 2) can be relatively
large. The region is characterised by recirculation zones on either side that are maintained
by extracting energy from the mean flow. Energy losses through the contraction (sections 3
to 4) are relatively smaller because the channel length over which the change in cross-
section occurs is less and the recirculation zones are smaller.
Within each of these regions the energy loss is normally calculated as the sum of friction
losses and expansion or contraction losses. Friction and contraction losses between sections 3 and 4 are calculated in the same way as friction and expansion losses between sections 1 and 2. Friction losses are typically determined using standard step profile equations.
Contraction and expansion losses are described in terms of a coefficient times the absolute
value of the change in velocity head between adjacent cross sections. For a detailed
discussion on selecting contraction and expansion coefficients at bridges, the user is
referred to Chapter 5 of the HEC-RAS Hydraulic Reference Manual.
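The expansion and contraction loss computation described above reduces to a single line, as shown in the Python sketch below. The coefficient must be selected from the HEC-RAS Hydraulic Reference Manual for the case at hand (values of about 0.3 for a typical expansion and 0.1 for a typical contraction are often quoted); the function name and example values are illustrative only.

```python
def transition_loss(v_upstream, v_downstream, coefficient, g=9.81):
    """Expansion or contraction loss between adjacent cross sections:
    a coefficient times the absolute change in velocity head (metres)."""
    return coefficient * abs(v_upstream ** 2 - v_downstream ** 2) / (2.0 * g)

# Expansion downstream of a bridge: 2.5 m/s through the opening slowing to 1.0 m/s
print(round(transition_loss(2.5, 1.0, coefficient=0.3), 3), "m of head loss")
```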
Within the bridge structure itself (between sections 2 and 3) the computation of the energy
loss can be simple or complex, depending on the flow characteristics. A low flow, where the
water surface does not interact with the bottom chord of the bridge, may be analysed using a
simple standard step procedure. On the other hand, interaction of the water surface with the
bridge deck structure may lead to a combination of low flow and weir flow or pressure flow
and weir flow. These cases require more complex modelling techniques that are outside the
scope of this section. Modelling details may be found in the HEC-RAS Hydraulic Reference
Manual.
In most cases of flow through bridge sites, the river flow downstream and upstream is sub-
critical. Under these circumstances, the hydraulic effect of the bridge is to increase the water
level upstream of the bridge, with no effect downstream. On the other hand, in the rare
cases where the natural flow is characterised by super-critical conditions, the effect of the
bridge manifests downstream. In such cases, upstream water levels are affected only if the
bridge constriction is sufficiently severe to transform the supercritical flow to subcritical.
A perched bridge is one for which the road approaching the bridge is at the flood plain
ground level, and only in the immediate area of the bridge does the road rise above ground
level to span the water course. This condition is shown schematically in Figure 6.3.13.
A typical flow situation with this type of bridge is low flow under the bridge and overbank flow
around the bridge. Because the road approaching the bridge is usually not much higher than
the surrounding ground, the assumption of weir flow is usually not justified.
For this reason, perched bridges should generally be modelled using the energy-based
method, especially when a large percentage of the total flow rate is carried in the overbank
areas.
A submerged bridge (or low water bridge) is designed to accommodate only low flows under
the bridge. Flood flows are carried over the bridge and road. A typical example of a
submerged bridge is shown in Figure 6.3.14.
When modelling this bridge for flood flows, the anticipated solution would be a combination
of pressure and weir flow. However, with most of the flow passing over the top of the bridge,
the correction for submergence can introduce considerable error. For this reason, if the
tailwater level is likely to be relatively high, the energy based method of analysis is
recommended.
For low flow, skewed crossings with angles up to 20 degrees show no objectionable flow
patterns. However, for larger angles of skew, the flow efficiency decreases. For reasonably
small flow contractions, the projected length is adequate for assessing the impact of skew up
to skew angles of 30 degrees.
With reference to Figure 6.3.15, the projected width of the bridge opening, perpendicular to
the flow lines, is given by:
$B_p = B \cos\theta$ (6.3.5)
The pier information must also be adjusted to account for the skew of the bridge. The
program HEC-RAS assumes that the piers are continuous, as shown in Figure 6.3.15.
Thus, the projected width of the piers, perpendicular to the flow lines, is given by:
The construction of divided highways often leads to the common modelling problem of
parallel bridges. The situation is shown schematically in Figure 6.3.16.
Depending on the spacing between the two bridges, the loss may be between 1.3 and 2
times the loss for a single bridge. If the bridges are very close to each other and the flow
cannot expand between the bridges, the system can be dealt with as a single structure. If
both bridges are modelled, care should be taken in depicting the expansion and contraction
of flow between the bridges. Expansion and contraction rates should be based on the same
procedures as single bridges.
Some bridges are characterised by more than one opening for flood flow, especially over a
very wide flood plain. Figure 6.3.17 shows the situation schematically and illustrates the
nomenclature used in the following discussion.
$Q_T = \sum Q_I$ (6.3.7)
and
Mutual satisfaction of Equation (6.3.7) and Equation (6.3.8) ensures that the computed energy levels at the upstream stagnation point, where the flow separates, are equal; this condition defines the correct apportioning of flow $Q_I$ through each opening.
The downstream stagnation point defines where the flow merges, and the flow paths through all of the openings are assumed to have an equal energy level at this point, ie. at the downstream boundary.
In HEC-RAS, up to seven openings (of combinations of open conveyance area, bridges and
culverts) can be defined at any one river crossing. The program automatically locates the
stagnation points within the range defined by the user unless there are physical stagnation
points such as bridge abutments or islands for example.
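The principle behind Equation (6.3.7) and Equation (6.3.8) can be illustrated with a simple two-opening flow split, in which the share of the total discharge passing through each opening is adjusted until the computed upstream energies match. In the Python sketch below the two energy functions are purely hypothetical stand-ins for the backwater computation that HEC-RAS would perform; all names and values are assumptions.

```python
def split_flow_two_openings(q_total, energy_a, energy_b, tol=1e-3, iterations=60):
    """Apportion a total flow between two openings so the upstream energy
    computed through each opening is (approximately) equal.

    energy_a and energy_b are caller-supplied functions returning the upstream
    energy elevation for a trial discharge through that opening. Bisection on
    the flow through opening A enforces Q_A + Q_B = Q_T while equalising the
    two energies, mirroring Equations (6.3.7) and (6.3.8).
    """
    lo, hi = 0.0, q_total
    q_a = 0.5 * q_total
    for _ in range(iterations):
        q_a = 0.5 * (lo + hi)
        diff = energy_a(q_a) - energy_b(q_total - q_a)
        if abs(diff) < tol:
            break
        if diff > 0.0:        # opening A is working too hard; send less flow there
            hi = q_a
        else:
            lo = q_a
    return q_a, q_total - q_a

def e_main(q):       # illustrative rating for the main opening (hypothetical)
    return 10.0 + 0.002 * q ** 1.5

def e_relief(q):     # illustrative rating for a smaller relief opening (hypothetical)
    return 10.0 + 0.010 * q ** 1.5

print(split_flow_two_openings(500.0, e_main, e_relief))
```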
Alternatively, the divided flow approach can be used, whereby the flow paths through each opening are modelled separately with manual adjustment of the flow distribution (similar to modelling a flow split upstream of the openings and the flows recombining downstream of the openings). With this method, the cross sections need to be physically split along assumed stagnation points using the ineffective area option. This method is most suited to anabranches and breakaway flow paths in wide floodplains with multiple road crossings. A typical example is shown in Figure 6.3.18.
Scour occurs at bridges because of changes to the natural flow conditions and is a serious
concern. It can be defined simply as the excavation and removal of material from the bed
and banks of streams as a result of the erosive action of flowing water. In the context of this
book, it is assumed that this erosive action may potentially expose the foundations of a
bridge. Scour is usually considered to be a local phenomenon, but includes degradation that
can cause erosion over a considerable length of a river.
Floodways are therefore planned for locations where flood immunity is not a serious concern
or the duration of road closure is low, and are suitable for locations where extensive
floodplain width and shallow flows make bridge or culvert construction difficult or expensive.
Floodways are often a preferred approach for locations where a relatively cheap floodplain
crossing is needed and where flood immunity or flood closures are not a significant concern.
They are therefore often preferred for roads with low traffic volumes in arid regions where
flood events are infrequent and short duration.
The Queensland Department of Transport and Main Roads Road Drainage Manual
(QDTMR, 1986) and the Austroads Guide to Road Design Part 5 (Austroads, 2013) provide detailed guidance on floodway design.
Floodways may require costly batter protection, and therefore a higher road level together with a larger culvert, or a bridge option, may be more cost effective. Floodways also have smaller waterway (under-road) requirements and may be more prone to blockage by debris.
These cost related performance factors should be considered as well as trafficability and
other requirements in the selection of final road level. Floodways may offer environmental
advantages over culverts or bridges, since they will tend to spread flows more widely. This
means that the risk of scour to waterway and surrounding land is generally reduced because
flow is less concentrated. It is also important that a floodway be designed so that it is not
covered by water from ponding or backwater for any significant period of time after a flood
event.
Advantages of floodways include the following:
• May offer environmental advantages over culverts and bridges, since they will tend to spread flows more widely, reducing the risk of scour when flow is concentrated in culverts or bridges;
There are however some disadvantages, as follows, which mean that design needs careful
consideration.
• Allow water to flow over the road, which leads to flood immunity and safety issues;
• Batter slopes can be affected by erosion or scour (particularly for higher embankments);
It is important that adequate approach sight distance be provided to allow drivers time to
recognise water over the road and to stop. It is also important that the length of a floodway
be limited to about 300 m so that drivers do not become disorientated when confronted with
wide open stretches of water. Where a proposed floodway would be longer than 300 m, it is
recommended that the proposed floodway be broken into shorter lengths by providing
sections of road that are raised above the maximum flood level. As a general principle,
floodways should be designed so that the depth of water over the road should be as uniform
as possible over the flooded section. Building a floodway on a level grade avoids the
possibility of a driver unexpectedly encountering deeper water and possibly stalling or being
swept downstream.
Exceptions to the level grading may occur where bridges have been built significantly higher
than the flooded approaches on both sides. The bridges have been built on the basis that
the approaches will be raised sometime in the future. Floodways should not be placed on
horizontal curves as:
• there are problems in defining the edge of the pavement for motorists;
• any superelevation may change the normal flow distribution ie. push more water to the
non-superelevated sections of road; and
• the water depth will be deeper on one side of the road than the other in a superelevated
section of road and there is the possibility of the high side being trafficable but not the
other, thus creating a safety problem.
Floodways should also not be located on vertical curves to avoid variations in flow depth.
Hydraulic Design
A floodway consists not only of the roadway embankment to accommodate flow over the
road but also waterway openings to provide for flow under the road. These openings may be
required for one or more of the following functions:
• reduce the afflux or rise in water level upstream due to the obstruction (embankment);
• raise the tailwater level so that less batter protection is required on the downstream side
e.g. grass instead of concrete; and
• submerged flow.
In the initial stages of overtopping a low tailwater usually exists and free flow occurs. Under
these circumstances flow passes through critical depth over the road and the discharge is
determined by flood levels upstream.
Free flow over the embankment may take one of two forms:
• plunging flow, which flows over the shoulder and down the downstream face of the embankment. The flow then penetrates the tailwater surface, producing a submerged hydraulic jump on the downstream slope. Velocities are likely to be high and erosive; and
• surface flow which separates from the surface of the road embankment and rides over the
surface of the tailwater. This flow will have less erosion potential downstream.
Submerged flow occurs when the discharge is controlled by the tailwater level as well as the headwater level. This occurs when the depth of flow over the road is everywhere greater than the critical depth. Typical velocities of flow over a floodway are shown in Figure 6.3.19, as sourced from Waterway Design (Austroads, 1994) after Cameron and McNamara (1966).
Hydraulic calculation for flow over the road embankment is based on the broad crested weir
formula as described elsewhere in this Chapter.
With free-flow conditions on the embankment – that is, in the absence of submergence of
the control – analysis indicates that the discharge across the embankment may be
expressed in terms of the other relevant parameters by an expression of the form:
$Q = C L H^{3/2}$ (6.3.9)

where:
C = a coefficient
L = the length of the embankment (that is, the width of the flow)
H = the head of the approach flow, determined as shown in Figure 6.3.19 with the elevation
of the embankment crown as datum.
The coefficient C embodies numerical coefficients and the gravitational acceleration. For an
ideal fluid, the value of C would be 1.70 in SI units. Values of C for real fluids would generally
be expected to be somewhat less than this value, incorporating empirically the effects of
differences between real and ideal fluid behaviour.
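Applying Equation (6.3.9) is straightforward; the short Python sketch below uses the ideal-fluid coefficient of 1.70 quoted above as a default, noting that a somewhat lower value would normally be adopted for a real floodway and that the relationship applies only while the flow remains free (unsubmerged). The function name and example values are illustrative only.

```python
def floodway_free_flow(length, head, c=1.70):
    """Free-flow discharge over a floodway embankment, Q = C * L * H**1.5 (SI units).

    C = 1.70 is the ideal-fluid value; real floodways generally have a lower
    coefficient, and the result applies only while the flow is unsubmerged.
    """
    return c * length * head ** 1.5

# Example: 120 m long floodway with 0.3 m of head above the crown
print(round(floodway_free_flow(length=120.0, head=0.3), 1), "m^3/s")
```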
3.8. Scour
3.8.1. Scour at Bridges
Scour at bridges is an important risk for these structures and design must incorporate
mitigation measures. In the case of existing bridges, where scour becomes apparent,
measures must be provided to protect the bridge asset and prevent further damage. Scour is
a very serious problem. Floods that result in scour are the principal cause of bridge failure.
Figure 6.3.20(a) shows the pier caps and pile caps exposed. Figure 6.3.20(b) shows pier
and abutment riprap moved downstream. Figure 6.3.20(c) shows a downstream scour hole
and bank erosion. Figure 6.3.20(d) shows a downstream scour hole arising from
submergence of the opening. Figure 6.3.20(e) shows slumped material at the toe of the bank
arising from failure of the riprap or bank. Figure 6.3.20(f) shows erosion and failure of a
highway embankment with flow on both sides of the abutment.
The biggest and most frequently encountered scour-related problems usually concern loose
sediments that are easily eroded. It is not correct, however, to assume that the scour depth in
cohesive or cemented soils cannot be as large – it merely takes longer for the scour hole to
develop.
Many of the equations for scour were derived from laboratory studies, for which the range of
validity is unknown. Some were verified using very limited field data, which itself may be of
doubtful accuracy. In the field, the scour hole that develops on the rising stage of a flood, or
at the peak, may be filled in again on the falling stage. For this reason, the maximum depth
of scour cannot be easily assessed after the event.
Scour can also cause problems with the hydraulic analysis of a bridge. Scour may
considerably deepen the channel through a bridge and effectively reduce or even eliminate
the backwater. This reduction in backwater should not be relied on, however, because of the
unpredictable nature of the processes involved.
The first major issue when considering scour is the distinction between clear-water scour
and live-bed scour. The critical issue here is whether the mean bed shear stress of the flow upstream of the bridge is less than or greater than the threshold value needed to move the bed material.
If the upstream shear stress is less than the threshold value, the bed material upstream of
the bridge is at rest. This is referred to as the clear-water condition because the approach
flow is clear and does not contain sediment. Thus, any bed material that is removed from a
local scour hole is not replaced by sediment being transported by the approach flow. The
maximum local scour depth is achieved when the size of the scour hole results in a local
reduction in shear stress to the critical value such that the flow can no longer remove bed
material from the scoured area.
Live-bed scour occurs where the upstream shear stress is greater than the threshold value
and the bed material upstream of the crossing is moving. This means that the approach flow
continuously transports sediment into a local scour hole. By itself, a live bed in a uniform
channel will not cause a scour hole - for this to be created some additional increase in shear
stress is needed, such as that caused by a contraction (natural or artificial, such as a bridge)
or a local obstruction (e.g. a bridge pier). The equilibrium scour depth is achieved when
material is transported into the scour hole at the same rate at which it is transported out.
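A minimal screening of the clear-water versus live-bed distinction is sketched below in Python: the mean bed shear stress of the approach flow is compared with a user-supplied threshold (critical) shear stress for the bed material. The wide-channel approximation, function names and example values are assumptions for illustration.

```python
def bed_shear_stress(depth, slope, rho=1000.0, g=9.81):
    """Mean bed shear stress for wide, uniform flow: tau = rho * g * R * S,
    with the hydraulic radius approximated by the flow depth (Pa)."""
    return rho * g * depth * slope

def scour_regime(depth, slope, critical_shear):
    """Classify the approach flow as clear-water or live-bed by comparing the
    mean bed shear stress with the threshold value for the bed material
    (which must be supplied from laboratory or published data)."""
    tau = bed_shear_stress(depth, slope)
    return ("live-bed" if tau > critical_shear else "clear-water"), tau

# Example: 2 m deep approach flow on a slope of 0.0005, bed material tau_c ~ 5 Pa
print(scour_regime(depth=2.0, slope=0.0005, critical_shear=5.0))
```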
It is noted from Figure 6.3.21 that typically the maximum equilibrium clear-water scour is
about 10% larger than the equilibrium live-bed scour. Conditions that favour clear water
scour are:
• Channels with natural vegetation or artificial reinforcement where velocities are only high
enough to cause scour near piers and abutments; and
At any particular location both clear-water and live-bed scour may be experienced. During a
single flood the bed shear stress will increase and decrease as the discharge rises and falls.
Thus, it is possible to have clear-water conditions initially, then a live bed, then finally clear
water again. The maximum scour depth may occur under clear-water conditions, not at the
flood peak when live-bed scour is experienced. Similarly, relatively high velocities can be
experienced when the flow is just contained within the banks, rather than spread over the
floodplains at the peak discharge.
It is also possible to have the clear-water and live-bed conditions occurring at the same time.
For example, if the floodplains are grassed or composed of material that is larger in diameter
than that in the main channel, clear-water conditions may occur on the floodplain with live-
bed conditions in the main channel.
It is evident from this discussion that the problem may not always be as simple or as well
defined as would be desirable. If there are any uncertainties or if the consequences of failure
are large, prompting a conservative approach, it is recommended that clear-water conditions
be assumed at the peak flow condition.
Urbanisation has the effect of increasing flood magnitudes and causing hydrographs to peak
earlier, resulting in higher stream velocities and degradation. Channel improvements or the
extraction of gravel (above or below the site in question) can alter water levels, flow
velocities, bed slopes and sediment transport characteristics and consequently affect scour.
For instance, if an alluvial channel is straightened, widened or altered in any other way that
results in an increased flow-energy condition, the channel will tend back towards a lower
energy state by degrading upstream, widening and aggrading downstream.
The significance of degradation scour to bridge design is that the engineer has to decide
whether the existing channel elevation is likely to be constant over the 100 year life of the
bridge, or whether it will change. If change is probable then it must be allowed for when
designing the waterway and foundations.
The lateral stability of a river channel may also affect scour depths, because movement of
the channel may result in the bridge being incorrectly positioned or aligned with respect to
the approach flow. This problem can be significant under any circumstances but is potentially
very serious in arid or semi-arid regions and with ephemeral (intermittent) streams. Lateral
migration rates are largely unpredictable. Sometimes a channel that has been stable for
many years may suddenly start to move, but significant influences are floods, bank material,
vegetation of the banks and floodplains, and land use.
Scour at bridge sites is typically classified as contraction (or constriction) scour and local
scour. Contraction scour occurs over a whole cross-section as a result of the increased
velocities and bed shear stresses arising from a narrowing of the channel by a constriction
such as a bridge. In general, the smaller the opening ratio ($M = b/B$ or $Q_b/Q$), the larger the
waterway velocity and the greater the potential for scour. If the flow contracts from a wide
floodplain, considerable scour and bank failure can occur. Relatively severe constrictions
may require regular maintenance for decades to combat erosion. It is evident that one way
to reduce contraction scour is to make the opening wider.
Contraction scour also occurs in the vertical where flow is contracted vertically as water
flows under the bridge and velocity increases, potentially causing a scour hole to develop
under the bridge, as shown in Figure 6.3.23.
Local scour arises from the increased velocities and associated vortices as water
accelerates around the corners of abutments, piers and spur dykes. The flow pattern around
a cylindrical pier is shown in Figure 6.3.24. The approaching flow decelerates as it nears the
cylinder, coming to rest at the centre of the pier. The resulting stagnation pressure is highest
near the water surface where the approach velocity is greatest, and smaller lower down. The
downward pressure gradient at the pier face directs the flow downwards. Local pier scour
begins when the downflow velocity near the stagnation point is strong enough to overcome
the resistance to motion of the bed particles.
When scour occurs the maximum downflow velocity is about 80% of the mean approach
velocity. The impact of the downflow on the bed is the principal factor leading to the creation
of a scour hole. As the hole grows the flow dives down and around the pier producing a
horseshoe vortex, which carries the scoured bed material downstream.
The combination of the downflow with the horseshoe vortex is the dominant scour
mechanism. As the scour hole becomes progressively deeper the downflow near the bottom
of the scour hole decreases until at some point in time equilibrium is reached and the depth
remains constant.
At the sides of the pier flow separation occurs, resulting in a wake vortex whose whirlpool
action sucks up sediment from the bed. As the vortices diminish and velocities reduce, the
scoured material is deposited some distance downstream of the pier.
For piers that are essentially rectangular in plan and aligned to the flow the basic scour
mechanism is similar to that just described, although rather more severe because of the
square corners. However, as the angle of attack to a rectangular pier increases, so does its
effective width, so the scour depth increases and the point of maximum scour moves
downstream of the nose to a point on the exposed side.
With a large degree of skew the maximum scour may occur at the downstream end of the
pier. If the flow direction is likely to change there is merit in using cylindrical piers to avoid
these complications.
The scour mechanism at a bridge abutment is similar to that at a pier, although the boundary
layer at the abutment or channel wall may result in an additional deceleration of the flow
compared with a central pier. The approach flow can be considered as separating into an
upper layer, which forms an upflow surface roller on hitting the abutment, and a lower layer,
which becomes the bottom or principal vortex. This is shown schematically in Figure 6.3.25.
Viewed in plan, the upper layer divides or separates, with part of the flow accelerating
around the upstream corner of the abutment into the bridge waterway while the remainder
slowly rotates in an almost stationary pool trapped against the face of the abutment and the
river bank.
In the bottom layer, the flow near the bank forms an almost vertical downflow, while that
nearer to the end of the abutment accelerates down and into the waterway, forming the
principal vortex. Usually scouring starts in this region of accelerating flow and grows along
the faces of the abutment. Wake vortices form downstream of the abutment.
The basic scouring process is the same for most types of abutment, although with wingwall
and vertical wall types the stagnation region is larger, and scour is most severe near the end
of the abutment where the principal vortex is concentrated.
The total scour depth is obtained by summing degradation, contraction and local scour. This
procedure is, strictly, only valid where the scour holes overlap. For instance, contraction
scour may have to be added to pier or abutment scour to get the total scour depth. However,
pier scour and abutment scour would not be added unless the two scour holes overlap.
This usually has to be determined by drawing a cross-section through the waterway and
superimposing the scour depths. If the holes do overlap the resultant scour depth is often
larger than the two components, but difficult to predict. Nevertheless, as a general and
conservative rule, the total scour depth is the sum of the three components.
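That general rule can be expressed as a simple sum, as in the Python sketch below. The function and its arguments are illustrative, and only the local scour component relevant to the element being checked should be supplied unless the scour holes actually overlap.

```python
def total_scour_depth(degradation, contraction, local_scour):
    """Conservative total scour at a bridge foundation element: the sum of
    degradation, contraction scour and the relevant local scour component (m)."""
    return degradation + contraction + local_scour

# Example check at a pier: 0.3 m degradation, 0.5 m contraction, 1.8 m local pier scour
print(total_scour_depth(0.3, 0.5, 1.8), "m")
```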
The scour computation capability in the HEC-RAS software allows the user to compute
contraction scour and local scour at piers and abutments. The details may be found in the
HEC-RAS Hydraulic Reference Manual.
Hydraulic modelling of a bridge site is an integral part of any bridge design and these studies
should address the sizing of the bridge waterway and help to ensure that the foundations can
be designed to minimise scour.
It must be recognised that damage to bridge approaches from rare floods can be repaired
relatively quickly to restore traffic service. On the other hand, a bridge which collapses or
suffers major structural damage from scour can create safety hazards as well as significant
social impacts and economic losses for a prolonged period of time. Therefore, scour
resistant bridge foundations should be designed to a higher hydraulic standard. These
concepts must be reflected in bridge design procedures.
There are many methods for estimating scour as part of bridge design and these include
equations by Holmes, Neill, Faraday and Charlton, Melville and Coleman, the CSU equation,
FHWA HEC-18 equation, Froehlich equations and HIRE equations, with details for all
provided in QDTMR (2013).
Encroachment in the stream channel by abutments and piers reduces the channel section
and may cause significant contraction scour. Severe constriction of floodplain flow may
cause approach embankment failures and serious contraction scour in the bridge waterway,
where auxiliary (relief) openings can be considered but must be carefully designed. On wide
floodplains the design should seek to avoid excessive diversion of floodplain flows towards
the main bridge opening and skewed crossings of floodplains should also be minimised as
much as possible.
The increase in the velocity through the bridge waterway opening occurs as a result of the
increase in the energy head. The restriction in the waterway results in water banking
upstream to a level sufficient to develop the additional head to increase the velocity to
maintain equilibrium flow.
3.8.2.1. Length
In most cases it is not economical to bridge the full width of flood flow and the problem
reduces to what is an acceptable length of bridge. As a consequence, the road embankment
in the approaches to the bridge causes a restriction on the flow occurring under natural
conditions. Consideration of the increase in velocity and hence scour potential and afflux
would be the main determining factors for the length of a bridge. Longer bridges increase the
cost but reduce the extent of constriction and therefore the risk of scour.
The total bridge length is an important design feature, since it influences the flow velocity through the bridge and therefore the risk of scour, but other factors also need consideration. The important design consideration is to minimise the bridge length (and the cost of the bridge) while keeping the risk of scour to an acceptable level.
A detailed approach to assessing scour as part of bridge design is given in QDTMR (2013).
Over the last several decades, a wide variety of countermeasure structures, armouring
materials and monitoring devices have been used at existing bridges to mitigate scour and
stream stability problems.
Since scour susceptible bridges are already in place, options for structural or physical
modifications such as replacement or foundation strengthening are limited and expensive.
Unless these bridges are programmed for replacement, their continued operation will
ultimately require the design and installation of a scour countermeasure.
Riprap is one of the primary scour countermeasures to resist local scour forces at abutments
of typical bridges. Riprap is generally abundant, inexpensive and requires no special
equipment. However, proper design and placement is essential. Guidelines for proper
grading and placement methods are included in QDTMR (2013). When designing riprap
countermeasures, maintaining an adequate hydraulic opening through the bridge must be
considered. Improperly placed riprap may reduce the hydraulic opening significantly and
create contraction scour problems. If placed improperly, riprap can increase local scour
forces. Although riprap is widely used, the following countermeasures can be considered as
alternatives to riprap, but are not all covered here:
Armouring countermeasures:
• Rock riprap;
• Sack gabions;
• Grouted riprap;
River training structures alter stream hydraulics to mitigate undesirable erosional and/or
depositional conditions. They are commonly used on unstable stream channels to redirect
stream flows to a more desirable location through the bridge, and require specialist design:
• Bendway weirs;
• Dumped rock over a geofabric layer at piers, abutments and channel banks;
• Gabion mattresses over a geofabric layer at piers, abutments and channel banks; and
Normally, scour protection is used to fill any scour holes that have formed back to the original bed levels. Rigid measures such as concrete slabs are less desirable due to the potential for catastrophic failure. Flexible scour protection has the ability to self-heal once a failure mode commences.
If shotcrete (concrete) is used at the bridge abutments for scour repair it must be tied into the
abutment slope. If it is not properly tied into the slope it can be undermined and result in
further damage to the abutment. This method is often not effective, particularly where the
scour is being caused by a geotechnical failure of the embankment slopes.
Detailed descriptions of scour repair and protection for existing bridges are included in
QDTMR (2013).
A culvert generally constricts the flow and so increases velocities above the natural velocity in the channel. If this increase produces a velocity at which scour could be initiated, protection measures are required, though good practice would lead to a design solution in which the culvert maintains an outlet velocity below that which causes scour.
Outlet protection should be considered where:
• outlet velocity exceeds the scour velocity of the bed or bank material;
• an unprotected channel bend exists within a short distance of the culvert outlet;
• if an erodible channel bank exists less than 10 to 13 times the pipe diameter downstream
of the outlet, and this bank is in-line with the outlet jet (ie. likely to be eroded by the outlet
jet), the bank should be adequately protected to control any undesirable damage as a result of
the outlet jetting.
Culverts, however, are generally narrower than the natural waterway and a transition section
is required to return the flow to the natural channel. When culvert outlet velocities are high,
additional measures at the outlet may prove to be necessary for energy dissipation.
To check whether standard inlet and outlet structures with headwalls, wingwalls, aprons and cut-off walls are adequate, the outlet velocity for the culvert requires examination with respect to the conditions listed above.
If outlet velocities exceed the acceptable limits, it may be necessary to check for potential
bed scour problems. Where the outlet flows have a Froude Number (Fr) less than or equal to 1.7
and outlet velocities less than 5.0 m/s, an extended concrete apron or rock pad (commonly
used) protection is recommended.
They do, however, have a major disadvantage in flows containing substantial amounts of
sediment, in that they trap sediment and other solids behind them, leading to putrescible
deposits in sewer applications. For this reason, sharp-crested weirs are most commonly
used with relatively clean effluent, or in temporary flow monitoring locations.
The emphasis in this sub-section is on the analytical techniques, weir properties, choice of weir for particular purposes, and submergence characteristics. In particular, complete details on
variations in discharge coefficient are not given, and the reader is referred to specialist texts
for such information.
The flow over such a weir is illustrated schematically in Figure 6.3.26. This figure includes
also the necessary nomenclature for the analysis. Before proceeding with the analysis, some
comments are provided on the flow situation.
It is noted first that the pressure distribution at the weir crest is non-hydrostatic. This situation
arises because the pressure at both Points A and B is atmospheric by definition, and there
are significant vertical components in the velocity as the flow contracts to pass over the weir
crest.
Secondly, it is noted that, as expected, the total energy line (TEL) is situated an elevation $v_0^2/2g$ above the free surface, where $v_0$ is the approach velocity. In many cases, the magnitude of the approach velocity head may be considered to be negligible. For simplicity, this is assumed in the following analysis, although the influence of the approach velocity head will be included later.
The analysis is based on two further assumptions:
1. The flow does not contract as it passes over the weir – ie. the elevation of A is the same as that of the upstream water surface; and
2. The pressure is atmospheric throughout the flow section above the crest, so that the velocity at any point is determined solely by its depth below the upstream free surface.
With reference to Figure 6.3.26, these assumptions lead to an expression for the velocity at C of:
$v = \sqrt{2gy}$ (6.3.10)
The flow rate per unit width through an elemental strip of height $dy$ at C is then given by:
$dq = \sqrt{2gy}\, dy$ (6.3.11)
so that the total flow per unit width is:
$q = \int_0^h \sqrt{2gy}\, dy$ (6.3.12)
Equation (6.3.12) is simply integrated and a contraction coefficient, $C_c$, introduced to allow for flow contraction over the crest, to yield:
$q = \frac{2}{3} C_c \sqrt{2g}\, h^{3/2}$ (6.3.13)
If the magnitude of the approach velocity head cannot be ignored, the integral form of
Equation (6.3.11) is expressed as:
$q = \int_{v_0^2/2g}^{\,h + v_0^2/2g} \sqrt{2gy}\, dy$ (6.3.14)
Integration yields:
$q = \frac{2}{3}\sqrt{2g}\left[\left(h + \frac{v_0^2}{2g}\right)^{3/2} - \left(\frac{v_0^2}{2g}\right)^{3/2}\right]$ (6.3.15)
Manipulation and introduction of the contraction coefficient leads finally to the result:
$q = \frac{2}{3} C_c \sqrt{2g}\, h^{3/2}\left[\left(1 + \frac{v_0^2}{2gh}\right)^{3/2} - \left(\frac{v_0^2}{2gh}\right)^{3/2}\right]$ (6.3.16)
The equation is made more compact by introducing a discharge coefficient, $C_d$, leading to:
$q = \frac{2}{3} C_d \sqrt{2g}\, h^{3/2}$ (6.3.17)
in which:
$C_d = C_c\left[\left(1 + \frac{v_0^2}{2gh}\right)^{3/2} - \left(\frac{v_0^2}{2gh}\right)^{3/2}\right]$ (6.3.18)
It is evident from the form of Equation (6.3.18) that, if the velocity head is negligible compared with $h$, $C_d = C_c$ and Equation (6.3.17) is then identical to Equation (6.3.13).
The contraction coefficient may be estimated from the empirical relationship:
$C_c = 0.611 + 0.08\,\frac{h}{P}$ (6.3.19)
where $P$ is the height of the weir crest above the upstream bed.
For small values of $h/P$, $C_c \approx 0.611$ and Equation (6.3.13) reduces to:
$q = 0.407\sqrt{2g}\, h^{3/2}$ (6.3.20)
The total flow rate over the weir is then given by the product of Equation (6.3.20) and the
transverse crest length.
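To illustrate how Equations (6.3.17) and (6.3.19) are applied, a minimal Python sketch is given below. It is not part of the guideline: the function name and example values are hypothetical, and the approach velocity head is assumed negligible so that Cd = Cc.

import math

G = 9.81  # gravitational acceleration (m/s^2)

def rectangular_weir_flow(h, crest_length, weir_height):
    """Total discharge (m^3/s) over a rectangular sharp-crested weir.

    Uses the empirical relationship of Equation (6.3.19) for the
    contraction coefficient and Equation (6.3.17) for the flow per unit
    width, assuming the approach velocity head is negligible (Cd = Cc).
    h            -- head over the crest (m)
    crest_length -- transverse crest length (m)
    weir_height  -- height of the crest above the upstream bed, P (m)
    """
    cc = 0.611 + 0.08 * h / weir_height                   # Equation (6.3.19)
    q = (2.0 / 3.0) * cc * math.sqrt(2.0 * G) * h ** 1.5  # Equation (6.3.17), per unit width
    return q * crest_length                                # multiply by the crest length

# Illustrative example: 0.25 m of head on a 2.0 m crest set 1.0 m above the bed
print(round(rectangular_weir_flow(0.25, 2.0, 1.0), 3))     # approximately 0.47 m^3/s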
It needs to be recognised, however, that the concept of “flow rate per unit width” cannot be used because the width varies over the height of the weir. Accordingly, the elemental flow rate through the element of width $b$ is given by:
$dQ = b\sqrt{2gy}\, dy$ (6.3.21)
where, from the notch geometry:
$b = 2\tan\frac{\theta}{2}\,(h - y)$ (6.3.22)
Substitution of Equation (6.3.22) into Equation (6.3.21), integration between the limits of $y = 0$ and $y = h$, and inclusion of the discharge coefficient yields:
$Q = \frac{8}{15} C_d \sqrt{2g}\,\tan\frac{\theta}{2}\, h^{5/2}$ (6.3.23)
The value of $C_d$ is dependent on the ratio $h/P$ but, more particularly, on the vertex angle, $\theta$. For the commonly used value for $\theta$ of 90°, a value for $C_d$ of 0.58 is commonly assumed. For other situations, values for $C_d$ may be obtained from standard texts on flow measurement.
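A corresponding sketch for Equation (6.3.23) is given below; again the function name and example values are hypothetical, and the commonly quoted Cd of 0.58 for a 90° notch is adopted.

import math

G = 9.81  # gravitational acceleration (m/s^2)

def v_notch_weir_flow(h, theta_deg=90.0, cd=0.58):
    """Discharge (m^3/s) over a triangular (V-notch) sharp-crested weir.

    Implements Equation (6.3.23); the default Cd of 0.58 is the value
    commonly assumed for a 90 degree notch.
    """
    theta = math.radians(theta_deg)
    return (8.0 / 15.0) * cd * math.sqrt(2.0 * G) * math.tan(theta / 2.0) * h ** 2.5

# Illustrative example: 0.15 m of head on a 90 degree notch
print(round(v_notch_weir_flow(0.15), 4))   # approximately 0.012 m^3/s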
The analytical techniques discussed above can be applied to any weir crest shape.
A series of experiments carried out on rectangular, triangular, parabolic, and Sutro submerged weirs determined that the flow rate could be estimated from the following equation:
$\frac{Q}{Q_1} = \left(1 - \frac{h_2}{h_1}\right)^{0.385}$ (6.3.24)
Chief among these is the assumption that the pressure distribution is hydrostatic. In many
devices, the strongly curved stream lines negate this assumption, resulting in the necessity
for empirical coefficients.
The broad-crested weir and the long-throated flume are devices for which the flow rate can
be predicted theoretically without the need for such coefficients. The broadness of the crest
and the length of the throat are such that the stream lines are close to horizontal in the
region of critical flow, permitting the assumption that the pressure distribution is hydrostatic.
The analysis of both the broad-crested weir and the long-throated flume is identical in that
both rely on the determination of the relationship between the upstream water level (which
may be measured) and critical depth within the constricted section – which is a known
function of the flow rate. Thus, a unique relationship between flow rate and the upstream
water surface elevation can be determined.
Broad-crested weirs are more prone to sediment buildup than long-throated flumes and are,
consequently, less common in sewerage systems. For this reason, the emphasis in the
following is on long-throated flumes, while recognising that broad-crested weirs are analysed
in the same manner.
The long-throated flume is widely used in sewerage systems, within the pipe system and to
monitor open channel inflows and outflows at sewage treatment plants.
In this section, the advantages of this type of structure are first discussed. The derivation of
the theoretical rating curve is then presented for the rectangular flume and it is shown how
this can be generalised for arbitrary cross-section shape. Their use in practice is then
illustrated.
3.9.6. Advantages
The primary advantages of these devices are listed in the following:
• Provided that critical flow occurs in the throat, a rating table can be calculated with an
error of less than 2% in the listed discharge. This can be done for any combination of a
prismatic throat and an arbitrarily-shaped approach channel.
• The throat, perpendicular to the direction of flow, can be shaped in such a way that the
complete range of discharges can be measured accurately, without creating an excessive
backwater effect.
• The head loss across the structure required to obtain undrowned flow conditions is
minimal, and can be estimated with sufficient accuracy for any of the structures placed in
any arbitrary channel.
• Because of their gradually converging transitions, these structures have few problems with
floating debris.
• Field and laboratory observations indicate that the structures can be designed to pass
sediment transported by channels with subcritical flow. It should be noted, however, that
excessively high sediment loads or significant reductions in the velocity of the approach
flow may create sedimentation problems.
• Provided that its throat is horizontal in the direction of flow, a rating table based upon post-
construction dimensions can be produced, even if errors were made in construction to the
designed dimensions.
• Under similar hydraulic and other boundary conditions, these structures are usually found
to be the most economical for the accurate measurement of flow.
3.9.7. Disadvantages
The major property of a long-throated flume is that it is designed to create a constriction in
the flow area sufficient to produce critical flow over the full range of expected flow rates. In
addition, the head loss across the structure should not be excessive and afflux should be
kept to a minimum.
A typical long-throated flume is shown schematically in Figure 6.3.29. With regard to the
hydraulic characteristics of the flume itself, five components may be recognised as follows:
1. The approach channel, where the flow should be stable so that the water level and the
energy level can be accurately determined.
2. A converging transition region into the throat, which is designed to provide a smooth
acceleration of the flow with no discontinuities or flow separation. The transition may be
rounded or consist of plane surfaces.
3. The throat, where the flow is accelerated to the critical condition. The throat must be
horizontal in the flow direction, but can, in principle, be of any shape transverse to the
flow. The invert of the throat may be higher than the invert of the upstream and
downstream channels.
4. A diverging transition to reduce the flow velocity to an acceptable level and to recover
head. If there is ample available head, an abrupt transition may be used.
5. The tailwater channel in which a known hydraulic control is exercised by the downstream
conditions and the hydraulic properties of the channel.
The control section is the approximate location of critical flow within the throat of the flume. It
is not necessary to know precisely where this occurs because the developed head-flow rate
relationship is expressed in terms of the head upstream.
3.9.8. Analysis
With reference to Figure 6.3.30, application of the energy equation yields:
$H_1 = y_c + \frac{v_c^2}{2g}$ (6.3.25)
To proceed further, the shape of the control section must be known. For a rectangular cross-
section, the properties of critical flow are such that:
$y_c + \frac{v_c^2}{2g} = \frac{3}{2}\, y_c = \frac{3}{2}\left(\frac{q^2}{g}\right)^{1/3}$ (6.3.26)
where $q$ is the flow rate per unit width within the control section.
Combining this result with Equation (6.3.25) gives:
$H_1^3 = \left(\frac{3}{2}\right)^3 \frac{q^2}{g}$ (6.3.27)
from which:
$q = \frac{2}{3}\sqrt{\frac{2g}{3}}\, H_1^{3/2}$ (6.3.28)
In terms of the width of the control section, $b_c$, Equation (6.3.28) is written as:
$Q = \frac{2}{3}\sqrt{\frac{2g}{3}}\, b_c H_1^{3/2}$ (6.3.29)
The development of Equation (6.3.29) has assumed ideal flow conditions – in particular, that there is no energy loss between the location of the upstream head, $H_1$, and the critical control. This is taken into account by introducing a discharge coefficient, $C_d$, such that:
$Q = C_d\,\frac{2}{3}\sqrt{\frac{2g}{3}}\, b_c H_1^{3/2}$ (6.3.30)
$C_d$ may be determined by an analysis of the boundary layer between the upstream head measurement point and the control section, but the complex procedure is rarely justified. High accuracy can be obtained by using the simpler equation:
$C_d = \left(1 - \frac{0.006\,L}{b_c}\right)\left(1 - \frac{0.003\,L}{h_1}\right)^{3/2}$ (6.3.31)
where $L$ is the length of the throat.
However, at the design stage, it is normally sufficient to assume a value for the discharge
coefficient of 0.95.
Now,
$H_1 = h_1 + \frac{v_1^2}{2g} = h_1 + \frac{Q^2}{2gA_1^2}$ (6.3.32)
Equation (6.3.32) demonstrates that Equation (6.3.30) is difficult to use in practice because the head term, $H_1$, contains the unknown flow rate, $Q$, in addition to the measured head, $h_1$.
An iteration method can be followed, using the following steps (a sketch of the iteration is given after this list):
1. Compute an approximate discharge from Equation (6.3.30), using the measured head $h_1$ in place of the total head $H_1$.
2. Use this approximate discharge to determine the velocity head and then use these data to calculate an improved value of the total head at the gauging section.
3. Compute a more refined discharge value using this total head value.
4. Repeat steps (2) and (3) until the difference between successive discharge values is an order of magnitude less than the required tolerance.
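A minimal Python sketch of this iteration for a rectangular throat is given below. It assumes the design-stage discharge coefficient of 0.95, a user-supplied approach-channel area, and an illustrative convergence tolerance; none of these details are prescribed by the guideline.

import math

G = 9.81  # gravitational acceleration (m/s^2)

def flume_discharge(h1, bc, a1, cd=0.95, tol=1e-6, max_iter=50):
    """Discharge (m^3/s) through a rectangular long-throated flume.

    Iterates Equations (6.3.30) and (6.3.32): the measured head h1 is
    first used in place of the total head H1, and the velocity head is
    then added until successive discharge estimates converge.
    h1 -- measured upstream head above the throat invert (m)
    bc -- throat width (m)
    a1 -- wetted cross-sectional area at the gauging section (m^2)
    cd -- discharge coefficient (0.95 is the usual design value)
    """
    coeff = cd * (2.0 / 3.0) * math.sqrt(2.0 * G / 3.0) * bc
    q = coeff * h1 ** 1.5                                # step 1: first estimate with H1 = h1
    for _ in range(max_iter):
        total_head = h1 + q ** 2 / (2.0 * G * a1 ** 2)   # step 2: add the velocity head
        q_new = coeff * total_head ** 1.5                # step 3: refined discharge
        if abs(q_new - q) < tol:                         # step 4: convergence check
            return q_new
        q = q_new
    return q

# Illustrative example: h1 = 0.30 m, throat width 0.50 m, approach area 0.45 m^2
print(round(flume_discharge(0.30, 0.50, 0.45), 4))       # approximately 0.136 m^3/s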
A much more convenient approach is developed by defining a velocity coefficient, $C_v$, from the equation:
$Q = C_d C_v\,\frac{2}{3}\sqrt{\frac{2g}{3}}\, b_c h_1^{3/2}$ (6.3.33)
$C_v = \left(\frac{H_1}{h_1}\right)^{3/2} = \left(\frac{h_1 + \frac{v_1^2}{2g}}{h_1}\right)^{3/2} = \left(1 + \frac{v_1^2}{2gh_1}\right)^{3/2}$ (6.3.34)
Noting that $v_1 = Q/A_1$, Equation (6.3.34) is expressed as:
$C_v = \left(1 + \frac{Q^2}{2gh_1A_1^2}\right)^{3/2}$ (6.3.35)
We now replace $b_c h_1$ by $A^*$, the imaginary cross-sectional area of the control section if the water depth there were equal to $h_1$, and further simplify to give:
$\frac{C_d A^*}{A_1} = 2.60\,\frac{\sqrt{C_v^{2/3} - 1}}{C_v}$ (6.3.37)
A plot of $C_v$ against the area ratio, $C_d A^*/A_1$, can then be drawn and is presented in Figure 6.3.31. In this figure, the upper curve is a continuation of the lower curve beyond the right hand limit of the figure.
Because $A^*$ and $A_1$ can be expressed in terms of the measured water surface elevation, $h_1$, the velocity coefficient, $C_v$, can be directly determined.
The rating equations for non-rectangular cross-sections are easily determined once the relationship between the critical depth, $y_c$, and the upstream energy level, $H_1$, is known. For example, application of the specific energy principles to the triangular cross-section yields:
$y_c = \frac{4}{5} H_1$ (6.3.38)
Substitution of Equation (6.3.38) into Equation (6.3.23) and subsequent manipulation yields:
$Q = C_d C_v\,\frac{16}{25}\sqrt{\frac{2g}{5}}\,\tan\frac{\theta}{2}\, h_1^{5/2}$ (6.3.39)
shorter word error is used. In this review, we will continue to use the word uncertainty, since it is a more accurate descriptor.
Systematic uncertainties tend to shift all measurements in a systematic way so their mean
value is displaced. Systematic means that when the measurement of a quantity is repeated
several times, the uncertainty has the same size and algebraic sign for every measurement.
Other sources of systematic errors are external effects which can change the results of the
experiment, but for which the corrections are not well known.
Systematic errors can be more serious than random uncertainties for three reasons as
follows:
1. There is no sure method for discovering and identifying them just by looking at the
experimental data;
3. A systematic error has the same size and sign for each measurement in a set of repeated
measurements, so there is no opportunity for positive and negative errors to offset each
other.
Random uncertainties fluctuate from one measurement to the next and are present in all
experimental measurements. As such, they cause a measuring process to give different
values when a measurement is repeated many times (assuming all other conditions are held
constant to the best of the operator’s ability). Random uncertainties can have many causes,
including operator errors or biases, fluctuating physical conditions, varying environmental
conditions and inherent variability of measuring instruments.
The effect that random uncertainties have on results can be somewhat reduced by taking
repeated measurements then calculating their average. The average is generally considered
to be a better representation of the true value than any single measurement, because
uncertainties of positive and negative sign tend to compensate each other in the averaging
process. They yield results distributed about some mean value.
A measurement with relatively small random uncertainty is said to have high precision. A
measurement with small random uncertainty and small systematic error is said to have high
accuracy. Precision does not necessarily imply accuracy. A precise measurement may be
inaccurate if it has a systematic error.
The answer depends on the equation linking the flow rate with the directly measured
parameters. We first return to the functional relationship derived for the rectangular sharp-
crested weir:
$Q = 0.407\, L \sqrt{2g}\, h^{3/2}$ (6.3.40)
where $L$ is the transverse crest length. Using the rules of differentiation we can determine the derivative of this equation in the form:
$dQ = 0.407\, L \sqrt{2g}\,\frac{3}{2}\, h^{1/2}\, dh$ (6.3.41)
Now we divide Equation (6.3.41) by Equation (6.3.40). Because of the equality, we can
divide the left side of Equation (6.3.41) by the left side of Equation (6.3.40) and the right side
of Equation (6.3.41) by the right side of Equation (6.3.40).
Thus:
$\frac{dQ}{Q} = \frac{3}{2}\,\frac{dh}{h}$ (6.3.42)
In words, Equation (6.3.42) indicates that the percentage uncertainty in flow rate ($Q$) is equal to 3/2 times the percentage uncertainty in measured head ($h$). We note that the fraction 3/2 is the exponent of the function in Equation (6.3.40).
Indeed, we can now state the completely general equation for uncertainty for a functional relationship of the form $Q = K h^x$ as follows:
$\frac{dQ}{Q} = x\,\frac{dh}{h}$ (6.3.43)
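As a worked illustration of Equation (6.3.43) (the function name and values are hypothetical):

def relative_flow_uncertainty(rel_head_uncertainty, exponent):
    """Relative uncertainty in Q for a rating of the form Q = K * h**x,
    as given by Equation (6.3.43): dQ/Q = x * dh/h."""
    return exponent * rel_head_uncertainty

# A 2% head uncertainty on a rectangular sharp-crested weir (x = 3/2)
# and on a V-notch weir (x = 5/2):
print(round(relative_flow_uncertainty(0.02, 1.5), 3))   # 0.03, i.e. 3% uncertainty in Q
print(round(relative_flow_uncertainty(0.02, 2.5), 3))   # 0.05, i.e. 5% uncertainty in Q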
3.10. References
Austroads (2013) Guide to Road Design - Part 5: Drainage, Austroads Ltd, Sydney.
Fredrickson, L.H. (1979), Floral and faunal changes in lowland hardwood forests in Missouri
resulting from channelization, drainage, and improvement. Report No. FWS/OBS-78/91, Fish
and Wildlife Service, U.S. Department of the Interior, Washington, DC.
French, R.H. (1985), Open Channel Hydraulics, McGraw-Hill Book Company, New York.
Henderson, F.M. (1966), Open Channel Flow, Macmillan Publishing Co., New York
Keller, R.J. (1995), Meander re-instatement using submerged weirs. Department of Civil
Engineering, Monash University, Final Report, Land and Water Resources Research and
Development Corporation Partnership Project P85/05.
Keller, R.J. (1986), Hydraulic Investigation of Proposed Culvert Structures, Report for Road
Construction Authority.
Keller, R.J., Peterken, C.J, and Berghuis, A.P. (2012), Design and assessment of weirs for
fish passage under drowned conditions, Ecological Engineering, 48: 61-69.
List, E.J and Imberger, J. (1973), Turbulent Entrainment in Buoyant Jets and Plumes,
Journal of the Hydraulics Division, ASCE, 99(9), 1461-1474.
Peterka, A.J. (1978), Hydraulic Design of Stilling Basins and Energy Dissipators, Engineering Monograph No. 25, USBR, Denver, Colorado, USA.
QDTMR (Queensland Department of Transport and Main Roads) (2015), 'Road Drainage
Manual', Available at http://www.tmr.qld.gov.au/business-industry/Technical-standards-
publications/Road-drainage-manual.
Rouse, H., Bhoota, B.V. and Hsu, E.Y. (1951), Design of Channel Expansions, Trans. ASCE, 1(116), 347.
U.S. Department of Transportation (1972), Hydraulic Design of Improved Inlets for Culverts,
Hydraulic Engineering Circular No.13.
United States Army Corps of Engineers (USACE). (1995), Hydraulic Design of Spillway,
Technical Engineering and Design Guides as adapted from the US Army Corps of
Engineers, No.12 ASCE.
United States Army Corps of Engineers (USACE) (2000), Design and Construction of
Levees, Report #EM 1110-2-1913, April.
Chapter 4. Numerical Models
Andrew McCowan
4.1. Introduction
Hydrology and hydrologic modelling are generally related to the determination of discharge
characteristics of flood flows. By comparison, the main aim of hydraulic modelling is to
describe the details of the main water level and velocity characteristics of the hydrologically
derived flood flows. An appropriately set-up and calibrated hydraulic model can be used to
not only describe the details of flood flows and their distribution throughout a river and
floodplain system, but also to predict the likely impacts that any changes to that system may
have on these flows. Typical applications for hydraulic models may include:
• Evaluation of the effects of proposed changes that may affect flood flows; and
Prior to the advent of computers, hydraulic modelling of river flows could only be carried out
in physical models. Although geometrically similar to the physical systems they represent, physical models are subject to scaling constraints as described in Book 6, Chapter 2,
Section 2. Due to the time and costs involved, physical modelling could only be justified for
major projects. Today, the use of physical models is typically limited to modelling complex
flows in relatively small reaches of river, and for modelling the behaviour of flows in hydraulic
structures.
Although the basic equations governing river flow were derived in the 19th Century, it was
not until the development of computers in the 1960s and 1970s that numerical modelling of
river flows became practical. With the rapid on-going development of computers and
computing power, there has been a continual evolution of numerical modelling and modelling
techniques. This has resulted in the availability of a wide range of numerical models with
increasing capability, and increased complexity.
The first numerical hydraulic models were little more than computerised backwater
calculators for steady flows in one (along stream) dimension. These one-dimensional (1D)
models gradually increased in sophistication to include hydraulic structures, unsteady flows,
simply connected (dendritic) branched channel networks and, ultimately, multiply connected
branched channel networks. With the multiply connected channel models it became possible
to separate floodplain flows from main channel flows through the introduction of separate
floodplain flow paths or systems of overbank floodplain cells. These models were sometimes
called quasi-2D models.
In the early to mid-1990s, numerical modellers began to apply fully two-dimensional (2D)
hydraulic models to river and floodplain systems. Many of these models had been originally
developed for modelling flows in bays and coastal seas and required modifications to make
them more suitable to simulating river and floodplain flows. 2D flood models use square or curvilinear grids, or flexible meshes, to provide much greater resolution of the flows in overbank and floodplain areas.
Further development saw coupling of 1D channel models with full 2D models to provide a
better description of in-bank flows and flows in sub-grid (or mesh) scale channels. These
coupled 1D/2D models also allowed the introduction of model structures in localised 1D
model branches to provide a better representation of hydraulic structures such as culverts,
weirs and bridges. Full 2D models have also been applied extensively to simulating the
hydraulics of urban stormwater flows. For these applications 2D models have also been
coupled with pipe network models to provide a better description of the flows in underground
stormwater drainage systems.
In the early to mid-2000s, the use of full 2D models was extended to also include the effects
of rainfall over the model domain. With this “direct rainfall” or “rainfall on grid” approach it is
possible to simulate the rainfall/runoff (hydrologic) processes throughout the model area and
integrate them with the hydraulic routing of the resulting overland flows. This is particularly
useful in urban applications or other situations where the model area includes a significant
proportion (or even all) of the catchment contributing to the flow within the model. In these
cases, the approach can reduce (and in some cases eliminate) the need for separate
hydrologic modelling, but is still the subject of on-going research (Engineers Australia, 2012).
In general, it can be said that the more realistic the modelling approach, the greater the
probability of achieving a successful outcome. However, the use of the most sophisticated
modelling approach available will not, in itself, guarantee success. This is because the skill
of the modeller adapting a generic modelling system to a specific application, and the quality
of the data used as model input can be equally (or even more) important in determining the
success of a modelling exercise. This can be especially true for direct rainfall modelling. Indeed,
there will be applications where simplified approaches, suitably applied, may be more
appropriate than the use of more sophisticated models.
This chapter introduces the basis of numerical hydraulic models of river and floodplain flows.
The differences between steady and unsteady flow models are discussed, and the range of
modelling approaches that are currently available are described. More details are then
provided on the application of 2D models for describing flood flows.
In broad terms, the development of a hydraulic model involves the following steps:
1. Review and define the physical system (the river and floodplain system to be modelled)
2. Select the mathematical model (the set of equations that is assumed to describe the physical system)
3. Select a generic numerical model (the modelling software to be used to solve the equations of the mathematical model)
4. Develop the site-specific numerical model (through the application of site specific inputs to the generic modelling software. These inputs will typically include topographic data, bed-friction coefficients, flow boundary conditions and other parameters such as structure information, as appropriate)
Numerical solution procedures can involve finite difference, finite element and finite volume
techniques. Finite differences are generally used in 1D models, while finite volume
techniques are becoming increasingly popular in 2D and 3D applications. The introduction of
“parallel processing” through multiple core computers and graphics processing units (GPUs)
has dramatically increased the computing power available allowing much larger model
domains and/or finer grid/mesh sizes to be used in 2D and 3D modelling applications.
Additionally, the various commercially available generic modelling packages can have quite
different approaches to modelling different flow characteristics such as flooding and drying,
super-critical flows and sub-grid scale processes. It is therefore important that the modeller
has an understanding of the capability and any potential limitations that the generic
numerical model may have for simulating the flow conditions in any particular application.
• The input of site specific data including: topographic data (cross sections and/or
topography) and bed-friction data, and
Once a site-specific model has been developed it must be calibrated and, where possible,
validated to ensure that it is capable of providing a reliable description of the flow
characteristics within the area of interest. This is described in the following section.
Most flood modelling is carried out using commercially available, tried and proven software packages. In these cases, it can be assumed that the modelling
software has been validated and that it is capable of solving the equations of motion
correctly. Nevertheless, it can be useful for inexperienced modellers to carry out their own
verification runs using standard test cases to provide confidence that they are operating the
model correctly. These verification runs could include reproduction of uniform channel flow
and/or reproduction of standard backwater or drawdown cases.
Modellers should be aware that all software packages have “bugs”, which can lead to
spurious results and/or instability, particularly when used in modelling high velocity flows and
in “non-standard” applications. Modellers should also be aware that different modelling
packages can use different forms of the under-lying equations, different numerical solution
procedure, and different approaches and assumptions to modelling special flow cases such
as super-critical and flows at structures. Here it is important for a modeller to be aware of the
assumptions that are used in the selected modelling package, and to be confident that they
are appropriate for the physical system to be modelled.
Finally, there is a tendency for modellers to be innovative and to use models in situations
well beyond the range of conditions for which they were originally developed. Typical
examples might include the use of a 2D model to simulate flows where there may be
significant vertical accelerations, or where there may be a significant three-dimensional
component to the flow. Caution should always be used when using a package in these types
of applications. Wherever possible, model results should be compared with analytical
results, physical model results, or more appropriate (e.g., 3D or CFD) model results to obtain
an estimate of the likely magnitude of the errors involved.
Model calibration is the process of comparing model results against measured flood levels
and extents and adjusting model parameters to obtain a “best-fit”. For flood studies, model
calibration is typically carried out on the largest flood for which reliable water level data is
available. In studies where more frequent flooding may be important, the model should also
be calibrated against measurements taken from a more frequent flood event. During the
calibration process, model parameters (typically bed-friction coefficients) are adjusted and
the model re-run until the results give the best reproduction of the measured data.
In the first instance, the calibration process is also used to identify any inconsistencies in the
model terrain data and boundary conditions. If after repeated efforts, it is not possible to
obtain a reasonable representation of the measured data or, if this can only be achieved by
the use of physically unrealistic input parameters, then it will be necessary to look more
closely at: the assumptions made in the selection of the generic mathematical model, the
appropriateness of the selected modelling package for reproducing the flow conditions under
consideration, and the reliability of the boundary conditions that have been applied to the
model.
Model Validation is the process whereby the calibrated model is used to simulate an
independent flood event to provide a check on the reliability of the calibration process. The
flood event will typically be somewhat lower than the calibration case and, in some cases,
the results may be used to further refine the calibration process.
In the early days of numerical hydraulic modelling, the models were limited to 1D and the
distinction between steady flow and unsteady flow models was quite marked. This was
because the Hydrologic Engineering Center of the US Army Corps of Engineers made its River Analysis System, HEC-RAS, a relatively sophisticated 1D steady flow model, freely available to anyone who wanted to use it. By comparison, unsteady flow modelling required modellers to
either have specialist modelling skills to develop or adapt research-based models, or to
purchase what was then quite expensive specialised modelling software packages. As a
result there was a tendency for modellers to “push” the use of HEC-RAS well beyond the
range of application for which it was originally designed.
With the advent of more readily available and relatively sophisticated 1D and 2D unsteady
flow models, the distinction between steady and unsteady flow models has become less
marked. This is because there is less reliance on specialised steady flow models and, where
required, much of the steady flow modelling is carried out by unsteady flow models using
steady flow boundary conditions.
1D steady flow models such as the 1D steady flow version of HEC-RAS (USACE, 2010) can be
used to compute flood profiles in a wide range of situations. This version of HEC-RAS is
based on the numerical solution of the profile equation presented in Book 6, Chapter 2,
Section 8. This in turn is based on the 1D energy equation. Manning’s equation is used to
compute energy losses due to bed-friction, and other contraction and expansion losses are
calculated using a coefficient times the change in velocity head. The momentum equation is
used in situations where there are rapid variations in the water surface profile, such as at
hydraulic jumps. The computations can also include empirical structure equations to
describe rapidly varying flows at a range of structures including bridges, culverts, weirs and
spillways.
When using steady flow models, such as HEC-RAS, or using unsteady flow models with
steady flow boundary conditions, it is noted that the main underlying assumptions of steady
flow are effectively that:
• The peak flood levels coincide with the peak discharge; and
• The peak discharge and corresponding flood levels occur simultaneously over the full
length of the reach of channel under consideration.
For these reasons, and for the reasons described in the following section, steady flow
models are best suited to modelling flows along relatively short reaches of river with well-
defined flow paths, and/or for modelling flows at structures. Steady flow models should not
be used to describe flows where there are:
• Flat channel slopes where wave propagation effects can become important;
• Wide floodplains and/or other features where storage effects may affect the flow; and
• Channel networks where the flow splits are not well defined.
The Manning equation for uniform flow can be considered as the simplest form of steady
flow model. Here the average velocity u across a cross-section can be related to a Manning
roughness coefficient n, the hydraulic radius Rh, and the bed slope S:
$u = \frac{1}{n} R_h^{2/3} S^{1/2}$ (6.4.1)
For a wide open floodplain, the hydraulic radius can, to a first approximation, be replaced by
the flow depth y:
$u = \frac{1}{n} y^{2/3} S^{1/2}$ (6.4.2)
These equations can be used by modellers as a sanity check to ensure that their models are
providing results with flow velocities of the correct order of magnitude. For example, a
relatively smooth floodplain with n = 0.04, a moderate slope of S = 0.001, and a depth of y =
1.0m would be expected to have a flow velocity of approximately u = 0.8 m/s.
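This sanity check is easily scripted; the short sketch below reproduces the example quoted above (the function name is illustrative):

def manning_velocity(n, depth, slope):
    """Average floodplain velocity (m/s) from Equation (6.4.2), with the
    hydraulic radius approximated by the flow depth."""
    return (1.0 / n) * depth ** (2.0 / 3.0) * slope ** 0.5

# The sanity check quoted above: n = 0.04, S = 0.001, y = 1.0 m
print(round(manning_velocity(0.04, 1.0, 0.001), 2))   # approximately 0.79 m/s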
With rapid changes in flow, the inertial (acceleration) terms in the equations of motion
become important. (These terms are not included in steady flow models). Examples include
modelling of dam-breaks and structure operations, as well as modelling of reaches where
the time of propagation of a flood wave through the reach cannot be considered insignificant
relative to the duration of the flood hydrograph.
For flat channel slopes, flow velocities and, correspondingly, the effects of bed friction can
become small relative to the main time-varying acceleration terms in the equations of
motion. This is particularly true in lakes and estuaries. For this reason, the American Society of Civil Engineers (1996) recommends that unsteady flow modelling should be carried out for channel slopes less than about 4×10⁻³ and, depending on the study objectives, for slopes up to about 1×10⁻³.
Storage Effects
In reaches where there are significant storage effects, the rating curve on the rising limb of a
flood hydrograph will be different to the rating curve on the falling limb. This results in a
“looped” rating curve, as shown in Figure 6.4.2. Under these conditions the peak water level
will not correspond to the peak discharge. As a result, unsteady models should be used to
simulate these effects.
Figure 6.4.2. Storage Effects on the Rating Curve; (a) Flood Hydrograph, (b) Corresponding
Rating Curve (Cunge et al., 1980)
Channel Networks
Unsteady flow models have the ability to compute the distribution of flows throughout a
channel network or across a model domain, whereas steady flow models can only be run on the basis of pre-determined flow splits. In real-life situations, peak flows in tributaries
rarely correspond with the peak flow in the main channel and, in some cases, backwater
effects from one part of a network (e.g., a main channel) can cause time-varying flow
reversals in another part (e.g., a tributary). Unsteady flow models are required to simulate
these effects.
The main types of unsteady flow models in current use are:
• 1D models
• 2D models
• 3D models
These are described briefly below. More details on the use of 1D, 2D and coupled 1D/2D
models in flood modelling applications are then provided in the following sections.
4.5.1. 1D Models
1D flow models are based on the numerical solution of the Saint Venant equations for
describing gradually varying unsteady flow in one horizontal dimension. Early 1D models
required the main channel and flood plain of a river to be schematised as a single one-
dimensional channel. The use of these early 1D models was generally restricted to
modelling single river branches, or simply connected (dendritic) branched river systems. As
part of the evolutionary process of model development these relatively simple models have
been replaced by more sophisticated models that allow arbitrary connections of multiple
channel systems. In these models, floodplains can be represented as separate flow paths
and there can be multiple flow paths within a single floodplain. This makes it possible to
provide a somewhat more realistic description of the flows in real river and floodplain
systems.
1D models are computationally quick to run and are well suited to modelling flows along
well-defined channel and floodplain flow paths. Their more general use in flood studies has
been largely superseded by 2D models which can provide a much more detailed description
of flood flows in overbank areas. 1D models are still used in applications where large
numbers of multiple model runs are required and computational time requirements make 2D
modelling impractical. 1D models have also been integrated with 2D models in order to
make the most of the relative advantages of both types of model.
4.5.2. 2D Models
2D flow models are based on the numerical solution of the depth-averaged equations
describing the conservation of mass and momentum in two horizontal dimensions. These
equations assume that the flow velocity is uniform over the depth, both in magnitude and
direction. This assumption is reasonable in most floodplain applications where the flow depth
is relatively shallow with respect to the horizontal dimensions of the main physical features to
be resolved in the model.
The 2D model equations are solved at each active water grid point or mesh element over a
two-dimensional model grid or mesh. The computational domain may be a square,
rectangular or curvilinear grid, or may be a flexible mesh comprising triangular and/or
quadrilateral mesh elements. With these models, survey information for the area of interest is digitised onto the two-dimensional model grid or mesh. This is capable of providing a detailed description of the flow in floodplains and overbank areas.
2D models can have problems in providing adequate resolution of in-bank flows and,
compared with 1D models, are computationally demanding. With respect to the latter, it is noted
that the use of parallel processing coupled with multi-core computers and/or graphics
processing units (GPUs) has significantly enhanced the computational capability of 2D
models.
4.5.3. Coupled 1D/2D Models
1D and 2D models can be coupled in two ways. In the first, an overall 1D model of an extended reach of river may be coupled
dynamically to one or more detailed 2D model domains to provide a more detailed
description of the flows in local areas of interest. In the second, one or more 1D model
branches may be dynamically coupled within a 2D model domain to provide a better
description of in-bank channel flows, and flows through hydraulic structures such as culverts,
weirs and bridges. In some software packages, the 1D/2D coupling has also been extended
to include 1D pipe network models. This has extended the range of application of coupled
1D/2D models to providing a more detailed description of the flooding associated with urban
stormwater flows.
4.5.4. 3D Models
3D flow models are based on the numerical solution of the Reynolds-averaged Navier-
Stokes equations describing the conservation of mass and momentum in three-dimensions.
For most 3D river and estuarine modelling applications, these equations are simplified by
assuming that the pressure distribution is hydrostatic. This assumption is consistent with the
equations used in 1D and 2D modelling, described above. As for 2D models, the
computational domain of a 3D model may be formed using a square, rectangular or
curvilinear grid, or using a flexible mesh comprising triangular and/or quadrilateral mesh
elements. With 3D models there are additional grid cells or mesh elements in the vertical
dimensional for describing the variations in flow with depth.
The use of a full 3D model should be considered in cases where it is important to simulate
three-dimensional flow effects. These can include: stratified flows and wind-driven over-
turning circulations in lakes and estuaries, “helicoidal” flows around river bends, flows
associated with hydraulic structures, and flows where the water depth is of the same order of
magnitude as, or greater than, the horizontal dimensions of the main physical features to be
resolved.
Additionally, the numerical methods used to solve the equations of motion are generally
based on the assumption that the flow is sub-critical. As such, super-critical flows can
usually only be modelled through locally simplifying the momentum equation(s) and/or
through the addition of significant amounts of numerical dissipation. These approaches
make it possible to maintain the numerical computation through regions of super-critical flow.
Modellers should, however, be aware that these approaches only provide approximate
solutions to super-critical flow. Care should be taken when interpreting the results in these
regions, and in the transition zones between super-critical and sub-critical flow (and, in
particular, in the location and size of any resulting hydraulic jumps).
Computational Fluid Dynamics (CFD) refers to a class of models that are based on the
numerical simulation of the more generalised form of the Navier-Stokes equations. (See for
example, Roache (1998)). CFD can be used to model the details of non-hydrostatic flows,
including flows where there is no free-surface (e.g., internal flows within a structure). CFD
can also be used to model supercritical flows, as well as the transitions between super-
critical and sub-critical flows. CFD models require significantly more computing time than the
3D models discussed above and are more demanding with respect to model set-up and
boundary condition requirements. As such, the use of CFD in flood modelling is generally
limited to simulating the details of complex flows at or within hydraulic structures (e.g., flow
over a dam spillway or flow through a complex system of culverts).
CFD and the other types of model considered above typically operate in what is known as an
“Eulerian” reference frame. This involves the computation of the fluid flow relative to a fixed
model grid or mesh. By comparison, Smoothed Particle Hydrodynamics (SPH) uses a “Lagrangian” reference frame (See for example, Violeau (2012)). Here the fluid itself is broken down into small individual “particles” and, rather than using a fixed grid or mesh, the computation follows the movements of these particles. This makes SPH particularly well suited to modelling the dynamics of interactions at the water-air interface. For reliable results SPH requires the use of very large numbers of particles. SPH has similar disadvantages to CFD in relation to computational time and the demands associated with model set-up and the application of appropriate boundary conditions. As for CFD, the use of SPH in flood
modelling is generally limited to simulating the details of complex flows at or within hydraulic
structures.
Modern 1D unsteady flow modelling packages can be applied to open channel, river and floodplain systems, and can be used to simulate flows in a wide range of physical systems from steep river reaches to tidal estuaries. The capabilities of these models typically include most, if not all, of the following features:
• Hydraulic structures such as bridges, culverts, weirs, levees, etc., through the use of in-
built structure routines.
• Options for including user-defined structures for describing structures such as flow
regulation gates and pumps.
• Options for specifying simplified diffusive wave or kinematic wave approximations to the
equations of motion to improve computational speed, where appropriate
Figure 6.4.3. The main dependent variables for the 1D model equations
With two dependent variables it is necessary to have two equations to describe the flow in
terms of the stage height h and discharge Q at any given point in x and t. The equations
used are the Saint Venant equations. These equations describe the cross-sectionally
averaged conservation of mass and conservation of momentum. The momentum equation is
used in unsteady flow models in preference to the energy equation that is used in steady
flow models. This is because momentum is a vector and introduces directionality into the
computation. By comparison, energy is a scalar and cannot. Additionally, the momentum
equation can be used to maintain the computation through discontinuities such as hydraulic
jumps.
The cross-sectionally averaged mass equation can be written in expanded (non-conservation) form as:
$\frac{\partial A}{\partial t} + u\frac{\partial A}{\partial x} + A\frac{\partial u}{\partial x} = 0$ (6.4.3)
By combining the two spatial derivatives, the conservation law form of the cross-sectionally
averaged mass equation can be given as:
$\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0$ (6.4.4)
For modelling applications, this equation is transformed into terms of the required water level
h through the introduction of a storage width bst and, when a lateral inflow of ql per unit
length of channel is included, the mass equation becomes:
$b_{st}\frac{\partial h}{\partial t} + \frac{\partial Q}{\partial x} = q_l$ (6.4.5)
The corresponding cross-sectionally averaged momentum equation is:
$\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\beta\frac{Q^2}{A}\right) + gA\frac{\partial h}{\partial x} + gAS_f = q_l u_l$ (6.4.6)
Where: A is the cross-sectional area, g is the gravitational acceleration, Sf is the friction slope, β is the momentum correction factor, and ql is a lateral discharge with a downstream velocity component ul relative to the velocity of the main stream.
With the introduction of a bed friction term based on Manning’s equation, the momentum
equation can be expressed as:
$\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\beta\frac{Q^2}{A}\right) + gA\frac{\partial h}{\partial x} + \frac{gn^2Q^2}{AR^{4/3}} = q_l u_l$ (6.4.7)
Where: n is Manning’s roughness coefficient and R is the hydraulic radius (defined as the
cross-sectional area divided by the wetted perimeter)
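As an illustration of how the Manning-based friction term in Equation (6.4.7) is typically evaluated, a minimal sketch is given below. The signed Q·|Q| form, the function name and the example values are illustrative assumptions rather than requirements of the guideline.

def friction_slope(q, area, wetted_perimeter, n):
    """Manning-based friction slope Sf = n^2 * Q * |Q| / (A^2 * R^(4/3)).

    This is the term that appears, multiplied by g*A, in Equation (6.4.7).
    The signed Q*|Q| form (equal to Q^2 for downstream flow) keeps the
    friction force opposing the flow direction if the flow reverses.
    """
    r = area / wetted_perimeter                      # hydraulic radius
    return n ** 2 * q * abs(q) / (area ** 2 * r ** (4.0 / 3.0))

# Illustrative example: 150 m^3/s in a channel with A = 60 m^2, P = 25 m, n = 0.035
print(round(friction_slope(150.0, 60.0, 25.0, 0.035), 5))   # approximately 0.0024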
Equation Assumptions:
The Saint Venant equations for unsteady flow, as described above, are based on the
following assumptions:
• The flow is one-dimensional (i.e., the flow velocity is uniform and the water surface is
horizontal across each cross-section).
• The pressure is hydrostatic (i.e., streamline curvature is small and vertical accelerations
can be neglected).
• The effects of bed friction and turbulence can be included through resistance laws (e.g.,
Manning’s equation) that have been derived for steady flow conditions.
• The flow is nearly horizontal (i.e., the average channel bed slope is small)
Unsteady flow models based on the numerical solution of the full de Saint Venant equations
are sometimes called “dynamic wave” models.
Writing the water level h as the sum of the bed level and the flow depth y, and denoting the bed slope by $S_0$, the water surface slope can be expressed as:
$\frac{\partial h}{\partial x} = \frac{\partial y}{\partial x} - S_0$ (6.4.8)
Ignoring the effects of lateral inflows for the purpose of this exercise, the momentum
equation can then be rewritten as:
$\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\beta\frac{Q^2}{A}\right) + gA\left(\frac{\partial y}{\partial x} - S_0 + S_f\right) = 0$ (6.4.9)
In the diffusive wave approximation, the local acceleration term $\partial Q/\partial t$ and the convective acceleration term $\partial(\beta Q^2/A)/\partial x$ are neglected, and the momentum equation reduces to:
$\frac{\partial h}{\partial x} = \frac{\partial y}{\partial x} - S_0 = -S_f$ (6.4.10)
That is, the water surface slope is balanced by the friction slope.
This approximation includes backwater effects, but the “dynamic” or wave propagation
effects associated with the “inertial” acceleration terms have been excluded. As a result, a
model based on the diffusive wave equation does not have the same stability constraints as
an equivalent full dynamic wave model, and can use much larger time steps.
The diffusive wave approximation is valid for describing gradually varying flows in reaches
with moderate to steep slopes. It should not be used for rapidly varying flows such as in
dam-breaks, or in reaches with flat-bed slopes, including lakes and estuaries, where the
acceleration terms become important. Diffusive wave models are sometimes used to
describe regional scale flows where the use of larger time steps can provide significant
reductions in the amount of computation required.
In the kinematic wave approximation, it is assumed that the momentum equation can be
further reduced to:
$S_f = S_0$ (6.4.11)
That is, the friction slope is equal to the bed slope. Backwater effects are excluded from this
approximation and water can only flow downstream.
The kinematic wave approximation is only valid for describing gradually varying flows in
reaches with moderate to steep slopes where bed-friction dominates and backwater effects
can be neglected. As such it is not well suited to general modelling applications. The
kinematic wave approximation is very stable and is sometimes used to describe super-
critical flows in localized regions of an otherwise dynamic wave model.
The computation can be carried out on a “staggered” grid where water level h and discharge
Q grid points are specified alternately along each model branch, or on a “non-staggered” grid
where the water level h and the discharge Q are specified at each grid point, as shown in
Figure 6.4.4. Grid points are typically allocated to each surveyed cross-section; water level h
points for staggered grids, and combined water level h and discharge Q grid points for non-
staggered grids. If necessary, additional intermediate grid points may be allocated between
cross-sections. For the staggered approach, discharge Q grid points are located midway
between adjacent water points.
Implicit finite difference procedures are typically used to solve the equations of motion along
each model reach. The scheme first attributed to Abbott and Ionescu (1967) is commonly
used with staggered grid models. With this scheme, numerical approximations to the mass
equation are centred on each water level h grid point, while numerical approximations to the
momentum equation are centred on each discharge Q grid point. The Preissmann
(1961) scheme is generally used in non-staggered grid models. With this
scheme, the numerical approximations to both the mass and momentum equations are
centred on the mid-point between each combined water level h and discharge Q grid point.
The staggered Abbott and Ionescu (1967) scheme has some advantages with the way in
which structures can be incorporated. This is discussed briefly below. By comparison, the
non-staggered Preissmann (1961) scheme has the advantage that the model grid size can
be varied from one grid point to another with no loss of numerical accuracy.
Initial Conditions
Before starting a model simulation, initial water level h and discharge Q values must be
specified at each water level and discharge point, irrespective of whether a staggered or
non-staggered grid is being used. Depending upon the particular modelling package being
used, these initial values can be obtained from:
• Hot start conditions obtained from the results of an earlier model run;
• A combination of user defined and auto-start conditions (e.g., using user defined
conditions in initially dry flood plains).
Boundary Conditions
The main input to the model is generally provided by the boundary conditions. These must
be specified at each upstream or downstream open boundary. In most flood studies, the upstream boundaries are specified as inflow (discharge) hydrographs and the downstream boundaries as water levels or stage-discharge relationships.
Model Structures
1D hydraulic models typically incorporate a range of structure formulations for including flow
control structures within a model. These may include:
• Weirs: for describing flows over weirs, levees, road and rail embankments and
overtopping of bridges, etc. A range of weir formulations may typically be available for
describing different weir characteristics (e.g., broad-crested or sharp-crested) and different
flow combinations (e.g., free overflow or drowned flow). Special weir formulations may
also be included through which user-defined flow relationships can be specified.
• Culverts: for describing flows through culverts, bridges and pipes. Different formulations
are used for a range of different upstream and downstream controlled flow conditions.
User-defined culvert relationships can also be used.
• Regulating structures: where flows at structures such as gates or pumps can be specified
as a level or discharge at another point in the model.
For simplicity, the remaining discussion has been limited to the use of staggered grid
models. In these models, structures are located at discharge grid points, and water level grid
points (and therefore cross-sections) must be specified immediately upstream and
downstream of the structure, as shown in Figure 6.4.5. For these cases, the momentum
equation at the discharge grid point is replaced by a structure equation. Flow through the
structure then becomes a function of the upstream and/or downstream water levels,
depending on the flow conditions (e.g., free overflow or drowned flow for weirs, or inlet or
outlet control for culverts).
Multiple structures can be defined at a single discharge point. For example, a bridge which
can be overtopped can be described by the combination of a culvert, for normal flow
conditions, and a weir, for overtopping flows.
Floodplain Flows
The treatment of floodplain flows can be very different depending on the main characteristics
of the river channel and floodplain system. For floodplains that drain naturally to the river
channel, as shown in Figure 6.4.6(a), the water level can be the same in both the river and
the floodplain. Under these conditions the effects of storage and flow along the floodplain
can be included directly in a single combined river channel and floodplain branch.
For floodplains where the river flows spill out over natural or man-made levees, as shown
in Figure 6.4.6(b), the water level in the floodplain can be very different to the water level in
the main channel. Under these conditions, the effects of storage and flow along the
floodplain should be incorporated in a separate model branch. Flow exchange between the
river channel and floodplain branches can then be controlled by a link branch with a broad-
crested weir representing the levee bank.
• Survey cross-sections at representative locations across the river channel and floodplain.
• Survey levels along flow controls such as levees, weirs and road embankments.
Additionally, data on historic flood levels is required for model calibration. Ideally, a 1D model
should be calibrated against water level hydrographs at various locations throughout the
model area. This will provide a measure of how well the model can reproduce the timing of a
flood and the shape of the hydrograph. In many cases, however, water level hydrographs of
historic flood events do not exist. Consequently, most models are calibrated against peak
water levels surveyed after a flood.
Figure 6.4.8 shows a plan view of a branched 1D model of the Lindsay and Murray River
system in northwest Victoria and southwest New South Wales. The 1D model is able to
provide a good description of the flows along the main channels of the Lindsay River in the
south, the Murray River to the north, and a range of older flow paths across the flood plain,
including Mullaroo Creek. Figure 6.4.9 shows a water surface profile along one particular
branch of the model.
Figure 6.4.8. A Branched 1D Model Network of the Lindsay and Murray River Systems
The main disadvantages of 1D models are that they are based on cross-sectionally
averaged one-dimensional equations of motion. As such:
• The flow paths are by definition 1D and no information is available on the distribution of
flows within individual flow paths.
• Losses due to two-dimensional effects such as bends, flow separations, etc., must all be
lumped into the bed-friction parameter (making detailed calibration essential).
• There can be problems in interpreting model results for mapping flood extents and depths
of inundation.
As such, 1D models are not well suited to modelling the details of flood flows within the
floodplain.
Mass
$\frac{\partial\varsigma}{\partial t} + \frac{\partial(d.u)}{\partial x} + \frac{\partial(d.v)}{\partial y} = \text{Sources} - \text{Sinks}$ (6.4.12)
x-Momentum
$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial\varsigma}{\partial x} = -\frac{gu\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + \text{Source/Sink}$ (6.4.13)
y-Momentum
$\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + g\frac{\partial\varsigma}{\partial y} = -\frac{gv\sqrt{u^2 + v^2}}{C^2 d} + E\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) + \text{Source/Sink}$ (6.4.14)
This coupled system of equations provides the three equations necessary to solve for the
three dependent variables; ς the free surface elevation, u the velocity in the x direction and v
the velocity in the y direction. It is noted that in this secton and the following sections, d.u
and d.v refer to the depth d multiplied by the x-velocity component u and the y-velocity
component v, respectively.
The mass and momentum equations include sources and sinks for describing the effects of
localised inflows and outflows. The x and y momentum equations also include a quadratic
Chezy-type friction formulation and a simple eddy viscosity formulation (with eddy viscosity
coefficient E). For practical modelling applications, the Chezy coefficient C can be related to
the more usual (for Australian applications) Manning’s “n” by the Strickler relation:
$C = d^{1/6}\, n^{-1}$ (6.4.15)
$M = n^{-1}$ (6.4.16)
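A one-line conversion corresponding to Equation (6.4.15) is sketched below; it assumes the local flow depth is used as the length scale, whereas some packages use the hydraulic radius, and the function name is illustrative.

def chezy_from_manning(n, depth):
    """Chezy coefficient C (m^0.5/s) from Manning's n via the Strickler
    relation of Equation (6.4.15), using the local flow depth as the
    length scale (some packages use the hydraulic radius instead)."""
    return depth ** (1.0 / 6.0) / n

# Illustrative example: n = 0.03 in 2.0 m of water
print(round(chezy_from_manning(0.03, 2.0), 1))   # approximately 37.4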
For modelling large expanses of open water, such as in lakes and estuaries, these 2D model
equations can be extended to include the effects of wind shear and/or Coriolis forces.
4.7.2. Assumptions
In the derivation of the 2D model equations, it has been assumed that:
• The pressure is hydrostatic (that is, vertical accelerations can be neglected and the local
pressure is dependent only on the local depth).
• The flow can be described by continuous (differentiable) functions of ς, u and v (that is, the flow does not include step changes in ς, u and v).
• The flow is two-dimensional (that is, the effects of vertical variations in the flow velocity
can be neglected).
• The flow is nearly horizontal (that is, the average channel bed slope is small).
• The effects of bed friction can be included through resistance laws (e.g., Manning’s
equation) that have been derived for steady flow conditions.
The 2D model equations can also be written in conservation law form, with the depth-averaged momentum fluxes d.u and d.v used as the dependent variables:
Mass
∂ς/∂t + ∂(d.u)/∂x + ∂(d.v)/∂y = Sources − Sinks (6.4.17)
x-Momentum
∂(d.u)/∂t + ∂/∂x (d.u² + ½ g.d²) + ∂(d.u.v)/∂y = − g u √(u² + v²)/C² + E.d (∂²u/∂x² + ∂²u/∂y²) + Source/Sink (6.4.18)
y-Momentum
∂(d.v)/∂t + ∂(d.u.v)/∂x + ∂/∂y (d.v² + ½ g.d²) = − g v √(u² + v²)/C² + E.d (∂²v/∂x² + ∂²v/∂y²) + Source/Sink (6.4.19)
This is effectively the form of the equations that provides the basis of the finite difference
solution procedure used by DHI (2005). A distinct advantage of the conservation law
formulation of the equations is that, when the depth average momentum fluxes d.u and d.v
are used as the dependent velocity variables, the mass equation becomes linear and,
barring coding errors, the numerical solutions should remain mass conservative.
Finite Difference Methods: With finite differences, the differential forms of the equations of
motion described above are used directly. The dependent water level and velocity variables
are defined at individual grid points on a structured rectilinear or curvilinear grid. Spatial
derivatives are approximated by taking arithmetic differences between the dependent
variables at adjacent grid points, while time derivatives are approximated by taking arithmetic
differences between the variables at different time levels. The main advantages of finite
difference methods are that they are relatively simple to implement and are easy to use. The
main disadvantage is that complex geometries cannot be readily resolved without the use of
fine scale grid resolution. Finite difference techniques for providing solutions to the shallow
water wave equations have been described by, for example, Abbott (1979); Stelling (1984);
and Abbott and Basco (1989).
Finite Element Methods: With finite elements, the equations of motion are transformed into
integral formulations. Weighting or trial functions are introduced and the resulting equations
are then solved numerically over a mesh of regular or irregularly shaped elements. The
shapes of these elements are typically triangles and/or quadrilaterals, but can take other
forms. Finite element methods provide solutions that are smooth and continuous over each
element and which have matching values at the interfaces between elements. One of the
main advantages of finite elements is that the integral formulation of the equations does not
require a structured mesh. This makes it possible to use an unstructured flexible mesh which
can be aligned with the local flow direction, or which can provide greater resolution in
particular areas of interest. The main disadvantage with flood modelling with standard finite
element techniques is that mass is not necessarily conserved. Finite element techniques for
providing solutions to the shallow water wave equations have been described by, for
example, Connor and Brebbia (1976); and Zienkiewicz et al. (2014).
Finite Volume Methods: Finite volume methods are similar to finite elements in that they
use integral formulations of the equations of motion and can be solved over an unstructured
flexible mesh of irregularly shaped (typically triangular and/or quadrilateral) mesh cells. The
main differences are that finite volume methods use integral forms of the conservation law
form of the equations of motion and each mesh cell is treated as a control volume
represented by volume-averaged values of the conserved variables (mass and momentum).
The rates of change of these conserved variables are derived by integrating the cell-
interface fluxes. A key step in these methods involves calculating the numerical fluxes at the
cell interfaces. As for finite elements, the integral formulation of the equations used in finite
volumes makes it possible to use unstructured flexible meshes that can be aligned with the
local flow directions and which can provide greater resolution in particular areas of interest.
The main advantage of finite volumes over finite elements is that mass is conserved. Finite
volume techniques for providing solutions to the shallow water wave equations have been
described by, for example, LeVeque and Bale (2012).
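To illustrate the control-volume idea in one dimension, the sketch below updates cell-averaged depths from upwind fluxes evaluated at the cell interfaces; because each interface flux is added to one cell and removed from its neighbour, total volume is conserved. This is a simplified, illustrative scheme (advection of depth only under assumed velocities), not the full shallow water solver of any particular package.

```python
# A minimal 1D finite volume sketch: each cell is a control volume whose
# stored volume changes only through the fluxes at its two interfaces.
import numpy as np

nx, dx, dt = 50, 10.0, 1.0            # assumed cell count, cell size (m), timestep (s)
d = np.ones(nx)                       # water depth per cell (m), initially uniform
d[20:25] = 2.0                        # an assumed local mound of water
u_face = np.full(nx + 1, 0.5)         # assumed interface velocities (m/s)
u_face[0] = u_face[-1] = 0.0          # closed boundaries: no flux in or out
initial_volume = d.sum() * dx

for _ in range(100):
    # upwind interface flux q = u * d_upwind (m^2/s per unit width)
    q = u_face[1:-1] * np.where(u_face[1:-1] >= 0.0, d[:-1], d[1:])
    flux = np.concatenate(([0.0], q, [0.0]))
    d -= dt / dx * (flux[1:] - flux[:-1])   # control-volume update

print("volume change:", d.sum() * dx - initial_volume)   # ~0, i.e. mass conserved
```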
Finite difference methods have been used extensively in the historical development of
numerical 2D flood modelling practice (e.g., Bishop et al. (1995); Stelling et al. (1998);
McCowan et al. (2001) and Syme (2001)) and are still used widely today. Early flexible mesh
flood model development was mostly carried out using finite element techniques (e.g., King
and Roig (1988)). However, due to potential mass conservation issues, more recent flexible
mesh model development has focussed more on finite volume techniques. In this respect, it
is noted there are also hybrid approaches that combine the finite element and finite volume
schemes. Further, the conservation law formulation of the shallow water wave equations can
be used to develop finite difference solution procedures that apply the finite volume
approach to a structured rectilinear or curvilinear grid.
The way in which these different solution procedures move forward in time can be described
as being either “explicit” or “implicit”. In this respect it is noted that the “Courant” number Cr
is a key parameter for defining the differences between explicit and implicit solution
procedures. The Courant number expresses the number of grid or flexible mesh cells that flow information can travel across in one timestep. The Courant number Cr can be defined as:
Cr = (u + √(g.d)) Δt / Δx (6.4.20)
where Δt is the model timestep, Δx is the grid or mesh cell size, u is the flow velocity and d is the water depth.
With an explicit solution procedure, the solutions to the water surface elevations and flow
velocities at the new timestep are computed directly (explicitly) as a function of the known
values at the old timestep. Explicit schemes tend to be computationally efficient, but have a
stability constraint that information can only travel a maximum of one grid/mesh element in a
single timestep. That is, that the Courant number must always be less than or equal to one
(i.e., Cr ≤ 1). This provides a stability constraint on the timestep Δt, that is commonly called
the “Courant” stability criterion, where:
Δtmax ≤ [Δx / (u + √(g.d))]min (6.4.21)
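A minimal sketch of applying Equation (6.4.21) is given below; the depths, velocities and cell size are assumed values chosen only to illustrate how the most restrictive cell controls the explicit timestep.

```python
# A minimal sketch of the Courant stability criterion (Equation 6.4.21):
# dt <= dx / (u + sqrt(g*d)) at the fastest/deepest cell. Values are assumed.
import numpy as np

g = 9.81
dx = 10.0                                  # assumed grid/mesh cell size (m)
depth = np.array([0.5, 2.0, 6.0])          # assumed water depths (m)
speed = np.array([0.5, 1.5, 2.5])          # assumed flow speeds (m/s)

wave_speed = speed + np.sqrt(g * depth)    # u + sqrt(g d) for each cell
dt_max = (dx / wave_speed).min()           # most restrictive cell governs
print(f"maximum stable explicit timestep ~ {dt_max:.2f} s")
```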
By comparison, in an implicit solution procedure, the water surface elevations and flow
velocities at the new timestep are expressed as a combination of both the known values at
the old timestep and adjacent unknown values at the new timestep. As a result, the solutions
at one grid/mesh element are linked to those in the neighbouring cells. These solutions are,
in turn, linked to those in their neighbouring cells, and so on. In this way, the solutions to the
discrete numerical approximations to the mass and x and y momentum equations for each
grid/mesh cell are linked “implicitly” to those in every other cell over the entire model domain.
This approach allows flow information to travel much further than one grid point per timestep.
As a result, the Courant stability criterion does not apply to implicit solution procedures and
model time steps can be determined more by accuracy requirements rather than by stability
constraints.
Most of the early 2D model developments were for coastal and marine applications where
the use of high Courant numbers was found to provide significant computational
advantages. Following Leendertse (1967) many of these models used finite difference
schemes which used what is known as an “alternating direction implicit” or ADI algorithm.
This approach involves the use of a series of implicit 1D sweeps alternating along x-grid
lines and y-grid lines which is much simpler to implement than a fully two-dimensional
implicit solution procedure.
The ADI approach can be shown to be independent of the Courant stability criterion and has
been used extensively in two-dimensional flood modelling (e.g., Stelling et al. (1998);
McCowan et al. (2001) and Syme (2001)). It should be noted, however, that the ADI
approach is not directly equivalent to a fully two-dimensional implicit scheme. Although the
timestep is not subject to the Courant stability criterion, it is subject to accuracy constraints,
particularly when the solution involves flow in relatively narrow channels at an angle to the
grid (Benque et al., 1982). In practice, the timestep should be selected such that the Courant
number is less than the minimum number of grid cells used to describe the width of a
channel. In many cases this tends to restrict the timestep such that the Courant number is of
order Cr ≈ 1 in narrow channels, although higher Courant numbers can occur in other parts
of the model.
With the advent of multiple-core processors and graphics processing units (GPUs) it became
possible to carry out multiple computations in parallel. This has provided significant
increases in computational speed for models with code that could be readily “parallelised”.
The direct explicit relationships between water surface elevations and flow velocities at the
new timestep and the corresponding values at the old time step make explicit solution
procedures particularly well suited to parallel processing. As a result, much of the recent
development in 2D flood modelling has focussed on explicit finite volume numerical solution
procedures. These models have the capability to increase the
effective computational speed by one to two orders of magnitude, depending upon what
features are, or are not, included in the model.
To illustrate this effect, a simple first-order finite difference approximation to the u ∂u/∂x
"convective momentum" term in the x-momentum equation has been examined more closely.
This term is to be discretised on a square “staggered” grid similar to that shown in
Figure 6.4.10.
For a given timestep nΔt and x grid line at kΔy, the term u ∂u/∂x can be approximated by:
(u ∂u/∂x)_j ≈ u_j (u_j − u_(j−1)) / Δx (6.4.22)
With this approximation, the continuous derivative of the velocity u with respect to x is
approximated by the linear gradient of the velocity between the grid point at which the
derivative is to be taken, and the grid point immediately upstream. As a result, this approach
is often termed “upwind” differencing. The errors associated with this approximation can be
determined by using Taylor's Series to expand the terms on the right hand side of this equation in terms of the velocity u at the centre-point, jΔx. This results in the following expression:
u_j (u_j − u_(j−1)) / Δx = (u ∂u/∂x)_j − (u_j Δx / 2)(∂²u/∂x²)_j + (u_j Δx² / 6)(∂³u/∂x³)_j − … (6.4.23)
This shows that the discrete finite difference approximation is equal to the original continuous partial differential term u ∂u/∂x, but with additional second, third, fourth and higher order truncation error terms. Further, it can be seen that the truncation error includes a second-order term which is of the same form as one of the eddy viscosity terms in the x-momentum equation. That is, upwinding of the convective momentum term u ∂u/∂x can be seen to be equivalent to introducing a numerical eddy viscosity term with, in this case, an eddy viscosity coefficient of:
E_num = u Δx / 2 (6.4.24)
This numerical eddy viscosity coefficient is grid size and flow velocity dependent. For typical
floodplain flow conditions and model dimensions, the numerical eddy viscosity introduced by
first-order upwind differencing can be an order of magnitude or more greater than the
corresponding physically realistic values. Issues associated with first-order upwind
differencing of the convective momentum terms are discussed in detail by Leonard (1979).
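The sketch below evaluates Equation (6.4.24) for a few assumed cell sizes and compares the result with an assumed, order-of-magnitude physical eddy viscosity; the values are illustrative only, but show how quickly the numerical contribution can dominate.

```python
# A minimal sketch of the numerical eddy viscosity implied by first-order
# upwinding, E_num = u * dx / 2 (Equation 6.4.24), compared with an assumed
# order-of-magnitude physical depth-averaged eddy viscosity.
u = 1.0            # assumed floodplain flow velocity (m/s)
e_physical = 0.5   # assumed physical eddy viscosity (m^2/s), illustrative only

for dx in (2.0, 10.0, 30.0):               # assumed cell sizes (m)
    e_num = u * dx / 2.0
    print(f"dx = {dx:5.1f} m : E_num = {e_num:5.1f} m^2/s "
          f"({e_num / e_physical:.0f}x the assumed physical value)")
```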
Similar truncation error terms can be developed for numerical approximations to the other
terms in the mass and momentum equations. The properties of these truncation errors can
have a significant effect on the accuracy and stability of the numerical procedures being
used.
From the above, it can be seen that first-order schemes have second-order truncation errors
that are proportional to the grid or mesh size Δx (or timestep Δt for time derivatives). In a
similar manner, it can be shown that second-order schemes have third-order errors that are
proportional to the square of the grid or mesh size Δx² (or of timestep Δt²). If the grid or
mesh size Δx and timestep Δt are treated as fractions of representative length and time
scales of the flood under consideration, the truncation errors of a second-order scheme can
be shown to reduce quadratically with decreasing grid size and timestep. That is, the finer
the grid or mesh size or shorter the timestep, the smaller the numerical truncation errors.
In general, it can be said that first-order schemes are “diffusive”. That is, they tend to damp
out sharp gradients in water levels and flow velocities. The artificially high levels of numerical
eddy viscosity, discussed above, help to smooth out flow irregularities and make the model
calculations very stable. However, the unrealistically high levels of smoothing result in the suppression of flow separations and eddy formation, and make it impractical to compute channel/overbank interactions, where the specification of appropriate eddy viscosity coefficients becomes important.
Figure 6.4.11. Figures showing (a) flow separation and eddy formation, and (b) suppression of flow separation with numerical eddy viscosity from a first-order upwind scheme
By comparison, second-order schemes are “dispersive”. That is, high frequency components
of flood flow can travel at different speeds. This is not normally a problem with the
propagation of flood waves which tend to be very long (low frequency) with respect to the
model grid or mesh size and timestep. As a result, second-order schemes are generally well
suited to modelling the wave propagation properties of flood flow. They are, however, not as
well suited to modelling high velocity flows and flows where there are strong velocity
gradients. Under these conditions, the convective momentum terms in the momentum
equations become more important and the use of second-order schemes can result in
artificial “zig-zagging” of the flow and non-linear instabilities (Abbott and Rasmussen, 1977).
As a result, Leonard (1979) advocates the use of higher third-order schemes for modelling
transport dominated flows.
Stelling et al. (1998) describe a finite difference model based on first-order upwind
differences. The model provides smooth stable solutions for high velocity flows. However, as
discussed above, the high levels of numerical diffusion (numerical eddy viscosity) may result
in reduced accuracy under flow conditions where the use of more physically realistic values
of eddy viscosity may be required. To avoid this problem, McCowan et al. (2001) use a
second order scheme for the main part of the computation, but gradually introduce first-order
upwinding in localised areas where the Froude number exceeds Fr = 0.25. In both cases,
the numerical diffusion introduced in this way is sufficient to maintain the numerical
computation through regions of super-critical flows.
As an alternative, BMT WBM (2008) uses the kinematic wave approximation to describe flow
conditions in regions with super-critical flow. This approach is reasonable for most flooding
applications as super-critical flows are upstream controlled and are normally friction
dominated. However, whenever simplified forms of the equations are used, it is important for
the modeller to understand their limitations, and care may be required in interpreting the
results, particularly in transition areas.
It is noted that the full set of the equations should always be used for describing flows in relatively flat river reaches, and in regions of relatively flat, deep water such as in estuaries and lakes.
Clearly, there is a wide range of models with quite different solution procedures with varying
orders of accuracy available to the numerical flood modeller. It is important for the modeller
to be aware of the type of solution procedure being used, and of any constraints that this may impose on the timestep, model accuracy, and on the way in which the computation
is carried out.
In addition to the numerical solution procedure, the following aspects of setting up and running a 2D model are discussed later in this section:
• Initial Conditions
• Boundary Conditions
• Hydraulic Structures
In practice, the selection of the Generic Numerical Model or modelling software package to
be used can be limited to the software available within the modeller’s organisation or, in the
case of a consulting project, may even be specified externally by the client. Where a choice
is available some of the key factors to be considered in selecting the particular software to
be used include:
• the experience of the modeller with the software;
• whether a finite difference, finite element or finite volume solution technique is used; and
• the choice of a structured grid or an unstructured flexible mesh.
These factors are discussed briefly below.
Modeller Experience: The skill level and experience of the modeller is an important factor in
determining the success of any modelling exercise and should be taken into account when
selecting the software package to be used.
Finite Difference vs Finite Volume: A big advantage with finite volume models and finite
difference models based on finite volume techniques is that, barring coding errors, they
conserve mass. This is a very important property in a flood model, as any loss or gain of
mass caused by the approximations made in the numerical solution procedure can invalidate
the model results. It should be noted that this does not preclude the use of finite difference
models. It only means that care should be taken when using finite difference models to
ensure that there is no significant loss or gain of mass during the model simulations.
Structured Grid vs Unstructured Flexible Mesh: Until relatively recently, most of the
commercially available flood models operated with a structured rectilinear (square or
rectangular) grid. The advantages of these structured grid models are that they are easy to
set-up and can be very computationally efficient. A disadvantage is that they do not provide
a good description of flows along relatively narrow channels aligned at an angle to the grid,
as shown in Figure 6.4.12(a). To get adequate resolution of these flow paths may require
reduction in the model grid size, as shown in Figure 6.4.12(b), with a corresponding
significant increase in computational requirements.
Figure 6.4.12. Showing (a) poor resolution of a sinuous flow path on a coarse square grid,
and (b) improved, but still relatively poor resolution at a finer grid scale.
Figure 6.4.13. Showing (a) improved resolution of a sinuous flow path using a flexible mesh,
relative to (b) a fine square grid.
Model Schematization
The first task in schematizing a model is to decide on the model domain. This is the area to
be included within the model. It should include the main area of interest and should extend
out far enough to include all the areas that are likely to be inundated in the most extreme
flood to be considered. It should also extend sufficiently upstream and downstream such that
any irregularities in the flow at the model boundaries will not affect the model results in the
main area of interest. Wherever possible, the model boundaries should be located in areas
of relatively uniform flow.
For structured grid models, a square or rectangular grid is overlaid on the model domain. This
grid should, where possible, be aligned with the main flow direction. The grid size should be
selected to provide adequate resolution of the main features to be modelled (e.g., channels,
levees, bridges, etc). Care should be taken to ensure that the main flow paths to be
considered can be adequately described, particularly when they are aligned at an angle to
the grid.
For unstructured flexible mesh models, a model mesh covering the model domain must be
developed. This mesh may typically be formed using triangular mesh cells, or a combination
of triangular and quadrilateral mesh cells, although some software allows the use of other
shapes as well. As seen in Book 6, Chapter 4, Section 7, one advantage of the flexible mesh
approach is that the mesh cells can be aligned to the main flow paths irrespective of their
orientation. Another is that finer cell sizes can be used to provide greater resolution in
particular areas of interest. Conversely, larger cell sizes can be used to reduce
computational requirements in less important areas.
Clearly, the smallest feature that can be resolved will be one grid or mesh cell wide.
However, if realistic simulation of flow separation and eddy formation behind structures such
as bridge abutments is required, then these structures will need to be resolved by a
minimum of 6 to 8 grid or mesh cells.
The selection of the appropriate cell size is generally a trade-off between model resolution
and computational requirements. In this respect it is noted that the computational time required by a model is roughly inversely proportional to the cube of the cell size. That is, halving the cell size could be expected to increase the computational time by a factor of about 8.
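The rule of thumb above can be checked with simple arithmetic, as in the sketch below; the baseline cell size and run time are assumed values for illustration.

```python
# A minimal sketch of the cube rule of thumb: refining the cell size roughly
# squares the cell count and linearly shrinks the allowable timestep, so run
# time grows with the cube of the refinement factor. Values are assumed.
base_cell, base_hours = 20.0, 2.0          # assumed baseline cell size (m) and run time (h)
for cell in (20.0, 10.0, 5.0):             # assumed trial cell sizes (m)
    factor = (base_cell / cell) ** 3
    print(f"cell = {cell:4.1f} m -> ~{base_hours * factor:5.1f} h run time")
```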
The time step is generally set to the largest value that can be used without affecting the
accuracy of the model results. For explicit models, the time step is set close to the maximum
allowable under the Courant stability criterion, discussed in Book 6, Chapter 4, Section 7.
Although implicit models are not affected by this constraint, their timestep is usually limited
by accuracy requirements, and the optimum timestep is typically determined by sensitivity
testing during model calibration.
Some of the key inputs to a 2D model include: the model topography, bed resistance values
and, where appropriate, the eddy viscosity formulation. These are discussed briefly below.
Topography: The model topography forms the basis of any 2D hydraulic model. It is the
numerical analogue of the actual terrain over which the water flows. It comprises survey data
(e.g., from a digital terrain model) that has been interpolated onto the model grid or mesh.
Following the initial interpolation process, some degree of manipulation may be required to
ensure that the main flow paths and flow controls, such as road embankments or levee
banks, have been adequately resolved. For flow paths that are less than a few cell widths
wide, some degree of schematization may be required to ensure that the model flow path
provides a realistic description of the flow path conveyance. This is particularly true for
structured grid models. Care taken in setting up the model topography can significantly
reduce the amount of time required to calibrate a 2D model.
Eddy Viscosity: Eddy viscosity is used in 2D models to represent the effects of turbulence
and sub-grid scale processes. The use of appropriate eddy viscosity values is necessary in
simulations where the realistic representation of flow separations and eddy formation, or of
momentum transfers between the main channel and overbank areas are important.
Depending on the software being used eddy viscosity coefficients can be described as
constant values, spatially varying values, or values computed internally by a turbulence
closure model. In this respect, it is noted that a “Smagorinsky-type” eddy viscosity
formulation has been found to be suitable for describing the effects of sub-grid scale
processes in many applications.
Initial Conditions
For any model computation to move forward in time there must be a known starting point.
For numerical hydrodynamic models this starting point is known as the model “initial
conditions”. The initial conditions necessary for two-dimensional hydrodynamic models
consist of water surface elevation ς and u and v velocity values at every grid/mesh element
that is "active" at the start of the computation. The initial conditions are typically specified in one of two ways:
• A “cold start”, where initial estimates of the water surface elevation ς values are made,
and the u and v velocity values are set to zero (i.e., no flow).
• A “hot start”, where the initial water surface elevation ς, and u and v velocity values are
specified from the results of a previous model simulation.
Cold Starts: In many flood modelling applications, the precise values of the initial conditions
are not that critical, provided the model computation starts in a reasonably realistic manner;
that is, relatively smoothly and with no initial instabilities. However, in applications where
there are lakes, wetlands, retarding basins, or other depressions that may provide initial
flood storage, it is important that the initial water surface elevations provide the correct
amount of initial storage in these areas. If the initial amount of water in these storage areas
is underestimated, this may cause the model to artificially attenuate the flood peak.
Conversely, if it is overestimated, the flood peak may be artificially enhanced.
Hot Starts: Hot starts are not used extensively in practice. Their main uses tend to be
limited to:
• Providing initial conditions for model simulations where significant computing time may be
required for the model “warm-up” period required. In these cases, the results of a single
prolonged “warm-up” simulation can be used to provide initial conditions for a subsequent
series of model simulations.
Urban Applications: In many urban applications, the area to be modelled may be initially
dry. In these cases, the initial water surface elevation ς values will typically be set to the
corresponding ground surface elevation z values, and the initial u and v velocity values set to
zero. This approach could be considered as a special case of the cold start. It is applicable
in model simulations where there is no initial overland flow, including direct rainfall on grid
applications.
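A minimal sketch of this dry-start initialisation is given below; the grid dimensions and ground elevations are stand-in values, and the arrays are generic rather than the data structures of any particular package.

```python
# A minimal sketch of a "dry start": the initial water surface is set to the
# ground level so every cell starts dry, and both velocity components are zero.
import numpy as np

ny, nx = 100, 150                               # assumed grid dimensions
z = np.random.uniform(10.0, 15.0, (ny, nx))     # stand-in ground elevations (m AHD)

wse = z.copy()                                  # initial water surface elevation = ground level
u = np.zeros((ny, nx))                          # initial x-velocity (m/s)
v = np.zeros((ny, nx))                          # initial y-velocity (m/s)
depth = wse - z                                 # all zeros: the whole domain starts dry
print("initially wet cells:", int((depth > 0.0).sum()))
```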
Boundary Conditions
With the initial conditions specified, the model boundary conditions are the remaining pieces
of necessary information required for the model computation to proceed. In this respect it is
noted that boundary conditions are required at every grid/mesh element along the model
boundaries. These boundaries include both external boundaries and internal boundaries,
where:
• External boundaries are located along the external edges or boundaries of the model,
where water can flow into or out of the model domain.
• Internal boundaries are located within the model domain, and include the interface
between wet (water) cells, where the computation is to be carried out, and dry (land) cells
where there is no computation.
The subject of boundary conditions for two-dimensional flow models is quite complex and
the following discussion is, of necessity, relatively superficial.
External Boundaries: The model requires boundary conditions in terms of either water
levels or discharges along both the upstream and downstream boundaries.
The upstream boundary conditions for a 2D flood model are generally provided by a
discharge hydrograph. This has to be converted to discharges or flow velocities and water
depths at each boundary cell. For this to be done, some assumptions need to be made with
respect to the distribution of the flows along the boundary, and to the direction of the flow.
Depending on the modelling package being used, the flow distribution along the boundary
may be computed internally using a range of possible assumptions (e.g., uniform flow or the
use of Manning’s equation). There may also be options for providing user specified flow
distributions and directions.
The downstream boundary conditions are generally specified in terms of water surface
elevations. These may be specified as a constant, a time series, or computed internally
using a rating curve. As a first approximation, it can be assumed that the water surface
elevations along the boundary are horizontal. As such, the water surface elevation specified
at each individual boundary cell will be the same. Depending on the modelling package
being used, the flow directions may be specified as being normal to the boundary, or there
may be options for them to be computed as a function of the upstream flow conditions, or
specified externally by the user.
Whatever forms of boundary conditions that are being used, it is important to recognise that
the model has no information regarding the flow conditions upstream or downstream of the
model boundaries. As such, it is important that, wherever possible, the model boundaries
should be located in areas where the flow is expected to be relatively uniform. The model
boundaries should also be placed far enough upstream and downstream of the area of
interest to ensure that any errors in flow distribution and/or direction do not have a significant
effect on the model results.
Internal Boundaries: There are two types of internal boundaries used in 2D models:
• The first occurs at the land/water interfaces within the interior of the model domain. These
internal boundaries can be considered as a special case of a velocity boundary where the
flow velocity between adjacent pairs of wet and dry cells is set to zero. The locations of
these internal boundaries are dynamic as different cells are brought into the computation
as the flood rises, and taken out again as the flood recedes.
• The second type of internal boundary occurs where there are source or sink terms (related
to local inflows and outflows), hydraulic structures, or where there are links to embedded
1D model branches.
Hydraulic Structures
Depending upon the particular software being used, hydraulic structures such as weirs,
culverts, bridges and regulating structures can be introduced into the model. This may be
done either by introducing a structure equation to replace the momentum equation between
two adjacent computational cells (similar to the 1D model approach), or by introducing a 1D
model branch containing the required structures (see Book 6, Chapter 4, Section 5) and
coupled to the 2D model domain via internal boundaries.
The main advantages of 2D models over 1D models are that:
• The floodplain flow paths do not need to be pre-determined by the user, as they are
computed directly as a function of the model terrain and the applied flows.
• The flow paths can change with increases in water level in much the same way as they do
in real life.
• Losses due to two-dimensional effects such as bends, flow separations, etc, are
automatically included within the computation, and do not need to be lumped into the bed-
friction parameter (as such, the bed-friction coefficients can be specified directly as a
function of bed-roughness only).
• Model results can provide details of the flow distribution within individual flow paths.
• Model results can be used directly for mapping flood extents and depths of inundation.
In the early days of full 2D modelling, the main disadvantages of the 2D models were that
they required significantly more survey data than 1D models, and that they were very heavy
computationally. The advent of LiDAR has, to some extent, overcome some of the survey
requirements. There have also been significant increases in computing power, which
combined with the introduction of parallel processors and GPUs has greatly increased the
computational capacity of modern computers. However, along with the increase in
computing capacity, there has been a tendency for modellers to use smaller cell sizes, to get
better resolution, and also to use much larger model domains. As a result, long simulation
times can still be an issue with 2D models. Further, the result files of these model runs can
become very large, and can make significant demands on data storage and processing
capability.
Another disadvantage of 2D models is that flow paths and channels can only be resolved at
the same scale as the model grid or mesh. Even when a channel may be several
computational cells wide, the in-bank flows may not be described as well as in a 1D model
with detailed cross-sections.
Figure 6.4.14 shows the topography of the floodplain region of the Lindsay and Murray River
system considered in Book 6, Chapter 4, Section 6, where the channel of the Murray River is
shown in white. To provide adequate resolution of the complex channel system throughout
the floodplain in a 2D model would require a relatively fine grid or mesh size resulting in very
large computational arrays and correspondingly long run times.
The coupling of 1D and 2D models can work in different ways depending upon the particular software package being used. These can include:
• The use of a 1D model to simulate an overall river and floodplain system dynamically
coupled to a 2D model providing detailed flow computations in particular areas of interest.
• The use of dynamically coupled 1D model branches to provide a better description of in-
bank channel flows within a 2D model domain. The coupling of these branches can
provide for the exchange of water between the 1D in-bank flows and the 2D model flood
plain flows.
• The use of dynamically coupled 1D model branch to introduce hydraulic structures (such
as weirs, culverts, bridges, etc.) into the 2D model domain.
A further extension of the integrated 1D/2D modelling concept has been the dynamic
coupling of 2D models with 1D pipe network models. This has significantly enhanced the
capabilities of 2D models for urban flood modelling applications.
Figure 6.4.15. Example of a Coupled 1D/2D Model of the Lindsay River System
With the direct rainfall approach, the rainfall-runoff process is simulated by applying rainfall
directly to each cell within the model domain. Losses are accounted for using different
approaches depending upon the software package being used. With the simplest approach,
the losses are applied directly to the rainfall with only a resulting rainfall “excess” being
applied to the model cells. More sophisticated approaches may use infiltration models
incorporated within the 2D modelling software and, ultimately, may involve coupling with a
groundwater model.
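The simplest of these loss treatments, an initial loss / continuing loss model applied to the hyetograph before the excess is applied to the cells, might look like the sketch below; the rainfall depths and loss values are assumed for illustration.

```python
# A minimal sketch of an initial loss / continuing loss treatment of rainfall,
# producing the rainfall excess to be applied to the 2D model cells.
import numpy as np

rain = np.array([2.0, 8.0, 15.0, 10.0, 4.0, 1.0])   # assumed rainfall per step (mm)
initial_loss = 10.0                                  # assumed initial loss (mm)
continuing_loss = 2.0                                # assumed continuing loss (mm per step)

excess = []
remaining_il = initial_loss
for r in rain:
    r_after_il = max(r - remaining_il, 0.0)          # satisfy the initial loss first
    remaining_il = max(remaining_il - r, 0.0)
    excess.append(max(r_after_il - continuing_loss, 0.0))

print("rainfall excess applied to cells (mm):", np.round(excess, 1))
```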
The use of direct rainfall on a 2D model makes it possible to simulate the rainfall-runoff
process, as well as the hydraulic routing of the resulting overland flows throughout the model domain. This provides a more realistic representation of catchment storage and runoff
effects. It is, however, essential to have good topographic data. The selection of appropriate
roughness coefficients is critical to the success of this approach (Muncaster et al., 2006).
Further, roughness values may need to be increased for describing shallow flows in rural
areas, or decreased to allow for more rapid runoff from roofs and some paved areas in
urban applications (Caddis et al., 2008).
The use of a 2D hydraulic model in this way integrates both the hydrologic and hydraulic
aspects of the rainfall-runoff process into a single model. The use of direct rainfall is,
however, an area of on-going research and care should be used when interpreting the model
results. Wherever possible models should be calibrated to measurements. Where calibration
is not possible (as in many cases), sensitivity testing should be carried out to assess the
sensitivity of the model to variations in the main model parameters.
Situations where three-dimensional flow effects may become important include:
• Modelling the details of flows at bridges, culverts and intake and outlet structures
• Modelling separation zones and wakes behind structures where the horizontal dimensions
of the structure, and of the cell size, are much smaller than the water depth
In these situations care needs to be taken in interpreting the model results. In some models
the effects of three-dimensional flows can be included through additional loss terms, or the
inclusion of sub cell-scale pier loss formulae. If, however, it is necessary to model the details
of these types of flow, then a full three-dimensional model should be used.
Situations where the hydrostatic pressure assumption becomes invalid include free
overflows of water over levees and embankments. In these cases the hydrostatic pressure
assumption can lead to a significant over-estimation of the overflows. Depending upon the
software being used, these effects can be overcome by incorporating weir equations into the
computation, or by widening the model description of the levee or embankment to include
two cell widths.
The process of establishing confidence in a hydraulic model typically involves:
• Validation: to confirm that the modelling software is doing what it is supposed to do.
• Calibration: the process of adjusting model parameters to obtain a best fit with measured
flood data.
• Verification: ideally, a check of the model calibration against an independent set of flood
data.
In cases where there is insufficient or no calibration data, sensitivity tests should be carried
out to assess the sensitivity of the model to variations in the main model parameters.
Steady flow hydraulic models are best suited to modelling flows along relatively short
reaches of river with well-defined flow paths, and/or for modelling flows at structures.
However, unsteady hydraulic models should be used to describe flows where there are:
• Flat channel slopes where wave propagation effects can become important;
• Wide floodplains and/or other features where storage effects may affect the flow; and
• Channel networks where the flow splits are not well defined.
With respect to the different types of unsteady hydraulic models that are available, it can be
said that:
• The use of CFD and other non-hydrostatic models, including physical models, is generally
limited to simulating the details of complex flows in relatively short reaches of a river, or at
or within hydraulic structures.
• 3D models are very heavy computationally, and are best suited to modelling the details of
complex flows in relatively short reaches of rivers, around structures and in other flows
cases where three-dimensional effects become important in determining localized flood
effects.
• 1D models are computationally quick to run and are well suited to modelling flows along
well-defined channel and floodplain systems. However, the more general use of 1D
models in flood studies has been largely superseded by 2D models.
• 2D models can provide a much more detailed description of flood flows and flow
distributions within individual flow paths and have become virtually the standard for rural
and urban flood studies. 2D models are, however, more demanding on input data and on
computing resources.
• The integration of 1D models with 2D models has made it possible to include 1D model
branches within a 2D model domain to provide a better description of in-bank flows and/or
to introduce hydraulic structures (such as weirs, culverts, bridges, etc.) into the 2D model
domain.
• The integration of 1D pipe network models with 2D models has significantly enhanced the
capabilities of 2D models for urban flood modelling applications.
• The use of rainfall on grid has made it possible to integrate both the hydrologic and
hydraulic aspects of the rainfall-runoff process into a single model. The use of direct
rainfall is, however, an area of on-going research and care should be used when
interpreting the model results.
4.9. References
Abbott, M.B. and Ionescu, F. (1967), On the Numerical Computation of Nearly Horizontal
Flows, Journal of Hydraulic Research.
Abbott, M.B. (1979), Computational Hydraulics - Elements of the Theory of Free Surface
Flows, Pitman, London.
Abbott, M.B. and Basco, D.R. (1989), Computational Fluid Dynamics: an Introduction for
Engineers, Longman.
Abbott, M.B. and Rasmussen, C.H. (1977), On the Numerical Modelling of Rapid
Contractions and Expansions in Models that are Two-Dimensional in Plan, Proc. 17th
Congress IAHR, Baden-Baden.
American Society of Civil Engineers (1996), River Hydraulics - Technical Engineering and Design
Guides as Adapted From the US Army Corps of Engineers, No. 18, ASCE Press, New York.
BMT WBM (2008), TUFLOW User Manual. GIS Based 2D/1D Hydrodynamic Modelling.
Benque, J.P., Hauguel, A. and Viollet, P.L. (1982), Numerical Models in Environmental Fluid
Mechanics, Pitman.
Bishop, W.A., McCowan, A.D., Sutherland, R.J. and Watkinson, R.J. (1995), Application of
Two-Dimensional Numerical Models to Urban Flood Studies, 2nd International Symposium
on Stormwater Management, Melbourne.
Caddis, B.M., Jempson, M.A., Ball, J.E. and Syme, W.J. (2008), Incorporating Hydrology into 2D Hydraulic Models - the Direct Rainfall Approach, 9th Australian Conference on Hydraulics in Water Engineering, Darwin.
Carr, R.S. and McCowan, A.D. (1988), An Integrated Approach to Two-Dimensional Flood Plain Modelling, ACADS Workshop on 2D Flood Plain Modelling, Monash University.
Connor, J.J. and Brebbia, C.A. (1976), Finite Element Techniques for Fluid Flow, Newnes-
Butterworths
Cunge, J.A., Holly, F.M. and Verwey, A. (1980), Practical Aspects of Computational River
Hydraulics, Pitman, London (Reprinted by University of Iowa).
DHI Water and Environment (2005), MIKE 21 - Coastal Hydraulics and Oceanography -
Hydrodynamic Module, Scientific Documentation, DHI Software, Hørsholm.
Engineers Australia (2012), Australian Rainfall and Runoff Revision Project 15: Two-
Dimensional Modelling in Urban and Rural Floodplains.
King, I.P. and Roig, L.C. (1988), Recent Applications of RMA's Finite Element Models for
Two-Dimensional Hydrodynamics and Water Quality, Proc 2nd Int. Conf. on Finite Elements
in Water Resources, Pentech Press.
LeVeque, R.J. and Bale, D.S. (2012), Wave Propagation Methods for Conservation Laws, Journal of Hyperbolic Problems, Zurich.
Leonard, B.P. (1979), A Stable and Accurate Convective Modelling Procedure Based on
Quadratic Upstream Interpolation. Computer Methods in Applied Mechanics and
Engineering, 19(1), 59-98.
McCowan, A.D., Rasmussen, E.B. and Berg, P. (2001), Improving the Performance of a
Two-dimensional Hydraulic Model for Floodplain Applications, Proceedings of the 6th
Conference on Hydraulics in Civil Engineering, Hobart.
Muncaster, S.H., Bishop, W.A. and McCowan, A.D. (2006), Design Flood Estimation in Small
Catchments Using Two-Dimensional Hydraulic Modelling - A Case Study, 30th Hydrology
and Water Resources Symposium, Launceston.
Preissmann, A. (1961), Propagation des Intumescences dans les Canaux et Rivieres, First
Congress of the French Association for Computation, Grenoble.
Sherwin, S.J. and Peiro, J. (2005), Finite difference, finite element and finite volume methods for partial differential equations, in Handbook of Materials Modeling, Yip, S. (ed.), Springer, Berlin, pp. 1-30, ISBN 9781402032875.
Stelling, G.S. (1984), On the Construction of Computational Methods for Shallow Water Flow
Problems, Delft University of Technology.
Stelling, G.S., Kernkamp, H.W.J and Laguzzi, M.M. (1998), Delft Flooding System: A
Powerful Tool for Inundation Assessment based upon a Positive Flow Simulation, in
Hydroinformatics '98, V. Babovic and L.C. Larsen (ed's), Balkema.
Syme, W.J. (2001), TUFLOW - Two and One-Dimensional Unsteady Flow Software for
Rivers, Estuaries and Coastal Waters, I.E.Aust Workshop on 2D Flood Modelling, Sydney.
United States Army Corps of Engineers (USACE) (2010), HEC-RAS River Analysis System,
Hydraulic Reference Manual, Version 4.1, Hydrologic Engineering Center.
Violeau, D. (2012), Fluid Mechanics and the SPH Method, Theory and Applications, Oxford
University Press.
Zienkiewicz, O.C., Taylor, R.L. and Nithiarasu, P. (2014), The Finite Element Method for
Fluid Dynamics, 7th Edition, Butterworth-Heinemann.
Chapter 5. Interaction of Coastal and
Catchment Flooding
Seth Westra, Michael Leonard, Feifei Zheng
Chapter Status Final
Date last updated 14/5/2019
5.1. Introduction
Floods in estuarine areas can be caused by runoff generated by an extreme rainfall event,
an elevated ocean level generated by a storm surge and/or a high astronomical tide, or a
combination of both processes occurring simultaneously or in close succession. Research in
Australia (Zheng et al., 2013) and internationally (Svensson and Jones, 2002; Svensson and
Jones, 2004; Hawkes and Svensson, 2006) has shown that extreme rainfall and storm surge
processes are statistically dependent, and therefore their interaction needs to be taken into
account for areas affected by both processes.
This chapter describes procedures that can be used to estimate design flood levels in the
'joint probability zone', defined as the region in which the dependence between riverine and
ocean processes has the potential to influence the design flood level. This region is
illustrated in Figure 6.5.1, and shows that the range of possible flood levels corresponding to
a given Annual Exceedance Probability are enclosed in an envelope bounded by cases
where flood events and ocean levels are perfectly dependent (upper curve) and independent
(lower curve).
Figure 6.5.1. Schematic of a longitudinal section of an estuary, which shows two hypothetical
water levels: the level obtained by assuming that fluvial floods will always coincide with
storm tides of the same exceedance probability (upper curve); and the level assuming fluvial
processes and ocean processes are completely independent and thus will almost never
coincide (lower curve).
This chapter provides practical guidance on estimating the exceedance probability of floods
in the joint probability zone. The focus of this chapter is on the ‘design variable method’,
which has been developed as a flood estimation approach that can be applied across
Australia’s diverse climates. The method has been tested for design floods from 50% to 1%
Annual Exceedance Probabilities, and can account for the influence of climate change by
adjusting both the design rainfall and design ocean levels that are required as inputs.
The theory and practice of flood estimation in the joint probability zone is considerably more
complex than many traditional flood estimation problems, and engineering judgement is
required on whether the design variable method described in this chapter is suitable for a
given situation. This judgement should be based on a sound knowledge of joint probability
theory, combined with an understanding of riverine and oceanic flood processes. Alternative
methods that may be appropriate under certain conditions are discussed briefly in Book 6,
Chapter 5, Section 3. The approach presented in this chapter is also valid for overland
flooding problems.
Figure 6.5.2. Timing factors affecting the magnitude of a flood in the joint probability zone
1. Rainfall - A flood can be initiated by a sustained burst of intense rainfall (often referred to
as a storm burst, shown in Figure 6.5.2 as the shaded four hour period) over an estuarine
catchment. This storm burst can be characterised by its duration, spatial extent, temporal
pattern and rainfall intensity. The storm burst is often embedded in a longer period of
rainfall, which can be caused by large-scale meteorological features such as a frontal
rainfall system or a tropical cyclone.
2. Runoff Generation- The shape, size, slope, soil type, vegetation and level of urbanisation
all contribute to the way a catchment translates rainfall into runoff. The time of
concentration refers to the time it takes all of the catchment to contribute runoff at the
catchment outlet, and is often assumed to be equivalent to the time taken for water to
travel from the most distant point in the catchment to the catchment outlet.
3. Hydrograph at Catchment Outlet: - The time it takes for the hydrograph to enter the joint
probability zone causes a lag between the flood producing rainfall event and the
hydrograph peak. The hydrograph represents the fresh water contribution to floods in the
joint probability zone, and may form the upstream boundary condition for hydrodynamic
models of this region.
4. Storm surge - The ocean level forms the downstream boundary to the system, and
typically comprises a deterministic component (the astronomical tide) and a random
component (usually dominated by the storm surge). The storm surge is caused by
anomalous wind and atmospheric pressure that are linked to large-scale weather
patterns, and the magnitude of the surge at a particular location will be influenced by the
coastal geography and bathymetry. Figure 6.5.2 shows a composite of ten storm surge
events near Perth, with the composite exhibiting a sharp peak lasting several hours, yet
with some effects still apparent for a day or longer both before and after the peak.
5. Astronomical tide- Tidal patterns, whether diurnal (24 hour), semi-diurnal (12 hour) or
mixed, can vary substantially with location. The astronomic tide level is usually assumed
to be independent of the rainfall intensity.
As illustrated in Figure 6.5.2, the question of whether or not a large fluvial flood will coincide
with an elevated ocean level will depend on several timing issues, which are influenced by a
combination of meteorological, catchment scale and oceanographic processes. In particular,
the timescale of both rainfall and storm surge events are determined by meteorological
influences, whereas the timescale of the runoff depends on specific catchment features that
are related to the catchment’s time of concentration.
A further complicating factor is that the same meteorological events can drive both rainfall
and storm surge events, and this has led to the finding in Australia (Zheng et al., 2013) and
internationally (Svensson and Jones, 2002; Svensson and Jones, 2004; Hawkes and
Svensson, 2006) that extreme rainfall and storm surge are statistically dependent. The
dependence strength between extreme rainfall and storm surge in Australia was found to
vary as a function of geographic location and the duration of the rainfall burst (Zheng et al.,
2013; Zheng et al., 2014a). Each of these factors will need to be taken into account when
selecting a method for estimating flood exceedance probabilities in Australia's estuarine
catchments.
Flood Frequency Analysis (FFA) - This approach involves fitting a probability distribution to a
time series of historical streamflow. The approach is relatively easy to implement, but
requires long, high-quality historical flood records at the location of interest. The advantage
of this approach is that, by directly focusing on the statistical characteristics of historical
floods, it may be possible to avoid modelling the complex processes that lead to estuarine
floods as depicted in Figure 6.5.2.
However, the approach assumes that the upstream catchment conditions and the
bathymetry of the estuary are unchanged over the historical record and are reflective of
future conditions, and that the statistical characteristics of the upper and lower boundary
conditions (e.g. extreme rainfall, sea level, storm surge) will remain constant into the future.
For most of Australia's estuarine catchments, one or more of the assumptions underpinning
FFA will be violated; therefore this approach is unlikely to be practically applicable in most
situations.
Continuous Simulation - The computational load of continuously running hydrological and hydraulic models at the
short time steps required for capturing tidal dynamics—while producing long runs required
for estimating floods with low exceedance probabilities—is often extremely high.
Furthermore, in many cases, long historical time series of both extreme rainfall and storm
tides are unlikely to be available at the location of interest. If implemented correctly,
continuous simulation is likely to be a technically rigorous approach for flood estimation in
the joint probability zone, but given its numerous practical challenges, the design variable
method has been developed as an alternative approach for flood estimation problems along
the Australian coastline.
The design variable method - This approach has been developed as a simpler alternative to
continuous simulation, without the limiting assumptions of Flood Frequency Analysis. For
further information on the theory and practical limitation of the method, refer to Book 6,
Chapter 5, Section 4 and Book 6, Chapter 5, Section 5, respectively. The primary
assumptions of the approach are:
• The statistical dependence between extreme rainfall and storm surge can be represented
through a bivariate logistic extreme value dependence model, discussed in further detail in
(Zheng et al., 2014a);
• The dependence strength can be interpolated between gauged locations along the
Australian coastline, and therefore can be represented by a map of dependence strength
(given in Figure 6.5.13 and discussed further in Book 6, Chapter 5, Section 5);
• The Annual Exceedance Probability of the rainfall event is equivalent to the Annual
Exceedance Probability of the flood event (probability neutral);
• Ocean water levels are assumed to be 'static', as tidal dynamics are not considered
explicitly in the method; and
• Anthropogenic climate change will have negligible effect on the strength of dependence
between extreme rainfall and storm surge, although the effects of climate change can be
accounted for by changing the marginal distributions (i.e. the extreme rainfall intensity and
the ocean level).
The validity of the assumptions of the design variable method need to be considered when
applying the method to a specific flood estimation problem, and weighed against
assumptions associated with alternative approaches. For many situations, the design
variable method is a pragmatic approach that can be applied across a range of estuarine
flood estimation approaches.
Table 6.5.1. Comparison of design flood estimation methods in the joint probability zone

Domain of Applicability
• Flood Frequency Analysis: Analysis restricted to locations with gauged data.
• Design Variable Method: Can be applied throughout the joint probability zone.
• Continuous Simulation: Can be applied throughout the joint probability zone.

Models Required
• Flood Frequency Analysis: Univariate extreme value model or other statistical model of extremes (see Book 6, Chapter 4).
• Design Variable Method: Event-based hydrological and hydraulic models, and a bivariate extreme value model.
• Continuous Simulation: Continuous hydrological and hydraulic models, and a univariate extreme value model.

Technical Complexity
• Flood Frequency Analysis: Low.
• Design Variable Method: Intermediate.
• Continuous Simulation: Advanced.

Computational Demand
• Flood Frequency Analysis: Low.
• Design Variable Method: Medium.
• Continuous Simulation: High.

Capacity to Account for Dynamic Tidal Effects
• Flood Frequency Analysis: N/A.
• Design Variable Method: Static ocean levels only.
• Continuous Simulation: Dynamic tides.

Parametric Uncertainty
• Flood Frequency Analysis: Well understood likelihoods and methods for parameter uncertainty (refer to Book 3, Chapter 2 on FLIKE).
• Design Variable Method: It is feasible to estimate the uncertainty of each parameter in a bivariate extreme value model, but this is beyond the scope of this Chapter.
• Continuous Simulation: Model-dependent.

Capacity to Account for Climate Change
• Flood Frequency Analysis: Cannot account for climate change.
• Design Variable Method: The method enables the distribution of both ...
• Continuous Simulation: Requires the full distribution of future ...
The joint probability density function of two random variables X and Y is written as f_{X,Y}(x, y), and has the property:
∫∫ f_{X,Y}(x, y) dx dy = 1 (6.5.1)
For independent variables, the joint probability density function can be expressed as the product of the marginal densities:
f_{X,Y}(x, y) = f_X(x) f_Y(y) (6.5.2)
The conditional probability density function given the occurrence X = x is given as:
f_{Y|X}(y | X = x) = f_{X,Y}(x, y) / f_X(x) (6.5.3)
(In this chapter the two variables generally represent extreme rainfall and storm surge or storm tide. However, the theory can be applied more generally to any pair of variables.)
For independent variables, substituting Equation (6.5.2) into Equation (6.5.3) gives:
f_{Y|X}(y | X = x) = f_X(x) f_Y(y) / f_X(x) = f_Y(y) (6.5.4)
In other words, the conditional distribution becomes equivalent to the marginal distribution of
� when the two variables are independent.
The marginal probability density function of Y can be obtained by integrating the joint density over x:
f_Y(y) = ∫ f_{X,Y}(x, y) dx = ∫ f_{Y|X}(y | x) f_X(x) dx (6.5.5)
These concepts are illustrated in Figure 6.5.3. The main panel shows a joint Gaussian probability density function f_{X,Y}(x, y), with simulated data drawn from this distribution shown as light blue dots. The marginal distributions f_X(x) and f_Y(y) are shown as solid lines in the left and bottom panels, respectively. A conditional distribution f_{Y|X}(y | X = x) is represented as a slice through the joint density at X = 2, and the conditional probability density function is shown as the dashed line in the left panel.
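The sketch below reproduces the idea behind Figure 6.5.3 numerically for a bivariate Gaussian with an assumed correlation, evaluating the conditional density of Y given X = 2 as the ratio in Equation (6.5.3); the correlation value is an illustrative assumption.

```python
# A minimal sketch of Equations (6.5.3) and (6.5.5) for a bivariate Gaussian:
# the marginal of Y and the conditional of Y given X = 2.
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.6                                    # assumed dependence (correlation) between X and Y
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

x_cond = 2.0                                 # condition on X = 2, as in Figure 6.5.3
y = np.linspace(-4.0, 4.0, 81)

marginal_y = norm.pdf(y)                                         # f_Y(y)
f_x = norm.pdf(x_cond)                                           # f_X(2)
conditional = joint.pdf(np.column_stack([np.full_like(y, x_cond), y])) / f_x   # Equation (6.5.3)

# The marginal is centred at 0; the conditional shifts towards rho * x_cond
print("marginal mode ~", round(float(y[marginal_y.argmax()]), 2),
      "; conditional mode ~", round(float(y[conditional.argmax()]), 2))
```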
Univariate extreme value theory is now a mature field, and the reader is referred to the text
by Coles (2001) for a detailed overview of the theory and practical applications of extreme
value models. Probably the most well-known representation of univariate extremes is
'block maxima', which are the maximum values of a process of independent and identically
distributed random variables over a period of time such as a year. These maxima are
commonly modelled using a Generalised Extreme Value (GEV) distribution, with the
cumulative GEV distribution function given as:
$$G(z; \mu, \sigma, \xi) = \exp\left\{ -\left[1 + \xi\left(\frac{z - \mu}{\sigma}\right)\right]^{-1/\xi} \right\} \qquad (6.5.6)$$

for $1 + \xi(z - \mu)/\sigma > 0$, where $\mu \in \mathbb{R}$ is the location parameter, $\sigma > 0$ is the scale parameter, $\xi \in \mathbb{R}$ is the shape parameter, and $G(z)$ is the cumulative distribution function.
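As a brief illustration of fitting Equation (6.5.6), the sketch below fits a GEV distribution to a series of annual maxima using scipy. The synthetic data and the quantile requested are hypothetical, and note that scipy's shape parameter `c` uses the opposite sign convention to the ξ defined above.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maximum series (e.g. annual maximum storm tide residuals, in metres)
annual_maxima = genextreme.rvs(c=-0.1, loc=1.0, scale=0.3, size=60, random_state=42)

# Maximum likelihood fit of the GEV distribution to the block maxima
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

# 1% AEP quantile (the 0.99 non-exceedance probability)
q_1pct_aep = genextreme.ppf(0.99, c_hat, loc=loc_hat, scale=scale_hat)
print(f"xi = {-c_hat:.3f}, mu = {loc_hat:.3f}, sigma = {scale_hat:.3f}")
print(f"1% AEP estimate: {q_1pct_aep:.2f} m")
```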
For both univariate representations, the definition of an ‘extreme’ event is clear. In contrast,
the definition of an ‘extreme’ event becomes more ambiguous in the multivariate context.
Four characterisations were identified in Zheng et al. (2014b), and are summarised briefly
herein (refer also to the illustration in Figure 6.5.4). For a more theoretical treatment of
multivariate extremes, the reader is referred to Kotz and Nadarajah (2000) and Beirlant et al.
(2004).
Component-wise block maxima - This is a direct analogue of univariate block maxima, but has the limitation that the component-wise maxima may occur at different times in the block. As such, the joint maxima will not necessarily correspond to 'real' (ie. simultaneously occurring) events. This representation is also very wasteful of data, as only the maximum values in each block contribute to the analysis. In practice these are severe limitations, and therefore component-wise maxima are rarely used in multivariate extreme value analyses.
Threshold-excess extremes - (Figure 6.5.4, left panel): High thresholds (u_x and u_y) are set for both variables X and Y, and the multivariate threshold-excess model simulates the dependence between extremes that exceed both thresholds (illustrated by blue 'plus' symbols in Figure 6.5.4). Identifying appropriate thresholds (u_x and u_y) represents a compromise between maximising the amount of data exceeding both thresholds, and ensuring that the asymptotic assumptions that support the Generalised Pareto distribution are approximately valid; diagnostics for threshold identification are discussed in more detail in Coles (2001). A disadvantage of this characterisation is that, by only focusing on cases where both thresholds are exceeded, situations where only one variable is extreme are not modelled.
Point process representation - (Figure 6.5.4, middle panel): In this representation, the data are first transformed to radial (r = x + y) and angular (w = x/(x + y)) components, which is a transformation from Cartesian to pseudo-polar coordinates. Here, r represents the distance of each data point from the origin (and therefore describes the 'extremeness' of the observation), and w measures the angle on a [0,1] scale (and thus describes whether the variable is mostly influenced by x, y, or a combination of both variables) (Coles, 2001). Extreme events are those above the radial threshold r_0 (red 'plus' symbols in Figure 6.5.4), and the identification of an appropriately high threshold r_0 is based on asymptotic arguments, with diagnostic measures given in Coles (2001). As can be seen from the figure, this representation characterises the situation where both margins are extreme as well as the situation where only a single margin is extreme.
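A minimal sketch of the pseudo-polar transformation described above is given below. The paired values and the choice of radial threshold are hypothetical; in practice the data would first be transformed to a common (e.g. unit Fréchet) scale as described later in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical paired observations already transformed to a common positive scale
x = rng.pareto(2.0, size=5000) + 1.0   # e.g. transformed rainfall
y = rng.pareto(2.0, size=5000) + 1.0   # e.g. transformed storm surge

# Pseudo-polar coordinates: radial component r and angular component w
r = x + y
w = x / (x + y)

# Radial threshold r0 chosen here as the 95th percentile of r (illustrative only)
r0 = np.quantile(r, 0.95)
extreme = r > r0

# The angular components of the extreme points are used to study dependence:
# values of w near 0 or 1 indicate that only one margin is extreme,
# while values near 0.5 indicate that both margins are extreme together.
print(f"{extreme.sum()} points above r0; mean angular component {w[extreme].mean():.2f}")
```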
Conditional extremes distribution - (Figure 6.5.4, right panel): This representation is based on conditional distributions in both the X and Y dimensions, with the distribution of Y conditioned on the threshold exceedances in X (ie. Y | X > u_X) and vice versa. The threshold u_X (vertical green line in Figure 6.5.4) needs to be specified, and then all points with X > u_X (green open circles) are defined as extremes when modelling the distribution of Y | X. The extremes when modelling the distribution of X | Y are defined analogously, with the horizontal green line representing u_Y and the green 'plus' symbols representing extremes above this threshold. The extreme events in the upper right quadrant (the combination of green circles and plus symbols) are based on combining Y | X and X | Y, with further details in Heffernan and Tawn (2004). Similar to the point process representation, this characterisation models the situation where both margins are extreme as well as the situation where only a single margin is extreme.
The decision of how to represent multivariate extremes can have important implications in
the context of estimating flood exceedance probabilities in the joint probability zone, with
different models potentially leading to different probability estimates. In particular, given that
the dependence between extreme rainfall and storm surge is generally statistically significant
but not very strong (refer to Book 6, Chapter 5, Section 5), it is necessary to assess the
probability of floods for situations when only a single variable is extreme, as well as when
both variables are extreme, suggesting that the point process and conditional
representations may be most suitable for coastal flood problems.
A detailed study in Zheng et al. (2014b) compared the three methods illustrated in Figure 6.5.4, with results summarised in Table 6.5.2. Zheng et al. (2014b) generated synthetic data from a bivariate logistic model with dependence α = 0.9 and Gumbel margins, and the threshold-excess, point process, and conditional methods were used to fit an extreme value model to this simulated data. Zheng et al. (2014b) concluded that the point process representation was most suitable for estimating the exceedance probability of floods in Australia's estuarine regions, as the conditional model tended to underestimate the dependence strength and its parameter estimates were also highly variable. It was noted, however, that the dependence parameter estimates can be biased when simulating extremes using the point process representation, and that to overcome this issue it may be appropriate to estimate the dependence parameter using the threshold-excess model. This was the approach taken to develop Figure 6.5.13, and is discussed in more detail in Zheng et al. (2014a).
Figure 6.5.4. Three Representations of 'Extreme Values' Following Different Extreme Value Methods (after Zheng et al. (2014b))
The cumulative distribution function of the bivariate logistic model is given as (Tawn, 1988):
$$G(\tilde{x}, \tilde{y}) = \exp\left\{ -\left( \tilde{x}^{-1/\alpha} + \tilde{y}^{-1/\alpha} \right)^{\alpha} \right\}, \quad \tilde{x} > 0,\ \tilde{y} > 0,\ 0 < \alpha \le 1 \qquad (6.5.8)$$
where $\tilde{x}$ and $\tilde{y}$ are standard Fréchet-transformed values of the original observations x and y, and α represents the dependence strength, with α→0 and α=1 representing complete dependence and independence, respectively.
The transformation of each margin to the standard Fréchet scale is given by:

$$\tilde{z} = \begin{cases} -\left[\log\left\{1 - \zeta_u\left[1 + \dfrac{\hat{\xi}(z - u_z)}{\hat{\sigma}}\right]^{-1/\hat{\xi}}\right\}\right]^{-1}, & z > u_z,\ \hat{\xi} \ne 0 \\[2ex] -\left[\log\left\{1 - \zeta_u \exp\left(-\dfrac{z - u_z}{\hat{\sigma}}\right)\right\}\right]^{-1}, & z > u_z,\ \hat{\xi} = 0 \\[2ex] -\left[\log \tilde{F}(z_i)\right]^{-1}, & z \le u_z \end{cases} \qquad (6.5.9)$$

where z represents one of the original margins (either x or y), $\tilde{z}$ is the standard Fréchet value corresponding to z in the original scale, $\zeta_u = \Pr\{z > u_z\}$, $u_z$ is an appropriately high threshold for the margin z, and $\hat{\sigma}$ and $\hat{\xi}$ are the maximum-likelihood estimated parameters of the Generalised Pareto distribution. Finally, $\tilde{F}$ is the empirical distribution function of z, estimated by $\tilde{F}(z_i) = i/(n + 1)$, where i is the rank of $z_i$ and n is the total number of data points.
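A sketch of Equation (6.5.9) in code is given below. The threshold, Generalised Pareto parameters and data are placeholders; in an application they would come from fitting the marginal threshold-excess model described above.

```python
import numpy as np
from scipy.stats import rankdata

def to_frechet(z, u_z, sigma_hat, xi_hat):
    """Transform a margin z to the standard Frechet scale (Equation 6.5.9)."""
    z = np.asarray(z, dtype=float)
    n = z.size
    zeta_u = np.mean(z > u_z)                 # Pr{z > u_z}
    F_hat = rankdata(z) / (n + 1.0)           # empirical distribution function below the threshold

    # Replace values above the threshold with the Generalised Pareto tail estimate
    above = z > u_z
    if xi_hat != 0:
        F_hat[above] = 1.0 - zeta_u * (
            1.0 + xi_hat * (z[above] - u_z) / sigma_hat) ** (-1.0 / xi_hat)
    else:
        F_hat[above] = 1.0 - zeta_u * np.exp(-(z[above] - u_z) / sigma_hat)

    return -1.0 / np.log(F_hat)

# Example with placeholder values (illustrative only)
rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.8, scale=10.0, size=2000)   # hypothetical daily rainfall (mm)
u = np.quantile(rain, 0.99)                          # 1% daily exceedance threshold
rain_frechet = to_frechet(rain, u_z=u, sigma_hat=8.0, xi_hat=0.1)
```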
The application of the bivariate logistic model (Equation (6.5.8)) and the Fréchet transformation (Equation (6.5.9)) are illustrated in Figure 6.5.5 for an example dataset near Perth, Western Australia. First, a pairwise scatterplot of daily rainfall and daily maximum storm tide is presented (Figure 6.5.5a). Prior to applying the Fréchet transformation, it is necessary to identify marginal thresholds u_x and u_y. The choice of threshold values represents a trade-off between bias and variance: if the threshold is too low, then the parameters will likely be biased as the asymptotic justification of the extreme value model may not be valid; conversely, if the threshold is too high then the limited sample size will result in parameter estimates with high variance. Based on visual inspection of two diagnostic plots (the mean residual life plot and the plot of parameter estimates against threshold) at multiple rainfall-storm surge pairs across Australia, it was found that the 1% daily exceedance probability (ie. the top 1% of rainfall and storm tide days) led to reasonable model performance for most locations along the Australian coastline (Zheng et al., 2013). These thresholds are shown as grey lines in Figure 6.5.5b.
The Fréchet transformation in Equation (6.5.9) is applied to each margin, and the transformed data are shown on a logarithmic scale in Figure 6.5.5c. As discussed in Book 6, Chapter 5, Section 4, the point process representation focuses only on data above a radial threshold r_0; values below this threshold are represented in the figure as solid blue shading. The bivariate logistic model can then be fitted to this transformed data, with dependence represented using a single dependence parameter, α. For this case, a weak dependence parameter (α = 0.95) was used. The bivariate probability density function, f(x̃, ỹ), and the bivariate cumulative distribution function, G(x̃, ỹ), are presented as dashed blue contours and solid black contours, respectively, in Figure 6.5.5d.
Figure 6.5.5. (a) Pairwise Plot of Daily Maximum Storm Tide and Daily Rainfall; (b)
Application of Marginal Thresholds (Based on the 1% Daily Exceedance Probability for Each
Margin), with Events Below the Radial Threshold r0 Shaded in Blue; (c) Transformation of
Events to Unit Fréchet Scale; and (d) Fitting the Joint Probability Distribution
To assist in the interpretation of the dependence parameter (α), the relationship between α and the number of events that exceed a bivariate threshold is shown in Figure 6.5.6 (refer also to Zheng et al. (2013)). The analysis was based on a study of 13 414 pairs of daily rainfall and daily maximum storm surge data located throughout the Australian coastline, and a marginal threshold of the 99th percentile of observed rainfall or storm surge data was used, which corresponds to an average of 3.65 exceedances per year. Assuming statistical independence, it would be expected on average that one event every 100 x 100 = 10 000 days exceeds the joint threshold by random chance. The actual number of exceedances was then plotted against the fitted dependence parameter α, to see the relationship between this parameter and the number of events exceeding the joint threshold.
There is a close relationship between α and the number of joint exceedances of both thresholds. As will be discussed in Book 6, Chapter 5, Section 5, the value of α typically varies from about 0.8 to 0.95 throughout most of the Australian coastline; it is therefore expected that there would be between eight and 27 exceedances of the joint 99% threshold per 10 000 days, compared to the single exceedance expected had the processes been independent. This is an order of magnitude increase in the probability of a 'joint' flood event (ie. a flood event caused by the combination of extreme rainfall and storm surge), and highlights the importance of accounting for joint probability issues in the Australian estuarine zone.
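This order-of-magnitude effect can be checked directly with the bivariate logistic model of Equation (6.5.8). The sketch below computes the expected number of joint exceedances of the two 99th percentile thresholds per 10 000 days for a given α, using unit Fréchet margins; the α values are illustrative.

```python
import numpy as np

def joint_exceedances_per_10000(alpha, marginal_aep=0.01):
    """Expected joint exceedances of both marginal thresholds per 10 000 days
    under the bivariate logistic model (Equation 6.5.8)."""
    # Unit Frechet threshold: F(z) = exp(-1/z) = 1 - marginal_aep, so 1/z = -ln(1 - p)
    v = -np.log(1.0 - marginal_aep)
    joint_cdf = np.exp(-(2.0 * v ** (1.0 / alpha)) ** alpha)   # G at (threshold, threshold)
    pr_joint = 1.0 - 2.0 * (1.0 - marginal_aep) + joint_cdf    # Pr{X > x and Y > y}
    return 10000.0 * pr_joint

for alpha in (1.0, 0.95, 0.9, 0.8):
    print(f"alpha = {alpha:.2f}: {joint_exceedances_per_10000(alpha):.1f} per 10 000 days")
# alpha = 1.00 recovers ~1 event per 10 000 days (independence), while
# alpha = 0.95 and 0.80 give roughly 8 and 27 events, consistent with the text.
```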
Figure 6.5.6. The Relationship Between the Dependence Parameter and the Number of
Joint Extreme Events per 10 000 days
Univariate estimation methods are used for many flood estimation problems, whereby the
frequency of a single forcing variable (e.g. extreme rainfall or storm tide) is assumed to be
equivalent to the frequency of the corresponding flood level. The exceedance probability
Pr(H ≥ h) for a given flood level h can be defined as:

$$\Pr(H \ge h) = \int_{x = x_0}^{\infty} f(x)\,dx, \qquad h = g(x_0) \qquad (6.5.10)$$

where g(x) is a function relating the flood level to a single forcing variable x (e.g. rainfall or storm tide); x_0 is the value of the forcing variable that causes the flood level h, and f(x) is a density function of X at extreme levels. To obtain Pr(H ≥ h), one needs to first estimate the corresponding x_0 that causes the flood level h, then estimate the exceedance probability that a value of the forcing variable will be greater than x_0, and assign this to Pr(H ≥ h), ie. Pr(H ≥ h) = Pr(X ≥ x_0). Typically, an annual maximum, r-largest or peak-over-threshold method is used to obtain the tail distribution of X (Coles, 2001).
In the bivariate case, the flood level is obtained from a 'boundary function' h = g(x, y) that maps the two dimensional space of the forcing variables to the one dimensional response variable. In flood estimation, a combination of hydrologic and hydraulic models is typically used to obtain a flood level h = g(x, y) as a function of boundary conditions such as rainfall and storm tide (x, y). The failure region A_h can be interpreted as the set of values of the constituent processes (x, y) that cause flood levels greater than a specified design flood level h. The corresponding exceedance probability Pr(H ≥ h) is given as:

$$\Pr(H \ge h) = \int\!\!\int_{A_h} f(x, y)\,dx\,dy \qquad (6.5.12)$$

where $f(x, y) = \partial^2 F(x, y)/\partial x\,\partial y$ is the joint density function of the two variables X and Y at extreme levels, and F(x, y) is their corresponding joint cumulative distribution function.
Figure 6.5.7 illustrates the difference between the univariate method (top panel) and the design variable method as an example of a joint probability method (bottom panel) for a hypothetical scenario in which floods are caused by two forcing variables X and Y. In the top panel, the grey shaded region represents the exceedance probability Pr(H ≥ h), where h (the red dashed line) is determined by a single forcing variable (e.g. rainfall or storm tide). The grey shaded region in the bottom panel illustrates the exceedance probability for the region A_h, where h (the solid red line) depends on both forcing variables. In the bivariate case, the probability Pr(H ≥ h) can then be evaluated as the integral of the joint density f(x, y) (thin blue contours) across the whole failure region A_h (Equation (6.5.12)).
Figure 6.5.7. Exceedance Probabilities Obtained from a Univariate Analysis (top panel) and
a Bivariate Analysis (bottom panel)
It is possible to compute the integral in Equation (6.5.12) using two dimensional numerical integration or Monte Carlo techniques, but these approaches can be slow for the required levels of precision. It is more computationally efficient to exploit the properties of the joint cumulative distribution function F(x, y) to reduce the bivariate integral to a univariate line integral along the boundary function g(x, y) = h. This is implemented numerically as:
$$\Pr(H \ge h) = 1 - \Pr(H < h) = 1 - \sum_{i=1}^{n-1}\left[ F(x_i, y_i) - F(x_i, y_{i+1}) \right] \qquad (6.5.13)$$

where $(x_i, y_i)$, $i = 1, \ldots, n$, are points along the boundary function g(x, y) = h.
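A minimal numerical sketch of Equation (6.5.13) is given below. The joint CDF used here is the bivariate logistic model on the Fréchet scale and the boundary points are placeholders; in practice the boundary (x_i, y_i) would be interpolated from a flood level table such as Table 6.5.7.

```python
import numpy as np

def logistic_cdf(x, y, alpha):
    """Bivariate logistic CDF on the unit Frechet scale (Equation 6.5.8)."""
    return np.exp(-((x ** (-1.0 / alpha) + y ** (-1.0 / alpha)) ** alpha))

def exceedance_probability(boundary_x, boundary_y, alpha):
    """Pr(H >= h) from Equation (6.5.13), given boundary points ordered by
    increasing x (and therefore decreasing y) on the Frechet scale."""
    x = np.asarray(boundary_x, dtype=float)
    y = np.asarray(boundary_y, dtype=float)
    terms = logistic_cdf(x[:-1], y[:-1], alpha) - logistic_cdf(x[:-1], y[1:], alpha)
    return 1.0 - np.sum(terms)

# Placeholder boundary for a hypothetical flood level h: as rainfall (x) increases,
# a smaller storm tide (y) is needed to reach the same level.
bx = np.linspace(1.0, 200.0, 200)
by = 200.0 / bx
print(f"Pr(H >= h) = {exceedance_probability(bx, by, alpha=0.9):.4f}")
```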
Finally, the exceedance probability of a flood event from the univariate method (the grey shaded region in the top panel) is also illustrated on the bivariate plot (the grey shaded region to the right of the red dashed line in the bottom panel). The univariate failure region is smaller than the failure region A_h that would be obtained by the joint probability method, demonstrating that the univariate method will underestimate the exceedance probability of the flood in this case.
Flood estimation is often concerned with understanding the behaviour of the upper tail
of a probability distribution. In the context of multivariate extremes, this requires
assumptions about how the dependence between variables changes as the variables
become increasingly extreme.
Multivariate probability distributions can be classified based on how they behave in the limit as each variable becomes increasingly extreme (refer to Coles (2001) for additional coverage of the theory of asymptotic dependence). Examples of an asymptotically independent and an asymptotically dependent distribution are given in Figure 6.5.8: the dependence between variables for the Gaussian distribution decreases with the extremity of x or y (evidenced by the increased scatter of points away from the leading diagonal), whereas dependence remains high for the asymptotically dependent bivariate logistic distribution.
A detailed study along the Australian coastline (Zheng et al., 2013) found that at most locations, the bivariate distribution between extreme rainfall and storm surge was
asymptotically dependent, meaning that rarer events are more likely to occur jointly
compared to more frequent events. This is the basis for the recommendation to use a
bivariate logistic distribution for dependence analysis, and provides a cautionary note
for using correlation-based measures (which assume Gaussianity) for representing
joint dependence.
Results are presented both in terms of Annual Exceedance Probabilities (AEPs) and Average Recurrence Intervals (ARIs), using the conversion $AEP = 1 - e^{-1/ARI}$.
The probability of two independent events Z = X>x or Y>y: Consider two independent
random variables, X and Y. What is the probability of a ‘failure event’ Z = X > x or Y > y? This
type of question might arise when (i) a system is considered to ‘fail’ when any component of
the system fails, and (ii) the failure of any component of the system is independent of the
failure of any other component. For example, a road might ‘fail’ when either of two bridges
are overtopped, and the bridges are sufficiently far from each other so that it is possible to
assume the flood producing mechanisms are approximately independent2.
The set of events X>x is shown by the vertical blue lines in Figure 6.5.8, and the set of events Y>y is shown by the horizontal blue lines in Figure 6.5.8. Start by considering only the probability of the two variables exceeding their respective thresholds in a given year (but potentially on different days). The question of calculating the probability of the two events coinciding (ie. occurring at the same time) is considered in a later example.
The exceedance probabilities corresponding to these thresholds are AEP_X = Pr{X>x} and AEP_Y = Pr{Y>y}. The Annual Exceedance Probability of the failure event, Z, is then given as AEP_Z = Pr{X>x or Y>y}. With reference to the illustration in Figure 6.5.8, it is straightforward to see that AEP_Z = AEP_X + AEP_Y - (AEP_X x AEP_Y); the subtraction term is needed because the cross-hatched region in Figure 6.5.8 would otherwise have been counted twice. Example calculations of AEP_Z assuming a number of different combinations of X and Y are presented in Table 6.5.3.
Figure 6.5.8. Illustrating the Probability of Two Independent Events Z = X>x or Y>y
Table 6.5.3. Worked Examples of the Probability of Two Independent Events Z = X>x or Y>y
2Given that rainfall is a spatial process, the assumption that extreme rainfall at two nearby locations is statistically
independent is unlikely to be valid; it is made here for illustration purposes only.
The probability of Two Independent Events Z = X>x and Y>y: An alternative question concerns the probability of both X and Y exceeding their specified thresholds. This situation is illustrated as the hatched region in Figure 6.5.9. Defining AEP_Z as Pr{X>x and Y>y}, AEP_Z can be estimated as AEP_Z = AEP_X x AEP_Y, with a number of specific examples shown in Table 6.5.4.
Figure 6.5.9. Illustrating the Probability of Two Independent Events Z = X>x and Y>y
Table 6.5.4. Worked Examples of the Probability of Two Independent Events Z = X>x and
Y>y
AEP_X | AEP_Y | ARI_X (years) | ARI_Y (years) | AEP_Z | ARI_Z (years)
1.00% | 65.0% | 99.5 | 1.0 | 0.65% | 153.3
2.00% | 40.0% | 49.5 | 2.0 | 0.80% | 124.5
5.00% | 20.0% | 19.5 | 4.5 | 1.00% | 99.5
10.00% | 10.0% | 9.5 | 9.5 | 1.00% | 99.5
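The entries in Table 6.5.3 and Table 6.5.4 can be reproduced with a few lines of code. The sketch below computes the OR and AND combinations for independent events and converts the resulting AEPs to ARIs using the AEP-ARI conversion given earlier; the input AEPs are taken from the worked example.

```python
import math

def aep_to_ari(aep):
    """Convert an Annual Exceedance Probability to an Average Recurrence Interval."""
    return -1.0 / math.log(1.0 - aep)

def independent_or(aep_x, aep_y):
    """AEP of Z = (X>x or Y>y) for independent events (Table 6.5.3)."""
    return aep_x + aep_y - aep_x * aep_y

def independent_and(aep_x, aep_y):
    """AEP of Z = (X>x and Y>y) for independent events (Table 6.5.4)."""
    return aep_x * aep_y

# Reproduce the last row of Table 6.5.4: two independent 10% AEP events
aep_z = independent_and(0.10, 0.10)
print(f"AEP_Z = {aep_z:.2%}, ARI_Z = {aep_to_ari(aep_z):.1f} years")  # 1.00%, ~99.5 years
```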
The Probability of Two Completely Dependent Events Z = X>x and Y>y: In situations of perfect dependence between X and Y, the probability of X>x would be equal to the probability of Y>y for all x and y. Because of this, Pr{X>x} = Pr{Y>y} = Pr{X>x and Y>y}. For example, if AEP_X and AEP_Y are both 10%, then AEP_Z = 10%.
The probability that Two Events X>x and Y>y with Specified Annual Exceedance Probabilities Pr{X>x} and Pr{Y>y} Occur on the Same Day: In the previous examples, the interest was in the probabilities that two random variables X and Y exceeded thresholds x and y in a given year. However, when estimating the exceedance probability of floods, our interest is in the probability of these two variables coinciding. Therefore, one must consider the probability that both variables reach their maxima at the same time within a given year.
This issue can be illustrated by considering the case where the daily maximum storm tide is
assumed to coincide with the daily maximum rainfall. This is still a conservative assumption
(since the peak of the hydrograph will not always occur at exactly the same time as the peak
of the storm surge within a given day), but less conservative than the assumption that the
annual maximum of variable � will always occur at the same time as the annual maximum of
variable �.
The conversion between an AEP and a Daily Exceedance Probability (DEP) is DEP = AEP/365. For the example of a 1 year ARI event, there is a 63% chance that any given year will exceed that level (AEP), and a corresponding 0.17% chance that any given day will exceed that same level (DEP).
The earlier example is now revisited (the probability of two independent events Z = X>x and Y>y), but now first converting to daily values. Table 6.5.5 shows the results using the example of two coinciding 10% AEP events. It is clear from this example that the joint exceedance probability is much lower than the results presented in Table 6.5.4 (since, in the absence of dependence, the most likely case is that the two extreme events would occur on different days). In contrast, had complete dependence been assumed, then AEP_Z would remain at 10%, as by the definition of complete dependence, high values of X and Y will always occur at the same time.
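A sketch of the same-day calculation is given below, following the chapter's simple DEP = AEP/365 conversion. The two 10% AEP events are taken from the worked example; the conversion back to an annual probability assumes, purely for illustration, that the 365 daily trials are independent.

```python
# Probability that two independent events (each 10% AEP) exceed their
# thresholds on the same day, using the chapter's DEP = AEP/365 conversion.
aep_x = 0.10
aep_y = 0.10

dep_x = aep_x / 365.0          # daily exceedance probability of X
dep_y = aep_y / 365.0          # daily exceedance probability of Y
joint_daily = dep_x * dep_y    # both exceeded on the same day (independence)

# Annual probability of at least one such joint day, assuming independent days
aep_z_same_day = 1.0 - (1.0 - joint_daily) ** 365

print(f"DEP_X = {dep_x:.4%}, joint daily probability = {joint_daily:.2e}")
print(f"Annual probability of a same-day joint exceedance = {aep_z_same_day:.5%}")
```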
Table 6.5.5. Calculating the Probability of Two Independent Events Z = X>x and Y>y, When
Adding the Constraint that Both Events Must Occur on the Same Day
Why the Probability That X>x or Y>y is Non-commensurate with the Probability that H>h Where h = g(x, y), Even When Random Variables X and Y are Independent: This illustrates a basic problem in joint probability analysis of floods in the coastal zone, where interest is in a quantity such as a flood level (h) that is some complex function of forcing variables such as rainfall (x) and storm tide (y). A 'boundary function' g() is used to represent the complex mapping from rainfall/storm tide to the flood level. In most practical applications this mapping would be achieved using hydrologic and hydraulic models that take rainfall and storm tide as their boundary conditions, and produce flood level as their output.
Consider the exceedance probability of a flood of height H>h, where a hydrological/hydraulic analysis has concluded that this height can be caused by a rainfall event with AEP_X = 5% but with no significant storm tide, or a storm tide event with AEP_Y = 20% but with no significant rainfall. Furthermore, assume that the processes X and Y are independent. In this case, it is tempting to refer to Table 6.5.3 and suggest that the AEP of the flood becomes 24.0%.
A problem with this calculation is that it neglects floods that can occur through a combination
of smaller values of rainfall and storm tide. This is illustrated by the red hatched region in
Figure 6.5.10. Therefore, even if it is assumed that extreme rainfall and storm tide are
statistically independent, it will still be necessary to apply the design variable method to
compute flood exceedance probabilities.
Figure 6.5.10. Conceptual Diagram (a) Probability of Floods Caused by Either a Significant
Rainfall Event or a Significant Storm Tide Event, and (b) Additional Probability of Floods
Produced by Combinations of Smaller Rainfall and Storm Tide Events
Accounting for dependence between extreme rainfall and storm surge as part of estuarine
flood assessments represents significant additional computational effort when compared to
traditional univariate methods. Therefore, a pre-screening analysis is recommended to
determine whether the additional complexity of a joint probability analysis is warranted.
The aim of this step is to calculate the outer envelope of flood estimates obtained from the
joint probability method. This involves calculating a minimum number of cases to determine
the magnitude of flood differences between independence and full dependence:
1. the independence case where a fluvial flood occurs in the absence of an ocean event;
2. the independence case where a coastal flood occurs in the absence of a rainfall event; and
3. the full dependence case where both a fluvial flood and a coastal flood occur simultaneously.
The specific number of runs required in the pre-screening analysis will depend on the
number of AEPs that need to be evaluated. Table 6.5.6 presents the pre-screening analysis
for three AEPs, which requires nine instances ('runs') of a hydrological and hydrodynamic
model. The boundary conditions are specified in terms of their AEP rather than their corresponding dimensional value (e.g. m³/s, m), so it will be necessary to consider the probability distribution of the extremes of each boundary to enable translation between an AEP and the dimensional value.
Table 6.5.6. Flood Levels of Different Combinations of Rainfall and Storm Tide in Terms of
Annual Exceedance Probability, for a Particular Storm Burst Duration. Only the highlighted
cells need to be evaluated.
The pre-screening analysis should be undertaken at each cross-section or grid cell in the
floodplain where information is required, with a longitudinal profile of the independence and
complete dependence cases illustrated for a single exceedance probability in Figure 6.5.12.
The pre-screening analysis involves classifying each cross-section and AEP into one of the
following cases.
Case 1 - If the flood levels in the green cells are similar to those in the red cells for each rainfall AEP (ie. the difference is less than some tolerance threshold specified in mm), the flood levels for the catchment of interest are completely dominated by the rainfall (the 'fluvial zone' in Figure 6.5.12). Normally such catchments are in the upstream reaches of the river. For this case, complete dependence should be assumed; the AEP of a flood level is obtained by assuming that rainfall and storm surge events with the same AEP coincide (red cells). While this is a conservative assumption, it eliminates the need for modelling a much larger number of combinations.
Case 2 - If the flood levels in the blue cells are very close to those in the red cells for each storm tide AEP (ie. the difference is less than the tolerance threshold), the flood levels are completely dominated by the storm tide (the 'coastal zone' in Figure 6.5.12). Normally this location is in the lower reaches of the river. As with Case 1, complete dependence should be assumed, and the flood level should be obtained based on the combinations in the red cells.
Case 3 - If the flood levels in the red cells are significantly higher than those in the green and blue cells (ie. the difference is greater than the tolerance threshold) with the same rainfall and storm tide AEPs, this indicates that the joint dependence has a significant influence on the flood level (the 'joint probability zone' in Figure 6.5.12). It will be necessary to continue to Step 2 and conduct a full joint probability analysis.
Figure 6.5.12. Pre-Screening Step, which Involves Calculating the Outer Envelope of the
Possible Flood Levels.
The threshold value represents a tolerance defined by the practitioner. This tolerance is a trade-off between the benefit of a more accurate assessment of flood exceedance probabilities (obtained through the joint probability calculation) and the additional effort required to implement a joint probability analysis. A joint probability analysis has an additional computational cost, and this cost should be proportional to the benefit of the additional precision. This trade-off will vary across different locations and design problems. As illustrated in Figure 6.5.12, the tolerance is also used to formally define the 'joint probability zone' that was first introduced in Figure 6.5.1.
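As a sketch of how the pre-screening classification might be automated, the function below compares the three envelope runs (rainfall-only, storm-tide-only and full dependence) at a cross-section against the practitioner's tolerance. The example values are hypothetical.

```python
def classify_cross_section(level_rain_only, level_tide_only, level_dependent, tolerance):
    """Pre-screening classification (Cases 1-3) for a single cross-section and AEP."""
    if level_dependent - level_rain_only < tolerance:
        return "Case 1: fluvial zone (rainfall dominated, assume complete dependence)"
    if level_dependent - level_tide_only < tolerance:
        return "Case 2: coastal zone (storm tide dominated, assume complete dependence)"
    return "Case 3: joint probability zone (full joint probability analysis required)"

# Hypothetical 2% AEP flood levels (mAHD) at one cross-section, tolerance of 0.1 m
print(classify_cross_section(level_rain_only=9.54, level_tide_only=2.10,
                             level_dependent=9.59, tolerance=0.1))
# -> Case 1, since the fully dependent level exceeds the rainfall-only level by < 0.1 m
```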
It should be noted that if one is only interested in a single AEP (rather than a range of AEPs) then only three runs are required instead of the nine runs in Table 6.5.6. For example, if only the 1% AEP is of interest, then the three model runs are: (i) an event with 1% AEP rainfall combined with the lower bound of the storm tide; (ii) an event with 1% AEP storm tide combined with no rainfall; and (iii) an event with the 1% AEP rainfall combined with the 1% AEP storm tide.
A map of dependence parameters from the bivariate logistic extreme value model has been created for the Australian coastline (Figure 6.5.13). The map was derived based on an analysis of the joint dependence using data from 64 tide gauges, 7684 daily rainfall gauges and 70 sub-daily rainfall gauges, and is described in more detail in Zheng et al. (2014a). The dependence parameters are available for storm burst durations shorter than 12 hours, between 12 and 48 hours, and between 48 and 168 hours. Note that values closer to one represent weaker dependence, and values closer to zero represent stronger dependence.
This step involves selecting the dependence parameter from the map (Figure 6.5.13) for the relevant storm burst duration, which should be estimated with reference to the catchment time of concentration.
Figure 6.5.13. Dependence Parameter (α) Map for the Basins of the Australian Coastline -
shorter than 12 hours, 12 to 48 hours, and 48 to 168 hours
In this step, the flood level corresponding to a number of scenarios of rainfall and storm tide needs to be evaluated to accurately estimate flood levels incorporating dependence. The scenarios should include the no-rainfall and lower-bound storm tide cases to represent the lowest possible value of each variable. The scenarios should also consider cases with exceedance probabilities lower than the smallest AEP (ie. largest flood level) of interest. For estimates up to the 1% AEP, a typical example is given in Table 6.5.7, which has seven cases for each variable, leading to 49 runs of the hydrologic and hydrodynamic models.
Table 6.5.7. Flood Levels of Different Combinations of Rainfall and Storm Tide in Terms of
Annual Exceedance Probability with a Particular Storm Burst Duration
The final step of the analysis involves superimposing the flood level table (Table 6.5.7) onto the joint probability density function of the bivariate logistic extreme value distribution. In the context of estimating the probability of a specific design flood level h, this involves the following steps:
1. Using the bivariate logistic extreme value model of Equation (6.5.8) with the dependence
parameter estimated in Step 2, estimate the bivariate probability distribution function
corresponding to the extreme rainfall and storm tide. This is represented as the blue
contours in the hypothetical example presented in Figure 6.5.7.
2. Using the data obtained in Step 3 (Table 6.5.7), estimate the set of all possible
combinations of extreme rainfall and storm tide that would produce flood level h. This
involves interpolating over the values in Table 6.5.7. The contour of fixed h was
illustrated as a solid red line in the hypothetical example presented in Figure 6.5.7.
3. Integrate the bivariate probability distribution function to the right of (ie. above) the design
flood level to obtain the exceedance probability of that flood event. This is represented in
Figure 6.5.7 as the integration of the blue contours over the grey shaded region.
If the objective of the analysis is to find the design flood level corresponding to a specific
AEP, then the above steps need to be repeated for a number of flood levels until the flood
level corresponding to the desired AEP is identified.
A software tool3 has been developed to perform these calculations. This tool requires as
inputs the dependence parameter and the flood level table, and will produce a plot of water
levels against AEPs. Implementation of the software is illustrated using worked examples in
Book 6, Chapter 5, Section 7. At present the software implementation of the method has been tested for flood levels from the 50% to 1% AEP. The software tool is needed primarily for Steps 4.2 and 4.3. To determine contour lines in Step 4.2 there are a number of standard libraries, but to determine the integral in Step 4.3 a customised routine is required to implement Equation (6.5.13).

3 http://p18.arr-software.org/
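As an illustration of the contour step (Step 4.2), the sketch below interpolates a small flood level table to find, for each storm tide AEP column, the rainfall AEP that produces a nominated design flood level h. The table values are hypothetical; a production implementation (such as the software tool referenced above) would also integrate Equation (6.5.13) along this contour.

```python
import numpy as np

# Hypothetical flood level table (mAHD): rows = rainfall AEPs, columns = storm tide AEPs
rain_aep = np.array([0.50, 0.20, 0.10, 0.05, 0.02, 0.01])
tide_aep = np.array([0.50, 0.20, 0.10, 0.05, 0.02, 0.01])
levels = np.array([
    [2.1, 2.3, 2.5, 2.7, 2.9, 3.1],
    [2.6, 2.7, 2.9, 3.0, 3.2, 3.4],
    [3.0, 3.1, 3.2, 3.3, 3.5, 3.6],
    [3.4, 3.5, 3.6, 3.7, 3.8, 3.9],
    [3.9, 4.0, 4.0, 4.1, 4.2, 4.3],
    [4.3, 4.4, 4.4, 4.5, 4.6, 4.7],
])

def boundary_for_level(h):
    """For each storm tide column, interpolate the rainfall AEP giving flood level h."""
    boundary = []
    for j, t_aep in enumerate(tide_aep):
        col = levels[:, j]
        if h < col[0] or h > col[-1]:
            continue                       # level not reachable within this column
        # levels increase down the column (rarer rainfall), so interpolate on the column
        r_aep = np.interp(h, col, rain_aep)
        boundary.append((r_aep, t_aep))
    return boundary

print(boundary_for_level(3.5))
```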
Finally, a note of caution is required regarding the identification of the storm burst duration in
Step 2. In that step, it was recommended that the storm burst duration was selected based
on the time of concentration of the catchment, with the reasoning that this would lead to the
maximum flow rate. Assuming static tailwater levels (which is a fundamental assumption of
the design variable method; see Book 6, Chapter 5, Section 3) and a constant dependence
parameter, this would be equivalent to the duration that would lead to the largest flood event.
However, because the dependence parameter depends on duration, it is possible that some
storm burst duration could result in a lower peak flow rate but nonetheless lead to a higher
flood level because the peak flow is more likely to coincide with the peak ocean level.
Therefore, if the storm burst duration identified in Step 2 is close to a threshold between dependence parameters, it may be necessary to test the implications of an adjacent duration with a lower dependence parameter (ie. stronger dependence).
Information on how the dependence between extreme rainfall and storm surge/tide may
change in a future climate is currently unavailable, and therefore guidance on possible
changes to the dependence parameters in Figure 6.5.13 cannot be provided at this stage.
Guidance is available, however, on possible changes to both extreme rainfall (Book 1,
Chapter 6) and mean sea level. As an interim measure, it is recommended that estimates of
the impact of climate change on flooding in the joint probability zone be accounted for as
follows:
• Changes to extreme rainfall should be estimated using the approach described in Book 1,
Chapter 6;
• Changes to ocean level should be estimated using the approach described in Engineers
Australia Guidelines for Responding to the Effects of Climate Change in Coastal and
Ocean Engineering (NCCOE, 2012); and
• The dependence parameters described in Figure 6.5.13 that correspond to the historical
climate situation should be used unless more precise estimates of future dependence
parameters are available.
The implication of changing both the extreme rainfall intensity and the ocean levels is illustrated in Figure 6.5.14. Using the adjusted rainfall and ocean levels, the four step methodology described earlier in this section can then be applied. It is noted that a possible effect of climate change is that the tidally affected part of a river is likely to change (for example, it may reach further upstream due to the effects of sea level rise), and this will influence the area classified as requiring a full joint probability analysis. Therefore the pre-screening analysis in Step 1 will also need to be repeated when considering climate change.
Figure 6.5.14. Interim Approach to Account for the Effects of Climate Change
To account for climate change, two alternative methods for adjusting the flood level table
(Table 6.5.7) are proposed:
• The table can be populated using the climate change-affected rainfall and ocean levels as
upper and lower boundary conditions to the hydrologic/hydraulic models, which would
require repeating all the simulations to account for the changes in the rainfall and ocean
level values; or
• The historical flood level table can be used but the exceedance probabilities of the extreme rainfall and storm tide can be modified to reflect future exceedance probabilities (a sketch of this adjustment is given after this list). This can eliminate the need for additional hydrologic and hydraulic runs, although it is possible that additional simulations may still be required for low exceedance probability events.
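The following sketch illustrates the second option under simple, clearly labelled assumptions: the historical rainfall marginal is represented by a fitted GEV distribution (hypothetical parameters), extreme rainfall depths are assumed to increase by a fixed percentage under the future climate, and the historical table rows are relabelled with the future AEP of the same rainfall depth.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical GEV fit to historical annual maximum rainfall (mm); scipy's c = -xi
hist = genextreme(c=-0.1, loc=80.0, scale=25.0)

uplift = 1.15   # assumed 15% increase in extreme rainfall depths (illustrative only)
hist_aeps = np.array([0.20, 0.10, 0.02, 0.01, 0.002, 0.0005])

# Rainfall depth associated with each historical AEP (row label of the table)
depths = hist.ppf(1.0 - hist_aeps)

# If future depths are the historical depths scaled by 'uplift', the same depth
# corresponds under the future climate to a larger (more frequent) AEP
future_aeps = 1.0 - hist.cdf(depths / uplift)

for p_hist, p_future in zip(hist_aeps, future_aeps):
    print(f"historical AEP {p_hist:.2%} -> future AEP {p_future:.2%}")
```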
The hydraulic and hydrologic models for the Hawkesbury-Nepean River system were
originally developed as part of an Environmental Impact Statement in the 1990s for works to
upgrade the spillway capacity of Warragamba Dam. The study included a detailed analysis
of the existing flooding behaviour and was carried out by Webb McKeown and Associates
(1996). The outcomes were subject to rigorous technical reviews by a range of parties
including Sydney Water, the then Department of Land and Water Conservation, the Bureau
of Meteorology and other experts.
The hydrologic model used was RORB, and the hydraulic model was RUBICON. The RORB
model was calibrated and then evaluated using historical records available for five of the
events between June 1964 and April 1988. The RUBICON hydrodynamic model software
was used to quantify the hydraulic aspects of the flood behaviour (e.g. flood levels and
velocities). RUBICON is a fully dynamic one dimensional (1D) model and uses different
elements to simulate complex flow over floodplains and through channel systems. The
hydraulic model covers the entire area from Lake Burragorang to the ocean at Broken Bay.
The process of calibrating and evaluating the RUBICON model was undertaken using
recorded information for 11 individual historical events. The models were then used to
determine design flood behaviour of the system. The calibrated RUBICON model of the
Hawkesbury-Nepean has been maintained by WMAwater (previously Webb McKeown and
Associates) since this original study and was selected for this project as there is a very high
flood gradient along the river even though the river is tidal for 140 km upstream of the
estuary inlet under non-flood conditions.
Consider the location of Liverpool which is 80 km upstream from the ocean boundary.
Assume that the practitioner specifies a tolerance of 0.1 m for the design in question.
Assume also that the user is interested in a design at the 2% AEP level. Three model runs are required:
1. Dependent case: flow boundary at 2% AEP and ocean boundary at 2% AEP.
2. Flow boundary only: flow boundary at 2% AEP and ocean boundary at 100% AEP.
3. Ocean boundary only: flow boundary at 100% AEP and ocean boundary at 2% AEP.
The water levels resulting from these three runs are summarised in Table 6.5.8. For the
dependent case, the water level is 9.590 m. For the independent case, the water level is
obtained by taking the highest water level from either the flow boundary only case or the
ocean boundary only case. For this location, the flow boundary only case dominates, and
leads to a flood level of 9.537 m. The difference between the dependence and
independence cases is 0.053 m, which is within the specified tolerance. This implies that this
cross-section is not highly sensitive to the ocean level, and is thus in the 'fluvial zone'. The
fully dependent value of 9.590 m is therefore used as the best approximation to the 2% AEP
event, without having to implement the design variable method. This analysis is only valid for
the 2% AEP level, and should be repeated for other AEPs if there is interest in analysing
other exceedance probabilities.
Table 6.5.8. Model-Derived Water Levels (mAHD) for Given Pairs of Tide and Rainfall
Boundary Input Conditions for a Cross-section Located at Liverpool (Chainage–80 300)
By repeating the pre-screening analysis at multiple locations along a river, the extent of the
joint probability zone can be defined. Two examples of dependent locations for this case
study are Olga Bay and Spencer, with 2% AEP model runs shown in Table 6.5.9 and
Table 6.5.10 respectively. The difference between the dependent and independent cases at
both these locations is greater than the tolerance of 0.1 m, indicating the influence of both
boundary conditions. At these locations, it is therefore necessary to implement the design
variable method to determine the 2% AEP water level. At Spencer, in particular, the difference in flood level between the dependent and independent cases is 0.614 m, suggesting potentially significant discrepancies depending on the joint probability assumption at this location.
For the location of this case study, assume that the dependence parameter is 0.9 (refer Figure 6.5.13).
The hydraulic response table should include runs where the 100% exceedance probability of
each margin is considered, and there should be a wide range of AEPs. The range of AEPs
for the margins should include events that are rarer than the AEPs being calculated for the
water level (since, for example, a 1% AEP water level could hypothetically arise from the
combination of a 10% AEP flow and a 0.1% AEP storm tide). The total number of model runs
(here 5 x 10 = 50 runs) is likely to be the limiting factor for the feasibility of the method
(especially where two dimensional (2D) hydrodynamic models are used) and this will govern
the resolution at which the table is evaluated (Zheng et al., 2015).
Table 6.5.11. Model-Derived Water Levels (mAHD) for Given Pairs of Storm Tide and
Rainfall Boundary Input Conditions for a Cross-section Located at Olga Bay (Chainage–20
400).
Table 6.5.12. Model-Derived Water Levels (mAHD) for Given Pairs of Storm Tide and
Rainfall Boundary Input Conditions for a Cross-section Located at Spencer (Chainage–34
700)
Figure 6.5.15 is a plot of the water level contours that have been interpolated from the
hydraulic response tables for Olga Bay and Spencer. These plots provide a consistency
check of the water levels in the tables. Vertical lines imply that the storm tide (in this case the
� variable) is the dominant process affecting the water level, whereas horizontal lines imply
that the flow (the � variable) dominates the water level. Any other slope between these two
indicates variation with respect to both inputs.
Figure 6.5.15. Interpolated Contours for Olga Bay (left) and Spencer (right) Corresponding to
the Water Levels in Table 6.5.11 and Table 6.5.12.
Figure 6.5.16 shows the output water levels at the two locations. For Olga Bay the best
estimate (solid black line) is very similar to the independence case. Figure 6.5.16 also shows
that the difference between complete dependence and independence is a function of the
AEP (AEPs >10% are very similar, but AEPs <10% diverge between these two cases). For
Spencer, the best estimate lies approximately midway between the complete dependence
and independence cases. This demonstrates that the relationship of α to the resulting water levels is non-linear and that it varies with location. Specifically, although α varies from zero (complete dependence) to one (independence), α = 0.90 does not necessarily mean the water level is 'near independence'.
Figure 6.5.16. Water Levels at Olga Bay (left) and Spencer (right) Corresponding to Cases
of Complete Dependence, Complete Independence and the Best Estimate when α=0.9
A longitudinal plot can be generated by repeating the analysis for multiple cross sections
(Figure 6.5.17). The joint probability zone is indicated as the region where the difference
between the complete dependence and independence cases is greater than the defined
tolerance. From Figure 6.5.17 it is clear that Spencer is situated in the middle of this zone,
and that Olga Bay – being closer to the ocean boundary – is less affected by the joint
dependence. The extent of the zone also depends on AEP, as the joint dependence is more
important for more frequent events, and this leads to a longer extent of the zone (e.g.
compare the range of distance over which there is a noticeable difference between
dependence and independence cases, for the 10% and 1% AEP respectively).
Figure 6.5.17. Longitudinal Comparison of 1% AEP and 10% AEP Water Levels
Model runs for a pre-screening analysis at three different AEPs (9.5%, 2% and 1%) are
shown in Table 6.5.13. For the 9.5% AEP the difference between independence and
complete dependence is 0.21 m, for the 2% AEP it is 0.12 m and for the 1% AEP it is 0.12
m. The importance of accounting for joint probability effects therefore appears to be greater
for more frequent events. If a tolerance of 0.1 m was specified for the design, a joint
probability analysis would be required for each AEP to obtain more accurate estimates of the
water level corresponding to a specified exceedance probability.
The critical duration of the Nambucca River catchment is between 36 and 48 hours. Given
this storm burst duration and the location of the Nambucca River catchment, α = 0.90, taken
from Figure 6.5.13, was used to represent the dependence between extreme rainfall and
storm tide.
Table 6.5.14 shows flood levels at Macksville for various combinations of critical-duration
rainfall and storm tides in terms of AEP.
Table 6.5.14. Flood Levels for Various Combinations of Rainfall and Tide Levels at
Macksville (Pacific Highway Bridge) Nambucca River
Figure 6.5.18 shows the flood levels at the Macksville cross-section (Pacific Highway Bridge)
for various AEPs. As with the pre-screening analysis, the difference between the flood levels
is larger for more frequent AEPs. For rarer AEPs, the small difference between the
independence and complete dependence-based estimates indicates that one flood-
producing mechanism has negligible effect and that the other dominates. Based on the
results in Table 6.5.14 rainfall is the dominant mechanism (there is a larger variation with
changes in rainfall than with changes in tide).
Given that the location is jointly affected by storm tides and streamflow, it is preferable to
compare the modelled water levels to observed levels (rather than flows). The observation
gauge at Macksville has records from 1890 to 2011, giving 121 annual maximum events. Of
these, 93 were censored below the 2 m threshold due to the tidally influenced nature of the
location, leaving 28 uncensored gauged values. Of the 28 values, one value – the largest on record – could not be specified precisely but instead was suggested to have a range between 3.5 m and 4 m (WMAwater, 2013).
Figure 6.5.19 compares the observed water levels at Macksville (blue points) to the range of
estimates from the design variable method, from complete dependence to independence,
with the range depicted in the figure as grey shading. The fitted model gives reasonable
agreement for less frequent events (AEP < 5%) that were the focus of hydraulic model
calibration, but there is noticeable discrepancy for more frequent events (AEP > 5%). These
observations lie outside the bounds produced by the dependence parameter, suggesting
that variability in the dependence between extreme rainfall and storm tide is insufficient to
explain this discrepancy.
Figure 6.5.19. Comparison of Observed Water Levels at Macksville with Range of Estimates
from Design Variable Method from Complete Dependence to Complete Independence
Possible explanations for this discrepancy include:
1. Parametric uncertainty in the rainfall, storm tide and observed water levels;
2. The hydraulic model and how it is represented via the hydraulic response table;
3. The assumed entrance conditions being too efficient for these more frequent events; and
4. The upper and lower boundary models (ie. the hydrological and storm tide models).
Uncertainty Assessment
To estimate the 90% confidence intervals for the water levels, the procedures outlined in
Book 3, Chapter 2 were used to implement a Flood Frequency Analysis. The 90%
confidence limits from a fitted Generalised Extreme Value distribution are represented as
grey dashed lines in Figure 6.5.20, and appear to encompass the simulated flows for most
AEPs.
There is also uncertainty in the distributions used to model the design variable method, such
that the rainfall AEPs and storm tide AEPs may differ from those in Table 6.5.14. To estimate
the 90% confidence intervals to account for the effects of rainfall and storm tide, the
censored threshold likelihood of Zheng et al. (2015) was used. In the analysis, the joint
distribution of storm tides at Stuart Island and rainfall from the Utungun gauge were
extracted, jointly dependent Generalised Pareto distributions were fitted and the
corresponding parameters were sampled using a Markov Chain Monte Carlo method to
adjust the AEPs in Table 6.5.14. This method is beyond the scope of this chapter, but it is
nonetheless useful for diagnosing the discrepancy with observations. Figure 6.5.20 shows
the 90% confidence limits of the uncertainty analysis of the design variable method.
Comparing the observations to the confidence limits in Figure 6.5.20, the design variable
method has considerable uncertainty in the upper tail, but less uncertainty in the lower tail.
The observed water levels in the lower tail lie outside the confidence limits, which suggests
that this discrepancy is not accounted for by considering parametric uncertainty.
Nonetheless, the confidence intervals between the two methods overlap for the majority of
AEP estimates suggesting general agreement.
Figure 6.5.20. Comparison of Observed Water Levels to 90% Confidence Limits from
Generalised Extreme Value Distribution and Design Variable Method
Hydraulic Model
The hydraulic model entails a number of assumptions that could lead to misspecification of
water levels for a given set of boundary conditions. For example, typical issues such as
simplified model representations and coarse grids could affect the flood level estimates. If the water levels specified for Table 6.5.14 were different, this would lead to different water level contours and exceedance probabilities. For this study a two dimensional model was used with a rigorous calibration (WMAwater, 2013). The focus of the calibration was on less frequent events, whereas the discrepancy in Figure 6.5.19 is with respect to the more frequent events. One potential issue with respect to more frequent events is an assumption about the river entrance. The model assumed that events generated sufficient flow to blow out the river entrance, but for more frequent events this may not hold, leading to higher
observed water levels than those modelled. This issue will be further considered in the
following section by adjusting AEPs corresponding to the water levels (since it is
computationally expensive to rerun the hydraulic model).
A related issue may be due to the coarseness and range of the hydraulic response table
(Zheng et al., 2015), but Table 6.5.14 extends to a 0.05% AEP event and has a relatively
large number (twelve) of increments for each dimension. Based on these considerations, the
discrepancy does not seem to be due to the coarseness or extent of AEPs in Table 6.5.14.
Boundary Model
The boundary model refers to the methods by which the boundary conditions of the hydraulic
model were derived and linked with exceedance probabilities (e.g. the AEPs in Table 6.5.14).
For example, the probability distribution for the ocean boundary may have been specified
using a surrogate location or may itself be derived from a coastal model. The probability
distribution for the streamflow boundary may have assumptions in how the streamflow was
derived from rainfall, for example, loss parameter values, temporal patterns,
representativeness of the rainfall gauge, and the coincidence of rainfall across multiple
tributaries. In short the probabilities associated with water levels in Table 6.5.14 may not be
correct.
Visual inspection of Table 6.5.14 shows that the water level at Macksville is more responsive
to the rainfall distribution than to the storm tide. This suggests that the design variable
method at this location will be more sensitive to the assumptions made when associating the
rainfall AEPs to water levels. Rather than reassess the hydrological model, a heuristic
method is to manually adjust the AEPs and determine whether an improved fit to water
levels is plausible. Taking this approach, the frequent rainfall AEPs in Table 6.5.14 were
modified from {63.1%, 39.3%, 18.1%} to {63.1%, 50%, 39.1%} with all other AEPs remaining
the same. The result of this approach is shown in Figure 6.5.21 giving the strongest
indication that the discrepancy is due to the association of water levels to the frequent
rainfall AEPs. As noted previously, the hydraulic model assumption that the river entrance is
blown-out for frequent events is the most likely plausible explanation for this observation.
However, it cannot be ruled out that the issue may instead be with the hydrological model
and further inspection would be required to isolate the specific issue (beyond the scope of
this chapter).
Figure 6.5.21. Comparison of Observed Water Levels at Macksville to Best Fit Estimates
from Design Variable Method Assuming Correction to Frequent Rainfall AEPs
From this example, it is clear that care is required when interpreting the results from the
design variable method. This example has illustrated the type of issues that should be taken
into account, including the uncertainty of data sources, hydraulic model assumptions and
boundary model assumptions.
The flood level table for Macksville was constructed from output from a 2D hydraulic model.
At other locations the flood level table may be only partially complete because some flow/
tide combinations do not cause the water level to exceed the base elevation of that grid cell.
As an example, Table 6.5.15 presents a water level table from a different location, and has a
number of NA (not available) values indicating that for these combinations the free water
surface was not high enough to wet the grid cell. Provided that there are not too many NAs
and they are in a consistent block, the design variable method can handle partially wetted
flood level tables by ignoring the region of missing values.
Another issue is that the hydraulic model output should be ‘well behaved’ for all
combinations of boundary conditions. This issue can be seen in Table 6.5.15 for the column
of 0.05% AEP storm tide, which shows two instances where a larger rainfall value results in
a lower water level. When a rarer rainfall (or, equivalently, storm tide) event yields a lower
water level this is referred to as being non-monotonic increasing. Strictly, this is not
physically possible, but could be produced by a model for various reasons. One explanation
is that the hydraulic model itself has spurious numerical artifacts.
Another explanation is cases where the boundary conditions have been derived
inconsistently. For example, the practitioner may have switched between critical durations,
used different temporal patterns or changed the way hydrographs are derived from tributary
catchments. A practical workaround is to enforce monotonicity by artificially raising the water
levels to be at least as high as water levels from more frequent events (in Table 6.5.15, the
two events would be set to 10.44 m). See the underlined cases of tide = 0.05% with rain =
(5% or 0.05%) that are not monotonic increasing when compared to the values at lower
AEPs.
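A simple way to apply this workaround in code is sketched below: water levels in each storm tide column of the flood level table are forced to be non-decreasing as the rainfall AEP becomes rarer. The array values are placeholders, with NaN used for the dry (NA) combinations.

```python
import numpy as np

# Hypothetical flood level table (mAHD): rows ordered from frequent to rare rainfall AEPs,
# columns ordered from frequent to rare storm tide AEPs; NaN marks dry (NA) cells.
levels = np.array([
    [np.nan,  9.80, 10.05],
    [ 9.90,  10.10, 10.44],
    [10.20,  10.30, 10.41],   # 10.41 < 10.44: non-monotonic, will be raised
    [10.60,  10.70, 10.43],   # 10.43 < 10.44: non-monotonic, will be raised
])

# Enforce monotonicity down each column (a rarer rainfall cannot give a lower level)
corrected = np.fmax.accumulate(levels, axis=0)

print(corrected)
```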
5.8. Summary
Flood estimation in estuarine regions is generally more complicated than for other locations
due to the range of processes and timescales that can lead to flood events. As described in
Book 6, Chapter 5, Section 2, these processes can include extreme rainfall events on the
upstream catchment, combined with storm surges and high astronomical tides in the lower
reaches of the estuary. The strengths and limitations of three alternative methods (Flood
Frequency Analysis, continuous simulation and the design variable method) were reviewed
in Book 6, Chapter 5, Section 3, with the design variable method identified as the most
appropriate method for general application in Australia.
A detailed overview of the theory of joint probability modelling was presented in Book 6,
Chapter 5, Section 4, and a practical approach to implementing the design variable
method was provided in Book 6, Chapter 5, Section 5. The recommended approach includes
a pre-screening analysis that can be used to ensure that a detailed joint probability analysis
is only conducted for cases where the additional complexity is warranted. The
implementation of the method has been designed to be generally applicable to a range of
situations across the Australian coastline, and the method is able to accommodate changes
to extreme rainfall and ocean levels as a result of anthropogenic climate change. Worked
examples describing the implementation of the method were provided in Book 6, Chapter 5,
Section 6 and Book 6, Chapter 5, Section 7.
The additional complexity of joint probability modelling means that the methods described
here should only be implemented by users with sufficient understanding of the theoretical
basis of each method. In all cases, the assumptions and limitations of each method
(summarised in Book 6, Chapter 5, Section 3) should be taken into account to ensure that
the selected method is appropriate for the problem. The theory, computational methods and
supporting datasets required to implement joint probability approaches will continue to
advance, and users should maintain familiarity with on-going developments in this field.
5.9. References
Ang, A.H.S. and Tang, W.H. (2006), Probability Concepts In Engineering Planning and
Design. Emphasis on applications in Civil and Environmental Engineering, John Wiley and
Sons.
Beirlant, J., Goegebeur, Y., Segers, J. and Teugels, J. (2004), Statistics of Extremes - Theory
and Applications, 490 pp., John Wiley and Sons, West Sussex, England.
Coles, S.G. and Tawn, J.A. (1994), Statistical Methods for Multivariate Extremes: An
Application to Structural Design, Journal of the Royal Statistical Society: Series C (Applied
Statistics), 43(1), 1-48.
Hawkes, P.J. and Svensson, C. (2006), Joint Probability: Dependence Mapping and Best
Practice, R&D Technical Report FD2308/TR1, DEFRA.
Heffernan, J. and Tawn, J.A. (2004), A conditional approach for multivariate extreme values
(with discussion), Journal of the Royal Statistical Society: Series B (Statistical Methodology),
66(3), 497-546.
Kotz, S. and Nadarajah, S. (2000), Extreme Value Distributions: Theory and Applications,
Imperial College Press.
The National Committee on Coastal and Ocean Engineering (NCCOE) (2012), 'Guidelines
for responding to the effects of climate change in coastal and ocean engineering', Engineers
Australia, Third Edition.
Svensson, C. and Jones, D.A. (2002), Dependence between extreme sea surge, river flow
and precipitation in east Britain, International Journal of Climatology, 22: 1149-1168.
Svensson, C. and Jones, D.A. (2004), Dependence between sea surge, river flow and
precipitation in south and west Britain, Hydrology and Earth System Sciences, 8(5), 973-992.
Tawn, J. (1988), Bivariate extreme value theory: Models and Estimation, Biometrika, 75(3),
397-415.
WMAwater (2013), Hydraulic modelling report - Nambucca River and Warrell Creek, Report.
Webb McKeown and Associates (1996), Warragamba Dam Auxiliary Spillway EIS flood
study parts A-E, Report.
Zheng, F., Westra, S. and Sisson, S.A. (2013), The dependence between extreme rainfall
and storm surge in the coastal zone, Journal of Hydrology, 505: 172-187.
Zheng, F., Leonard, M. and Westra, S. (2015), Efficient joint probability analysis of flood risk,
Journal of Hydroinformatics, in press (accepted 14/1/2015).
Zheng, F., Westra, S. and Leonard, M. (2014a), Australian Rainfall and Runoff Revision
Project 18, Stage 3: Coincidence of fluvial flooding and coastal water levels in estuarine
areas, Report, 145 pp.
Zheng, F., Westra, S., Leonard, M. and Sisson, S.A. (2014b), Modelling dependence
between extreme rainfall and storm surge to estimate coastal flood risk, Water Resources
Research, 50(3), 2050-2071.
Chapter 6. Blockage of Hydraulic
Structures
William Weeks, Ted Rigby
6.1. Introduction
6.1.1. Background and Scope
The capacity of drainage systems can be severely impacted by blockage. However, there
are situations where significant blockage may not impact flood behaviour to any great extent.
Determination of likely blockage levels and mechanisms, when estimating design flows, is
therefore an important consideration in quantifying the potential impact of blockage of a
particular structure on design flood behaviour.
Blockage of drainage structures is a subject where a range of advice has been provided in
different guidelines. Many drainage guidelines do not mention blockage at all (Pilgrim, 1987),
so blockage is ignored in many cases. In other situations, especially where there has
been an observed blockage problem in historical flood events, blockage may be specified for
extreme conditions. Other guidelines provide inconclusive advice.
In fact, the actual evidence for the impact of blockage on design flood events is very limited
and the evidence for any clear quantitative design advice is lacking. This is the case
internationally as well as in Australia.
This chapter does not present a definitive approach, but attempts to provide a consistent
analysis methodology, while not becoming too extreme in either direction, since there are
risks in both under- and over-estimating the influence of blockage on design
flood levels. It draws heavily on the findings of an earlier report prepared by the ARR
Revision Project 11 team (Weeks et al., 2009). Materials upon which this guideline has been
based are referenced in the Bibliography of this chapter and in the earlier project reports and
papers released on the ARR website (http://arr.ga.gov.au/).
It is expected that this chapter will be updated and revised as more information becomes
available and designers gain experience in the assessment of blockage and how it affects
the drainage system and calculated design flood behaviour.
floodwater. It has not been developed for and is not appropriate when considering the impact
of what are known as hyperconcentrated flows, mudflows or debris flows, on blockage of a
structure. Hyperconcentrated flows are typically defined by a solids content of 20% or more
by volume (or about 40% by weight) of the water column. Mud and debris flows include even
higher levels of solids. At these much higher levels of suspended or fully integrated solids,
blockage levels are likely to be much higher than those assessed in accordance with this
chapter. Care should be taken in the review of catchment conditions where bed grades are
relatively steep (say > 3%) to confirm that the bed and banks would remain relatively stable,
such that flows remain in the sediment- or debris-laden category and do not become
hyperconcentrated during the event under consideration.
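As a rough check of the quoted thresholds, the relationship between solids content by volume and by weight can be computed from an assumed sediment density. The minimal sketch below uses 2650 kg/m3 for sediment and 1000 kg/m3 for water; these densities and the function name are illustrative assumptions only, not values taken from this guideline.

```python
def weight_fraction(volume_fraction, rho_sediment=2650.0, rho_water=1000.0):
    """Convert a solids content by volume to a solids content by weight."""
    solids_mass = volume_fraction * rho_sediment
    water_mass = (1.0 - volume_fraction) * rho_water
    return solids_mass / (solids_mass + water_mass)

# A solids content of 20% by volume corresponds to roughly 40% by weight,
# consistent with the hyperconcentrated flow threshold quoted above.
print(round(weight_fraction(0.20), 2))  # about 0.40
```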
Debris Availability
The volume of debris available in the source area;
Debris Mobility
The ease with which available debris can be moved into the stream;
Debris Transportability
The ease with which the mobilised debris is transported once it enters the stream;
Structure Interaction
The resulting interaction between the transported debris and the bridge or culvert
structure; and
Random Chance
An unquantifiable but significant factor.
These various factors which impact debris movement and interaction with the structure are
discussed further in the following sections.
Debris comprising natural materials and urban debris are both discussed in Book 6, Chapter
6, Section 3. A means of determining the relevant dimensions of the debris is discussed in
Book 6, Chapter 6, Section 4.
Small items of vegetation will usually pass through drainage structures during floods, while
larger items may be caught in the structure. Once larger items are caught, this then allows
smaller debris to collect on the structure.
Once mobilised, gravels and cobbles are primarily transported as bed load within high
gradient streams. The deposition of cobbles can readily block the entrance of culverts or
reduce the flow area under bridges.
Boulders
comprise rocks greater than 200 mm. The source of boulders is mostly from gully and
channel erosion, landslips and the displacement of rocks from channel stabilisation
works. Like gravel and cobbles, this material is typically transported as bed load in high
gradient streams. This material can readily block the entrance to a structure and/or
cause damage to the structure from the force of impact/collision.
The following factors affect the availability of debris material within a source area:
Local geology
The geology of the debris source area, particularly the exposed geology of the
watercourse, influences the availability of materials such as clay, silt, sand, gravel, rocks
and boulders.
Source area
Increasing the area supplying debris typically increases the quantity of available
blockage material. It is noted however, that once blockage occurs at a given structure,
the debris source area for the next downstream structure may be much less than that of
the upstream structure's source area.
typically increase the availability of debris. Some types of cover are also more prone to
produce debris than others (e.g. Cora trees). The type of cover in the source area can
also impact availability.
Land clearing
This is associated with both rural and urban land use practices. Deforestation and
urbanisation can alter the long-term flow regime of streams and may lead to gully erosion
and channel expansion.
Urbanisation
Such areas make available a wide range of debris typically influenced by the extent of
flood inundation and proximity of such debris to the stream. In most circumstances this is a
manageable factor linked to town planning and drainage design.
Rainfall erosivity
Different regions experience a range of frequencies of rainfall intensity, and in general,
those areas that experience more intense rainfall have a greater potential to mobilise
debris than areas of lower rainfall intensity.
Soil erodibility
This can vary from weathered rocks to cohesive clays; all soils have different abilities to be
eroded, entrained and made available for mobilisation.
Slope
For sediment and boulder movement, there is a relationship between the mobilisation of
such debris and the slope of the catchment, with respect to overbank areas where debris
may be sourced and the stream channel which conveys the debris.
Storm duration
The mobilisation of materials generally increases with increasing storm duration.
Vegetation cover
Sparse vegetation cover can increase sediment mobility.
Exposed services attached to the face of culverts or bridges or obstructing the culvert
waterway opening can significantly increase the risk of blockage. Similarly, some through-
culvert features introduced to improve fish passage can also collect and hold debris
increasing the risk of internal blockage problems. Many other factors such as skew
alignments, opening aspect ratios, opening height to overtopping height ratios, culvert
hoods, sloping inlet walls and the smoothness of transitions can also modify the likely
interaction between the arriving debris and the bridge or culvert structure.
In urban drainage systems, an individual culvert is not an isolated structure; it is part of a
system, generally with culverts and other structures in series down the watercourse. As a
consequence, upstream culverts are likely to collect a portion of the transported debris in the
stream, reducing the quantity of debris that would otherwise reach the downstream culverts,
so the risk of blockage in these downstream structures is reduced.
The design blockage is the blockage condition that is most likely to occur during a given
design storm and needs to be an “average” of all potential blockage conditions to ensure
that the calculated design flood levels reflect the defined probability. For example, an
assumption of a higher than average level of blockage would lead to the calculated design
flood level upstream of the structure being higher than would be appropriate for the defined
probability. Downstream flood levels would be lower because of the additional flood storage
created upstream of the structure. On the other hand, an assumed lower than average level
of blockage would result in lower flood levels upstream and higher flood levels downstream.
This is a similar concept to that of probability neutrality used in various aspects of design
flood event analysis. It is also noted that actual blockage levels vary greatly from event to
event with a potential spread from “all clear” to “fully blocked” even in floods of comparable
magnitude. Antecedent catchment conditions and random chance are major factors in
determining blockage levels in an actual event. The selected design blockage must aim for
probability neutrality (the concept of ensuring that the AEP of the design flood discharge is
the same as the AEP of the design rainfall input) so design floods are appropriate for the
particular circumstances. As with other similar aspects of design flood estimation, such as
losses, each individual historical flood may have quite different amounts of blockage
compared to the design event.
This chapter is based on a design event type of analysis, where a flood of a defined
probability is required. For Monte Carlo analysis of flood risk, a probability distribution of
blockage is required as an input. Considering the uncertainty in the assessment of
blockage, analysis of probability distributions is even more difficult. This topic is discussed in
more detail in Book 6, Chapter 6, Section 5. The procedure presented in this chapter is
based on a qualitative assessment of the debris likely to reach a structure, and the likely
interaction between that debris and the structure with regard to its potential for blockage. It is
based on the various papers prepared by Barthelmess, Rigby, Silveri and others.
The procedure initially involves a series of decisions leading to estimation of the likely
magnitude of debris reaching a structure in a 1% AEP event and the most likely blockage
level that would develop at the structure under consideration. Subsequent adjustments are
then made to reflect the most likely design blockage levels in events of lesser or greater AEP
and to establish the associated most likely blockage mechanism. This procedure provides a
probability neutral approach to the assessment of an appropriate level of blockage for the
simulation of design flood behaviour, but may not reflect the specific conditions of an
equivalent historical event; such is the random nature of the many variables controlling
blockage behaviour.
Where the structure/site under consideration is located in a particularly flood sensitive area
and blockage of the structure could significantly impact flood behaviour in that area, then a
high level of investigation is warranted. This should include a field inspection of the upstream
catchment/source area to confirm the types of debris likely to reach the site, their availability,
mobility and transportability together with the average size of the largest 10% of each debris
type likely to reach the site. Any structures upstream of the target structure/site should be
inspected and consideration given to their ability to trap debris reaching the target structure/
site. Any photographs/records of past blockage material and extents should be used to
validate the choice of L10 and debris type. Although seldom available, any photos/records of
the blockage mechanism (Location – Type – Timing) that have been observed in past events
will help to validate the chosen blockage mechanism to be used in the hydraulic model.
However, it must be stressed that it is the most likely (probability neutral) blockage
mechanism that is required, not the worst case scenario. Flood mapping, aerial photography,
annual rainfall and rainfall IFD data, rainfall and soil erosivity maps, topographic maps,
vegetation and soil maps should be consulted when available to further consolidate
conclusions as to the types of debris likely to reach the site and the quantum of such debris.
Conversely, when the structure under consideration is in an area where changes in flood
behaviour would have no significant consequences on safety, property damage or amenity,
then an extensive investigation to support the blockage assessment process, as outlined
above, may not be warranted. This decision should be documented.
The final decision as to what is an appropriate level of investigation must ultimately be the
responsibility of the person making the assessment. It will vary greatly between sites and will
to some extent be constrained by what information is available. Whatever the approach
adopted, it is important that the level of investigation undertaken is relevant to the
importance of the blockage assessment at the site and is documented, so that others
relying on the assessment are aware of the confidence limits attached to that particular
assessment.
In particular, if there has been no long term history of blockage at a particular structure and
similar drainage structures in the catchment have not demonstrated blockage problems,
blockage may not need to be considered, or a nominal allowance only may be appropriate in
design.
The types of debris available in their respective source areas will normally be readily
apparent during a field visit or from aerial photographs, but relevant dimensions may be
more difficult to assess.
The ratio of the opening width of the structure (e.g. diameter or width of the culvert or bridge
pier spacing) to the average length of the longest 10% of the debris that could arrive at the
site (termed here as L10) is a well correlated guide to the likelihood that this material could
bridge the openings of the structure and cause blockage. This L10 value is defined as the
average length of the longest 10% of the debris reaching the site and should preferably be
estimated from sampling of typical debris loads. However, if such data is not available, it
should be determined from an inspection of debris on the floor of the source area, with due
allowance for snagging and reduction in size during transportation to the structure.
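Where a sample of debris lengths has been measured, L10 can be computed directly as the average of the longest 10% of the sample. The short sketch below is one possible implementation, assuming a simple list of measured lengths in metres; the function name and the rounding-up of the sample count are illustrative choices, not part of the guideline.

```python
import math

def estimate_l10(debris_lengths_m):
    """Average length of the longest 10% of sampled debris items (metres)."""
    if not debris_lengths_m:
        raise ValueError("at least one debris length is required")
    ordered = sorted(debris_lengths_m, reverse=True)
    # Longest 10% of the sample, rounded up so at least one item is used
    n_top = max(1, math.ceil(0.1 * len(ordered)))
    top = ordered[:n_top]
    return sum(top) / len(top)

# Example: 20 measured debris lengths; the longest 10% is the longest 2 items
sample = [0.8, 1.2, 0.5, 2.4, 3.1, 1.0, 0.9, 1.6, 2.0, 0.7,
          1.1, 4.2, 0.6, 1.3, 2.8, 0.4, 1.8, 0.9, 1.5, 2.2]
print(round(estimate_l10(sample), 2))  # 3.65 m (average of 4.2 m and 3.1 m)
```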
For debris of any particular type and size to reach the structure, the debris must:
• be able to be mobilised into the stream and not snagged by bank vegetation as it enters
the stream; and
• be delivered into a stream able to transport the debris from the source area down to the
structure, without floating debris being snagged by bank vegetation or stream bends or
constrictions, or without non-floating debris being deposited prior to reaching the structure
as the stream grade and velocities reduce. For smaller more turbulent streams (less than
say 6 m bank to bank) the width between banks of the stream through the source area will
normally limit the size/length of larger floating debris to less than the stream width. The
bed grade immediately upstream of the structure will normally limit the size of the larger
non-floating debris reaching the structure to that capable of being moved by the flow.
Any loose material and pockets of debris lying within or in close proximity to the channel are
likely to be representative of the debris that could cause downstream blockage. A detailed
inspection of the waterway upstream of the target structure, particularly after a flood, will
assist with assessing the above factors and deriving a realistic value for L10.
In an urban area the variety of available debris can be considerable with an equal variability
in L10. In the absence of a record of past debris accumulated at the structure, an L10 of at
least 1.5 m should be considered, as many urban debris sources produce material of at least
this length, such as palings, stored timber, sulo bins and shopping trolleys.
High • Streams with boulder/cobble beds and steep bed slopes and steep banks
showing signs of substantial past bed/bank movements.
• Arid areas, where loose vegetation and exposed loose soils occur and
vegetation is sparse.
• Urban areas that are not well maintained and/or where old paling fences,
sheds, cars and/or stored loose material etc. are present on the
floodplain close to the water course.
Medium • State forest areas with clear understorey, grazing land with stands of trees.
• Source areas generally falling between the High and Low categories.
Low • Well maintained rural lands and paddocks with minimal outbuildings or
stored materials in the source area.
• Streams with moderate to flat slopes and stable bed and banks.
• Arid areas where vegetation is deep rooted and soils are resistant to
scour.
• Urban areas that are well maintained with limited debris present in the
source area.
Table 6.6.2. Debris Mobility - Ability of a Particular Type/Size of Debris to be Moved into
Streams
Table 6.6.3. Debris Transportability - Ability of a Stream to Transport Debris Down to the
Structure
In conjunction with the quantity of debris likely to arrive at the site, Table 6.6.6 provides an
estimate of the ‘most likely’ inlet blockage level should a blockage form from floating or non-
floating debris bridging the inlet.
Control Dimension: Inlet Clear Width, W (m)   AEP Adjusted Debris Potential At Structure
                                              High      Medium      Low
W < L10                                       100%      50%         25%
L10 ≤ W ≤ 3*L10                               20%       10%         0%
W > 3*L10                                     10%       0%          0%
Table 6.6.7 classifies the likelihood of deposition in the barrel or waterway based on
sediment size and velocity through the structure. Table 6.6.8 then combines this likelihood of
deposition with the debris potential to provide a most likely depositional barrel or waterway
blockage level for the structure.
Likelihood that Deposition will Occur    AEP Adjusted Non Floating Debris Potential (Sediment) at Structure
(Table 6.6.7)                            High      Medium      Low
High                                     100%      60%         25%
Medium                                   60%       40%         15%
Low                                      25%       15%         0%
While the above tables provide a means of estimating a realistic value for the magnitude of a
likely (probability neutral) blockage, they do not address the other characteristics required to
properly describe the blockage mechanism (viz the blockage type, location and timing) and
its impact on the hydraulics of flow through the structure. These issues are discussed further
in Book 6, Chapter 6, Section 4.
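The table lookups above can be scripted so that a blockage assessment is reproducible. The sketch below encodes the inlet blockage levels from Table 6.6.6 and the depositional blockage levels from Table 6.6.8; the function names and data structures are illustrative assumptions, and the High/Medium/Low classifications of debris potential and likelihood of deposition must still be made by the practitioner using the preceding tables.

```python
def inlet_blockage(inlet_clear_width_m, l10_m, debris_potential):
    """Most likely inlet blockage level (Table 6.6.6) as a fraction of the opening."""
    levels = {  # rows: width class; columns: AEP adjusted debris potential
        "narrow": {"High": 1.00, "Medium": 0.50, "Low": 0.25},  # W < L10
        "medium": {"High": 0.20, "Medium": 0.10, "Low": 0.00},  # L10 <= W <= 3*L10
        "wide":   {"High": 0.10, "Medium": 0.00, "Low": 0.00},  # W > 3*L10
    }
    if inlet_clear_width_m < l10_m:
        row = "narrow"
    elif inlet_clear_width_m <= 3.0 * l10_m:
        row = "medium"
    else:
        row = "wide"
    return levels[row][debris_potential]

def depositional_blockage(deposition_likelihood, sediment_potential):
    """Most likely barrel/waterway depositional blockage level (Table 6.6.8)."""
    levels = {
        "High":   {"High": 1.00, "Medium": 0.60, "Low": 0.25},
        "Medium": {"High": 0.60, "Medium": 0.40, "Low": 0.15},
        "Low":    {"High": 0.25, "Medium": 0.15, "Low": 0.00},
    }
    return levels[deposition_likelihood][sediment_potential]

# Example: a 2.4 m wide culvert cell with L10 = 1.5 m and medium debris potential
print(inlet_blockage(2.4, 1.5, "Medium"))      # 0.10 (10% inlet blockage)
print(depositional_blockage("Medium", "Low"))  # 0.15 (15% depositional blockage)
```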
Where the main stream width is considerably less than the total structure width, it is likely
that more debris will be delivered to and accumulate at or in the cells/spans falling within the
main stream width, than at the cells/spans located on the adjacent floodplains. This may not
be the case when the mainstream flow is only a small proportion of the total flow reaching
the structure. In such cases the presentation of debris to the multiple cells/spans may
become more uniform resulting in more consistent levels of blockage.
As an initial guide it is suggested that, where the width of that part of the approach flow
capable of transporting the debris under consideration is comparable with or greater than
the total width of the structure, the assessed BDES should be applied uniformly to all cells/
spans.
Where the width of that part of the approach flow capable of transporting the debris under
consideration is significantly less than the total width of the structure, the culverts/spans
within the effective transport width should be assessed as blocked to BDES and those
outside of that zone reduced to half BDES. Measurements of observed distributions are
however essentially non-existent at this time. More information, to permit refinement of
guidelines for blockage of multiple spans/cells, is needed.
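As an illustration of the suggested distribution across multiple cells/spans, the following sketch assigns BDES to cells within the effective debris transport width and half BDES to the remainder; the data layout and function name are assumptions for illustration only, not part of the guideline.

```python
def distribute_bdes(b_des, cell_in_transport_width, transport_width_m, structure_width_m):
    """Assign a design blockage level to each cell/span of a structure.

    If the debris transport width is comparable with or greater than the
    structure width, BDES is applied uniformly; otherwise cells within the
    transport width receive BDES and the remaining cells receive half BDES.
    """
    if transport_width_m >= structure_width_m:
        return [b_des for _ in cell_in_transport_width]
    return [b_des if inside else 0.5 * b_des for inside in cell_in_transport_width]

# Example: a 4-cell culvert, 12 m wide in total, with the main stream (6 m wide)
# aligned with the two central cells and an assessed BDES of 0.2 (20%)
print(distribute_bdes(0.2, [False, True, True, False], 6.0, 12.0))
# [0.1, 0.2, 0.2, 0.1]
```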
The question then arises as to what are the ‘likely’ probability neutral combinations of
blockage that could occur across a catchment. Clearly an ‘all clear’ (BDES=0) global solution
is possible in any event and even probable in lesser events. In these lower probability events
the single site BDES is probably also low so the change in catchment floods behaviour
between different mixes of sites with BDES>0 and BDES=0 may not be great. In larger events
however substantial differences in flood behaviour can be created from different mixes of ‘all
clear’ and BDES structures across the catchment. Simple math shows that n independent
sites with two choices for blockage presents 2n combinations. A catchment with 6 interacting
culverts therefore could involve 64 possible blockage scenarios. In analysing these
combinations it is therefore critical both with respect to probability neutrality and computation
time that only likely combinations are considered. Seldom will all structures be responding in
a truly independent manner. There is unfortunately no pre-prepared solution for this problem
– all catchments will be different. While not a truly probability neutral approach, modelling all
structures 'all clear' and 'guideline blocked' ensures individual structure impacts are properly
simulated in the envelope solution together with the 'all clear' impacts. If these scenarios are
then augmented with 'likely' mixtures of clear and 'guideline blocked' structures, the resulting
flood surface envelope should reasonably represent the likely envelope flood surface levels
that could be reached at any site in the catchment. It should be noted however that in any
single historic event of a given AEP, the recorded flood surface will likely only reach the
envelope levels at some locations (due to the variability in actual historic blockages).
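The growth in the number of possible scenarios can be illustrated with a short enumeration. The sketch below simply lists every 'all clear'/'guideline blocked' combination for n structures and is illustrative only; in practice only the 'likely' combinations identified by the practitioner would be carried into the hydraulic model.

```python
from itertools import product

def blockage_scenarios(n_structures):
    """Enumerate every all clear (0) / guideline blocked (1) combination."""
    return list(product((0, 1), repeat=n_structures))

scenarios = blockage_scenarios(6)
print(len(scenarios))   # 64 combinations for 6 interacting culverts (2**6)
print(scenarios[:3])    # (0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 1), ...
```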
As previously noted, where there are multiple structures on a contiguous water course, the
debris availability will normally reduce downstream since debris will be captured by the
upstream structures. Therefore for downstream structures, the debris availability, as defined
in Table 6.6.1 will normally be reduced.
As blockage of a structure with significant upstream available flood storage can lead to a
reduction in flood flow and levels downstream of the structure, effectively protecting
downstream properties, it is important to review the all clear analysis to see if the all clear
scenario results in significantly increased flows downstream of the structure. If this is found
to be the case then the all clear and 'guideline blocked' results should be enveloped for
design flood estimation purposes.
6.4.6. Implementation
A form has been prepared to assist in implementing this procedure and is available on the
ARR website (http://arr.ga.gov.au/).
A bottom up blockage
occurs when non-floating material is deposited at the inlet and/or in the barrel or
waterway of the structure. This also is a dynamic type of blockage with sediment being
both added and removed from the blockage as time passes. Because of the dynamic
nature of this process, the debris apparent at the conclusion of the event may have little
relationship to the debris level at any point in time during the event. As with the top down
blockage, the temporal history of blockage in an historic event can be important in
realistically reproducing actual flood behaviour during the event. Bottom up blockages
are relatively common in steep lightly vegetated catchments with unstable stream banks
or easily eroded stream beds. As the geometry of a bottom up blockage does not directly
vary with flood stage (as in a top down blockage), hydraulic analysis of a bottom up
blockage is more straightforward.
Progressive floating raft inlet blockages are assumed in this chapter to significantly impact
flow through the structure only after the flow peaks (being mostly clear at higher flows as the
raft lifts clear of the inlet and possibly overtops the structure). Pulse-like blockages of floating
material at an inlet mostly arise from vegetation injected into the stream from collapsing
banks as floodwaters rise, or from litter swept off the floodplain as streams overtop their
banks. Neither of the above blockage types is likely to create a significant barrel/waterway or
outlet blockage, although non-floating debris, if present in any quantity, can build up under the
raft at the inlet and in the barrel, particularly as the flood recedes. It should be noted that
factoring of 'all clear' flow will not necessarily provide a good estimate of the impact of either
of these mechanisms, as both are inlet control mechanisms and the 'all clear' structure could
be operating under strong outlet control.
Non-floating material reaching a culvert or bridge will mostly build up progressively but can
occur as a pulse of debris in streams with unstable banks. Typically, non-floating material
(sediment) will build up throughout the structure (inlet, barrel and outlet) as increasing flows
mobilise ever increasing amounts of bed and bank material. Material will be continuously lost
from the accumulated debris mass, but the rate of supply is likely to exceed the rate at which
material passes on downstream, at least while flows are increasing and new material is
being mobilised.
These observations and assumptions on the likely type, location and timing of a blockage
are summarised in Table 6.6.9. In this table, the following designations are used to describe
the timing of key trigger points in the blockage process.
TTOTB/SA
Is the time when flow that first overtops the stream’s banks in the source area reaches
the structure.
TP
Is the time at which the upstream water level peaks at the structure.
TOBV/FL
Is the time on the falling limb when the upstream water level drops back to the obvert
level of the structure.
chapter. This chapter assumes that a top down blockage will be simplistically modelled by
lowering the obvert of the structure over the tabulated time to then reflect the tabulated
blockage level. Where the consequences of this form of blockage are high, and more
realistic simulation is deemed necessary, it may be necessary to develop a site specific
procedure. More information on this process can be found in Parola (2000), US DOTFHA
(2005) and USGS (2013).
While the temporal pattern of a structure's blockage, when it blocks prior to the flood peak in
a system with little flood storage, will have minimal impact on downstream peak flows or
upstream peak flood levels, it can substantially alter the duration for which upstream flood
levels remain above a certain level (e.g. floor level or structure overtopping). In a system with significant
flood storage, the timing of a structure's blockage can significantly alter upstream peak flood
level, downstream peak discharge and overtopping duration. Consideration of the temporal
pattern of a blockage can therefore be extremely important in realistically simulating the
hydraulic impact of a blockage.
In establishing the key timings referred to in Table 6.6.9, it will normally be necessary to first
run a simulation with estimated blockage levels and timings in place.
When modelling a historic event, hydraulic analysis will need to reflect (as far as available
data permits) the actual blockage mechanism that developed at the structure during the
event. It should be noted that this may vary significantly from what this chapter provides as
the ‘most likely’ blockage scenario for the structure, such is the impact of near random
chance on the many parameters influencing actual blockage. However, where data for
multiple historic events is available and blockage appears to consistently differ from these
chapter recommendations, further investigation is warranted, with historic data, if of
reasonable quality, being given precedence.
To minimise the adverse impacts of debris blockage on bridges, the following design
consideration should be noted:
• Minimise the exposure of services (e.g. water supply pipelines) on the upstream side of the
bridge, and/or minimise the likelihood of debris being captured on exposed services.
To minimise the effects of debris blockage on culverts, the following design considerations
should be noted:
• Take all reasonable and practicable measures to maximise the clear height of the culvert,
even if this results in the culvert hydraulic capacity exceeding the design standard. This
minimises the likelihood of debris being caught between the water surface and obvert, and
also minimises the risk of a person drowning if swept through the culvert (i.e. the culvert is
more likely to be operating in a partially full condition).
• The risk of debris blockage can also be reduced by using single-cell culverts, or in the
case of floodplain culverts, spacing individual culvert cells such that they effectively
operate as single-cell culverts without a common wall/leg (Figure 6.6.1 and Figure 6.6.2).
• One means of maintaining the hydraulic capacity of culverts in high debris streams is to
construct debris deflector walls (1V:2H) as shown in Figure 6.6.3 and Figure 6.6.5. The
purpose of these walls is to allow the debris that normally collects around the central leg to
rise with the flood, thus maintaining a relatively clear flow path under the debris. Following
the flood peak, the bulk of the debris rests at the top of the deflector wall allowing easier
removal (Figure 6.6.4).
• Sedimentation problems within culverts may be managed using one or more of the
following activities:
• Formation of a multi-cell culvert with variable invert levels such that the profile of the
base slab simulates the natural cross section of the channel (Figure 6.6.6).
• Installation of sediment training walls on the culvert inlet (Figure 6.6.3 and Figure 6.6.5).
Sediment training walls reduce the risk of sedimentation of the outer cells by restricting
minor flows to just one or two cells.
Figure 6.6.5. Sediment Training Walls Incorporated with Debris Deflector Walls (Catchments
& Creeks Pty Ltd)
Figure 6.6.7. Debris Deflector Walls and Sediment Training Wall Added to Existing Culvert
• Where space allows, a viable alternative to increased culvert capacity (in response to the
effects of debris blockage) may be to lengthen the roadway subject to overflow (i.e. the
effective causeway weir length).
• Where high levels of floating debris are present and frequently become trapped on hand
rails, collapsible hand rails may be considered. Such systems typically include pins or
bolts designed to fail when water becomes backed up by the handrails and therefore
require ongoing maintenance. If used as traffic barriers the downstream rail fixing can be
problematic. They can however limit rises in floodwater levels upstream of the structure.
other debris can collect, or (c) diversion structures designed to provide safe bypass of debris
or water. Such structures can occasionally be incorporated into a water quality management
plan for a catchment.
Where debris control structures or at-source control measures have been implemented,
these should be incorporated into the assessment of the drainage system, which could mean
a reduction in the allowance that needs to be made for blockage. Ongoing maintenance is
however fundamental to the successful operation of these measures. Unless a deliberate
maintenance program is in place and has been demonstrated to work, it would not be
prudent to lower design blockage levels as a consequence of such works.
Care should also be taken to ensure that the hydraulic impact of the debris control structure
does not itself aggravate flooding in the system.
6.7. Conclusion
The inclusion of blockage in the analysis of hydraulic structures in drainage systems is an
important consideration in the realistic simulation of flood behaviour. The impact of blockage
is however a complex and difficult problem to analyse. It is important to ensure that the
estimate of blockage used in analysis is probability neutral and not over or under-estimated
as this can influence the performance of the total system. This chapter has presented an
approach to the assessment of design blockage that has been developed in consultation
with Australian experts and provides a consistent and logical approach to assist in the
effective planning and design of drainage systems. Future investigation will refine this
approach.
For further information on the background to this chapter, readers are referred to the
following bibliography.
6.8. References
Parola, A.C. (2000), Debris Forces on Highway Bridges, NCHRP Report 445, Transportation
Research Board of the National Research Council.
Queensland Department of Natural Resources and Water (2008).
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers, Australia, Barton, ACT.
Sundborg, A. (1956), The River Klarålven: Chapter 2. The morphological activity of flowing
water-erosion of the stream bed: Geografiska Annaler, 38: 165-221.
U.S. Geological Survey (USGS) (2013), Potential Drift Accumulation at Bridges. URL:
http://tn.water.usgs.gov/pubs/FHWA-RD-97-028/accumwid.htm.
Weeks, W., Barthelmess, A., Rigby, T., Witheridge, G. and Adamson, R. (2009),'Blockage in
Drainage Structures', Hydrology and Water Resources Symposium, Engineers Australia,
Newcastle.
Chapter 7. Safety Design Criteria
Grantley Smith, Ron Cox
7.1. Introduction
The safety of people in floods is of major concern in floodplain management for both rural
and urban areas. Consideration of the circumstances for individual flood fatalities, both in
Australia and internationally, indicates that flood fatalities occur most commonly when people
enter floodwaters either on foot or in a vehicle (French et al., 1983; Coates and Haynes,
2008; Haynes et al., 2016). However, where floodwaters rise rapidly and unexpectedly in
flash flood areas, people may also perish trapped inside buildings, as occurred in the Lockyer
Valley, QLD in 2011 (Rogencamp and Barton, 2012) and in Dungog, NSW in 2015. Further,
recent analysis of the Queensland floods in 2011 by the Queensland Commission for
Children, Young People and Child Guardian (CCYPCG, 2012) and the Queensland Fire and
Rescue Service (2012) has reinforced the conclusion that floodwaters are extremely
dangerous both to people trapped inside buildings and to people wading or driving vehicles
of all types in floods. While floodplain management activities aim to reduce the risk arising from flooding
events, ongoing human interaction with floodwaters during flood events is largely
unavoidable, as significant areas of existing development and transport infrastructure in
Australia remain within flood prone regions.
Records for past floods show that exposure of the community to flooding can result in
significant death tolls and 1859 flood fatalities have occurred nationally between 1900 and
2015 (Haynes et al., 2016). Flood fatalities are significantly higher in flash flood events with
rapidly rising violent flood flows than in comparably slower rising and moving riverine
flooding. Two hundred and six (206) flash flood fatalities occurred in Australia between 1950
and March 2008 (Coates and Haynes, 2008). The cause of death for the majority of these
cases was drowning. Other fatalities were a result of heart attacks or overexertion, or indirect
causes such as electrocution or fallen trees (Coates and Haynes, 2008). Similarities have
been observed in the United States, where 93 % of flash flood deaths can be attributed to
drowning (French et al., 1983). Details about the activity of flash flood victims immediately
prior to death are available for just under 50 % of the victims. Of these, almost 53 %
perished attempting to cross a watercourse, either by wading/swimming, or by using a bridge
or ford (Coates and Haynes, 2008). These values include those in vehicles. The motivation
behind the activity leading to the death was known for 47 % of the study group. Of these,
almost 22 % were undertaking business as usual, either attempting to reach a destination,
ignoring the flood warnings or unaware of the flood intensity (Coates and Haynes, 2008).
The largest proportion (31 %) of the Australian flash flood fatalities for which the mode of
transport is known were inside a vehicle at the time of death. Similar results have been
observed around the world: 42 % of the US flash flood drowning fatalities noted above were
vehicle-related (French et al., 1983) and 63 % of US riverine and flash flood fatalities were found to be
vehicle-related (Ashley and Ashley, 2008). Jonkman and Kelman (2005) noted that vehicle-
related fatalities occurred most frequently (33 %) in European and US floods.
The Lockyer Valley floods of January 2011 dramatically demonstrated that sheltering in a
residential building was also not a safe option where flood flows have high force and
damage potential. Of the nineteen people who perished in the Lockyer Valley floods, thirteen
were sheltering in buildings that were either completely inundated or collapsed under the
force of the flood flows (Rogencamp and Barton, 2012).
Regardless, the high numbers of people that die in vehicles or on foot highlight the
considerable risk in fleeing flash flood events. In many cases, people become exposed to
greater risk when attempting to flee a flood affected area (Ashley and Ashley, 2008; Coates,
1999; Drobot and Parker, 2007; Jonkman and Kelman, 2005). The risks to those fleeing are
not just the floodwaters themselves, but also include poor driving conditions, the danger of
being hit by falling debris, electrocution from fallen power lines, lightning and mudslides
(Haynes et al., 2016).
Jonkman and Kelman (2005) highlighted that in most floods, people are more likely to be
killed or injured if they are outside of their home or in their cars during the flood.
Consequently, undertaking evacuation at inappropriate times, such as when the floodwaters
have risen in depth and velocity, is likely to increase the chance of death (Cave et al., 2009).
There are a number of factors to be considered when assessing the hazard associated with
floods. The usual starting point is to predict the flood characteristics and particularly the flow
characteristics of the inundated areas of the floodplain. The main characteristics of interest
typically are the flow depth and the flow velocity. In addition, the assessment of the flood
hazard needs to consider a range of other social, economic and environmental factors,
though these are often more difficult to quantify.
The magnitude of flood hazard can be variously influenced by the following factors:
• Velocity of Floodwaters;
• Depth of Floodwaters;
When quantifying and classifying flood hazard, it is important to understand the underlying
causes of the hazard level. For example, if the hazard level is classified as ‘high’ then it is
important to understand the key reason that it is high (e.g. high depth, high velocity, high
velocity and depth in combination, isolation issues, or short warning time). If the core reasons
that the hazard is high are not well understood, then attempts to modify and lower the
hazard level may not be successful.
The data used for assessment of the floodplain hazard are presented commonly as maps of
flood depth (see Figure 6.7.1) and flood velocity (speed and direction). Typically, these maps
are shown as an envelope of maxima; a time series of flow behaviour, however, is an
alternative presentation format.
Figure 6.7.1. Example of a Flood Study Depth Map (Smith and Wasko (2012))
Smith and Wasko (2012) investigated the effects of alternative grid resolutions on prediction
of the flood hazard. The predicted flood hazard (computed as D.V, flood depth multiplied by
flood velocity) for two model grid resolutions, namely 1 m and 10 m, for the 2007 flood event
at Merewether, NSW is shown in Figure 6.7.2. Comparing the predicted flood hazard
estimates, it can be seen that those derived using the 1 m grid are higher than those
obtained with the 10 m grid. Furthermore, Smith and Wasko (2012) report that the predicted
velocity and depth characteristics for the 1 m grid more closely replicated those obtained
from a physical model of the same area.
Figure 6.7.2. Comparison of Provisional Flood Hazard Estimates from Numerical Models at
Differing Grid Resolutions (after Smith and Wasko, 2012)
An important conclusion of Smith and Wasko (2012) was that predictions from 2D numerical
hydrodynamic models require further interpretation in order to ensure that suitable,
representative flood behaviour information is obtained for application, especially in
emergency planning and management decisions.
As described in the introduction to this chapter, people tend to be at risk in one of three main
categories: on foot, in vehicles or in buildings. Consequently, in order to further assess
vulnerability to a flood under the predicted conditions, flood hazard assessment can be
divided into three categories: people stability, vehicle stability and structural stability.
In brief, moment (toppling) instability occurs when a moment induced by the oncoming flow
exceeds the resisting moment generated by the weight of the body (Abt et al., 1989). This
stability parameter is sensitive to the buoyancy of a person within the flow and to body
positioning and weight distribution.
Frictional (sliding) instability occurs when the drag force induced by the horizontal flow
impacting on a person’s legs and torso is larger than the frictional resistance between a
person’s feet and the ground surface. This stability parameter is sensitive to weight and
buoyancy, clothing type, footwear type and ground surface conditions.
Additionally, loss of stability may also be triggered by adverse conditions, which should be
taken into account when assessing safety such as:
• Flow conditions: floating debris, low temperature, poor visibility, unsteady flow and flow
aeration;
• Human subject: standing or moving, experience and training, clothing and footwear,
physical attributes additional to height and mass including muscular development and/or
other disability, psychological factors;
Figure 6.7.3. Typical Modes of Human Instability in Floods (Cox et al., 2010)
While distinct relationships exist between a subject’s height and mass and the tolerable flow
value, definition of general flood flow safety guidelines according to this relationship is not
considered practical given the wide range in such characteristics within the population.
In order to define safety limits that are applicable to all persons, hazard regimes are
defined based on H.M (the product of a subject's height and mass) for representative
population demographics. Each classification is based on laboratory testing of subject
stability within floodwaters. The following suggested classifications, after Cox et al. (2010), are:
Several hazard regimes are recommended based on D.V flow values for each H.M
classification. The hazard regimes, as suggested from laboratory testing of subject stability
and response within variable flow conditions, are:
• Low hazard zones where D.V < 0.4 m2s-1 for children and D.V < 0.6 m2s-1 for adults;
• A Significant hazard zone for children exists where flow conditions are dangerous to most
between D.V = 0.4 to 0.6 m2s-1;
• Moderate hazard zone where conditions are dangerous for some adults and all children is
defined between D.V = 0.6 to 0.8 m2s-1 for adults. This is inferred to define the limiting
working flow for experienced personnel such as trained rescue workers;
• Significant hazard zone where flow conditions are dangerous to most adults and extremely
dangerous for all children is suggested between flow values of D.V = 0.8 to 1.2 m2s-1; and
• Extreme hazard where flow conditions are dangerous to all people is suggested for D.V >
1.2 m2s-1.
Cox et al. (2010) concluded that self-evacuation of the most vulnerable people in the
community (typically small children, and the elderly) is limited to relatively placid flow
conditions. Furthermore, a D.V as low as 0.4 m2s-1 would prove problematic for people in
this category, i.e. the more vulnerable in the community.
These hazard regimes for tolerable flow conditions (D.V) as related to the individual’s
physical characteristics (H.M) are presented in Figure 6.7.4 and Table 6.7.1.
Table 6.7.1. Flow Hazard Regimes for People (Cox et al., 2010)
Maximum depth stability limit of 0.5 m for children and 1.2 m for adults under good
conditions. Maximum velocity stability limit of 3.0 ms-1 for both adults and children.
1. More vulnerable community members such as infants and the elderly should avoid
exposure to floodwater. Flood flows are considered extremely hazardous to these
community members under all conditions.
2. Working limit for trained safety workers or experienced and well equipped persons (D.V <
0.8 m2s-1).
3. Upper limit of stability observed during most investigations (D.V > 1.2 m2s-1).
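For mapping or screening purposes, the hazard regimes above can be applied cell by cell from the modelled depth and velocity. The sketch below is a simplified interpretation for illustration only, combining the D.V thresholds of Cox et al. (2010) with the depth and velocity stability limits quoted with Table 6.7.1; the function name and the returned labels are assumptions, not part of the guideline.

```python
def people_hazard(depth_m, velocity_ms):
    """Hazard classification for people wading, from flow depth (m) and
    velocity (m/s), following the D.V regimes of Cox et al. (2010) and the
    depth/velocity stability limits quoted with Table 6.7.1."""
    dv = depth_m * velocity_ms

    # Absolute stability limits: 3.0 m/s for everyone; 0.5 m (children), 1.2 m (adults)
    if velocity_ms > 3.0 or depth_m > 1.2 or dv > 1.2:
        return "extreme: dangerous to all people"
    if dv > 0.8:
        return "significant: dangerous to most adults, extremely dangerous to all children"
    if dv > 0.6 or depth_m > 0.5:
        return "moderate: dangerous to some adults and all children"
    if dv > 0.4:
        return "significant for children, low hazard for adults"
    return "low hazard"

# Example: 0.5 m of water flowing at 1.5 m/s gives D.V = 0.75 m2s-1
print(people_hazard(0.5, 1.5))  # moderate: dangerous to some adults and all children
```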
Figure 6.7.4. Safety Criteria for People in Variable Flow Conditions (Cox et al., 2010)
For physically and/or mentally disabled people, similar criteria to those for very young
children and frail/older persons should be applied, as these subjects are considered
vulnerable at all flow values. This is because, while their H.M values may be similar to regular adults, they
are clearly at a physical (e.g. muscular development, control of limbs) and/or psychological
disadvantage (e.g. cognisant of the real/perceived danger, inability to cope with external
stimulus).
Emergency personnel tasked with carrying evacuees should be aware that the additional
H.M gained by carrying a person is not necessarily a benefit to their stability. This was
demonstrated in a laboratory test of human stability by Jonkman and
Penning-Rowsell (2008), who note that their test subject (a trained stuntman) considered
balancing in the flowing water more difficult when carrying extra weight such as a child or
elderly person.
It should also be noted that while these criteria are based on experimental data for loss of
stability for persons wading in floodwaters, it is also inherently dangerous to swim through
floodwaters. Swimming through floodwaters should not be attempted.
Where specific training has been undertaken or a subject has recent and relevant
experience, personnel are able to tolerate situations of high D.V (Jonkman and Penning-
Rowsell, 2008). A limiting working flow of D.V = 0.8 m2s-1 is suggested for experienced
personnel such as trained rescue workers. These personnel should, where possible, be
equipped for dangerous flow conditions with safety restraints, floatation aids and other safety
apparatus, and be trained to cope with high D.V situations. It is trained emergency personnel
who are likely to be instructing, driving and guiding evacuation paths, and consequently to
whom the upper limit of safety design criteria is directed.
Note that in the context of this discussion friction instability is associated with slow moving or
stationary cars, as distinct from hydroplaning, which occurs when a vehicle at high speed
encounters very shallow, evenly distributed water covering a road, typically a highway or
freeway. Hydroplaning is not considered further within this report.
The measure of physical attributes for vehicle stability analysis is the vehicle classification as
based on length (L, m), kerb weight (W, kg) and ground clearance (GC, m). Three vehicle
classifications are suggested:
The measure of flow attributes for vehicle stability analysis is D.V m2s-1, determined as the
product of flow depth (D, m) and flow velocity (V, ms-1).
Limiting conditions exist for each classification based on limited laboratory testing of
characteristic vehicles. The upper tolerable velocity for moving water is defined based on the
frictional limits, and is a constant 3.0 ms-1 for all vehicle classifications.
The upper tolerable depths within still water are defined by the floating limits:
The upper tolerable depths within high velocity water (at 3.0 ms-1) are defined by the
frictional limits:
While specifically equipped vehicles may remain stable in water of greater depths, the
intention of the presented criteria is to focus on the more vulnerable of typical vehicle types
in common use.
Note that for all flow conditions in all vehicle classes, the proposed vehicle safety criteria
remain below the moderate hazard criteria for adults (Cox et al., 2010). This ensures that
adults occupying vehicles are, in principle, safe if exiting a vehicle in floodwaters with
attributes within the specified hazard ranges.
During flood events, the majority of flood deaths are vehicle related; more than half of all
deaths during floods in the United States are vehicle-related (Gruntfest and Ripps, 2000).
Regardless of how often people see the power of water in flash floods or are notified through
community advertising that driving through high water is dangerous, there remain sections of
the community who will continue with ‘business as usual’ irrespective of the flooding
conditions (Gruntfest and Ripps, 2000).
As a result, the hazard criteria provided in this report are identified as interim recommended
limits based on interpretation of existing information. The criteria presented here are subject
to change once acceptable data for modern vehicles becomes available.
Table 6.7.2. Interim Flow Hazard Regimes for Vehicles (Shand et al., 2011)
1. At velocity = 0 ms-1; 2. At velocity = 3.0 ms-1; 3. At low depth.
Figure 6.7.6. Interim Safety Criteria for Vehicles in Variable Flow Conditions (Shand et al.,
2011)
Shand et al. (2011) conclude that the available datasets do not adequately account for the
following factors and that more research is needed in these areas:
• Information for additional categories including small and large commercial vehicles and
emergency service vehicles
Investigation and review of the available information concerning the failure of building
structures under flood loads has also been conducted by Kelman and Spence (2004) and
Leigh (2008). Amongst a range of relevant conclusions, these reviews noted that while a
series of studies had theoretically analysed incident flood forces compared to the resisting
strength of various building structures, most of these studies had considered components of
flood forces in isolation or in limited combinations e.g. hydrostatic and simplified
hydrodynamic (velocity head) or buoyancy and drag forces.
On this basis, the green curve in Figure 6.7.6 is proposed as a lower threshold for residential
homes, built without consideration of flood forces. This curve can be used as a minimum
criterion for building stability in existing flood affected areas.
The hazard zone between the green curve and the upper limit red curve in Figure 6.7.6
identifies flood hazard conditions where it is considered possible, if required, to construct a
purpose built, appropriately engineered structure specifically designed to withstand the full
range of anticipated flood forces, including:
• Hydrostatic forces
resulting from standing water or slow moving flow around the structure;
• Buoyant forces
• Hydrodynamic forces
• Impulsive Forces
• Uplift forces
due to the accumulation of debris on the upstream side of the structure, which results in
an increase in the hydrodynamic force.
• Wave actions
In locations where timely evacuation is not possible, such purpose built structures may be required for vertical evacuation, not dissimilar to the process used in Japan for tsunamis. However, it would be important to ensure the structure was purpose built for the conditions it would be likely to encounter, up to and including the PMF or a similar extreme flood event. The bottom floor of such structures may need to be somewhat sacrificial during a flood event; for example, the windows and doors may ‘blow out’ under high flow conditions. However, the building’s structural members will be required to remain intact.
The red curve in Figure 6.7.6 is a suggested upper limit for all buildings. Buildings in areas
classified with flood hazard above this threshold are considered vulnerable to collapse under
these extreme flood conditions.
7.2.6. Combined Hazard Curves
Previous hazard classification curves (e.g. SCARM (2000); HNFMSC (2006)) provided a single set of hazard curves that divide flood hazard levels into generic classifications of low, medium, high, etc. While the thresholds between these classifications had some basis in data collected for stability/vulnerability of people and risk to life, in practice such threshold curves have been widely interpreted (sometimes misinterpreted) and applied in myriad ways.
It is interesting to compare the curves for people, vehicle and building stability compiled for this report. Figure 6.7.7 provides a direct comparison of these three sets of curves. The first observation to be made is that for slow moving floodwaters at depths greater than 0.5 m, adults wading through floodwater are generally considered more stable than vehicles, i.e. in most cases, vehicles are equally unstable or more unstable than adults wading through the same flow conditions. Secondly, the stability limit for an untrained adult walking through floodwater (D x V = 0.8) is almost the same as the lower threshold limit for building stability (D x V = 1.0). Also, for shallow fast moving flows, building stability (through foundation erosion/scour) may be lower than the stability of a person walking through the same flow conditions. In some situations, this means that it would be safer to walk out through the prevailing floodwaters than to shelter in a poorly constructed building.
Figure 6.7.8. Comparison of Updated Hazard Curves (after Smith et al. 2014)
The third observation is that the flood water level used as the basis of the hazard depth differs: for people and vehicle stability the flood depth is referenced to the ground level, whereas for building stability the flood depth is nominally referenced to the floor level.
On a practical level, this means that once physical flood behaviour has been quantified in terms of flood depth and velocity, flood hazard can be classified separately against the people, vehicle or building thresholds. In many instances, this will suit the requirements of specific analyses. For example, if the required assessment is to determine whether a road evacuation route is trafficable for a given flood event, then the vehicle stability threshold curves should be applied. Likewise, if the assessment is to determine which buildings would be suitable for shelter in place during a PMF event, then the building stability thresholds for flood hazard should be used in the analysis.
The combined flood hazard curves presented in Figure 6.7.9 set hazard thresholds that relate to the vulnerability of the community when interacting with floodwaters. The combined curves are divided into hazard classifications that relate to specific vulnerability thresholds as described in Table 6.7.3. Table 6.7.4 provides the limits for the classifications in Table 6.7.3.
Table 6.7.3. Combined Hazard Curves - Vulnerability Thresholds (Smith et al., 2014)
Table 6.7.4. Combined Hazard Curves - Vulnerability Thresholds Classification Limits (Smith
et al., 2014)
Hazard Classification | Vulnerability Classification Limit (D and V in combination) (m²/s) | Limiting Still Water Depth, D (m) | Limiting Velocity, V (m/s)
H1 | D*V ≤ 0.3 | 0.3 | 2.0
H2 | D*V ≤ 0.6 | 0.5 | 2.0
H3 | D*V ≤ 0.6 | 1.2 | 2.0
H4 | D*V ≤ 1.0 | 2.0 | 2.0
H5 | D*V ≤ 4.0 | 4.0 | 4.0
H6 | D*V > 4.0 | - | -
Importantly, the vulnerability thresholds identified in the flood hazard curves described above can be applied to the best description of flood behaviour available for a subject site. In this regard, the hazard curves can be applied equally to flood behaviour estimates from measured data and simpler 1D numerical modelling approaches through to complex 2D model estimates, with the level of accuracy and uncertainty of the flood hazard estimate linked to the method used to derive the flood behaviour estimate.
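As a minimal illustration of how the limits in Table 6.7.4 might be applied to a single depth and velocity estimate, the following sketch (in Python; the function name and structure are illustrative only and not part of ARR) assigns a point to the H1 to H6 classes, assuming depth in metres and velocity in m/s and that a class applies only when the D*V product and the individual depth and velocity limits are all satisfied.

```python
def hazard_class(depth, velocity):
    """Classify a depth (m) / velocity (m/s) pair into the H1-H6
    vulnerability categories of Table 6.7.4 (sketch only).

    The first class whose D*V product, depth and velocity limits are all
    satisfied is returned; otherwise H6 applies.
    """
    dv = depth * velocity
    # (class, D*V limit, depth limit, velocity limit) from Table 6.7.4
    limits = [
        ("H1", 0.3, 0.3, 2.0),
        ("H2", 0.6, 0.5, 2.0),
        ("H3", 0.6, 1.2, 2.0),
        ("H4", 1.0, 2.0, 2.0),
        ("H5", 4.0, 4.0, 4.0),
    ]
    for cls, dv_lim, d_lim, v_lim in limits:
        if dv <= dv_lim and depth <= d_lim and velocity <= v_lim:
            return cls
    return "H6"  # D*V > 4.0 or individual limits exceeded

# Example: 0.4 m deep at 1.0 m/s gives D*V = 0.4; H1 fails on depth, so H2.
print(hazard_class(0.4, 1.0))
```

In practice such a classification would be applied cell by cell to model output, as illustrated by the hazard mapping discussed later in this chapter.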
7.2.8.1. Isolation
As outlined in AEM Handbook 7 (AEMI, 2014), flooding can isolate parts of the landscape and cut off evacuation routes to flood-free land. This can result in dangerous situations, because people may see the need to cross floodwaters to access services, employment or family members. Many flood fatalities result from the interactions of people, often in vehicles, with floodwaters. Any situation that increases people’s need to cross floodwaters increases the likelihood of an injury or fatality.
Lack of effective warning time can increase the potential for the exposure of people to
hazardous flood situations. In contrast, having plenty of effective warning time provides the
opportunity to reduce the exposure of people and their property to hazardous flood
situations.
Effective strategic land use planning is about responding to flood risks in a way that minimises future flood consequences. Consideration of flood hazard is therefore important so that development of land is encouraged in areas of low or no flood risk wherever possible. A clear understanding of flood risks early in the strategic land use planning process can help steer development away from areas that are not sustainable due to the likely impacts of the development on flood behaviour, and guide land use zonings and development controls that support sustainable development on the floodplain in consideration of the flood risk.
The following figures provide both a broad-scale example of the base data (variation in velocity and depth across a floodplain) and hazard mapping that can be developed as a first pass using standard two dimensional numerical model outputs from a flood study analysis. The flood hazard mapping presented in Figure 6.7.11 was developed by classifying the numerical flood model results shown in Figure 6.7.10 using the flood hazard thresholds listed in Table 6.7.4.
Figure 6.7.10. Flood Depth Map From Numerical Model Output (Courtesy WMAwater Pty
Ltd)
Figure 6.7.11. Flood Hazard Classification From Numerical Model Output (Courtesy
WMAwater Pty Ltd)
The local council, as the consent authority, has identified a series of development constraints including flooding criteria. Amongst other criteria, the development must have:
• All retail floor space above the designated flood planning level, defined as the 1% Annual Exceedance Probability (AEP) flood surface plus 500 mm freeboard;
• A flood free evacuation route for floods above the flood planning level; and
• All car park areas compliant with ARR flood hazard criteria for vehicle stability.
As there was no existing flood study, following consultation with Council, the developer engaged an experienced flood consultant to undertake flood modelling of the site to estimate 1% AEP flood behaviour. Flood modelling of the site for the 1% AEP event, completed using industry best practice guidance as provided by the ARR reference “Two Dimensional Modelling in Urban and Rural Floodplains” (Engineers Australia, 2012), predicted that while the warehouse building met the flood planning criteria, the car park was inundated by floodwaters. Figure 6.7.13 illustrates the extent of flood inundation for the existing site.
Interrogation of the flood model results to determine provisional flood hazard, as the product of flood depth (D) and flood velocity (V), showed that the peak flood hazard (D.V) corresponded with the maximum inundation of the site at the peak of the flood hydrograph. Provisional flood hazard for the 1% AEP flood on the existing site, as illustrated by Figure 6.7.14, showed that flooding in most of the area identified for car parking exceeds the ARR stability criteria for small cars defined in Table 6.7.2 and illustrated in Figure 6.7.6. In Figure 6.7.14, areas coloured blue (D.V < 0.3 m²/s) indicate locations where small cars are likely to resist being moved by flood flows, whereas areas coloured green through red indicate areas where small cars are very likely to be pushed across the floodplain by floodwaters, with the flow having the potential to move larger cars closer to the creek channel. Based on this information, Council’s preliminary advice to the developer was that the existing flood hazard conditions for the designated car parking area were incompatible with the nominated use.
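A sketch of the kind of post-processing described above is given below, assuming Python with NumPy and that the hydraulic model exports gridded depth (m) and velocity magnitude (m/s) with a time axis. The array names, shapes and placeholder data are illustrative only; the 0.3 m²/s small-car stability limit is the interim criterion referred to in the text.

```python
import numpy as np

# Illustrative model output: depth (m) and velocity magnitude (m/s) on a
# grid, with time as the first axis (ntimes, nrows, ncols).
depth = np.random.rand(48, 100, 120) * 1.5      # placeholder data only
velocity = np.random.rand(48, 100, 120) * 2.0   # placeholder data only

# Provisional hazard D.V at every cell and time step, then its peak over time.
dv = depth * velocity
peak_dv = dv.max(axis=0)

# Cells where small cars are likely to remain stable (D.V <= 0.3 m^2/s,
# interim criterion of Table 6.7.2 / Figure 6.7.6) versus cells where they
# are likely to be moved by floodwaters.
small_car_stable = peak_dv <= 0.3
print(f"{small_car_stable.mean():.0%} of cells meet the small-car criterion")
```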
Figure 6.7.14. 1% AEP Provisional Flood Hazard Map – Existing Car Park
As the concrete lined channel had low environmental value, Council was not opposed to the developer’s proposed adjustment of the channel flow conveyance capacity to reduce the proportion of overbank flow at the site. The developer, in consultation with the flood consultant, ran a range of cut and fill scenarios through the flood model aimed at expanding the channel capacity while raising the ground level of the car park relative to the flood peak. Figure 6.7.15 shows the adjustment of the channel and overbank area through the car park on the longitudinal section identified as ‘A-A’ in Figure 6.7.12.
Flood modelling for the revised site including the proposed earthworks is presented in
Figure 6.7.16. The flood model results show that the site flood inundation area is significantly
reduced with the works in place.
Importantly, analysis of the provisional flood hazard as presented in Figure 6.7.17 shows that flood hazard for the area designated as car park in the conceptual site design now meets the ARR flood stability criteria for small vehicles. The nominated car park area has a D.V product of less than 0.3 m²/s, as indicated by the blue shaded area of Figure 6.7.17. This indicates that it is now unlikely that cars inundated by floodwaters in the 1% AEP flood will be pushed across the floodplain and potentially into the channel, creating a possible downstream blockage hazard.
Council’s planners also suggested to the developer that the yellow zone of Figure 6.7.17
could be landscaped as gardens providing clear separation of the car park from the channel.
From a floodplain management perspective, this suggested change would provide a
significant further reduction in exposure risk with little impact on available car parking space.
Figure 6.7.17. 1% AEP Provisional Flood Hazard Map – Revised Car Park
The developer considers that a centralised detention basin on the site can be designed to have a dual use and be integrated into the site grounds as a bowling green when not operating as a detention basin. Figure 6.7.18 illustrates the proposed design.
As the catchment contributing runoff to the site is steep, Council’s flood expert considers that there is some risk in a dual purpose design for the basin due to flash flooding during thunderstorms. Council’s advice is that the developer engage a qualified flood expert to determine whether the basin meets the ARR hazard criteria for people safety.
The basin design philosophy is that the basin will collect both local catchment overland flows and surcharge from the pit and pipe stormwater system. If the basin capacity is exceeded, flows spill at a designated location and flow overland to the adjacent creek channel. Flows that remain in the basin discharge through a grated pit at the lowest location in the basin floor.
Figure 6.7.19. 1% AEP Provisional Flood Hazard Map - Proposed Flood Detention Basin
An assessment of the provisional flood hazard (flood depth multiplied by flood velocity) is presented in Figure 6.7.19. When compared to the flood hazard criteria presented in Table 6.7.1 and Figure 6.7.4, the provisional hazard meets the safety criteria for the elderly in most areas of the basin. This is because, at full capacity, the basin is no greater than 0.5 m deep. Analysis shows that a dangerous flood hazard is likely to occur near the basin’s outlet pit when the basin begins to drain at full capacity.
Further analysis of the full design presented in Figure 6.7.20 shows that when the basin capacity is exceeded, overflows will cross a public footpath adjacent to the creek reserve. Provisional flood hazard analysis of the overflow path presented in Figure 6.7.21 shows that the flood hazard in the flow path will exceed the people stability criteria for adults and be a dangerous hazard to passing pedestrians.
This chapter of Australian Rainfall and Runoff presents a summary, with an explanation of the limitations, of recent analysis of flood stability thresholds for pedestrians and vehicles in floodwaters. Recommended flood hazard criteria for use within Australia, based on the stability of people and vehicles in flood flows, have also been presented.
Finally, the presented examples illustrate practical applications of these flood hazard criteria.
While not exhaustive of all cases, the examples show how the criteria can be pragmatically
applied to reduce the community’s exposure to flood danger.
7.5. References
AEMI (Australian Emergency Management Institute) (2014), Technical flood risk management guideline: Flood Hazard. Australian Emergency Management Institute, Barton, ACT.
Abt, S.R., Wittler, R.J., Taylor, A. and Love, D.J. (1989), Human Stability in a High Flood Hazard Zone, Water Resources Bulletin, American Water Resources Association, 25(4), pp. 881-890.
Ashley, S.T. and Ashley, W.S. (2008), Flood Fatalities in the United States, Meteorology Program, Department of Geography, Northern Illinois University, DeKalb, Illinois, DOI: http://dx.doi.org/10.1175/2007JAMC1611.1.
Black, R.D. (1975), Flood proofing rural residences. Department of Agricultural Engineering,
Cornell University.
Queensland Commission for Children, Young People, and Child Guardian (CCYPCG)
(2012), https://www.fire.qld.gov.au/communitysafety/swiftwater/default.asp - accessed 11
August 2012
Cave, B., Cragg, L., Gray, J., Parker, D., Pygott, K. and Tapsell, S. (2009), Understanding of
and response to severe flash flooding. Environment Agency, Bristol. Science Report No.
SC070021.
Clausen, L. and Clark, P.B. (1990), The development of criteria for predicting dam break
flood damages using modelling of historical dam failures. International conference on river
flood hydraulics. John Wiley and Sons Ltd. Hydraulics Research Limited, Wallingford,
England.
Coates, L. and Haynes, K. (2008), Flash flood shelter-in-place vs. evacuation research:
Flash flood fatalities within Australia, 1950 - 2008. Report prepared for the New South Wales
State Emergency Service.
Cox, R.J., Shand, T.D. and Blacka, M.J. (2010), Appropriate Safety Criteria for People in
Floods, WRL Research Report 240. Report for Institution of Engineers Australia, Australian
Rainfall and Runoff Guidelines: Project 10. 22 p.
Cox, R.J., Yee, M. and Ball, J.E. (2004), Safety of People in Flooded Streets and Floodways.
8th National Conference on Hydraulics in Water Engineering, Gold Coast. The Institution of
Engineers, Australia.
Dale, K.W., Edwards, M.R., Middelmann, M.H. and Zoppou, C. (2004), Structural vulnerability and the Australianisation of Black's curves, in RISK 2004: Proceedings of a National Conference, Melbourne, 8-10 November 2004.
Drobot, R. and Parker, D.J. (2007), Advances and Challenges in Flash Flood Warnings, Environmental Hazards, 7(3), 173-178, DOI: 10.1016/j.envhaz.2007.09.001.
Engineers Australia (2012), Two Dimensional Modelling in Urban and Rural Floodplains.
Project 15, Mark Babister, editor.
French J., Ing, R., Von Allmen, S. and Wood, R. (1983), Mortality from Flash Floods: a
Review of National Weather Service Reports, 1969-81. Public Health Reports: Nov-Dec
1983, 98(6), 584-588.
Gruntfest, E. and Ripps, A. (2000), Flash floods: warning and mitigation efforts and
prospects. In: Parker, D.J. (Ed.), Floods, vol. 1. Routledge, London, pp: 377-390.
Haynes, K., Coates, L., de Olivera, F.D., Gissing, A., Bird, D., van den Honert, R., Radford, D., D'Arcy, R. and Smith, C. (2016), An analysis of human fatalities from flood hazards in
Jonkman, S.N. and Kelman, I. (2005), An Analysis of the Causes and Circumstances of
Flood Disaster Deaths. Disasters, 29: 75-97, doi: 10.1111/j.0361-3666.2005.00275.x.
Jonkman, S.N. and Penning-Rowsell, E. (2008), Human Instability in Flood Flows. Journal of
the American Water Resources Association, 44(4), 1-11.
Leigh, R. (2008), Flash flood shelter-in-place vs. evacuation research: review of literature on
building stability. Report prepared for NSW State Emergency Service.
Rogencamp, G. and Barton, J. (2012), The Lockyer Creek flood of January 2011: what happened and how should we manage hazard for rare floods, 52nd Annual Floodplain Management Association Conference, Vol. 10. Available at: http://www.floodplainconference.com/papers2012.php.
Shand, T.D., Cox, R.J., Blacka, M.J. and Smith, G.P. (2011), Appropriate Safety Criteria for
Vehicles - Literature Review. ARR Report Number: P10/S2/020. Report for Institution of
Engineers Australia, Australian Rainfall and Runoff Guidelines: Project 10. ISBN
978-0-85825-948-5.
Smith, G.P. and Wasko, C.D. (2012), Two Dimensional Simulations in Urban Areas: Representation of Buildings in 2D Numerical Flood Models. Australian Rainfall and Runoff Revision Project 15, Final Report, February 2012.
Smith, G.P., Davey, E.K. and Cox, R.J. (2014), Flood Hazard. UNSW Australia Water Research Laboratory Technical Report 2014/07, 30 September 2014.
BOOK 7
Application of Catchment Modelling Systems
List of Figures
7.2.1. Conveyance Comparison – Different Manning’s n Combinations
7.2.2. Steps in the Application of a Catchment Modelling System
7.3.1. Conceptual Relationship between Data Availability, Model Complexity and Predictive Performance (Grayson and Blöschl, 2000)
7.9.1. Representation of Relative Uncertainty of Outcome to Uncertainties Using a (a) Tornado Diagram and (b) Spider Plot
7.9.2. General Framework for the Analysis of Uncertainty using Monte Carlo Simulation
7.9.3. Monte Carlo simulation of transposed flood volumes
7.9.4. Uncertainty in parameters of the log-Normal distribution (high and low bars represent 5% and 95% limits, the high and low boxes represent 25% and 75% limits, and the central bar shows the median)
7.9.5. Uncertainty in levels estimated from the regression equation
7.9.6. Confidence limits on the flood level frequency curve determined using the general framework for the analysis of uncertainty using Monte Carlo simulation
List of Tables
7.1.1. Terminology of Book 7
7.9.1. Errors Flood Volumes Estimated Using a Transposition Model for a Range of Assumptions
7.9.2. Errors Flood Volumes Estimated Using a Runoff Coefficient Model for a Range of Assumptions
7.9.3. Errors Flood Volumes Estimated Using a Water Balance Model for a Range of Assumptions
7.9.4.
Chapter 1. Introduction
Mark Babister, Monique Retallick, Isabelle Testoni
• Can be used with no data, limited data and in data rich situations
• Well designed catchment modelling systems will reproduce flood behaviour over a range
of floods
• Helps to document the process of how the flood estimation was carried out
Because of the wide variety of flood estimation problems, no modelling framework is suitable for all problems.
An example of this divide is that routing does not occur only in hydrologic models; in this book, when routing is referred to it includes routing in a hydraulic model.
A catchment modelling system refers to a set of modelling processes or components that are used together to produce estimates of flood characteristics. These modelling processes can be available within a single modelling platform (such as a runoff-routing model) or can be the combination of a number of modelling platforms (for example, where a runoff-routing model is used to generate inflows to a hydraulic model). Table 7.1.1 defines some key terminology for Book 7.
Chapter 2. Use of a Catchment Model
Isabelle Testoni, Mark Babister, Monique Retallick
2.1. Introduction
A catchment modelling system is a very useful way of estimating how a system will perform under a number of different conditions. Catchment modelling systems are usually built from the series of modelling elements that are described in Book 4, Book 5 and Book 6. These are combined to replicate the key processes for a particular flood estimation problem.
The catchment modelling system can be used probabilistically (for estimating design flood
behaviour) or can be used to estimate observed or historic flood behaviour. The catchment
modelling system can be used to represent existing, historical or altered catchment
conditions.
It is important when developing a catchment modelling system that the possible future uses of the model are properly identified so that the key processes are properly considered. The challenge in modelling is balancing the need to represent various processes, which introduces complexity, against the data available for calibrating these processes, and managing parameter and component interaction (Book 7, Chapter 3).
There are often subtle differences in how some of the key processes perform in frequent events compared with rarer events. These differences mean that only rarer events can be used for the calibration, which limits the data available for the calibration of complex models. Two simple examples are:
• during very frequent rainfall events the storage capacity of the soils is very important, but during rarer intense events the rate of infiltration becomes more important; and
• the hydraulics of a stream change significantly when flow moves from in-bank to the floodplain.
In both cases calibrating to just frequent events can give a very poor estimate of larger events.
Catchment modelling system results can be sensitive to the chosen parameter values. Different combinations of parameters can give the same answer at a single point. However, as is often the case, when extrapolating to larger events they give different answers and very different representations of the flow behaviour.
In many situations it is never completely clear what the correct combination of overbank and channel Manning’s n is. The following example shows the results from a simple hydraulic model where the overbank and channel Manning’s n were selected to match the 1% AEP flow and level. While different combinations give identical results at the adopted 1% AEP level and flow, they give very different velocity distributions. The cases also give very different level vs flow relationships for different sized events. This is one of the key reasons why it is important not to adopt models for problems outside the range they were designed for.
Figure 7.2.1 depicts the difference in conveyance, K (Book 6, Chapter 2, Section 7), for a range of levels. At extreme flows the conveyance ranges from 11 500 to 16 000 m³/s. At lower levels the flow can double.
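For reference, conveyance in this context is commonly computed from Manning’s equation, K = (1/n) A R^(2/3), with discharge Q = K S^(1/2). The short sketch below (Python; the cross-section dimensions and roughness values are invented purely for illustration, not taken from Figure 7.2.1) shows how two different channel/overbank Manning’s n combinations can produce similar total conveyance at one level while distributing the flow very differently between the channel and the overbank.

```python
def panel_conveyance(area, wetted_perimeter, n):
    """Manning conveyance K = (1/n) * A * R^(2/3) for one panel (SI units)."""
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2.0 / 3.0) / n

# Invented cross-section at one water level: a main channel and an overbank panel.
channel = dict(area=120.0, wetted_perimeter=45.0)    # m^2, m
overbank = dict(area=300.0, wetted_perimeter=400.0)  # m^2, m

# Two alternative Manning's n combinations (channel n, overbank n).
for n_ch, n_ob in [(0.03, 0.10), (0.04, 0.06)]:
    k_ch = panel_conveyance(n=n_ch, **channel)
    k_ob = panel_conveyance(n=n_ob, **overbank)
    k_total = k_ch + k_ob
    share = k_ch / k_total
    print(f"n_ch={n_ch}, n_ob={n_ob}: K_total={k_total:,.0f} m^3/s, "
          f"channel carries {share:.0%} of the conveyance")
```

With these invented numbers the two combinations give totals within a few per cent of each other, yet the proportion of flow carried by the channel changes markedly, which is exactly the behaviour that leads to different velocity distributions and different level–flow relationships for other event sizes.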
practitioner should first analyse the problem presented before deciding how to solve it. The data available must also be investigated, as it is likely that insufficient data exists for the ideal solution that the practitioner has in mind.
• Interpretation of the Results and Understanding the Reliability and Uncertainty (Book 7,
Chapter 9 and Book 7, Chapter 8).
These steps can be applied to an individual process, but it is important to apply them to the overall catchment modelling system. The performance of the overall catchment modelling system needs to be confirmed, rather than just that of the individual components. Optimising individual components might not provide an overall robust catchment modelling system. The development of the catchment modelling system is constrained by the data that is available, the time and cost, and the experience of the practitioner.
Results can be inaccurate if any key processes or features are misrepresented in the
catchment modelling system, which is not always easy to determine. This misrepresentation
can be due to practitioner error, model error or incomplete and inaccurate data. The
uncertainty surrounding design flood estimates should not be overlooked (discussed in Book
7, Chapter 9).
Chapter 3. Conceptualisation and Selection of a Catchment Modelling System
Monique Retallick, Isabelle Testoni, Mark Babister
Chapter Status: Final
Date last updated: 14/5/2019
3.1. Introduction
A well thought out model conceptualisation and selection stage can result in significant project savings and avoid a lot of costly rework. While it is not possible to identify all potential issues, many of the issues that would otherwise only emerge as learning experiences during a modelling project can be identified and addressed during this initial stage. This allows the limitations to be better understood and factored into decision making.
The conceptualisation stage in the catchment modelling process can be broken down into a
series of steps that lead to informed decision making:
• Identifying the key process/es that must be modelled to understand and model the
problem;
• Selecting a level of modelling complexity that can be justified by the data available to
calibrate or parameterise the modelling processes; and
• Selecting a modelling approach that matches these considerations with project constraints including time, cost, model choices and modeller experience.
• Floodplain studies – inclusive of flood studies all the way through to mitigation impact
assessment. This may include defining flood behaviour for land use planning.
• Flood Emergency Response – Model results can be used to enable emergency services
to better prepare and respond to flood events by identifying potential flood hazard and
planning evacuation routes. Model outputs can also enhance mapping outputs and
improve flood intelligence for both responsible agencies and the community, leading to a
reduction in flood impacts. Whilst not commonly used at present, it is possible that 2D
models may be utilised more commonly for real-time flood warning in the future.
• Urban drainage studies – in such applications the hydraulic model may also perform the
routing functionality typically carried out by a hydrologic model. The 2D model provides the
“major” drainage layer and interfaces also with the “minor” drainage system (i.e. pits and
pipes) dynamically;
• Dam Break assessments - often a hydraulic model is used to route dam break hydrographs. 2D models are well-suited to this application as the flowpaths resulting from a dam break are often unexpected or different to typical flowpaths;
• Sizing of a spillway;
• In-bank river flow modelling in 1D or 2D. This may be carried out in 2D in order to provide
flow velocity that varies over the cross-section or in 1D in which velocity will be averaged
over the cross-section. This approach is often used in ecosystem/habitat assessment;
• Wetland modelling - where routing paths are ill-defined and filling and draining processes
are complex.
• Lake or estuary studies – often at the lower end of river systems the floodplain interacts
with a lake or estuary and subsequently ocean or lake dynamics become important (tide,
storm surge, or seiching).
• Water quality and sediment transport studies – these applications build on the two-
dimensional hydrodynamics to provide information on water-dependent processes such as
pollutant transport and river morphology.
Along with the specific purpose of the problem, it is necessary to define the spatial extent and either the probability range of interest or the parameter range. For example, the spatial extent could be limited to just a dam, or to a distance upstream and downstream. The following items should be defined at the start of the project:
• Spatial extent (note this might not be the same as the model extent);
• Parameter range;
• Types of outputs (flow, volume, level, rate of rise, warning time). These may be presented as:
• Peak;
• Hydrograph;
• Animations.
The required outputs may be specified by the client in the study brief.
While a model of the entire catchment may be established as part of the study, typically a smaller specific location is the main focus of the study. If there are self-cancelling errors or
bias in areas of the model not influencing the specific location of interest then the practitioner
might not be concerned.
An important step in the conceptualisation of the problem is determining the likely scenarios that will need to be run (Book 7, Chapter 3, Section 2). For example, if a future development scenario is to be run with urbanisation, then a hydrologic model will be required which allows a change from pervious to impervious area.
• Existing conditions;
• Historic conditions;
• Infrastructure (assessing and mitigation of the impact of a road and railway line);
• Climate change;
• Ocean interaction.
If the project requires only the definition of flood behaviour under existing conditions, then this step can be ignored and the focus is on the identification of key processes (Book 7, Chapter 3, Section 3). While in many situations it will not be possible to identify, at the conceptualisation stage, all the scenarios that will need to be assessed, it is possible to identify the types of solutions, measures or works that are typically used to identify, mitigate or manage the problem. The ability to model scenarios is one of the powerful features of a catchment modelling system.
What Has Been Defined So Far
• The Problem;
• Rainfall Models;
• Runoff generation;
• Overland flow;
• Hydraulic routing.
The key processes in flood estimation have been defined in Book 4, Book 5 and Book 6. The
key design inputs have been defined in Book 2.
It is important to decide which key processes have the most influence on the scenarios of interest. For example, if the scenario of interest is land use change, then the key processes are runoff generation from different land use types, catchment response from different land use types, and resistance to flow for different land use types. Therefore the chosen modelling platforms and catchment modelling system must be able to model these processes and allow for changes to the parameters representing these processes.
Figure 7.3.1. Conceptual Relationship between Data Availability, Model Complexity and
Predictive Performance (Grayson and Blöschl, 2000)
Consideration must also be given to the resolution of modelling required. For example, for a large catchment a coarse representation may be sufficient; therefore large subareas in the hydrologic model and a relatively large grid in the two dimensional hydraulic model may be used. For complex studies fine scale detail may be important, and small grids and subareas may be needed in order to represent the hydraulic controls and key features.
An assessment must be made of the available data and what can be achieved with it. Following this, some compromise on how key processes are represented must be made. It is a non-linear process.
• Client preference;
• Standardisation;
More than one modelling platform is often used. This is often the case where limited data is available, as some modelling platforms are more suitable for ungauged catchments.
The other key inputs that must be considered at this stage are the project timeline, budget, and experience with, and availability of, modelling platforms. There is a certain art to modelling and there is no substitute for experience with a particular modelling platform. On many projects it is not practical to develop a job-specific model and it is necessary to select one or a set of existing modelling platforms. This has major impacts on cost and timing. Likewise, selecting a modelling platform that the practitioner is familiar with can have significant impacts on cost, timing and the reliability of results, typically leading to a better outcome. The advantages of selecting a platform that the practitioner is experienced with include knowledge of appropriate parameter ranges, faster set up time, and knowledge of key features.
The selection of the appropriate type of hydraulic model is a critical decision in the application of a catchment modelling system. In this step the physical system flow behaviour, which can commonly involve complex highly turbulent flows, must be reduced to an equation, or set of equations, describing the main characteristics of the flow. Here assumptions have to be made as to whether the flow can be considered as being one-dimensional (1D), two-dimensional (2D), or a combination of both, and whether the flow can be described as being steady (i.e. constant with time), or unsteady
(time-varying). In virtually all rural or urban floodplain modelling, vertical accelerations
in the flow field are considered to be negligible and a hydrostatic pressure distribution
is assumed, with computations and results based around a depth-averaged velocity.
Further details are provided in Book 6, which outlines the governing equations utilised
in hydraulic models. More detail on the application and selection of a hydraulic model is
provided in Australian Rainfall and Runoff Supporting document – Two dimensional
Modelling of Rural and Urban Floodplains (Babister and Barton, 2016).
A catchment modelling system has been chosen for the defined problem, making the best use of available data. Consideration is given to model complexity and model representation of key processes.
3.6. References
Babister, M. and Barton, C. (eds) (2016), Australian Rainfall and Runoff Support Document: Two dimensional modelling in urban and rural floodplains, Project 15.
Grayson, R.B. and Blöschl, G. (2000), Spatial Patterns in Catchment Hydrology:
Observations and Modelling. Cambridge University Press, p: 404
Chapter 4. Catchment Representation in Model
Mark Babister, Monique Retallick, Erwin Weinmann, Isabelle Testoni
Chapter Status: Final
Date last updated: 14/5/2019
4.1. Introduction
Once a catchment modelling system has been conceptualised and modelling platforms have
been selected it is necessary to represent the catchment and floodplain in the modelling
platforms. This requires a series of important decisions where all the key features and
previously identified processes need to be represented. Model selection may need to be
revised once data collection and analysis is undertaken. This chapter outlines all the key
steps in establishing a catchment modelling system.
The guidance within this chapter is divided into that which is generic to catchment modelling systems and that which is specific to hydrologic and hydraulic models, for ease of use.
• The original model was calibrated for a range of frequent floods and is not suitable for very frequent floods (or vice versa);
• the model might represent the processes for the existing case but cannot be adapted easily for new scenarios (changed catchment and floodplain conditions) that need to be run; and
• the model is calibrated for rarer events and a different mechanism is dominant for smaller flows.
4.3. Data
The amount of historic data and terrain information available for the development of a
catchment modelling system has a large impact on the model establishment. Book 1,
Chapter 4 provides a detailed discussion of the types of data available and issues that the
practitioner should look out for when using the data. Book 4, Chapter 2 discusses the
balance between data availability and model complexity.
• Landforms, vegetation and land use catchment areas influencing runoff response;
• Weirs;
• Levees;
• Flow diversions;
There are different ways of representing these key features within a modelling platform. Sometimes there are multiple options and it is important to select the method of representation that best suits the problem. Key features are identified so that most of the modelling effort is focused on them instead of other features that do not have a material effect on flow behaviour.
There is a temptation to spend modelling effort on those features that can be readily measured. Yet it is often the features that are hard to measure and quantify that have a significant effect on flood behaviour. For example, in a large river model small culverts will have little effect on flood behaviour.
• Catchment size
• Presence of natural features or man-made structures that have a major influence on flood
formation and need to be represented in the model
While lumped flood hydrograph estimation models have the advantage of simplicity, they are
limited in their application to the following situations:
• catchments with relatively uniform spatial rainfall, loss and baseflow characteristics or
where the variation of these characteristics between events is relatively minor, so that the
derived unit hydrograph or other model parameters are applicable to a range of design
events
• applications that do not require extrapolation to the range of Very Frequent or Very Rare to
Extreme floods.
• applications where a flood hydrograph is only required at the catchment outlet, as for the
design of drainage structures on roads and railways
As lumped models do not represent the internal structure of the catchment explicitly and do
not have direct links to physical catchment characteristics, they depend on the availability of
observed flood hydrographs for their calibration. The scope for application to ungauged
catchments is thus more limited.
Most of the modelling platforms in common use in Australia belong to the group of semi-
distributed models, owing to their flexibility and efficiency in representing the key factors that
determine the flood formation under a broad range of catchment conditions. As explained in
more detail in Book 5, Chapter 2 and Book 5, Chapter 6, the catchment is represented in the
model through a network of nodes and links.
It is important that the development of the network structure used in the model is guided by a
good understanding of the key catchment features described in Book 7, Chapter 4, Section
4. The catchment subdivision into model subareas should follow topographic features and
match the degree of variation of the key influencing factors (spatial rainfall variability, soil and
land use characteristics). The conceptualisation and level of detail in the representation of
the flood producing and flood modifying processes (Book 7, Chapter 4, Section 7) should
reflect their relative importance and their influence on the flood hydrograph outputs.
In large catchments the distributed runoff inputs experience a large degree of smoothing as they are combined and routed progressively through the stream or channel network to the hydrograph output location. This means that recorded flow hydrographs at the catchment outlet will provide only limited information on the flow contributions from different parts of the catchment and the influence of individual catchment features. However, the role played by different catchment features in the formation of flood hydrographs can be expected to change for different flood magnitudes, and this needs to be reflected in the catchment representation.
For more detailed guidance on the representation of catchments in node-link type models
users should consult the user manuals of specific modelling platforms.
In principle, distributed modelling allows the influence of spatial variability in rainfall inputs,
runoff production and routing characteristics to be captured in more detail than in node-link
type models. However, for this potential to be fully realised, the conceptualisation of the
runoff producing processes has to be well matched to the scale of the basic model elements
(grid cells) and has to reflect the change in processes as the cells are wetted up and the flow
efficiency increases with flood magnitude.
The capabilities and limitations of distributed (rainfall-on-grid) models are further discussed
in Book 5, Chapter 6, Section 5.
In event-based flood estimation methods, the influence of all the pre-event rainfall is only reflected in the initial catchment conditions (that determine the initial loss) and in the delayed runoff contribution from baseflow, which is modelled separately. The event rainfall is then divided into rainfall loss and rainfall excess, which produces the surface runoff component that is modelled in detail.
The detailed guidelines for modelling losses and baseflow are provided in Book 5, Chapter 3
and Book 5, Chapter 4, respectively. The different approaches for modelling the production
of runoff hydrographs from model subareas and for routing these through the drainage
network to points of interest are discussed in Book 5, Chapter 6.
• Ensuring that the model extent is sufficient to cover the likely inundation extent of the
largest event to be modelled. The key here is that for the largest event to be modelled
(typically the Probable Maximum Flood), the model extent does not artificially restrain
water movement at its boundaries, and that the topographic data within the model extent
also extends beyond the inundation areas. For a hydrologic model the model must cover
the contributing area.
• Ensure that boundary conditions are located sufficiently far away so as to not unduly
influence results within the area of interest.
If the likely maximum extent of the inundated area is difficult to define (e.g. very flat terrain or
dam break studies) defining the extent can be an iterative procedure. A recommendation is
to always start with a large model area and then narrow the model domain based on
feedback of model results, as this is far less problematic than the reverse process. Using a
coarser grid/ mesh resolution to reduce run times during these earlier stages of the
modelling process can be an effective and efficient approach, especially for large model
areas. Run time is typically not an issue for hydrologic models other than distributed runoff-
routing models.
In most cases there is a difference between the model extent and the area for which the model can be used to produce reliable results. Just because a model extends over a certain area does not mean reliable results can be extracted in all areas of the model. Often these fringe areas are modelled just so that the model boundary conditions are sufficiently far away from the study area.
When using direct rainfall, water is applied to every grid cell, so all calculation points are located on a boundary. A common problem is that the depression storage within the model, which can be a combination of numerical and actual storage, can be overlooked when applying losses.
As an example, the water level at a boundary condition is typically defined as a time series
of recorded level. The other flow conditions are assumed or calculated based on the general
assumptions or pre-defined conditions, such as an assumed flow direction across the
boundary and assumed water surface slope. In this example, these assumptions, when
combined with the water level time series, allow a discharge to be estimated across the
boundary.
The specification of these conditions on the boundary introduces errors into the model
predictions. Over time, these errors propagate through the model domain and may
eventually pass through the model domain and out through another boundary. In a well
developed and tested model, these errors become dampened as they propagate through the
model domain. If the boundary conditions are located remotely then the errors become
insignificant at the area of interest.
The most common boundary conditions applied in hydraulic models are external boundary
conditions with a flow or discharge boundary defined along the upstream boundary of the
model and a water level defined at the downstream external boundary.
The boundary condition type can be described using one of the following specifications:
• Flow time series specified which is distributed across the model boundary grid/mesh
points;
• Water level time series which is assumed to be constant across the model boundary;
• Flow and water level specified in combination as an input time series and distributed along
the boundary;
• Flow or water level specified as a one dimensional line of values along the boundary for
each time step;
• Transfer boundary where the water level, flow, velocity and water surface slope are
provided from another model; and
• Rating curve along a model boundary (combination of water level and flow).
The combination of boundary types is important and must be considered in combination with
the specification of initial conditions. In general, the boundary conditions for hydraulic models
should be designed with upstream inflow or discharge boundaries and downstream water
level boundaries. This ensures that any errors or uncertainties associated with initial
conditions are “washed out” of the model. If other combinations of boundary conditions are
used then the initial conditions will not necessarily be “washed out” of the model. The initial
conditions will then significantly affect model simulation results and the results may not be
reliable.
In general, the practitioner should approach the schematisation of external model boundary conditions in a similar manner to how a boundary condition would be conceived for a physical model. The practitioner should consider the physical flow characteristics at the boundary in the real world and should attempt to schematise so as to minimise any artificial flow behaviour that is induced by the boundary condition. Issues that should be considered include:
• Align the model grid to be normal to the boundary flow streamlines if possible;
• Avoid placing the boundary where turbulent flows are likely to be crossing the boundary;
• Minimise the wetting and drying on the boundary if the flooded boundary changes in width
substantially during the simulation;
• Ensure that the boundary condition does not restrict or expand the flow substantially at the
boundary; and
• Preference for specifying an upstream inflow discharge boundary and a downstream water
level (or rating curve) boundary in combination.
As discussed, the boundary conditions should be located as far from the area of interest as
possible. This will minimise the possibility of boundary effects and errors influencing the
model results within the study area. The specification of the boundary conditions will
therefore have a significant influence on the grid/mesh resolution. In general, the boundary
condition should be identified as the first task that is carried out when conceptualising and
schematising a model.
The primary issue in defining internal inflow boundary points is to ensure that the flow rate is compatible with the grid or mesh resolution. There should be sufficient conveyance into or out of the element(s) where the boundary condition is specified to allow the model to accept the flow without introducing significant disturbance to the natural flow streamlines. If a large flow is forced as a boundary through a relatively small cell element with limited flow area, the model will produce an excessively large velocity and water level gradient to achieve continuity with the flow volume. If this occurs, significant momentum can be artificially introduced to the model at this location, which will then influence water levels and flow patterns for a relatively large distance away from the boundary cell.
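A simple order-of-magnitude check of the kind implied above can be made before running the model. The sketch below (Python; all values, names and the screening threshold are hypothetical and not from ARR) estimates the velocity implied by forcing a given inflow through the wetted boundary cells and flags cases where the boundary is likely to be too small for the specified flow.

```python
def implied_boundary_velocity(peak_inflow, cell_width, n_wet_cells, expected_depth):
    """Rough velocity (m/s) implied by forcing peak_inflow (m^3/s) through
    n_wet_cells cells of width cell_width (m) at expected_depth (m)."""
    flow_area = n_wet_cells * cell_width * expected_depth
    return peak_inflow / flow_area

# Hypothetical example: 250 m^3/s forced through five 10 m cells, ~1 m deep.
v = implied_boundary_velocity(peak_inflow=250.0, cell_width=10.0,
                              n_wet_cells=5, expected_depth=1.0)
if v > 2.0:  # illustrative screening threshold only
    print(f"Implied boundary velocity {v:.1f} m/s is high; consider widening "
          "the inflow boundary or refining the mesh near it")
```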
Internal control boundary conditions are a special form of boundary condition and are generally not recommended unless there is a strong compelling case for their use. An internal boundary condition will force the model to reproduce a predefined hydraulic behaviour within the model domain. The most common internal boundary condition is a forced rating curve at an internal cross-section of a one dimensional model. These boundary conditions are highly “reflective” and will introduce distortion and disturbance of the flow behaviour far from the actual boundary point. The use of this type of boundary is not recommended for most catchment modelling applications.
Excessively long run times can introduce a significant bottleneck in the study timeline, and the decision to accept an excessively long model run time should be made carefully. Timeliness may be particularly affected during the calibration phase, where a large number of iterative simulations are necessary, mostly in series rather than in parallel. With excessive run times the calibration essentially relies on the skill of the practitioner and their knowledge of likely calibration parameters.
In addition, the total number of runs required can be an important consideration if there are many scenarios to be considered, such as different event durations and Annual Exceedance Probabilities, development scenarios, blockage scenarios or parameter sensitivity tests (refer to Book 7, Chapter 7, Section 2). During the planning stage, the practitioner will need to consider the following factors to estimate the efficiency and timeliness of the study.
• The ability of the computers to undertake multiple runs in parallel or not; and
The type and availability of computational resources can provide a real practical constraint. It
may limit the number of design runs that can be achieved, the length of event that can be
simulated, or the achievable resolution of the model. Such limitations and the resulting
implications need to be identified as soon as possible in the process.
Consideration of run times can be particularly important for rural flood studies, or for studies
involving continuous simulation of long flow periods. Such studies may require simulation of
floods or flow sequences lasting several months. In these situations it may be appropriate to
consider the use of a modelling package that can implement an adaptive timestep, using a
longer timestep during periods of relatively steady flow conditions, which may significantly
reduce computational run times. Adaptive timesteps are discussed further in Book 7,
Chapter 4, Section 12.
The fast run times of hydrologic models lend themselves to Monte Carlo modelling. However, run times of two dimensional hydraulic models are somewhat prohibitive at this point in time. Fast run times are possible with one dimensional hydraulic models. One alternative is to run Monte Carlo or ensemble analysis with hydrologic models and then apply a selection of events to the two dimensional hydraulic model. Book 2, Chapter 5 recommends running an ensemble of ten temporal patterns in a hydrologic model and then selecting the pattern closest to the average (flow or volume, depending on the problem of interest) to run through a hydraulic model.
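A minimal sketch of that selection step is given below, assuming Python, that the hydrologic model has already produced peak flows for the ten temporal patterns at the location of interest, and that peak flow is the statistic of interest; the numbers and variable names are illustrative only.

```python
# Peak flows (m^3/s) from a hydrologic model run for an ensemble of ten
# temporal patterns at the location of interest (illustrative values).
peaks = [182.0, 195.0, 170.0, 210.0, 188.0, 176.0, 199.0, 205.0, 180.0, 191.0]

# Select the pattern whose peak is closest to the ensemble average; this is
# the event that would then be run through the 2D hydraulic model.
average = sum(peaks) / len(peaks)
selected = min(range(len(peaks)), key=lambda i: abs(peaks[i] - average))
print(f"ensemble average = {average:.1f} m^3/s, "
      f"run pattern {selected + 1} (peak {peaks[selected]:.1f} m^3/s)")
```

The same approach applies if volume rather than peak flow is the relevant statistic for the problem of interest.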
• One Dimensional Hydraulic Models - the resolution is based on the space between cross-
sections; and
Given other considerations such as run time, it may not be possible to have the model resolution fine enough to represent the key features in perfect detail. It is sometimes necessary to compromise on model resolution. For example, a levee may be 30 m wide but the chosen cell size is either 20 m or 40 m. Engineering judgement should be applied to decide which cell size should be used. An adjustment to the resolution of the model may be required in order to properly represent the flow behaviour.
There are generally two choices for selecting a model time step: a fixed regular time step or an adaptive time step.
The fixed regular time step allows the practitioner to pre-determine the model run time and to set the saving step (at which model results are saved) as a regular multiple of the simulation time step. However, the time step will need to be set at the shortest time interval necessary for stability of the model during the most energetic or deepest flows during the simulation. This typically occurs for only a very short period of time during the peak of the flood hydrograph. Consequently the model simulation time is longer than is necessary, as it is fixed for the entire simulation. However, the practitioner can be sure that the simulation will complete within a predetermined run time.
The adaptive time step allows the model to determine the appropriate time step necessary to maintain stability as defined by the Courant condition. The practitioner will typically set a maximum and minimum allowable time step. This allows the model to take relatively long time steps when the flow is shallow or less energetic and to shorten the time step during the peak of the flow event. In theory, this should allow the shortest run time for the simulation to be achieved whilst maintaining model stability. However, in practice the adaptive time step method can often lead to excessively long run times. This is due to the impact of a few minor locations in the model where short lived energetic fluctuations in the flow can lead to the minimum time step being selected for excessively long periods of time.
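The Courant condition referred to above limits the time step so that information cannot travel more than roughly one cell per step; for depth-averaged shallow water schemes it is commonly written as Δt ≤ C·Δx/(|u| + √(gh)), with C ≤ 1. The sketch below (Python with NumPy; the grid values, default limits and function name are placeholders, not any particular software's implementation) shows how an adaptive step might be chosen each iteration and clamped between user-set minimum and maximum values.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def adaptive_timestep(depth, speed, dx, courant=0.9, dt_min=0.05, dt_max=10.0):
    """Largest stable time step under the Courant condition
    dt <= courant * dx / (|u| + sqrt(g*h)), clamped to [dt_min, dt_max]."""
    wave_speed = speed + np.sqrt(G * np.maximum(depth, 0.0))
    dt = courant * dx / wave_speed.max()
    return float(np.clip(dt, dt_min, dt_max))

# Placeholder grids: roughly 1 m deep, slow flow, on a 10 m grid.
depth = np.full((100, 100), 1.0)
speed = np.full((100, 100), 0.5)
print(adaptive_timestep(depth, speed, dx=10.0))  # about 2.5 s in this example
```

Because the step is governed by the single most restrictive cell, a few locally energetic cells can drag the whole simulation down to the minimum step, which is the behaviour described in the paragraph above.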
Run times can also become excessive if the period that it takes for the flood wave to
propagate through the model is very long. For example, simulations of large river systems or
of flat terrain where the critical rainfall duration is long, will have propagation times in the
order of days, if not weeks. However, small catchments with short critical durations may only
have propagation times in the order of hours. Therefore, some idea of the likely propagation
period is needed before finalising the model resolution and extent.
The model saving step needs to be sufficiently short to be able to define the shape of the hydrograph in time. The model save step also needs to be sufficiently short to enable the observation of stability issues that may occur during the simulation. If model results are saved at a longer time interval than a higher frequency oscillation in the model, the oscillation would not be easily identified and could be missed. It is important that the model is checked thoroughly by saving all time steps at specific points or at small regions in the model domain. This allows for the observation and checking of stability issues without the need to save the entire model at all time steps. It is generally impractical to save all results at all time steps in a 2D model and it will typically exceed the limitations of most computer storage and hardware to do so.
4.13. References
Babister, M. and Barton, C. (eds) (2016), Australian Rainfall and Runoff Support Document: Two dimensional modelling in urban and rural floodplains, Project 15.
Chapter 5. Determination of Model Parameters
William Weeks, Erwin Weinmann, Mark Babister
5.1. Introduction
Following the selection of the catchment modelling system and the catchment and floodplain
representation required, the next step is the estimation of appropriate model parameters to
apply to the model platforms in the required application.
A flood model is a representation of the physical catchment processes affecting floods and
the implementation is defined by a parameter set to apply the model to the specific problem
being considered. The estimation of these parameters is often referred to as the calibration
process.
While the term calibration strictly applies only where there is observed data to calibrate against, in this chapter it is defined in general terms as the process for determining appropriate model parameters for the hydrologic and hydraulic models to ensure
that they can be applied to the design flood estimation problem being considered. It involves
varying model parameters to ensure that model results match observed data, to confirm that
the model is performing adequately and is consistent with the records. Calibration can be
carried out in a variety of ways and this chapter discusses appropriate methods of calibrating
hydrologic and hydraulic models.
While the models used for these applications will generally have some representation of the
physical characteristics of the catchment, meaning that the model parameters should be
based on these physical features, there will always be uncertainty and the parameters will
need to be estimated using available data to ensure that the model is at least consistent with
the observed catchment performance. If the model represents physical processes closely,
parameter values could be measured from catchment characteristics, but this is an
uncommon situation.
The parameter estimation process may be based on recorded data (if there are suitable
records in the project area) or may be based on regional estimates if the local catchment is
ungauged or data is limited. There is a gradation between these two extremes; however, it is
rare that there is absolutely no available information to assist in setting parameters. It is also
rare to find that there is sufficient data to allow a precise parameter determination, so the
objective in determining parameters is to ensure that as much data as possible is used in
this exercise.
Flood investigations usually require both hydrologic (calculation of design flood discharges)
and hydraulic (calculation of flood levels, velocities and flow distributions as well as design of
drainage systems) modelling applications, so this chapter covers both of these.
This chapter describes the different approaches to determining model parameters for the
range of flood investigations and for the different amounts of available data.
5.2. Overview
The approach to parameter determination will depend on a number of factors, including the model platform and the design problem, which together determine the approach to the calibration and the level of detail sought in the process.
During calibration, the parameters that are physically based should be defined using the
catchment characteristics, while the other parameters can be varied so that the model
results match the observed data, ensuring that the values remain within reasonable and
acceptable limits.
The sensitivity of model parameters is also variable and some parameters have a greater
influence on the model output than others. In some cases, there may be inadequate data to
allow an accurate determination of actual parameter values. Therefore, these parameter
values must be set using knowledge of model and catchment processes. There is a concern
though that some parameters (e.g. non-linearity parameters in runoff-routing models) may
be important in design situations where rare floods are to be modelled but the observed data
does not include any floods of the required magnitude. Therefore these parameters may
appear insensitive during calibration but they have a major influence in the design situation.
• All Available Data - Both formal and anecdotal data should be considered in the calibration, and the best use should be made of it, in line with its assessed accuracy and reliability.
This data is the only way to ensure that the model application can be consistent with
available local information. It is also important to carefully review the data to ensure that it
is consistent and there are no obvious errors that will affect model performance.
• When calibrating the model parameters, it is important that the practitioner has an
understanding of the role and relative importance of the different parameters and how they
influence model operation. During calibration it is then important to concentrate on the
most influential parameters, especially those that affect the model performance in the
areas of particular concern for the specific model application.
• Model parameters when fitted to the data should be reasonable and within the range
expected for the model platform and should be consistent with the physical features of the
catchment being considered. If parameters are not within this typical range, the model
conceptualisation could be incorrect and while the model may appear reasonable during
calibration, there will be serious concerns for design events modelled where the event
magnitude is run outside the range of that used for calibration. It is also possible that
parameters outside the typical range may indicate errors in the observed data, and the
calibration may be attempting to fit the model to these errors. The data quality and
consistency then needs to be reconsidered and the calibration reanalysed accordingly.
Even if the data is of poor quality and incomplete, it is important that the model calibration be
at least consistent with the available information, especially local or anecdotal information
where formal data collection is lacking. Even very poor quality observations may be sufficient
to apply a ‘common sense test’ and to ensure that even an essentially uncalibrated model
can be a reasonable representation of local conditions.
There are four basic approaches that will normally be dictated by the available calibration
data and sometimes by the project budget and timeframe, and this classification is not as
simple as the division into gauged or ungauged catchments.
a. No Data - This is the lower limit to data availability, and having no data at all is probably
not a common situation. In this situation, regional methods of some type are required. In
addition to formal regional methods, parameters can be determined from experience with
applications on similar systems or where the physical characteristics are similar.
b. Very Limited Data - The limited data may be some anecdotal records (Book 1, Chapter 4).
An example for the design of drainage structures on a road or railway would be reports on
the frequency of closure by flooding. In this case, it is possible to develop parameters that
mean that the model is at least consistent with local observations. It may also be the case
that the limited data is apparently inaccurate or inconsistent, though the exact source of
this inaccuracy may be difficult to detect. In any case, efforts should be made to
incorporate any information available in accordance with its assessed accuracy and
reliability, no matter how limited this may be.
c. Some Data - In this case there may be a streamflow gauge with a very short period of
record, records of flood levels for a single flood event or there may be records for a very
frequent flood event. Some rainfall gauge information may be available. In this case, there
will be a greater degree of confidence in the calibration, but the limited data means that
there will still be uncertainty in the model performance, especially when the model is used
for extrapolation to larger design events outside the range of the limited data or applied to
alternative development scenarios.
d. Extensive Data - In this case, there is extensive data throughout the floodplain and
catchment of interest. Data is available for a range of flood magnitudes and conditions
and the flood data is accurate, reliable and consistent. In this case, the model calibration
will be reliable and the model can be confidently used for design flood investigations.
These four categories blend together and there will be a gradation from one to the other.
Projects where data is totally lacking are not common and projects with extensive data are
also unusual. The objective is to consider all available data and to make the best use of all
available information.
In the following sections the term ‘calibration’ is applied to parameter estimation approaches
c) and d), where the availability of flood data is sufficient to allow a calibration process that
compares model results to observed flood data.
Data that may be available for calibration includes:
• Records (photographs) of bed, bank and floodplain vegetation levels to assist with
interpretation of roughness and provide record of prevailing conditions;
• Gauged water level hydrographs, rating curves and derived flow hydrographs at
streamflow gauge sites;
• Streamflow gauging at gauge sites and over the side of bridge structures (rare, but useful);
• Flood mark levels, location and measure of reliability. For example, debris marks,
watermarks on/in buildings;
• Observations of the rate of rise of flood waters and the time of peak;
• Records and photography of the extent of inundation, noting the time of the photos; and
A flood occurred whilst a model was being calibrated. One of the local landowners phoned and asked if there was anything he could do. He was asked to make as many flood marks as he could and, if possible, to record when each mark was made. The local diligently went round hammering nails into trees until the flood peaked. After several weeks trying to calibrate to this fantastic data set, the practitioners were desperate and visited the landowner: the model was consistently showing much higher levels than he had recorded. After a while the landowner took them over to the creek bank and showed them a levee hidden amongst the trees. “Don’t tell anyone,” he said, “as I’m not sure if it’s legal.” In the end he agreed to have it surveyed, and lo and behold the model calibrated beautifully!
• Discussions with Local Residents on their Recollections and Observations - For example,
they may have experienced a flood event and have noted features such as flow directions,
water speeds and the timing of the flood’s rise and fall. This information can be valuable to
help check that the model’s representation of flow behaviour is realistic; and
• Information From Stakeholders - For example, a road or railway authority may be able to
advise how frequently a crossing is inundated and/or for how long. While this may not
provide event specific observed data, it could be useful as to whether the model is in the
right general area of performance.
An old timer recalled how his grandfather remembered a large flood in the 1860s that
broke across a ridge in two locations. Today, this would isolate the hospital and be a
significant flood risk to homes. The 1% AEP flood did not show this flood behaviour,
however, when the 0.2% AEP event was run, these floodways developed. This helped
convince the old timer that the modelling was good, and the local council incorporated
these floodways into their flood risk management planning.
During a resident survey a local shop owner took the practitioner to look at a tree. “See
that fork up there; well that was where a pig got stuck.” Fortunately, the modelling for
that event showed flooding to that height, and was proof to the local that the model was
“doing the right thing”.
It is therefore desirable to have data for more than one flood event available for calibration,
and hopefully several floods where a range of conditions is covered. Having floods of
different magnitudes so that the flooding covers in-bank and floodplain flows and where
flooding occurs in different seasons and with different rainfall distributions and catchment
conditions will build confidence in the model performance. Successful calibration on a wide
range of calibration events means that the model can be extrapolated to a wider range of
design flood situations confidently.
The hydrologic component, which is the model used to calculate flood peak discharges or
flood hydrographs, is the more critical of the two, as any errors from hydrologic modelling will
also transfer to the hydraulic modelling component. Calibration of the hydrologic model
requires recorded flood flows, and these generally require a streamflow gauge. Availability of
a streamflow gauge measuring discharges is less common than having flood level
observations which may be provided by local residents and other non-experts. In some
cases observed flood levels can be converted to flood flows by application of a stage-
discharge relationship derived by a theoretical method (Book 1, Chapter 4), but this
introduces another level of uncertainty into the calculation of discharge. The hydrologic
model should be calibrated to ensure that the model can calculate flood flows to match the
recordings. Calibration of hydrologic models must consider the accuracy of the recorded
data and the consistency between different observations. These issues are discussed further
below.
The hydraulic modelling process involves setting relevant parameters so that the modelled
flood levels or flood hydrographs match the observed data. Observed flood levels are more
commonly measured than flood discharges so there is often more extensive data. However,
flood levels may be matched with a hydraulic model where the calculated discharges and
hydraulic model parameters (primarily hydraulic roughness) are both incorrect and the errors
compensate. While this is not necessarily a problem for the actual historical flood used for
calibration, this can lead to significant errors when using the model for design applications
over a larger range of flood events.
In many cases though, the hydrologic and hydraulic models may be calibrated together, i.e. joint calibration. In this situation, there may be observed flood levels but no recorded
discharges, and the parameters for both the hydrologic and hydraulic models are adjusted
together and the discharge determined such that the final flood levels are matched. As with
the calibration of hydraulic models, this situation may lead to compensating errors in the two
models, and the calibration may appear reasonable but the compensating errors mean that
flood estimation for floods of different magnitude may be significantly in error. For example, the flood discharge may be too low and the roughness too high, so that the flood levels still match, or vice versa.
The choice of calibration events depends on factors including:
• Amount, type and quality of suitable data available for each event; and
• Magnitudes of the events as to whether they are of a similar size to that of the primary
design events.
Each calibration event must have sufficient historic flood observation and reliable
topographic information and boundary data at the time of the flood. Often this means that
events used for calibration are relatively recent, as the data sets are likely to be more
complete. Larger floods that occurred longer ago may not be suitable for calibration due to the lack or scarcity of key data sets.
Calibration events should ideally also span the magnitude range of the intended design
events with a preference for the more important design floods (e.g. 1% Annual Exceedance
Probability event). This instils confidence in the ability of the model to replicate flow
behaviour over the full range of event magnitudes. For example, a frequent flow event that is
confined to the channel and drainage infrastructure will have a substantially different
behaviour to a rare flood event that has broken the banks and is flowing overland. If the
model has only been calibrated to the in-bank flow magnitude, confidence in its ability to
replicate overland flow will be lower.
For tidal sections of a flood model, a tidal calibration is a useful additional calibration step,
and is particularly recommended where storm tide inundation and interaction with catchment
flooding is important. Tidal calibration data often exists, or can be readily measured, and is
usually an accurate data set. It also provides a check that the model can reproduce any tidal
amplification.
The 1998 flood in Katherine was larger than a 1% AEP event. There were extensive
water level measurements taken throughout the town, many photographs and videos
and the flood discharge was gauged at the gauging station. Therefore, the data
available for calibration at Katherine for this event could be regarded as ideal: a large
recent event with a reliable and extensive dataset.
When there is good quality data, there are automatic calibration algorithms that follow a
defined search procedure to result in an “optimum” parameter set. While in theory, this
procedure can result in a good quality parameter set with a minimum of effort, this approach
is not as straightforward as first impressions indicate. The first step is to define an objective
function that must be minimised for the optimisation. This may be minimising the root mean
square error for the differences between observed and modelled flows. While this function
may lead to a generally reasonable overall result, it may be more important to concentrate
on high flows for example (a common requirement for flood studies), the rising limb of flood
hydrographs (required for flood forecasting) or hydrograph volumes and shape (commonly
needed for floodplains with extensive floodplain storage). These secondary details are often
equally important and it is generally found that a purely automatic optimisation procedure
does not converge to the optimum parameter set for a particular application, unless the
objective function of the optimisation procedure has been carefully chosen.
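A minimal sketch of one possible objective function is given below: a weighted root mean square error in which individual observations can be weighted, for example towards high flows or towards points of higher assessed accuracy. The function name, weighting scheme and values are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def weighted_rmse(observed, modelled, weights=None):
    """Weighted RMSE between observed and modelled flows (illustrative).

    Weights can emphasise the flood peak (e.g. weight by observed flow) or
    down-weight observations of lower assessed accuracy.
    """
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    w = np.ones_like(observed) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sqrt(np.sum(w * (modelled - observed) ** 2)))

obs = [20.0, 150.0, 620.0, 480.0, 90.0]   # m^3/s, hypothetical values
mod = [25.0, 170.0, 580.0, 500.0, 70.0]
print(weighted_rmse(obs, mod))              # equal weights
print(weighted_rmse(obs, mod, weights=obs)) # weighted towards high flows
```

Changing the weights changes which parameter set the search converges to, which is the point made above about choosing the objective function carefully.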
Because of this, the most appropriate means of model parameter estimation should involve
both automatic and manual parameter estimation where the modeller uses experience and
understanding to estimate parameters appropriate for the particular application and the
automatic procedures can refine and polish the optimisation.
Calibration, especially for large and complex models, may require a long process and tests
on a large number of parameter combinations and variations. In this situation (which is
common, except for the most simple situations) it is important that the practitioner maintains
a log of calibration tests so that the impact of parameter changes can be understood and the
calibration can proceed without retracing previous calibration tests.
A common application is to fit the model results to observed flood levels or flood
hydrographs. Obviously, the objective is to fit the observations as closely as possible.
However, the model will often show that it is over-estimating for some points and under-
estimating for others or one flood may be consistently over-estimated while another is
consistently under-estimated.
The aim therefore should be to provide the best “overall” match, though this is hard to define.
Points to consider are that there should not be any consistent error, there should be some
recorded points above the model results and some below and points of lower accuracy
should not be weighted as heavily as those regarded of high accuracy. Estimates of rare
design floods are most often required for flood studies, so the optimisation should normally
be weighted towards the larger calibration events.
In most situations, flood peak levels are the most important objective, but in some cases, the
hydrograph shape or flood volume may be of as much significance as the flood peak levels,
so the model application must be considered when deciding on the objective function.
The objective function may be a mathematical parameter, such as minimising the sum of
squares of the errors, or the function may be based more simply on fitting “by eye”, where
judgement can be used to determine the quality of fit for different features of the observed
flood record. There is a place for both of these approaches, even in a single application.
The calibration is assisted when the practitioner has a good understanding of the model
processes and the influence of all parameters in the model. Knowledge of which parameters
are most influential, and the influence of each parameter on different aspects of the flood
process, is important in ensuring that the model parameters are maintained with realistic
values and that efforts are not wasted working on insensitive parameters. Models with
multiple parameters will usually exhibit interaction between the parameters so that it is
possible that a similar calibration performance is achieved with different parameter sets.
With incorrect parameter combinations, while the calibration performance may be similar,
there are likely to be major differences in the design application results when the model is
applied to conditions outside the range used for calibration. It is important therefore to have
an understanding of the model operation and the relationship between parameters and
physical characteristics to help keep parameters within reasonable bounds, especially when
considering interactions between parameters.
Therefore a single objective function cannot be recommended for all model calibrations; a variety of methods will be applicable to particular applications.
While it is important to critically review the quality of the data available for calibration, it is
also important to carefully review all available data and maximise the information available in
this data to ensure the best possible calibration process. Formal data collection programmes
are an immediately obvious source, but all available information should be examined. For
example, old historic records from newspapers may be available to give an indication of
major historic floods from before official records are available. These old records though do
need careful study, since the survey datum may be hard to identify but some “detective”
work can yield valuable information.
Careful review of the quality and properties of the data being applied for calibration is
essential to ensure that it is appropriate and that the practitioner has a good understanding
of the availability and applicability of the data. This is especially important for older historic
data. Issues can include:
• The datum used for level survey where older data may use a different datum or two sets of
survey data may be to two different datums;
• Streamflow gauge records of water levels are often measured to a local datum, which may
be difficult to relate to topographic data;
• Stream channels may scour or silt up over time so current conditions may be different
from those when the flood records were collected; and
• Floodplain roughness may vary with time, for example, sugar cane fields may be bare
ground or very dense sugar cane depending on the time of year when the flood occurs.
Many types of informal data collection can assist in ensuring that model calibration is as
accurate as possible, and these are discussed in Book 1, Chapter 4, where the value of data
in all types of flood estimation is identified.
Catchment and floodplain conditions may also have changed since the calibration floods occurred, for example through land use change, flood mitigation works or dam construction. The following examples illustrate such issues.
The last major river flood in one coastal area occurred in 1974 and resulted in
extensive inundation of the floodplains. At this time, the floodplain was mostly utilised
as grazing land. That land is now developed with extensive canal and flood mitigation
works. While model calibrations for these rivers must rely on data from the 1974 flood,
the drastically changed conditions mean that calibration results must be treated with
appropriate caution.
A 2D model was constantly producing flood levels that were too low in the upper tidal
reaches of one branch of a coastal river. However, modelled flood levels matched
recorded levels well in all other locations. Not even extremely high Manning's n values would
lift modelled levels to those recorded. It was initially suspected that the recorded levels
were erroneous, but this was proved incorrect when the recorded flood levels were
independently resurveyed and found to be accurate. It was later revealed by a long
term resident that a weir that had been installed to prevent saline water penetrating
upstream, had never been completely removed and was still controlling flows. Once
this partial weir was included in the model, a good fit was obtained with the same
parameters used elsewhere in the model.
It is far more important to understand why a model may not be calibrating well at a
particular location than to use unrealistic parameter values to ‘force’ the model to
calibrate.
• Accuracy of Calibration Data - The quality of calibration will depend on the assessed
accuracy of the calibration data (refer to section on Data Issues above). For example, if
the calibration of a hydraulic model is based on flood levels from observed debris marks,
these levels may not be more accurate than ± 300 mm, so working towards matching a
number of levels to a higher level of accuracy cannot be justified. Even where there is a
streamflow gauge located on the catchment, the quality of the measured discharge will
depend on the quality of the rating curve, which could cause quite significant inaccuracy in
this measured data.
• Model Response and Catchment Consistency - The calibration of models relies on the available data, so the estimated parameters reflect the events used to estimate them. However, the catchment conditions that applied during model calibration,
especially if rare historic floods have occurred, may not be completely representative of
conditions required for design applications. Because of this the model parameters required
for design should be “generic” parameters based on the calibration but applicable for the
design application. The exact catchment conditions for design applications may not be
consistent with the particular conditions that applied for the calibration process. For
example, vegetation coverage on a floodplain or the channel conditions in water courses
will vary from time to time, so the conditions that applied for a single calibration flood event
may not be representative of long term average conditions. Parameter values therefore
must be modified to account for the expected future design conditions, rather than an
unrepresentative calibration event.
• Consistency of Data - Review of data may indicate that the recorded data is inconsistent.
For example, recorded flood levels for two different floods may be impossible to model
with the same parameter set. There are several possible reasons for this. For
example, the recordings may be inaccurate, the catchment or floodplain may have
changed between flood events or the model may be inappropriate for the analysis
required. The effort should then be concentrated on resolving the source of the
inconsistency rather than pursuing further calibration.
• Requirements for Model - The calibration acceptance may vary depending on the
application required. For example, if the model is required for a bridge design, the
calibration is only really critical for the bridge site, but model performance over a wider
extent of the catchment is needed for floodplain planning. Also if the model is required for
assessment of frequent floods, the performance for major overbank flooding is not as
relevant so poor performance for these events is not a serious concern.
• "Overfitting" - This is the process where the model calibration process is taken to an
extreme, and the model parameters are extended to possibly unrealistic values and can
vary unrealistically throughout a catchment or floodplain to ensure that the model fit is
close for all data points and all events. This situation may result when there are unrealistic
calibration acceptance criteria adopted for the project and the only way of meeting the
criteria is by an extreme and unrealistic parameter set. While the resulting model
calibration may appear to be high quality and does meet calibration performance criteria,
the resulting model parameters will not improve the performance of the model for
extrapolation to the design situation.
It is extremely rare for a flood model to fit all data well. Where it appears to, this usually means one of the following:
1. The model has been overfitted to the data with unrealistic parameter values; or
2. Some of the data that does not fit well has been ignored and not presented.
It is extremely unlikely that your simple model perfectly represents the complex real world, that all your data has been collected without error, or that the data is unaffected by local factors.
For these reasons, it is difficult to define an acceptance criterion for model calibration and
the quality of calibration may vary depending on particular conditions. It is important though
to consider all the issues covered here when deciding on calibration performance.
Unrealistic calibration criteria do not lead to an improvement in model design applications so
the criteria need to be tailored for the particular application and local situation.
The quality of calibration depends on the quality of the data applied so the model application
and results should consider this in interpretations of model results.
It is recommended that specifications for flood studies should not be prescriptive in defining
calibration criteria, but should aim for realistic and applicable criteria.
It is important to note that a calibration process may not always result in a parameter set that
is suitable for application to design conditions, and it is always necessary to approach
calibration data critically. In these cases, the calibration process must be supplemented with
other information such as regional parameter estimates as discussed in Book 7, Chapter 6.
Sensitivity testing of inputs and parameter values is a good way of understanding and
resolving the importance of the input/parameter on the model’s calibration results. This is
discussed further in Book 7, Chapter 7.
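As a simple illustration of one-at-a-time sensitivity testing, the sketch below perturbs each parameter in turn and records the change in a result of interest. The model function, parameter names and values are hypothetical stand-ins; in practice the function would wrap a run of the hydrologic or hydraulic model.

```python
def one_at_a_time_sensitivity(run_model, base_params, change=0.2):
    """Perturb each parameter by +/- `change` (fraction) with others held fixed.

    run_model is assumed to take a parameter dictionary and return a scalar
    result of interest (e.g. peak flood level at a key location).
    """
    base = run_model(base_params)
    out = {}
    for name, value in base_params.items():
        low = run_model({**base_params, name: value * (1.0 - change)})
        high = run_model({**base_params, name: value * (1.0 + change)})
        out[name] = (low - base, high - base)
    return out

# Hypothetical stand-in for a model: peak level as a simple function of
# Manning's n and an inflow scaling factor (for demonstration only).
demo_model = lambda p: 10.0 + 8.0 * p["mannings_n"] + 1.5 * p["inflow_scale"]
print(one_at_a_time_sensitivity(demo_model, {"mannings_n": 0.05, "inflow_scale": 1.0}))
```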
Following a large flood event that occurred in 1984, Council organised the survey of
over 400 peak flood marks across the floodplains of the affected catchment. These
were primarily flood debris marks. Prior to model calibration, Council specified that the
calibration criterion was for modelled peak water levels to be within 300 mm of recorded levels.
However, calibration was accepted with 50% of points meeting this criterion in
recognition of significant proven uncertainties in debris mark levels and some of the
model inputs.
When calibrating a model to peak flood levels for one historic event, a good match
between modelled and observed was obtained for all levels with the exception of the
one recorded by the most upstream automatic gauge. The datum of the offending
gauge was checked and no problem was found. In order to match this gauge,
Manning's n values needed to be set at values that were outside the normal range and
very different to elsewhere in the model. In addition, the peak level at this gauge looked
out of place on a longitudinal plot of the river profile. Despite a strong desire to have
the model calibrate well to this one gauge level, the client accepted the practitioner’s
advice that confidence in the accuracy of the observed level was low and it would be
compromising the model to fit the data. Not long after the study was complete, a larger
flood occurred and the model fitted all gauge data very well, including the troublesome
gauge. It was concluded that something had gone wrong with the automatic gauge in
the earlier event.
Also, fundamental to a good calibration is the demonstration that the model reproduces the
timing of flood events. This may be achieved through calibrating to recorded water level
hydrographs (if available), and to observations by locals (e.g. “the flood peaked around
midday”). Water level hydrographs give the added benefit of showing whether a model is
reproducing the shape (rise and fall) of the flood.
Calibrating to information on the timing of the flood shows that the flood dynamics are being
reproduced, and this only occurs if the model’s input data and schematisation are
satisfactory, parameter values are within typical ranges, the software is suited to the
application, and most importantly, the hydrologic method is also reproducing the correct
timing. The latter is particularly important when it comes to calibrating a hydraulic model. If
the hydrologic method is inaccurate with respect to timing and/or magnitude, satisfactory
calibration of the hydraulic model will be difficult, if not impossible. For this reason, jointly
calibrating the hydrologic and hydraulic modelling is always recommended.
If parameters such as hydraulic roughness are outside standard values, the calibration may
be “acceptable” for that particular event, but will very likely be compensating for inaccuracies
in the hydrologic modelling, input data and model schematisation. In this case, the
“calibrated” model is not suited to representing floods of smaller or larger size than the
calibration event, and will be of limited use.
It is important to note that should flow/discharge hydrographs exist for a study area, the
flows are not “recorded” but “derived”. A rating curve is used to convert the water levels
recorded by the stream gauge into flows. Details on this process and its limitations are
provided in Book 1, Chapter 4. However, it is worth reiterating that the reliability of discharge
data is limited by the number and quality of manual gaugings undertaken at the site, the
extent of extrapolation beyond the highest gauging of the rating curve and the means by
which the rating curve is developed by the hydrographer. In undertaking a calibration using
flow discharge hydrographs, it is essential to consider the quality and reliability of the rating
curve used to derive the flows. Inaccurate rating curves produce inaccurate flows that will
potentially mislead the practitioner into using inappropriate parameter values.
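The sketch below illustrates how a recorded water level is converted to a derived flow with a simple power-law rating, and how flows above the highest manual gauging can be flagged as extrapolated. The rating form and coefficients are assumptions for illustration only; real ratings are developed by hydrographers, often in several segments.

```python
def rating_curve_discharge(stage_m, a, b, cease_to_flow_m, highest_gauging_m=None):
    """Derive a flow from a recorded stage using Q = a * (h - h0)^b (illustrative)."""
    depth = stage_m - cease_to_flow_m
    if depth <= 0.0:
        return 0.0, False
    extrapolated = highest_gauging_m is not None and stage_m > highest_gauging_m
    return a * depth ** b, extrapolated

# Flows derived above the highest manual gauging warrant extra caution.
flow, beyond_gaugings = rating_curve_discharge(
    stage_m=6.2, a=15.0, b=1.8, cease_to_flow_m=0.4, highest_gauging_m=4.5)
print(round(flow, 1), beyond_gaugings)   # about 355 m^3/s, True
```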
While many applications are required on totally ungauged catchments, it is common to have
at least some minimal records of flooding available. The minimal descriptive data availability
is discussed further in Book 1, Chapter 4, but where there is some anecdotal data, the
parameter determination process must use this data to at least ensure that the model
performance is consistent with this minimal data even if the data is insufficient to provide a
calibration.
An important issue with the estimation of parameters for ungauged catchments is that the
methods rely on transfer of parameter values from neighbouring catchments. The methods
therefore rely on the assumption that the catchments used to estimate parameters are
sufficiently similar to the catchment being analysed. It is important to carry out as many
checks as possible to confirm that this is the case, but there will always be some uncertainty.
There are several different methods of estimating model parameters for ungauged
catchments.
• Regional Relationships - These are developed for particular model parameters and for
particular regions. For example, there are published relationships for runoff-routing
parameters which are related to catchment area, for several regions of Australia. In some
cases, specific regional relationships are developed for particular project areas from
limited data and then adopted for the whole project area.
The principal issue with parameter estimation for ungauged catchments is to use whatever
data may be available, no matter how poor its quality, to understand the physical
processes represented by the models to ensure that the parameters are realistic, and to use
regional relationships and information from neighbouring catchments to the maximum extent
possible. The uncertainty in the resulting model operation must be considered in any model
application for ungauged catchments, since this will be greater than would be the case for a
well gauged catchment.
Often the calibration process will result in different parameter sets applying for different
calibration events.
In general, this is not acceptable, since a single parameter set will be required for application. After completing calibration on a number of different flood events, the calibration process must therefore continue to determine a single parameter set that best fits all of the available data.
Therefore a procedure is needed to select a representative parameter set for application to
the design situation.
The simplest approach would be to “average” the parameters, which will result in parameters
that are representative, but may not result in a model that “averages” performance. An
alternative approach to simple averaging would be to average them with a weighting towards
the rarer floods. It is also possible to adopt the parameter set that has been estimated from
the historic flood that is most similar to the design flood requirements, which may be the
largest flood event.
Whichever technique is adopted to interpret the calibration results and adopt parameters for
a design application, these adopted parameters should then be used with the model on all of the calibration flood events to confirm the performance for all the data. The results from this
should show at least a reasonable performance for all of the calibration flood events and no
bias in the results; that is, the results for the historic floods should not all be under- or over-estimates.
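A minimal sketch of these two steps, averaging event-calibrated parameter values with weights towards the rarer floods and then checking for bias when the adopted set is re-run, is given below. All values, weights and names are illustrative assumptions only.

```python
import numpy as np

def adopt_parameter(calibrated_values, weights):
    """Weighted average of a parameter calibrated on several events (illustrative)."""
    v = np.asarray(calibrated_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * v) / np.sum(w))

def peak_level_bias(observed_peaks, modelled_peaks):
    """Mean signed error of peak levels across all calibration events.

    A value near zero, with some events over- and some under-estimated,
    suggests the adopted parameter set is not systematically biased.
    """
    return float(np.mean(np.asarray(modelled_peaks, float) - np.asarray(observed_peaks, float)))

kc_per_event = [22.0, 27.0, 31.0]   # hypothetical values, smallest to largest flood
event_weights = [1.0, 2.0, 3.0]     # weighted towards the rarer floods
print(adopt_parameter(kc_per_event, event_weights))          # adopted value
print(peak_level_bias([4.1, 6.8, 9.2], [4.0, 6.9, 9.4]))     # check after re-running events
```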
It is desirable to compare the adopted parameter set from the calibration events with
parameter estimates from catchments and flooding areas with similar characteristics and
with parameter values obtained from regional parameter estimation procedures. If there are
any significant discrepancies between the parameter estimates from different sources, the
possible reasons should be investigated and the final parameter selection decision made in
the light of the findings from these considerations.
Once the calibration has been accepted, the model should then be transferred to the
validation phase, where the parameters are confirmed as suitable for
application.
Chapter 6. Regional Relationship for
Runoff-routing Models
Michael Boyd
6.1. Introduction
Regional relationships can be used to estimate parameter values on ungauged catchments
but they can also be used to test the plausibility of parameters derived from limited data.
Where no data is available some insight can also be gained from comparing how adjoining
catchments with data compare to the regional relationship. Relationships between model
parameters and catchment characteristics have been derived for several regions. The most
recent relationships available for Australia are given in the following section.
In all cases the reliability of regional relationships is likely to be less than parameter
estimates derived from calibration from several recorded flood events on the catchment of
interest. Regional relationships should be used with due caution, as most derived relations
incorporate considerable scatter of the data from individual catchments. Also, different forms
of relationships have been found to give equally good fits to the one set of data, but would
give widely different estimates in some other cases (Sobinoff et al., 1983).
Regional relationships will contain some scatter about the fitted equation, partly due to errors in
rainfall and streamflow data, including insufficient spatial rainfall gauge coverage, but also
due to limitations in the models themselves. Loy and Pilgrim (1989) quote typical errors of
10-20% for rainfall and 25% for streamflow data, with larger errors being quite possible, and
note that as a consequence high correlation is unlikely to be obtained in the resulting
regional relationships.
Scatter in the relationships can also be caused by different methods of treating the data
when parameters were originally calibrated. These include different assumptions when
separating baseflow, and different rainfall loss models, for example proportional loss as
opposed to continuing loss rate. These different assumptions can lead to different calibrated
parameter values, and hence contribute to scatter in the regional relationship. This problem
will be reduced if the regional relationship is developed using consistent methods of treating
the data. However, when parameters are combined from several different studies to develop
a regional relationship, care should be taken to ensure that consistent parameter values are
used.
Another cause of scatter can result from different parameters being derived from calibrations
using floods of different magnitudes. Wong (1989) found that calibrated values of the RORB
parameter kc were larger for larger floods, when overbank flow became established,
compared to smaller in-bank flows. Similar effects are likely in all runoff-routing models. The
use of a single catchment parameter value in regional relationships, without regard to the
magnitude of the flood, may therefore call into question the validity of the relationship (Wong,
1989).
It is important to note that the value of the lag parameter k (Book 5, Chapter 5, Section 4) (or the corresponding kc, C, B and β parameters in RORB, WBNM, RAFTS and URBS
respectively) depends on the values of other parameters adopted during calibration of the
model. The values of these lag parameters used in regional relationships will be dependent
on the values of, for example, the nonlinearity parameter m, as well as the stream channel
routing method used and the stream channel parameter values adopted. Another cause of
variation in the lag parameter can occur if the basic model is modified, for example by
allocating proportionally greater lag time to subareas and less to stream reaches (Kneen,
1982) in which case the calibrated k values will not be consistent with those calibrated for
the same events using the basic model.
It is possible to obtain an approximate adjustment for k (or Kc, C, B or β) values which have
been derived using other values of m so that they correspond to a base value, for example,
m = 0.8 (Morris, 1982). This is done by adjusting k so that the same overall lag time K is
maintained for the different m values, using Equation (7.6.1). This requires an average or
representative discharge for the particular flood, which will be half or less than half of the
peak discharge Qp. Pilgrim (1987) used an average discharge equal to Qp/2, giving the
following adjustment:
k_0.8 = k_m (Qp/2)^(m − 0.8)    (7.6.1)
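A short sketch of applying this adjustment is given below; the calibrated kc, m and peak discharge values are illustrative only.

```python
def adjust_kc_to_base_m(kc, m, peak_discharge, base_m=0.8):
    """Adjust a calibrated kc to a base non-linearity value using Equation (7.6.1).

    The overall lag time is preserved at a representative discharge of Qp/2
    (after Pilgrim, 1987): kc_0.8 = kc_m * (Qp/2)^(m - 0.8).
    """
    return kc * (peak_discharge / 2.0) ** (m - base_m)

# A kc of 30 calibrated with m = 0.75 on an event with a peak of 800 m^3/s
print(round(adjust_kc_to_base_m(kc=30.0, m=0.75, peak_discharge=800.0), 1))  # about 22
```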
Most regional relationships relate the lag parameter to one or more physical characteristics
of the catchment. These are most commonly the area A, stream length L and stream slope
Sc, although other measures, such as elevation, average rainfall and drainage density are
sometimes used. Different studies sometimes use different definitions of stream slope Sc so
that caution is needed to ensure that the correct definition is used when applying the
relationships. Measurements of stream length L are dependent on the map scale used
(Cordery et al., 1981) and this should also be considered when applying the relationships.
Stream length is strongly correlated with catchment area and stream slope is moderately
correlated with area, so that relationships involving area A alone, or stream length L alone
are often sufficient to describe the regional relationship.
Most of the relationships are of similar form and involve only the single catchment variable,
area A in km2, since this has been found to be the dominant variable. To allow comparisons,
relationships developed by various researchers are presented, together with the number and
size range of the catchments used (where available). The recommended regional
relationships for each region are then given in boxes.
6.2.1.1. Queensland
Relationships have been developed by Weeks and Stewart (1978), Morris (1982), Hairsine
et al (1983), Weeks (1986) and Titmarsh and Cordery (1991). For 14 catchments (158 to
3430 km2) generally covering the coast, plus one catchment near Mt. Isa, Weeks and
Stewart (1978) obtained:
Kc = 0.69 A^0.63    (7.6.2)
m = 0.73 (7.6.3)
For 25 catchments (56 to 5170 km2), with parameters adjusted to m = 0.75, Morris (1982)
obtained:
Kc = 0.35 A^0.71    (7.6.4)
For four catchments in the Darling Downs (2.5 to 50 km2) with m = 0.8, Hairsine et al (1983)
obtained:
Kc = 0.80 A^0.62    (7.6.5)
For nine small catchments in south-east Queensland (0.002 to 50 km2) with m = 0.8,
Titmarsh and Cordery (1991) obtained:
Kc = 0.83 A^0.35    (7.6.6)
For 88 catchments (2.5 to 16,400 km2), covering both coastal and inland areas of
Queensland, with parameters adjusted to m = 0.80, Weeks (1986) obtained:
Kc = 0.88 A^0.53    (7.6.7)
Although Equation (7.6.2) to Equation (7.6.8) appear to be quite different, when plotted
together, with each relationship covering its range of catchment sizes, they conform to a
general trend and can be viewed as different samples from the population of Queensland
catchments. The relationship of Weeks (1986), Equation (7.6.8), is a good average
to all relationships and is recommended. Weeks (1986) also investigated possible variations
of Kc within the various regions of the study, and also any effects of catchment slope, but no
relationships were found.
Kc = 0.88 A^0.53    (7.6.8)
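As a simple illustration of applying a regional relationship, the short sketch below evaluates Equation (7.6.8) for an assumed catchment area. The area is hypothetical and the result is only an initial regional estimate, not a substitute for calibration.

```python
def kc_queensland(area_km2):
    """Recommended Queensland relationship (Weeks, 1986), Equation (7.6.8): Kc = 0.88 A^0.53."""
    return 0.88 * area_km2 ** 0.53

print(round(kc_queensland(500.0), 1))   # roughly 24 for a hypothetical 500 km^2 catchment
```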
6.2.1.2. New South Wales
Kc = 1.09 A^0.45    (7.6.9)
No regional trends were apparent, except for some lower values of Kc in the Upper Hunter
valley. Addition of slope to the regressions did not improve the fitted relationships
appreciably. Walsh and Pilgrim (1993) added to the data of Kleemola (1987) and derived
relationships for 46 catchments (0.1 to 13,000 km2). Relationships were derived using area
A, stream length L and stream length divided by slope (L/S0.5). The fit of these various
relationships to the data were similar, and a relationship involving area A was considered to
be the most logical one to adopt. The relationships were:
West of Great Dividing Range, upper western slopes and tablelands (12 catchments, 100 to
4770 km2)
Kc = 0.80 A^0.51    (7.6.10)
East of Great Dividing Range:
Kc = 1.18 A^0.47    (7.6.11)
Since the relationships are very similar, a combined relationship was derived for all 46
catchments:
NSW catchments
Kc = 1.18 A^0.46    (7.6.12)
Walsh and Pilgrim (1993) found that most catchments had values of m in the range 0.75 to
1.0 and adopted a fixed value of m = 0.8 for all catchments. No trends for Kc to vary with
event size were evident, indicating that the nonlinearity was adequately described by
adopting m = 0.8. Weighted average and direct average Kc values were calculated from all
events on each catchment, with little difference being apparent.
When Equation (7.6.9) to Equation (7.6.12) are plotted to cover their range of catchment
sizes, all equations are very similar, and Equation (7.6.12) is recommended for
catchments both east and west of the Great Dividing Range.
Equation (7.6.13) is recommended for catchments both east and west of the
Great Dividing Range.
NSW catchments
Kc = 1.18 A^0.46    (7.6.13)
6.2.1.3. Victoria
Regional relationships have been developed by Morris (1982) and Hansen et al. (1986).
Morris (1982) developed relationships for 16 catchments (20 to 1924 km2) with m = 0.75:
Kc = 1.37 A^0.59    (7.6.14)
Region with mean annual rainfall greater than 800 mm (19 catchments, 38 to 3910
km2, mainly the eastern part of Victoria):
Kc = 2.57 A^0.45    (7.6.15)
Region with mean annual rainfall less than 800 mm (21 catchments, 20 to 1924
km2, mainly the western part of Victoria):
Kc = 0.49 A^0.65    (7.6.16)
The relationships of Morris (1982) and Hansen et al. (1986) for RF > 800 mm are reasonably
consistent, while the Kc values for the drier part of the state are somewhat lower, particularly
for the smaller catchments. Comparing the Hansen et al. (1986) relationships for the eastern
and western parts of Victoria, predicted Kc values are similar for catchments greater than
about 2,000 km2, but the eastern region values are approximately double for catchment
areas near to 100 km2.
Kc = 0.60 A^0.67    (7.6.17)
Kc = 1.09 A^0.51    (7.6.18)
For the northern and western regions, corresponding to zone 5 of the ARR design storm
temporal patterns, Australian Rainfall and Runoff (Pilgrim, 1987) recommended for flat to
undulating country:
Kc = Coeff. × A^0.57    (7.6.19)
where the Coefficient ranges from 1.2 to 1.7 for equal area stream slopes ranging from 1.0
to 0.2%.
For the northern and western regions with undulating to steep country (slopes greater than 1%), ARR 1987 (Pilgrim, 1987) recommended the relationship for the wheatbelt, north-west and Kimberley regions of Western Australia (Equation (7.6.25)). However, the more recent
relations developed by Kemp (1993) are now recommended for these arid regions.
Kemp (1993) derived relationships for low and high rainfall areas, using m = 0.8.
For average annual rainfall RF less than 320 mm (7 catchments, 170 to 6020
km2):
Kc = 7.06 A^0.71 (RF/1000)^2.79    (7.6.20)
For average annual rainfall greater than 500 mm (17 catchments, 5 to 690 km2):
Kc = 0.89 A^0.55    (7.6.21)
For the higher rainfall south-east region near Adelaide, there is good agreement between
Equation (7.6.17), Equation (7.6.18) and Equation (7.6.21). Equation (7.6.20) for areas with
annual rainfall less than 320 mm also agrees with these equations for RF near to 320, the
top of the applicable range. For the drier interior of the state, Equation (7.6.19) predicts
higher Kc values, while Equation (7.6.20) predicts lower Kc values. Since the Kemp (1993)
study is the most extensive, Equation (7.6.20) and Equation (7.6.21) are recommended for
South Australia, but with the note that Equation (7.6.20) predicts quite low Kc values for the
drier interior of the state.
Kc = 1.23 A^0.91    (7.6.22)
The nonlinearity parameter m was also calibrated on each catchment, overall being near to
m = 0.75. Kc values for Western Australia were found to be considerably higher than for the
eastern states, which they attributed to the observed more sluggish response to rainfall of
these catchments. Morris (1982) for 24 catchments (28 to 5950 km2), using m = 0.80
derived:
Kc = 2.48 A^0.47    (7.6.23)
Flavell et al. (1983) derived relationships for 52 catchments (5 to 6526 km2) in 4 regions of
the state. A non-linearity parameter of m = 0.8 was found to give best results for the south-
west, and was adopted for the entire state. Variables used in the regressions were
catchment area A, main stream length L, main stream equal area slope Se (m/km), and
percentage of land cleared. Generally, regressions involving stream length L were better
than those using area A. For the south west (26 catchments, 29 to 3870 km2) relations for
sub regions with different soil types were similar.
The following equation is recommended for all jarrah forest catchments in the south
west:
Kc = 1.49 L^0.91    (7.6.24)
Relationships for the wheatbelt, north-west, and Kimberley regions were similar and the
following is recommended, based on 26 catchments (5 to 6526 km2) :
Kc = 1.06 L^0.87 Se^-0.46    (7.6.25)
L was converted to A through the relationship established by Boyd and Bodhinayake (2005)
to allow Equation (7.6.24) and Equation (7.6.25) to be plotted against catchment area A. The
relationships of Morris (1982) and Flavell et al. (1983) for the south west region are very
similar and Equation (7.6.24) is recommended. Equation (7.6.25) for the wheatbelt, north-
west and arid regions predicts Kc values which are considerably lower than for the south-
west region. The earlier relation of Weeks and Stewart (1978) was based on only
six catchments and predicts Kc values which are considerably higher than those of Flavell et
al. (1983), and is not recommended.
For the humid zone, north of latitude 15° S, with equal area slope Se in m/km the
following equation is recommended:
Kc = 1.8 (L/Se^0.5)^0.55    (7.6.26)
Kc = 0.35 A^0.64    (7.6.27)
Equation (7.6.26) and Equation (7.6.27) are similar for catchments greater than 2500 km2,
but with smaller Kc values predicted for smaller catchments in the transition zone compared
to the humid zone.
For the arid zone, south of latitude 17.5° S, ARR 1987 (Pilgrim, 1987) recommended the
relationship for the northern and western regions of South Australia (Equation (7.6.19)) be
used. Equation (7.6.19) predicts higher Kc values than both the humid and transition zones,
and as discussed below, is not recommended. Equation (7.6.25), for the wheatbelt, north-
west, Kimberley and arid interior of Western Australia, lies within the range of values for the
drier interior of South Australia (Equation (7.6.20)) and is close to Kc values derived by Board
et al. (1989) for two catchments in the arid zone near to Alice Springs.
Therefore Equation (7.6.25) is recommended for the arid interior of the Northern Territory.
Predicted Kc values for the arid regions of South Australia (Equation (7.6.20)), Western
Australia (Equation (7.6.25)) and Victoria (Equation (7.6.16)) are lower than Kc values for the
higher rainfall areas of these states. Similar trends for lower Kc values in lower rainfall
regions have been found by Yu (1990) and Kemp (1993). Equation (7.6.19) appears to be an
anomaly since it predicts higher Kc values for the arid zone of the Northern Territory
compared to the humid and transition zones.
Equation (7.6.25) for the wheatbelt, north-west and Kimberley region of Western Australia
lies within the range of Kc values predicted by Equation (7.6.20) for the arid zone of South
Australia. The data of Board et al. (1989) also agrees with equation Equation (7.6.25).
6.2.1.8. Tasmania
Morris (1982) developed the following relation for 17 catchments (63 to 1780 km2) using m =
0.75:
Kc = 4.86 A^0.32    (7.6.28)
Australian Rainfall and Runoff (Pilgrim, 1987) presents a relation developed by the
Tasmanian Hydro Electric Commission for western Tasmania, with m = 0.75:
Kc = 0.86 A^0.57    (7.6.29)
Equation (7.6.28) and Equation (7.6.29) are in good agreement for catchments near to 1000
km2 but Equation (7.6.28) predicts larger Kc values for smaller catchments.
Given the correlation between Kc and measures of area and stream length, the area-standardised lag parameter Kc/dav
should be essentially independent of the catchment size. The area-standardised lag
parameter can then be seen as analogous to the lag parameters C, B and β in the WBNM,
RAFTS and URBS models respectively.
Yu (1990) found that Kc/dav increased as mean annual rainfall increased in Victoria (30
catchments) and Western Australia (51 catchments), but not in New South Wales,
Queensland or the Timor sea region of the Northern Territory (41 catchments in total). For all
122 catchments, the average value of Kc/dav was found to be 1.09. Kemp (1993) found a
similar increase of Kc/A^0.57 as mean annual rainfall increased for South Australia, Victoria, Western Australia and the Alice Springs region of the Northern Territory. The effect appears
to be more pronounced in the drier winter rainfall regimes.
Pearse et al. (2002) combined the data of Hansen et al. (1986), Dyer et al. (1995) and Yu
(1990), for more than 220 catchments in Queensland, New South Wales, Victoria, Tasmania
and Western Australia. The non-linearity parameter was set at m = 0.8. The mean value of
Kc/dav was found to range between 0.96 and 1.25, depending on the particular region. The
results of Yu (1990) and Pearse et al. (2002) allow Kc to be estimated for any catchment by
first calculating the average flow distance dav, then multiplying it by the appropriate area-
standardised lag parameter value.
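A minimal sketch of this two-step estimate is given below: the average flow distance dav is taken here as the subarea-area-weighted mean distance from subarea centroids to the outlet, and is then multiplied by an adopted Kc/dav ratio. The subarea areas, distances and the ratio of 1.1 are illustrative assumptions.

```python
def kc_from_average_flow_distance(subarea_km2, flow_distance_km, kc_over_dav=1.1):
    """Estimate Kc as (Kc/dav) * dav, with dav an area-weighted average flow distance."""
    total_area = sum(subarea_km2)
    dav = sum(a * d for a, d in zip(subarea_km2, flow_distance_km)) / total_area
    return kc_over_dav * dav, dav

kc, dav = kc_from_average_flow_distance(
    subarea_km2=[12.0, 20.0, 8.0, 15.0],        # hypothetical subareas
    flow_distance_km=[6.0, 11.0, 15.0, 19.0])   # centroid-to-outlet distances
print(round(dav, 1), round(kc, 1))
```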
These results also indicate that many of the regional relations across Australia (Equation (7.6.2) to Equation (7.6.29)) could be fitted by either a single relation or a small number of
similar relations. For example, the data of Hansen et al. (1986) for 30 catchments (20 to
3910 km2) produces the following relation between dav (km) and A (km2):
Combining Equation (7.6.30) with the range of Kc/dav values from 0.96 to 1.25
produces a general relation:
Kc = Coeff. × A^0.54    (7.6.31)
Equation (7.6.31) can be plotted for the mid-range coefficient value 1.08, together with the
recommended relationships for the various regions of Australia, and is seen to lie in the
middle of these relationships.
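To illustrate how these regional relations might be applied in practice, the following is a minimal sketch (in Python, not part of ARR) of the two approaches described above: the general area relation of Equation (7.6.31) and the area-standardised ratio Kc/dav. The function names and the example catchment values are illustrative only.

```python
# Illustrative sketch: estimating the RORB lag parameter Kc either from the
# general relation Kc = coeff * A**0.54 (Equation (7.6.31)), or from the
# average flow distance dav multiplied by an area-standardised ratio Kc/dav
# (after Yu (1990) and Pearse et al. (2002)).

def kc_from_area(area_km2, coeff=1.08):
    """General relation, Equation (7.6.31); coeff of 1.08 is the mid-range value."""
    return coeff * area_km2 ** 0.54

def kc_from_dav(dav_km, kc_over_dav=1.1):
    """Kc from the average flow distance dav; Kc/dav typically 0.96 to 1.25."""
    return kc_over_dav * dav_km

if __name__ == "__main__":
    area = 500.0  # km2, hypothetical catchment
    print(f"Kc from area relation:  {kc_from_area(area):.1f}")
    print(f"Kc from dav of 30 km:   {kc_from_dav(30.0):.1f}")
```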
Parameter values have been derived for WBNM by Boyd et al. (1979), Boyd et al. (2002),
Sobinoff et al. (1983), Bodhinayake (2004) and Boyd and Bodhinayake (2005). Plots of
parameter C against catchment area A have shown no trend for C to vary with catchment size, indicating that the power of area A is satisfactory. The lack of dependence between the
area-standardised form of the lag parameter in RORB and catchment area has also been
noted by Pearse et al. (2002). Additionally, plots of C against the peak discharge of the
recorded flood have shown no trend for C to vary with flood size, indicating that the non-
linearity parameter m = 0.77 is satisfactory.
Values of the lag parameter C calibrated for south and eastern Australia are:
For 207 storms on 17 coastal catchments in Queensland (164 to 7300 km2), ranging from
the North Johnstone to the Mary River, Bodhinayake (2004) obtained a mean value of
parameter C of 1.47.
For ten catchments in the coastal region of NSW (0.4 to 250 km2), Boyd et al. (1979)
obtained a mean lag parameter C of 1.68. For 17 catchments (0.1 to 800 km2) in the
Newcastle, Sydney-Wollongong region Sobinoff et al. (1983) obtained a mean C of 1.16.
Recent calibration of WBNM for 205 storms on 19 coastal catchments of NSW (0.2 to 6910
km2) by Boyd and Bodhinayake (2005) obtained a mean C of 1.74.
For 59 storms on six catchments in Victoria on the coastal side of the Great Dividing Range, ranging from Bairnsdale to Ballarat (0.1 to 153 km2) plus 45 storms on four catchments inland of the Great Dividing Range near Healesville and Stawell (63 to 259 km2), Boyd and Bodhinayake (2005) obtained a mean value of C = 1.74.
For 90 storms on eight catchments in the Adelaide Hills near to Adelaide (4 to 176 km2),
Boyd and Bodhinayake (2005) obtained a mean value of C = 1.64.
The small range of these mean parameter values corresponds to the similarly small range of the area-standardised parameter Kc/dav in RORB found by Pearse et al. (2002).
With no strong regional trends being apparent, and no strong relationships between
parameter C and catchment or storm characteristics, an overall mean value of
parameter C = 1.60 is recommended for these states of Australia.
Measures of stream length, such as L and dav (Equation (7.6.30)), are strongly related to catchment area, and it is reasonable to assume that stream segment lengths are also strongly correlated with subcatchment areas. Replacing the L_i term by A^{0.55}, it is seen that parameter C of WBNM should be directly proportional to Kc/dav of RORB. From the previous sections the average value of C is close to 1.60 and the average value of Kc/dav is 1.1 (range 0.96 to 1.25). Therefore a relationship between these two parameters is:

C ≈ 1.45 Kc/dav
Similar analysis indicates that parameter β of URBS should be directly proportional to Kc/dav. For RAFTS the proportionality coefficient should be related to Kc/dav but with an adjustment required for the slope term S.
It should be noted that the correspondence between the area-standardised lag parameters of the various models depends slightly on the power to which area A is raised, as well as the non-linearity parameter m; however, these are not too dissimilar in the four models. The
particular ratio between the parameters will depend on the way in which the lag parameter is
incorporated into flood routing in the particular modelling platforms, as well as the method
adopted for stream channel routing. The ratio 1.45 between RORB and WBNM will not apply
to RAFTS and URBS.
Increased runoff volumes from paved surfaces result from decreased rainfall losses.
The decrease in lag time can be attributed to replacement of vegetated overland flow
surfaces and natural stream channels by more hydraulically efficient paved surfaces, gutters,
pipes and channels. Ratios of lag times in urban compared to otherwise equivalent natural
catchments typically range from 0.1 to 0.5 (Cordery, 1976b; Codner et al., 1988; Mein and
Goyen, 1988; Boyd et al., 1999; Boyd et al., 2002). Decreases in lag time have been related
to the fraction of the catchment which is urbanised by Aitken (1975) and NERC (1975).
Other studies by Rao et al. (1972), Crouch and Mein (1978), Desbordes et al. (1978),
Schaake et al. (1967) and Espey et al. (1977) relate lag time to the impervious fraction. A
survey of these relations is given by Boyd et al. (1999) and Boyd et al. (2002).
All of these studies show a decrease in lag time as the catchment becomes more urbanised.
Typically, the lag reduction is expressed in terms of the urban fraction U in the form (1+U)^z, where z ranges between -1.7 and -2.7, with an average near to -2.0. Equation 5.3.4.19 adopts z = -1.97 for RAFTS, while Equation 5.3.4.20 adopts z = -2.0 for URBS. A value of z = -2.0 in this relation produces a lag ratio of 0.25 for a fully urbanised catchment.
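A minimal sketch of this lag reduction factor, using the exponent values quoted above, is given below; the function name is illustrative only.

```python
# Illustrative sketch: reduction in catchment lag with urbanisation, expressed
# as the factor (1 + U)**z, where U is the urban fraction and z a negative
# exponent (around -2.0 on average; -1.97 is used for RAFTS, -2.0 for URBS).

def urban_lag_ratio(urban_fraction, z=-2.0):
    """Ratio of urbanised to equivalent natural catchment lag time."""
    return (1.0 + urban_fraction) ** z

for U in (0.0, 0.5, 1.0):
    print(f"U = {U:.1f}: lag ratio = {urban_lag_ratio(U):.2f}")
# A fully urbanised catchment (U = 1.0) gives a lag ratio of 0.25 for z = -2.0.
```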
The urban fraction U alone often does not fully describe the state of urbanisation, since a 100% urban catchment can be residential, with typically 30% impervious surfaces, or it can be a high density commercial centre with close to 100% impervious surfaces. Typical relationships between the impervious and the urban fraction are given by Boyd et al. (1993) and Boyd et al. (2002). The RAFTS model accounts for this by allocating an equivalent urban fraction U to each level of impervious fraction. For a fully impervious surface, this produces a lag ratio of 0.11.
Urban catchments have other features which need to be considered when setting up a
model. During large storms flows may be diverted out of the catchment’s stream network to
form new overland flow routes. This can happen particularly when culvert or bridge openings
become blocked by debris. The model should be set up to reflect these alternative flow
paths. Another feature requiring consideration is that runoff from small development sites, particularly where onsite detention storage is used to reduce flood peaks, will require routing calculations at small time steps and with small discharges. The stream network runoff-routing models currently used in Australia all have these capabilities.
6.4. References
Aitken, A.P. (1975), Hydrologic investigation and design of urban stormwater drainage
systems, AWRC Technical Paper No. 10, Canberra: Australian Government Publishing
Service.
Board, R.J., Barlow, F.T.H. and Lawrence, J.R. (1989), Flood and Yield Estimation in the Arid
Zone of the Northern Territory. Hydrology and Water Resources Symposium 1989.
Christchurch, New Zealand, pp: 367-371
Boyd, M.J. and Bodhinayake, N.D. (2006), WBNM runoff routing parameters for south and
eastern Australia, Australian Journal of Water Resources, 10(1), 35-48.
Boyd, M.J., Pilgrim, D.H. and Cordery I. (1979), A storage routing model based on basin
geomorphology, Journal of Hydrology, 42: 209-230.
Boyd, M.J., Rigby, E.H., VanDrie, R. and Schymitzek, I. (2002), WBNM 2000 for flood studies on natural and urban catchments, In 'Mathematical Models of Small Watershed Hydrology and Applications', pp: 225-258.
Boyd M.J., Rigby E.H. and van Drie, R. (1999), Modelling urban catchments with
WBNM2000, Proceedings of the Water 99 Conference, Institution of Engineers Australia,
Brisbane.
Boyd, M.J., Bufill, M.C. and Knee, R.M. (1993), Pervious and impervious runoff in urban
catchments. Hydrological Sciences Journal, 38(6), 463-478.
Codner, G.P., Laurenson, E.M. and Mein, R.G. (1988), Hydrologic Effects of Urbanisation: A
case study. Hydrology and Water Resources Symposium, ANU Canberra, 1-3 Feb 1988,
Institution of Engineers Australia, pp: 201-205.
Cordery, I., Pilgrim, D.H. and Baron, B.C. (1981), Validity of use of small catchment research results for large basins. Instn. Engrs. Australia, Civil Engg. Trans., CE23: 131-137.
Crouch, G.I. and Mein R.G. (1978), Application of the Laurenson Runoff Routing Model to
Urban Areas, I.E. Aust. Hydrology Symposium, Canberra, pp: 70-74.
Desbordes, M. (1978), Urban runoff and design storm modelling, Proceedings of the First
International Conference on Urban Storm Drainage.
Dyer, B., Nathan, R.J., McMahon, T.A. and O'Neill, I.C. (1993), A cautionary note on
modelling baseflow in RORB. I.E. Aust. Civil Engin. Trans., CE35(4), 337-340.
Dyer, B.G., Nathan, R.J., McMahon, T.A. and O’Neill, I.C. (1995), Prediction Equations for
the RORB Parameter kc Based on Catchment Characteristics, Australian Journal of Water
Resources, 1(1), 29-38.
Espey, W.H., Altman, D.G. and Graves, C.B. (1977), Nomograph for 10 minute unit hydrographs for urban watersheds, Technical Memo No.32, American Society of Civil Engineers, New York.
Flavell, D.J., Belstead, B.S., Chivers, B. and Walker, M.C. (1983), Runoff routing model parameters for catchments in Western Australia, Hydrology and Water Resources Symposium, Institution of Engineers Australia, Hobart.
Grayson, R.B., Argent, R.M., Nathan, R.J., McMahon, T.A. and Mein, R.G. (1996), Hydrological recipes - Estimation techniques in Australian Hydrology, Cooperative Research Centre for Catchment Hydrology, Monash University, pp.125.
Hairsine, P.B., Ciesiolka, C.A.A., Marshall J.P. and Smith, R.J. (1983), Runoff routing
parameter evaluation for small agricultural catchments, Proceedings of the Hydrology and
Water Resources Symposium, Publication 83/13, I.E. Aust. pp: 33-37.
Hansen, W.R., Reed, G.A. and Weinmann, P.E. (1986), Runoff routing parameters for Victorian catchments, Proceedings of the Hydrology and Water Resources Symposium, Publication 86/13, Griffith University, pp: 192-197.
Kemp, D.J. (1993), Generalised RORB parameters for Southern, Central and Western
Australia, WATERCOMP, The Institution of Engineers, Melbourne.
Knee, R.M. and Bresnan, S. (1993), Urban Stormwater Design Methods in the ACT, National Conference Publication, Institution of Engineers Australia, p: 445.
Kneen, T.H. (1982), A runoff routing model for the Yarra river to Warrandyte. Instn. Engrs.
Australia, 14 Hydrology and Water Resources Symposium NCP82/3, pp: 115-119.
Loy, A. and Pilgrim, D.H. (1989), Effects of data errors on flood estimates using two runoff
routing models. Instn. Engrs. Australia. 19 Hydrology and Water Resources Symposium,
NCP 89/19, pp: 176-182.
Maguire, J.C., Kotwicki, V., Purton, C.M. and Schalk, K.S. (1986), Estimation of the RORB parameter Kc for small South Australian catchments. Engineering and Water Supply Department, Adelaide, Jan. 1986, E & WS Library Ref. 86/2, 17 pp.
McMahon, G.M. and Muller, D.K. (1983), Calibration strategies for non-linear runoff-routing
models. Instn. Engrs. Australia, 15 Hydrology and Water Resources Symposium, NCP
83/13, pp: 129-135.
McMahon, G.M. and Muller, D.K. (1986), The application of the peak flow parameter
indifference curve technique with ungauged catchments. Instn. Engrs. Australia, 17 Hydrol.
and Water Resources Symposium, NCP 86/13, pp: 186-191.
Mein, R.G. and Goyen, A.G. (1988), Urban runoff. Civil Engineering Transactions, 30:
225-238.
Morris, W.A. (1982), Runoff routing model parameter evaluation for ungauged catchments.
Instn. Engrs. Australia, 14 Hydrology and Water Resources Symposium, NCP 82/3, pp.
110-114.
Natural Environment Research Council (NERC; 1975), Flood Studies Report, 5 Vols, Natural
Environment Research Council, Institute of Hydrology, Wallingford, UK.
Netchaef, P., Wood, B. and Franklin, R. (1985), RORB parameters for best fit – catchments
in the Pilbara region. Instn. Engrs. Australia, 16 Hydrology and Water Resources
Symposium NCP85/2, pp: 53-57.
Pearse, M., Jordan, P. and Collins, Y. (2002), A simple method for estimating RORB model parameters for ungauged rural catchments. Instn. Engrs. Australia, 27th Hydrology and Water Resources Symposium, CD-ROM, 7 pp.
Perera, B.J.C. (2000), Evaluation of post-ARR87 flood estimation inputs for ungauged rural
catchments in Victoria. Australian Jour. Water Resources, 4(2), 99-110.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation, Institution of Engineers, Australia, Barton, ACT.
Rao, A.R., Delleur, J.W. and Sarma, P.B.S. (1972), Conceptual hydrologic models for
urbanizing basins. ASCE Jour. Hydraulic Engg., 98(7), 1205-1220.
Schaake, J.C., Geyer, J.C. and Knapp, J.C. (1967), Experimental examination of the
Rational method. ASCE Jour. Hydraulics Div., 93(6), 353-370.
Sobinoff, P., Pola, J.P. and O'Loughlin, G.G. (1983), Runoff routing parameters for the
Newcastle-Sydney-Wollongong region. Instn. Engrs. Australia, 15 Hydrology and Water
Resources Symposium, NCP 83/13, pp: 28-32.
Titmarsh, G.W. and Cordery, I. (1991), An examination of linear and non-linear flood estimation methods for small agricultural catchments, Australian Civil Engineering Transactions, 33(4), 233-242.
Walsh, M.A. and Pilgrim, D.H. (1993), Re-assessment of some design parameters for flood
estimation in New South Wales. Instn. Engrs. Australia. 21 Hydrology and Water Resources
Symposium, NCP93/14, pp: 251-256.
Weeks, W.D. (1986), Flood estimation by runoff routing - model applications in Queensland.
Instn. Engrs. Australia, Civil Engg. Trans., CE28(2), 159-166.
Weeks, W.D. and Stewart, B.A. (1978), Linear and non-linear runoff routing for ungauged catchments. Instn. Engrs. Australia, Hydrology Symposium, NCP 78/9, pp: 124-128.
Wong, T.H.F. (1989), Nonlinearity in catchment flood response. Instn. Engrs. Australia, Civil
Engg. Trans., CE31(1), 30-38.
Yu, B. (1990), Regional relationships for the runoff routing model RORB revisited. Instn.
Engrs. Australia, Civil Engg. Trans. CE31(4), 186-191.
Chapter 7. Validation and Sensitivity
Testing
William Weeks, Erwin Weinmann
7.1. Validation
After the catchment modelling system calibration has been finalised, the final step in
acceptance of the model is the validation process. The calibration has resulted in model
parameters that are suitable for application to the design problem, but validation provides a
means of ensuring that the parameters are suitable and that the catchment modelling
system can be applied to the design problem required. The validation process is therefore a
confirmation that the calibrated catchment modelling system is fit for the required purpose.
Validation can be associated with independent verification of the model parameters. In this
process, the calibrated catchment modelling system is tested with an independent data set
that was not used in the parameter estimation process. While this does provide additional
confirmation that the catchment modelling system is performing adequately, calibrations are
usually very limited in the availability of data, and there are usually insufficient events to
allow this independent assessment.
Validation therefore is a careful review of the catchment modelling system and its application
to the problem at hand, so must consider the suitability of both the catchment modelling
system and the parameters.
The first step is to review whether the catchment modelling system applied is appropriate for
the application required. The questions are as follows.
• Does the model include sufficient detail in the spatial coverage of flooding?
• Does the model represent the flooding behaviour with sufficient accuracy to answer the required questions?
• Can the model be extrapolated accurately to rarer (or sometimes smaller) floods from the
flood magnitudes used to establish it?
• Can the model be used to represent the range of design conditions (such as developed
conditions or flood mitigation options) that are required in the design applications?
In addition to the review of the model and calibration, additional validation can be considered
by reconciling the model performance with an alternate independent estimate. For example,
for hydrology calculations, rainfall based methods can be reconciled with streamflow based
methods or two alternative models may be calibrated separately and the results compared.
A special form of validation is for hydrologic models that are used to estimate probability
based design flood characteristics. In these cases the main performance criterion for the
model and the adopted parameter set is that the model is able to transform the probability based design inputs (design rainfalls, design losses and baseflows) into probability based flood outputs (flood hydrographs and flood levels) without introducing any probability bias, i.e. it is probability neutral. Here the validation has to be against independent design flood estimates, e.g. from the flood frequency estimation procedures covered in Book 5.
For models that are well-calibrated to a range of flood events and later verified, considerable
confidence can be had in the model’s ability to reproduce accurate flood levels. This in turn
means that factors of safety such as the design freeboard applied to flood planning levels
can be kept to a minimum.
However, for uncalibrated or poorly calibrated models less confidence can be had in the
model’s accuracy, and greater factors of safety (e.g. larger freeboards) should be applied to
reflect the greater uncertainty (further discussion on uncertainty can be found in Book 7,
Chapter 9). To quantify these uncertainties, sensitivity testing could be carried out where a model's calibration is non-existent or poor. Sensitivity tests could include the following:
• For downstream boundaries, not at a receiving water body such as the ocean, vary the
stage discharge or water level upwards to check that the water levels in the area of
interest are not greatly affected;
• Apply blockages and greater losses to hydraulic structures and inlets; and
• Make the model's resolution finer to check that results do not demonstrably change.
Sensitivity testing is also a very important part of developing a modeller’s knowledge base
and should be encouraged wherever possible.
After a few weeks of pulling their hair out trying to calibrate to a well-defined flood mark
in a house (the model was calibrating well elsewhere), the modellers called the owner
of the house. After chatting for a while the owner suddenly remembered “my Dad had
the house raised after that flood”. Once the flood mark was adjusted by how much the
house was raised, a good calibration was revealed! The modellers regretted not
making that call a few weeks earlier…
Chapter 8. Application to Design
William Weeks, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
8.1. Overview
Once the practitioner is satisfied with the calibration and validation, the next step in the
application of a catchment modelling system is to apply it to the design problem. Australian
Rainfall and Runoff is principally concerned with design flood estimation problems where
floods of defined probabilities are required, but other applications are required for flood
forecasting and warning or for assessment of impacts. A concern is that the calibration and
validation processes concentrate on recorded historic flood events, whereas the design
applications are more theoretical probabilistic events.
As discussed in Book 7, Chapter 7, the parameters selected for application to the design
conditions must be appropriate for the required application as well as consistent with the
calibration to the available historic data.
There are therefore three conditions where model parameters may be required:
• Historic Floods - These are the floods where the data has been used to estimate
parameters and to validate the models. Where there is more than one flood event, there
may be a variety of conditions represented, with different spatial and temporal rainfall
distributions possible. The flood events will sample a limited range of conditions that have
applied during the period when data could be collected and these may not necessarily be
representative of the conditions where the model must be applied. In addition, catchment
conditions may have changed since the historic flood event occurred. Historic floods may
also be analysed in the “design” application of models where flood impacts may be
required to assess how development would have affected a historic flood for example.
• Design Applications - This is the main application of the models discussed in Australian Rainfall and Runoff, where results are required for floods of defined probability. This is a more theoretical application than the analysis of historic floods, and the parameters need to be established to ensure that the probability is calculated correctly.
It is likely that the probabilistic floods calculated will be larger than the historic floods used
to estimate the model parameters. The probabilistic design flood estimates must consider
the relevant requirements. In some cases, flood peaks may be the only requirement, while
flood hydrographs or flood volumes may be relevant at other times. There are different
issues for each requirement.
• Real Time Flood Estimation - This is the requirement to use the model for a flood
forecasting and warning application. This is different from the design application since
timing is critical and the parameters must be available to carry out the analysis as the
event is occurring. This is a far more complex application than the design situation, and
while similar conditions apply in model application, this chapter concentrates on the design
conditions.
This chapter concentrates on the probabilistic applications, though there are some
similarities with the others.
A common assumption is that the model parameters calibrated on relatively frequent events
remain constant for all rarer events. This may or may not be correct, so this assumption should be checked with regard to the representativeness of the calibration floods and to the model processes, considering whether there is a change in response for rarer flood events.
In general constant parameters are recommended for the range of design events
unless there is some evidence otherwise.
During calibration the practitioner must carefully review the properties of the historic floods
used for calibration and determine how appropriate these are to be applied to the design
problem. For example, the available calibration floods may be localised on a part of the
catchment while the design flood event should be a more widely distributed event. On larger catchments, floods may usually be produced from only part of the catchment, and the actual contributing section may vary from one event to another. The design case must therefore allow for the different catchment properties while estimating the probabilistic floods correctly.
Some of these issues are discussed in Book 4 but there may be an impact on the transfer of
the model parameters from the historic calibration events to the design flood events.
The calibrated parameters are for the situation/time when the calibration event occurred.
However, there may be significant changes from one event to another. For example, in
agricultural regions, the pattern of cropping may be different from one event to another.
These varying catchment conditions may be considered in the individual calibration events,
but they then need to be generalised for the design application. There are questions
concerning how this is implemented. For example, sugar cane agriculture has areas of very
high floodplain roughness in some locations and areas of fallow ground in other parts. These
patterns vary from year to year and are difficult to determine for historic events. There is a
question of the “average” conditions that should apply for the design application. A common
approach is to adopt an average value of two very different conditions and then carry out
some sensitivity tests to assess the impact of changes in the pattern of agriculture.
Similarly in arid areas, the antecedent conditions may have a major impact on catchment
conditions for individual flood events, but these conditions then need to be represented in
the design situation.
It is therefore important to confirm the model performance with probabilistic results. For
situations where sufficient streamflow gauging is available, the model parameters can be
confirmed using the Flood Frequency Analysis results to confirm that the model is
representing the probabilistic flood discharges. Where there are insufficient streamflow records for a Flood Frequency Analysis, the model can be cross-checked with regional flood frequency results (Book 3, Chapter 3). Similarly, the model output can be confirmed
with other anecdotal data, to confirm that the parameters are appropriate for the design
application.
When applying hydrologic and hydraulic models to design situations, there are additional
details that add complexity to the process. Often the historic floods are calibrated to the
conditions that apply when the flood event occurred, so there are set values for antecedent
conditions, losses, baseflow and the particular conditions that applied in the event, such as
spatial or temporal patterns of rainfall. These additional factors are often not a part of the
calibration process but must be incorporated into the design conditions.
Chapter 9. Uncertainty Determination
Rory Nathan, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
9.1. Introduction
An overview of the various sources of uncertainty relevant to flood estimation and their
treatment is provided in Book 1, Chapter 2. This guidance distinguishes between two broad
types of uncertainty, namely:

• aleatory uncertainty, which arises from natural variability; and
• epistemic uncertainty, which arises from a lack of knowledge of the system being modelled.
Practical procedures for dealing with aleatory uncertainty are provided in Book 4, Chapter 4,
whereas the focus of this chapter is on the assessment of epistemic uncertainty. Procedures
for dealing with epistemic uncertainty for some specific methods are described elsewhere in
ARR, and in particular it is worth noting the rigorous procedures provided for estimates of
peak discharges using flood frequency and regional flood prediction methods, as described
in Book 3.
It is perhaps a common view amongst practitioners that uncertainty analysis is too difficult to
undertake. It is certainly true that assessing uncertainty takes additional time and effort, but
there are uncertainty assessment frameworks with generic applicability to a range of
practical problems (e.g. Pappenberger and Beven (2006); Pappenberger et al. (2006);
Doherty (2016); Kuczera et al. (2006); Palisade Corporation (2015); Vrugt and Braak (2011)).
For those with the necessary skills and interests, it is reasonable to assume that the effort
required to become proficient with such tools will return benefits across a range of projects.
That said, it would appear that specialists who are comfortable with uncertainty analysis tend
to underestimate the depth of tacit knowledge required to implement and interpret such
schemes, and the entry hurdle for many practitioners is a material one. Regardless, at this
point in time it is acknowledged that the available hydrological and hydraulic models
commonly used in Australia do not include the capability to assess uncertainty. It is expected
that this situation will improve with time.
The intended audience for this Chapter is interested practitioners who do not have specialist training in the application of uncertainty techniques. The procedures described in
this Chapter are not intended to cover the steps required to estimate the true uncertainty
associated with input data, model parameters, and model structure. Rather, a small number
of practical procedures are presented in the hope that these will allow practitioners to better
understand (and communicate) the nature of uncertainty associated with selected key
aspects of the flood estimates provided.
Book 7, Chapter 9, Section 2 discusses the role of sensitivity analysis in the assessment of
uncertainty, and this is followed by a discussion (Book 7, Chapter 9, Section 3) of some
simple analytical approaches relevant to error propagation. Book 7, Chapter 9, Section 4
discusses the application of Monte Carlo methods that can be used to assess uncertainty.
Each method is supported by illustrative examples of their usage.
There are a variety of ways that the sensitivity of an outcome to uncertainties can be
represented, and two simple examples are shown in Figure 7.9.1. The tornado diagram
provides a simple summary of the sensitivity of an outcome to reasonable estimates of upper
and lower ranges, and the spider plot illustrates the dependence of the outcome on the
percentage deviation of the key parameters from their adopted values.
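As an illustration of how such sensitivity ranges might be generated, the sketch below evaluates a deliberately simplified, hypothetical model at the upper and lower ends of assumed parameter ranges, of the kind that could then be summarised in a tornado diagram. In practice the output values would come from the calibrated hydrologic or hydraulic model, and the parameter ranges from the judgements discussed below.

```python
# Illustrative sketch: one-at-a-time sensitivity ranges of the kind summarised
# in a tornado diagram. The "model" and the parameter ranges are hypothetical;
# in practice each output would come from a model run.

def flood_level(roughness, inflow_scale, ds_boundary):
    """Hypothetical stand-in for a model run returning a flood level (m)."""
    return 10.0 + 2.0 * roughness / 0.05 + 1.5 * inflow_scale + 0.3 * ds_boundary

base = {"roughness": 0.05, "inflow_scale": 1.0, "ds_boundary": 0.0}
ranges = {"roughness": (0.03, 0.08), "inflow_scale": (0.8, 1.2), "ds_boundary": (-0.3, 0.5)}

base_level = flood_level(**base)
for name, (lo, hi) in ranges.items():
    low = flood_level(**{**base, name: lo})
    high = flood_level(**{**base, name: hi})
    print(f"{name:>13}: {low - base_level:+.2f} m to {high - base_level:+.2f} m about the base case")
```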
Changes in model input values can affect model outputs in different ways, and any
dependency between inputs can mask the manner in which factors combine to influence the
outputs. In addition, the nature of the factors which most influence the outcome may well
vary with event magnitude. For example, the sensitivity to non-linearity in storage-routing
models is dependent on the degree to which estimates are extrapolated beyond the
magnitude of floods used for calibration; the reasonable range of estimates of roughness
parameters in a hydraulic model may vary with the depths of flow considered. Accordingly,
judgement needs to be used when selecting which factors to vary over a particular set of
conditions, and care is needed to ensure that the range of values considered takes account
of the possible dependencies.
Care also needs to be taken when considering the parameter ranges over which the
sensitivity is assessed. The upper and lower limits of parameter values considered should
reflect a similar range of notional uncertainty in each, otherwise misleading inferences may
be drawn about the sensitivity of their impact on the outcome. More details on the uses and
application of sensitivity analyses can be found in Loucks et al. (2005).
Errors in model inputs and parameters combine to produce errors in the model outputs. The first order error propagation method can be used in a similar fashion to sensitivity analysis to firstly identify the relative importance of different error sources and secondly to assess how the influence of different error sources changes with event magnitude and frequency.
For a function f of independent variables x, y, z, …, the error in f can be approximated by:

s_f^2 = (∂f/∂x)^2 s_x^2 + (∂f/∂y)^2 s_y^2 + (∂f/∂z)^2 s_z^2 + …  (7.9.1)
where sf and sx, sy, sz are respectively the standard deviations of the function and the
independent variables. This approximation assumes that the errors are normally distributed
and independent.
This error propagation equation can be used to gain an approximate indication of how errors
in the independent variables translate into errors in the estimation results. The following
example illustrates this and compares the errors in the estimates from three different
methods.
If an estimate of the flood volume at gauged site Y is available from a frequency analysis
of flood volumes, the flood volume (V) at ungauged site X can be estimated from a scaling
relationship:
V_x = V_y (A_x / A_y)^m = V_y R^m  (7.9.2)
where A denotes the area of each catchment, m is a scaling parameter, and the
subscripts refer to the individual catchments. Assuming that the estimate of the ratio of
the areas of the two catchments (R) is error free but that the volume estimate at site Y
(Vy) and the exponent of the scaling relationship (m) have errors sV and sm, respectively,
then the relative error in Vx can be calculated as:
(s_Vx / V_x)^2 = (s_Vy / V_y)^2 + (ln R)^2 s_m^2  (7.9.3)
Estimates of errors in the transposed flood volumes based on Equation (7.9.3) are
provided in Table 7.9.1 for a range of representative input errors and parameter values.
The results indicate that errors in the flood estimate at the gauged site are directly
transferred to the estimate at the ungauged site, so errors increase as AEP decreases.
Scaling to different catchment areas introduces little extra error as long as the catchment
areas differ by no more than about 20 to 30% and the exponent in the scaling equation
can be estimated to within about 10% accuracy. Scaling up and scaling down introduces
similar errors.
Table 7.9.1. Errors in Flood Volumes Estimated Using a Transposition Model for a Range of Assumptions.

AEP          R      m     Vx/Vy   sVy/Vy   sm/m   sVx/Vx
0.5 to 0.1   0.8    0.7   0.86    0.10     0.10   0.101
             0.8    0.9   0.82    0.10     0.10   0.102
             0.8    0.7   0.86    0.10     0.20   0.105
             0.5    0.7   0.62    0.10     0.10   0.111
             0.5    0.7   0.62    0.10     0.20   0.139
             1.25   0.7   1.17    0.10     0.10   0.101
             1.25   0.7   1.17    0.10     0.20   0.105
             2.0    0.7   1.62    0.10     0.10   0.111
             2.0    0.7   1.62    0.10     0.20   0.139
0.01         0.8    0.7   0.86    0.20     0.10   0.201
             0.5    0.7   0.62    0.20     0.10   0.206
             0.5    0.7   0.62    0.20     0.20   0.222
0.001        0.8    0.7   0.86    0.40     0.10   0.400
             0.5    0.7   0.62    0.40     0.10   0.403
             0.5    0.7   0.62    0.40     0.20   0.412
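The following sketch shows how Equation (7.9.3) could be evaluated; assuming the relative error quoted for the exponent applies to m itself, it reproduces one of the entries in Table 7.9.1.

```python
import math

# Illustrative check of Equation (7.9.3): relative error in the transposed
# flood volume Vx = Vy * R**m, assuming independent, normally distributed
# errors in Vy and in the exponent m.

def transposition_error(rel_err_vy, rel_err_m, R, m):
    s_m = rel_err_m * m                      # absolute error in the exponent
    return math.sqrt(rel_err_vy**2 + (math.log(R) * s_m) ** 2)

# Reproduces the row of Table 7.9.1 with R = 0.5, m = 0.7 and 10% input errors:
print(round(transposition_error(0.10, 0.10, R=0.5, m=0.7), 3))  # -> 0.111
```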
This method assumes that a certain percentage of the average design rainfall depth P
over the catchment for the given duration and AEP 1 in T is converted to a corresponding
flood volume at the catchment outlet. The flood volume can thus be computed as:

V_x = C A P  (7.9.4)

where C is the volumetric runoff coefficient and A is the catchment area.
Assuming that all the three variables have estimation errors associated with them, the
relative error in the estimated flood volume can be approximated as:
(s_Vx / V_x)^2 = (s_C / C)^2 + (s_A / A)^2 + (s_P / P)^2  (7.9.5)
where sVx and sC, sA, sP are respectively the standard deviations of the estimated volume
and the three independent variables used in the estimate. Estimates of errors in the flood
volumes based on Equation (7.9.5) are provided in Table 7.9.2 for a range of
representative input errors and parameter values. The results show that the relatively
large error in the volumetric runoff coefficient C dominates the error in the estimated flood
volume.
Table 7.9.2. Errors in Flood Volumes Estimated Using a Runoff Coefficient Model for a Range of Assumptions
The flood volume can also be estimated from a water balance equation for the catchment
over the duration of interest, in this case expressed in terms of design values for the
different terms in the equation, (average depths over the catchment area A, in mm):
V_x / A = I - L + BF  (7.9.6)
�
where I is the design event rainfall, L is the total loss and BF the total baseflow
contribution over the duration of the flood event. The loss and baseflow values are
assumed to be invariant with AEP.
The relative error in the estimated flood volume can then be calculated as:
s_Vx / V_x = √(s_I^2 + s_L^2 + s_BF^2) / (V_x / A)  (7.9.7)
Estimates of errors in the flood volumes based on Equation (7.9.7) are provided in Table 7.9.3 for a range of representative input errors and parameter values.
The results show that for frequent flood events the errors in the loss and baseflow values
play an important role in the flood volume estimates, which can have large errors, while
for very rare events the errors in estimated flood volumes are dominated by errors in the
design rainfalls.
Table 7.9.3. Errors in Flood Volumes Estimated Using a Water Balance Model for a Range of Assumptions
AEP      sI/I   sL/L   sBF/BF   sI (mm)   sL (mm)   sBF (mm)   sVx (mm)   sVx/Vx
0.5      0.2    0.3    0.4      13.0      12.0      8.0        19         0.43
0.5      0.1    0.2    0.3      6.5       8.0       6.0        12         0.27
0.1      0.2    0.3    0.4      20.0      12.0      8.0        25         0.31
0.1      0.1    0.2    0.3      10.0      8.0       6.0        14         0.18
0.01     0.2    0.3    0.4      30.0      12.0      8.0        33         0.26
0.01     0.1    0.2    0.3      15.0      8.0       6.0        18         0.14
0.0001   0.3    0.3    0.4      150.0     12.0      8.0        151        0.31
0.0001   0.2    0.2    0.3      100.0     8.0       6.0        100        0.21
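The following sketch evaluates Equation (7.9.7) for the first row of Table 7.9.3; the design rainfall, loss and baseflow depths used are assumptions back-calculated from the relative and absolute errors quoted in that row.

```python
import math

# Illustrative check of Equation (7.9.7): relative error of a flood volume
# estimated from the water balance Vx/A = I - L + BF (all depths in mm).
# The depths below are assumed values consistent with the first row of
# Table 7.9.3 (absolute errors of 13, 12 and 8 mm).

I, L, BF = 65.0, 40.0, 20.0          # assumed design rainfall, loss and baseflow (mm)
s_I, s_L, s_BF = 13.0, 12.0, 8.0     # absolute errors (mm)

vx_per_area = I - L + BF             # Equation (7.9.6)
s_vx = math.sqrt(s_I**2 + s_L**2 + s_BF**2)
print(round(s_vx), round(s_vx / vx_per_area, 2))   # -> 19 0.43
```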
The comparison of the error estimates from the three methods indicates that for relatively
frequent events the transposition model (A) using an estimate based on flood frequency
analysis performs best. Method (B) is dominated by relatively large errors in the runoff
coefficient for all flood event magnitudes and frequencies. Method (C) performs best for rare
to very rare events, where errors in the loss and baseflow play only a minor role.
A general framework for how Monte Carlo simulation may be used to assess uncertainty is
illustrated in Figure 7.9.2. The area of light-blue shading in this figure represents the main
elements used to consider the joint interaction of the factors that are subject to natural
variability (aleatory uncertainty), as discussed in detail in Book 4, Chapter 4 (Figure 4.4.7).
The outer loop (in green) represents the additional simulations undertaken in which the
parameters are stochastically sampled from distributions representing uncertainty in the
inputs. That is, undertaking the inner loop of simulations yields an estimate of exceedance
probability that a particular outcome might be exceeded (step D in Figure 7.9.2), and the
outer loop provides an estimate of uncertainty of the derived quantile (step E). Of course,
this framework could be simplified to provide an assessment of the uncertainty in the
magnitude only of the outputs, in which case only the deterministic modelling within the blue
shaded area is required (step C). The additional simulations required to consider epistemic uncertainty increase the number of simulations by up to two orders of magnitude. For
example, if a stratified sampling scheme used 5000 simulations to derive a frequency curve
of outputs, then around 500 000 simulations would be required to derive the corresponding
90% confidence limits.
Details of the simulation procedures required to undertake Monte Carlo simulation are
provided in Book 4, Chapter 4. Two examples are provided below which illustrate application
of these procedures. One example is used to assess the errors in the transposition of flood
volumes (model A, as outlined in the preceding section), and the other extends the worked
example presented in Book 4, Chapter 4, Section 4 for the analysis of concurrent tributary
flows. The first example just considers the uncertainty in the magnitude of the outcome, the
second considers the uncertainty in both its magnitude and frequency.
Figure 7.9.2. General Framework for the Analysis of Uncertainty using Monte Carlo
Simulation.
The steps required to do this are described in Book 4, Chapter 4, Section 3. For this
example 2000 random numbers uniformly varying between 0 and 1 are generated for each
of the variables Vy and m, as shown in columns 2 and 5 of Figure 7.9.3. The relative errors
associated with Vy and m are assumed to be 10% and 20%, the ratio of catchment areas (R)
is assumed to be 1.25 and the value of the exponent (m) is 0.7. The standard normal
variates corresponding to the uniform random numbers are computed (columns 3 and 6),
and these are multiplied by the selected variable values and their respective error terms (sm,
sv) to yield 2000 stochastic values of Vy and m (columns 4 and 7). These steps yield a
sample of values with a mean of zero and a standard deviation equal to their respective
errors (sm, sv). Values of Vx are computed using Equation (7.9.2) for each pair of
stochastically generated values of m and Vy (column 8). The standard deviation of these
values represents the error about the mean estimate of Vx, and for the sample shown in
Figure 7.9.3 this is found to be 3.11; when expressed as a proportion of the mean (0.108),
this is similar to the result found by First Order Approximation, as shown in the 6th row of
entries in Table 7.9.1.
The sample size is selected by trial and error such that successive estimates of the
uncertainty change little with repeated stochastic samples. A sample size of 100 yields
estimates of uncertainty that vary by around 10% of the mean value, and that obtained using
a sample of 2000 vary by around only 1%.
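A minimal sketch of this sampling procedure is given below. The relative errors (10% and 20%), the ratio of catchment areas (R = 1.25), the exponent (m = 0.7) and the sample size of 2000 follow the text; the mean value of Vy is a hypothetical placeholder, and standard normal variates are drawn directly rather than via uniform random numbers.

```python
import numpy as np

# Illustrative sketch of the Monte Carlo error analysis described above:
# 2000 stochastic values of Vy and m are generated assuming normally
# distributed relative errors of 10% and 20% respectively, and Vx is computed
# from Equation (7.9.2) for each pair.

rng = np.random.default_rng(1)
n = 2000
Vy_mean, m_mean, R = 30.0, 0.7, 1.25   # Vy_mean is an assumed value for illustration

z_v = rng.standard_normal(n)           # standard normal variates for Vy
z_m = rng.standard_normal(n)           # standard normal variates for m
Vy = Vy_mean * (1.0 + 0.10 * z_v)      # stochastic Vy with 10% relative error
m = m_mean * (1.0 + 0.20 * z_m)        # stochastic m with 20% relative error

Vx = Vy * R ** m                       # Equation (7.9.2)
print(round(Vx.std() / Vx.mean(), 3))  # close to the 0.105 of Table 7.9.1
```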
The analysis presented in Book 4 demonstrates the use of the Total Probability Theorem in
combination with a stratified sampling scheme to derive quantiles of flood levels below the
confluence. The analysis presented here extends that original analysis, and shows how
Monte Carlo simulation can be implemented in spreadsheet software to derive confidence
limits for the derived flood levels. In essence, this example follows the framework illustrated
in Figure 7.9.2, where the steps analysing aleatory uncertainty (blue shading) are described
in Book 4, Chapter 4, Section 4, and those associated with the analysis of epistemic
uncertainty are presented below.
The analysis is subject to three sources of uncertainty, namely the errors associated with:
• the parameters of the flood frequency distributions fitted to the flood maxima in each of the two streams;
• the estimate of correlation between flood maxima in the two streams; and
• the estimates of the corresponding downstream flood levels from the hydraulic modelling.
1. Use the log-Normal distribution obtained from fitting to the N maxima in the historic record
to generate a sample of N synthetic flood maxima (using the parametric sampling
approach described in Book 4, Chapter 4, Section 3)
2. Fit a log-Normal distribution to this synthetic sample (i.e. calculate the mean and standard deviation of the logs of this sample)
3. Repeat steps 1 and 2 100 times to obtain 100 sets of log-Normal parameters, where the 90% confidence limits of the parameters are determined simply by calculating the 5% and 95% exceedance percentiles of each sample.
The above steps are applied separately to the flood data available for the mainstream and
tributary. The resulting distributions of the parameters are shown in Figure 7.9.4. It is seen
that the uncertainties in the tributary parameters are slightly wider than those of the
mainstream, which reflects the shorter record length (30 years versus 50).
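A sketch of this parametric resampling is given below. The record lengths (50 and 30 years) follow the text, but the fitted log-space means and standard deviations are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch of the parametric resampling in steps 1 to 3 above:
# synthetic samples of N annual maxima are generated from the fitted
# log-Normal distribution and re-fitted; the scatter of the re-fitted
# parameters describes their uncertainty.

rng = np.random.default_rng(1)

def bootstrap_lognormal_params(mean_log, std_log, n_years, n_sets=100):
    params = []
    for _ in range(n_sets):
        sample = rng.normal(mean_log, std_log, n_years)   # synthetic log maxima
        params.append((sample.mean(), sample.std(ddof=1)))
    return np.array(params)

mainstream = bootstrap_lognormal_params(mean_log=2.0, std_log=0.35, n_years=50)
tributary = bootstrap_lognormal_params(mean_log=1.6, std_log=0.40, n_years=30)

# 90% limits (5th and 95th percentiles) of the fitted mean of the logs:
print(np.percentile(mainstream[:, 0], [5, 95]))
print(np.percentile(tributary[:, 0], [5, 95]))
```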
Figure 7.9.4. Uncertainty in parameters of the log-Normal distribution (high and low bars
represent 5% and 95% limits, the high and low boxes represent 25% and 75% limits, and the
central bar shows the median).
The approach used to characterise uncertainty in the degree of correlation (r) between flood
maxima in the two streams is similar to that used when errors are assumed to be Normally
distributed (as described in Book 7, Chapter 9, Section 4). However, an additional
transformation step is introduced to better conform to the assumed distribution of errors in
estimates of the correlation coefficient (Fisher, 1915). Fisher’s transformation of the
correlation coefficient is approximately normally distributed with a mean (r’) and standard
error (se'r):
r' = (1/2) ln((1 + r) / (1 - r))  (7.9.8)

se'_r = 1 / √(N - 3)  (7.9.9)
The correlation between the log-transformed flood maxima in the two streams (r) is
calculated to be 0.6, based on 30 years of concurrent data. The Fisher transformed
estimates of r' and se'_r are thus calculated to be 0.693 and 0.192. With these calculated, the steps to generate a stochastic sample of correlations are as follows:
i. Generate a uniform random number p between 0 and 1
ii. Obtain the standard normal variate z_i corresponding to p
iii. Obtain the quantile (g_i) corresponding to p from the inverse of the transformed normal distribution: g_i = 0.693 + z_i × 0.192
iv. Apply the inverse of Fisher's transformation to g_i to obtain a stochastic estimate of the correlation coefficient (r_i), where the inverse transform is calculated from the inverse of Equation (7.9.8):
r_i = (e^{2 g_i} - 1) / (e^{2 g_i} + 1)  (7.9.10)
v. Repeat steps i) to iv) 100 times to obtain a stochastic sample of correlation coefficients.
In this example, the mean of 100 correlation coefficients generated in this manner is found to
be 0.603, where 90% of the sample is found to lie between 0.407 and 0.762.
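A minimal sketch of steps i to v is given below, using r = 0.6 and 30 years of concurrent record as quoted above.

```python
import numpy as np

# Illustrative sketch of the stochastic sampling of the correlation coefficient
# using Fisher's transformation (Equations (7.9.8) to (7.9.10)).

rng = np.random.default_rng(1)
r, n_years = 0.6, 30

r_prime = 0.5 * np.log((1 + r) / (1 - r))     # Equation (7.9.8) -> 0.693
se_prime = 1.0 / np.sqrt(n_years - 3)         # Equation (7.9.9) -> 0.192

z = rng.standard_normal(100)                  # standard normal variates
g = r_prime + z * se_prime                    # transformed stochastic values
r_i = (np.exp(2 * g) - 1) / (np.exp(2 * g) + 1)   # Equation (7.9.10)

print(round(r_i.mean(), 3), np.round(np.percentile(r_i, [5, 95]), 3))
```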
A pragmatic approach is used to account for errors in the relationship between flows in the
two streams and downstream flood levels. The approach is based on the simple assumption
that the errors are normally distributed and invariant with magnitude, where the adopted standard deviation of the errors is 0.1 m. The magnitude of the error term is based on the
standard error of the regression relationship developed using hydraulic modelling, but this
was increased slightly to reflect the additional uncertainty associated with the hydraulic
modelling. The adopted approach could be modified to allow for errors in the slope of the
fitted regression line and include dependency on flow magnitude, but this simpler approach
provides a useful basis for exploring the sensitivity of the outcome to this source of
uncertainty.
The steps involved in this are identical to those used in the preceding example in Book 7, Chapter 9, Section 4, as shown in Figure 7.9.3. An illustration of level estimates derived with
the error term included is provided in Figure 7.9.5. The values on the x-axis correspond to
the levels derived using the regression equation between upstream flows and levels
simulated by the hydraulic model (Figure 4.4.15), and those on the y-axis include the
normally distributed errors generated with a standard deviation of 0.1.
The final step required to assess the uncertainty in the derived frequency curve is to derive a
frequency curve for each set of input parameters derived from the preceding three steps.
That is, a flood level frequency curve is derived for the set of stochastic parameters
generated in the preceding three steps, and the uncertainty in the design flood levels is
obtained from the distribution of results.
1. Generate 100 sets of stochastic parameters for the two log-Normal distributions and the
correlation coefficient (this corresponds to step A in Figure 7.9.2)
2. For each set of parameters, generate 1000 stochastic samples of flood maxima in the
mainstream and the tributary, using the procedure described in Table 4.4.1, then calculate
the corresponding downstream flood levels from the regression relationship with a
normally distributed error term added to the level estimates, as described above; these
calculations correspond to steps B and C, shown in Figure 7.9.2.
3. Derive a flood level frequency curve by fitting a simple probability model to the 1000
stochastic maxima, as described in Table 4.4.1 (step D, Figure 7.9.2); this is used to
estimate design levels for a range of exceedance probabilities.
4. Steps 2 and 3 are repeated for each of the 100 sets of stochastic parameters, which yields 100 estimates of design levels for each of the exceedance probabilities; these levels are ranked, and 90% of the range is used to represent uncertainty (step E, Figure 7.9.2).
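A structural sketch of this nested sampling scheme is given below. The log-Normal parameters, their uncertainties, the correlation and the flow-to-level regression are all hypothetical placeholders; only the 100 parameter sets, the 1000 events per set and the 0.1 m level error follow the text.

```python
import numpy as np

# Structural sketch of the nested Monte Carlo analysis in steps 1 to 4 above.
rng = np.random.default_rng(1)
n_sets, n_events = 100, 1000
aeps = np.array([0.5, 0.2, 0.1, 0.05, 0.02, 0.01])

def sample_parameters():
    # Step A: stochastic log-Normal parameters and correlation (placeholders).
    return {"main": (2.0 + 0.05 * rng.standard_normal(), 0.35),
            "trib": (1.6 + 0.07 * rng.standard_normal(), 0.40),
            "rho": np.clip(0.6 + 0.1 * rng.standard_normal(), -0.99, 0.99)}

def downstream_level(q_main, q_trib):
    # Hypothetical regression of downstream level on combined flow, plus the
    # normally distributed 0.1 m error term described above.
    return 10.0 + 1.2 * np.log10(q_main + q_trib) + rng.normal(0.0, 0.1, q_main.size)

levels_by_set = []
for _ in range(n_sets):
    p = sample_parameters()
    cov = [[p["main"][1] ** 2, p["rho"] * p["main"][1] * p["trib"][1]],
           [p["rho"] * p["main"][1] * p["trib"][1], p["trib"][1] ** 2]]
    logs = rng.multivariate_normal([p["main"][0], p["trib"][0]], cov, n_events)
    q_main, q_trib = 10 ** logs[:, 0], 10 ** logs[:, 1]       # Steps B and C
    levels = np.sort(downstream_level(q_main, q_trib))[::-1]
    quantiles = levels[(aeps * n_events).astype(int) - 1]     # Step D: level per AEP
    levels_by_set.append(quantiles)

levels_by_set = np.array(levels_by_set)
for aep, lo, hi in zip(aeps, np.percentile(levels_by_set, 5, axis=0),
                       np.percentile(levels_by_set, 95, axis=0)):
    print(f"AEP 1 in {1/aep:.0f}: 90% limits {lo:.2f} to {hi:.2f} m")   # Step E
```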
The final level frequency curve and confidence limits derived using the above steps are
shown in Figure 7.9.6.
Figure 7.9.6. Confidence limits on the flood level frequency curve determined using the
general framework for the analysis of uncertainty using Monte Carlo simulation.
9.5. References
Doherty, J. (2016), PEST, model-independent parameter estimation User Manual,
Watermark Numerical Computing, http://www.pesthomepage.org.
Fisher, R.A. (1915). Frequency distribution of the values of the correlation coefficient in
samples of an indefinitely large population. Biometrika (Biometrika Trust), 10(4), 507–521
Haan, C.T. (2002). Statistical Methods in Hydrology. The Iowa State University Press.
Kuczera, G., Kavetski, D., Franks, S.W. and Thyer, M. (2006), Towards a Bayesian total
error analysis of conceptual rainfall-runoff models: Characterising model error using storm
dependent parameters, J Hydrol, 331(1-2), 161-177.
Loucks, D.P., van Beek, E., Stedinger, J.R., Dijkman, J.P.M., Villars, M.T. (2005), Chapter 9
Model Sensitivity and Uncertainty Analysis, In Water Resources Systems Planning and
Management: An Introduction to Methods, Models and Applications. Paris: UNESCO.
doi:ISBN: 92-3-103998-9.
Palisade Corporation (2015), Risk analysis and simulation add-in for Microsoft Excel, User's
Guide, version 7, Palisade Corporation, New York.
Pappenberger, F. and Beven, K.J. (2006), Ignorance is bliss: Or seven reasons not to use uncertainty analysis. Water Resources Research, 42, 1-8. doi:10.1029/2005WR004820.
Pappenberger, F., Harvey, H., Beven, K., Hall, J., Romanowicz, R., Smith, P. (2006),
Implementation plan for library of tools for uncertainty evaluation. FRMRC Research Report
UR2, Flood Risk Management Research Consortium, Manchester.
Vrugt, J.A., Ter Braak, C.J.F. (2011), DREAM(D): An adaptive Markov Chain Monte Carlo
simulation algorithm to solve discrete, noncontinuous, and combinatorial posterior parameter
estimation problems. Hydrol. Earth Syst. Sci. 15, 3701–3713.
Chapter 10. Documentation,
Interpretation and Presentation of
Results
Erwin Weinmann, William Weeks
Chapter Status Final
Date last updated 14/5/2019
10.1. Introduction
Catchment modelling systems for flood estimation are applied to provide information to
decision makers and designers on magnitudes and probabilities of flood characteristics, as a
basis for decisions on flood-related planning, design and operations. The purpose, scope
and required outputs of any flood investigation should be clearly described in the brief or
technical specification for the design problem or flood study (NFRAG, 2014; McLuckie and
Babister, 2015). It is therefore important that the client who commissions flood investigations
should be comprehensive in the preparation of the brief to ensure that all requirements and
objectives are covered in detail. The brief should be detailed but should not specify
unrealistic objectives for the model performance. Unrealistic objectives may include over-
optimistic calibration performance.
The results of any modelling should be documented and presented in a way that satisfies the
requirements of the brief. However, even if such a brief or specification is not readily
available, it is the responsibility of the modelling team to ensure that the modelling process is
well documented and that the results are presented and communicated in a way that will be
clearly understood by the target audience and will avoid any misinterpretation or misuse of
the information. The documentation may need to cover requirements for several different audiences in particular circumstances, so it must be relevant for these audiences. In some cases, different reports may need to be prepared for these varied audiences. The potential audiences include:
• Client - The client is the agency that has commissioned the flood study, and they will be seeking a report that outlines the whole scope of the study, especially covering the main issues required, as well as limitations and comments on accuracy and reliability. This
report will be the basis for the client’s requirements, whether this is for planning, feasibility
or design of infrastructure. The report should also clearly demonstrate the methodology
and show that it was appropriate for the requirements, subject to the limitations of the
specification. The client will also need to have a report that will be archived in their
technical library and be available for reference in the future when the flood study may be
reviewed or if later queries arise. All supporting data should also be archived by the client
for future reference.
• Regulatory or Approval Agencies - Where the client is not itself a regulatory agency, these
agencies need to be considered. For example, these may include agencies such as local
authorities who need to consider impacts of projects on flood levels outside the project
boundary, or environmental agencies who may need to understand any impacts on water
quality or fauna movement. The report needs to demonstrate to these agencies that the
flood study has been carried out to an acceptable technical standard and that their
interests are satisfied.
• Residents and the Public - Local residents will take an interest in the findings of flood
studies, particularly as they affect their individual interests. To meet their interests, the
report should be written in plain English, though still to a high level of technical credibility,
and should clearly outline the impacts on the local community and demonstrate that any
adverse impacts have been mitigated or, if this proves impossible, demonstrate that all
efforts have been made to minimise impacts.
• Other Stakeholders - These may include local community or environmental groups, who
have no direct regulatory interest but who have a community interest in the results of the
flood study. In this case, the report must be written in plain English but it must also be of a
high technical standard, since these stakeholders will often have a high level of technical
expertise.
10.3. Documentation
10.3.1. General
Documentation should be progressive through the different steps of a flood estimation study.
The scope and level of detail of the documentation will depend to some degree on the
nature of the modelling application but should be sufficient to provide the basis for an
independent review of the modelling process and the results produced (DECC, 2007).
As discussed above the documentation needs to consider the requirements of the audience
for the report, noting that there may be more than one audience. The documentation
requirements outlined below apply to both the hydrologic and hydraulic modelling phases of
a flood study. More detailed guidance on interpretation of the results of hydraulic modelling is
provided in the ARR Project 15 report (Babister and Barton, 2016).
It is important that the process of data quality checking and the associated decisions are
clearly recorded, as well as any assumptions or limitations. The documentation should clearly describe the approach to checking the data, the resulting understanding of the data quality, and the impacts of this quality on the final outcomes of the project. To the
extent that data ownership allows, a copy of the original data sets and the finally adopted
data sets should be kept.
It is now quite common in flood study briefs to include as part of the study deliverables a
requirement to provide a copy of the calibrated model (NFRAG, 2014). This should include
the relevant information to allow a third party to run the model and review the modelling
results. More details are provided in Babister and Barton (2016).
The modelling results should be supported by maps and graphs which can illustrate the
procedures and methodology. Maps are an excellent means of allowing a comprehensive
but easily understood interpretation of the results.
As discussed elsewhere in Australian Rainfall and Runoff, inaccuracies can result from a
number of sources including:
• Data Quality - The quality of the hydrologic and hydraulic modelling depends on the quality
of the local data used in the development and testing of the model.
• Model Representation - The model is a theoretical representation of reality and the quality
of this representation should be indicated.
• Model Extrapolation - The model will be developed using certain available data for
calibration or using regional parameter estimates. The application to design situations then
requires extrapolation to larger floods or alternative catchment development scenarios.
The quality of the model extrapolation into these alternative conditions should be
reviewed.
The process for checking the performance of the model against these concerns will need to focus firstly on the basis of the model development and implementation. Secondary checks, which are equally important, should focus on the results, where there are several approaches to checking.
Developing a process for checking that model results are sensible and consistent is a vital
quality control measure for the practitioner. The practitioner needs to satisfy themselves that
the model results are reasonable prior to publishing them in a report. The following is a
checklist that the practitioner should consider when interpreting results:
• Mass Balance – errors greater than 1% to 2% should generally be investigated, and the
cause of the errors identified and rectified where possible;
• Runoff Volumes – the total runoff as a percentage of rainfall volume should be determined
and checked against typical runoff coefficients for similar catchments;
• Runoff Rates – can be used to check that the runoff rates predicted by the hydrologic model do not significantly diverge from the runoff rates predicted by the hydraulic model. If the divergence is significant, the reasons should be determined;
• Stability – the results should be checked for signs of instability, such as unrealistic jumps
or discontinuities in flow behaviour, oscillations (particularly around structures or
boundaries), excessive reductions in time step or iterations required to achieve
convergence. Many models will specify criteria based on the Courant number (refer to
Book 6) that can be checked to assess model instability;
• Model Startup – many models do not perform well from a completely “dry” start during the
initial wetting stage. The practitioner should consider using a suitable “hot-start” condition
if such functionality exists, or should exclude results from the very start of the model run
from their analysis. This can be particularly important near structures;
• Structure Head Losses – head losses through structures such as bridges, culverts,
siphons etc should be checked against suitable hand calculations. More discussion on
how to deal with structures is presented in Book 6, Chapter 3. In particular, consideration
should be made of the amount of expansion/contraction losses that are captured by the
two dimensional schematisation, and whether the flow regime is adequately handled by
the model; and
• Steep areas/shallow flow – it may be difficult to interpolate flow depths where steep
shallow flow is occurring, particularly if the flow is not sub-critical. It may be necessary to
check results against total energy calculations in such locations.
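The mass balance and runoff volume checks above lend themselves to simple automation. The following is a minimal Python sketch of how they might be computed from model output; the variable names, units and example values are assumptions and are not drawn from any particular modelling package.

```python
# Minimal sketch of the volume-based checks described above (illustrative only).
# Units are assumed consistent: volumes in m^3, rainfall depth in mm, area in km^2.

def mass_balance_error_pct(inflow_volume, outflow_volume, storage_change):
    """Mass balance error as a percentage of the total inflow volume."""
    residual = inflow_volume - outflow_volume - storage_change
    return 100.0 * residual / inflow_volume

def runoff_coefficient(runoff_volume, rainfall_depth_mm, catchment_area_km2):
    """Total runoff expressed as a fraction of the rainfall volume on the catchment."""
    rainfall_volume = (rainfall_depth_mm / 1000.0) * catchment_area_km2 * 1.0e6  # m^3
    return runoff_volume / rainfall_volume

# Hypothetical example values
error = mass_balance_error_pct(inflow_volume=1.25e6, outflow_volume=1.22e6,
                               storage_change=0.02e6)
if abs(error) > 2.0:  # the 1% to 2% tolerance noted in the checklist
    print(f"Mass balance error of {error:.1f}% should be investigated and rectified")

coeff = runoff_coefficient(runoff_volume=0.9e6, rainfall_depth_mm=150.0,
                           catchment_area_km2=12.0)
print(f"Runoff coefficient {coeff:.2f}: compare with similar catchments")
```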
Results for similar projects in the vicinity should be reviewed to ensure that the results are
consistent with these previous analyses. If there are differences, reasons for these
differences should be sought and explained. If the differences cannot be explained, the
model selection or implementation should be reconsidered.
Alternative flood estimation methods, generally a simpler regional method, should also be
applied to check consistency. Again, where there are inconsistencies, these should be
investigated and reasons for the differences identified.
These checks of results are important and increase confidence in the analysis. The flood
study documentation should clearly outline this checking and demonstrate the level of
confidence in the results.
The degree of scatter in results shown up by uncertainty analyses describes the precision of
the flood estimate. However, the accuracy of modelling results also depends on the degree of
bias in the results (systematic underestimation or overestimation). Inappropriate
representation of the real system by the adopted model is likely to introduce model errors
(additional uncertainty and bias) into the modelling results, which are not captured by normal
uncertainty analysis. The results of uncertainty analyses should thus be regarded as lower
bound estimates of uncertainty.
An estimate of the likely model errors can be obtained by comparing results produced by
different models of the same system or by comparison of flood estimates obtained by
different flood estimation approaches.
The documentation must include sufficient discussion to allow the client and others who read
the report (including non-experts) to understand the level of accuracy provided and to ensure
that the report is not used to indicate a higher accuracy than can be justified by the model
and the particular application of model calibration. To avoid misinterpretation, modelling
results should be presented to the number of significant figures implied by accuracy
considerations. Where there is uncertainty, this must be described clearly and
understandably so that the client and others can make a reasonable decision on the results.
In addition to the summary of results, the documentation should include comments on the
accuracy and reliability of the results. It should cover the basic discussion of calibration to
historical flood events as well as extrapolation of the model to the design scenarios and to
assessments beyond the scope of the calibration.
The modelling will have been developed for a specific application and therefore the model
performance for other applications may be limited. This limitation could include the
geographical extent as well as the flood magnitudes considered. For example, if the model
has been developed for design of major infrastructure, it may be prepared for analysis of
large floods, so the calibration may be inappropriate for small in-channel flows which may be
required for another application.
The documentation therefore should clearly describe the limitations and the scope where the
model results may be appropriate.
10.6. References
Babister, M. and Barton, C. (eds) (2016). Australian Rainfall and Runoff Support Document:
Two dimensional modelling in urban and rural floodplains. Project 15
DECC (NSW Department of Environment and Climate Change), (2007), Floodplain Risk
Management Guideline, Residential Flood Damage, Sydney.
McLuckie, D., Babister, M., (2015), Improving national best practice in flood risk
management, Floodplain Management Association National Conference, Brisbane, May
2015.
National Flood Risk Advisory Group (NFRAG) (2014), Guideline for using the national
generic brief for flood investigations to develop project-specific specifications, National Flood
Risk Advisory Group and Australian Emergency Management Institute, Australian
Emergency Handbook Series.
BOOK 8
Estimation of Very Rare to Extreme Floods
List of Figures
8.1.1. Design Characteristics of Notional Event Classes
8.2.1. Qualitative Indication of the Range of Flood Magnitudes and the Relative Degree of Reliability Required for Different Applications
8.3.1. Generalised Long-Duration Probable Maximum Precipitation Method Zones (Bureau of Meteorology, 2006)
8.3.2. Recommended Regional Estimates for the AEP of PMP
8.3.3. Schematic Illustration of Interpolation Procedure
8.3.4. Hypothetical Frequency Curves for Seasonal and Annual Design Rainfalls Based on the AEP Assigned to the Annual PMP (adapted from Laurenson and Kuczera (1998), using Four Seasons of Relative Lengths 0.33, 0.17, 0.33, 0.17 and Relative Seasonal PMP Depths of 1.0, 0.85, 0.6, 0.85, for Summer, Autumn, Winter, and Spring, respectively)
8.6.1. Illustration of Derived Frequency Curve Based on Reconciliation with Flood Frequency Quantiles
8.6.2. (a) Differential Importance of "Reasonableness" in PMF Assumptions on Dam Safety Decision-Making, and (b) Use of Simple Extrapolation to Infer Degree of Reasonableness
8.7.1. Schematic Illustration of the Determination of the Probability Interval of Storage Volume as a Function of Inflow and Outflow
8.7.2. Illustration of Monte Carlo Framework to Derive Outflow Frequency Curve
8.7.3. Illustration of Conditional Empirical Sampling in Which the Storage Volume in an Upstream Dam is Correlated with the Volume in a Downstream Dam
8.7.4. Illustration of the Generation of Variables with a Correlation of 0.7 Based on Normal Distributions
8.7.5. Schematic Diagram Illustrating the Conversion of Seasonal Exceedance Probabilities into Annual Estimates
8.8.1. Example Rainfall Frequency Curves
8.8.2. (a) Simulation Framework used to Generate Floods for Selected Stochastic Inputs and (b) Resulting Flood Frequency Curves
8.8.3. Inflow-Outflow-Storage Volume Relationship
8.8.4. Probability Distribution of Initial Storage Volume
8.8.5. Transition Probabilities between Reservoir Inflow and Outflow Classes
8.8.6. Outflow Frequency Curves Obtained using Joint Probability Analysis and a Median Level of Drawdown
8.8.7. Simulation Framework to Derive Outflow Frequency Curve Based on Variable Initial Starting Level in Reservoir
8.8.8. Calculation of Concurrent Tributary Flows
8.8.9. Fitted log-Normal Flood Frequency Curves for Mainstream and Tributary Design Flows
List of Tables
8.1.1. Limit of Credible Extrapolation for Different Types of Data in Australia (modified after USBR (1999))
8.2.1. Summary of Procedures to Derive Design Rainfalls
8.2.2. Summary of Procedures to Derive Design Floods
8.3.1. Subjective Probability Mass Function for Describing Uncertainty in Regional Estimate of the AEP of PMP (Adapted from Laurenson and Kuczera (1999))
8.3.2. Growth Curve Factors for Derivation of Sub-Daily Design Rainfalls Standardised by the 1 in 100 AEP Rainfall Depth
8.3.3. Selection of Design Burst Temporal Patterns for Different Regions, Durations and AEPs
8.3.4. Selection of Design Spatial Patterns for Different Regions, Durations and AEPs
8.4.1. Recommended Loss Rates for Urban Catchments
8.8.1. Calculation of Areal Design Rainfalls for Rare to Very Rare Events
8.8.2. Parameters Calculated for Areal Design Rainfalls for Very Rare to Extreme Events
8.8.3. Calculation of Areal Design Rainfalls for Very Rare to Extreme Events
Chapter 1. Introduction
Rory Nathan, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
The floods under consideration in this Book are events with an Annual Exceedance
Probability (AEP) of between 1 in 100 and 1 in 10^7. The emphasis of this Book is on the
estimation of a flood frequency curve between these limits as inputs to risk-based design
rather than on the estimation of a design flood of specific AEP and/or magnitude. The
absolute upper limit of flood magnitude under consideration is the Probable Maximum Flood
(PMF), which is a design concept that cannot be readily assigned an AEP.
These Guidelines are intended to provide a clear statement of what constitutes best practice
in sufficient detail to enable the procedures to be applied to practical problems. Best practice
in this field is constantly evolving, and thus this Book focuses on the general principles that
should be considered when estimating extreme floods rather than on the detailed description of
prescriptive procedures. It should be noted that this Book is aimed at practitioners with some
experience in the field, rather than at people new to extreme flood estimation. Worked
examples are provided to illustrate some of the design concepts involved, but overall the
thrust is to state what best practice is, not to explain in detail how to achieve it.
• Spillways: the Australian National Committee on Large Dams (ANCOLD, 2000) provides
recommendations on most aspects of spillway provision and safety levels for all potentially
hazardous structures which store water or other liquids, including flood retarding basins,
service basins and tailings dams. The recommended design floods range from 1 in 100 to
1 in 10^7 AEP. The ANCOLD guidelines focus on design considerations of such structures,
and refer to ARR for the hydrologic procedures involved.
• Detention Basins: large structures of this type may fall within one of the ANCOLD (2000)
referable dam categories and thus be subject to its recommendations. Even when this
does not apply, it may be desirable to check the performance of a detention basin for the
consequences of a very rare or extreme flood, where the structure is located in a
populated area and if failure could endanger lives or property. This may apply particularly
where a series of structures is constructed on a watercourse and progressive failure could
occur. Detention basins are discussed further in Book 9.
• Urban Trunk Drainage: while these drains generally are not designed to carry extreme
floods, good practice requires that the effects of an extreme flood should be checked
where lives and property could be endangered, as discussed in Book 9.
• Major Bridges: the Australian Standards relevant to the hydraulic design of bridges (AS
5100.2-2004; Standards Australia, 2004) have adopted a limit state approach. For the
Ultimate Limit State Floods, it is necessary to assess flood loading up to and including the
1 in 2000 AEP event.
• Other Major Works: in some cases, it may be desirable to at least check the effects of
extreme floods, even if a smaller flood is used for design. Examples include portals for
tunnels associated with major infrastructure, water supply intakes and sewage treatment
plants where flood damage could cause severe disruption to a community, flat roofs where
blockage of roof drains could cause collapse, or floodplain management studies where
national heritage buildings or other irreplaceable items are endangered.
These Guidelines are intended to provide practitioners with an approach that yields
estimates in the mid-range of the notional uncertainty band indicated in Figure 8.1.1, and it is
recommended that this ‘best estimate’ be adopted rather than a value at the limits of the
uncertainty band. Nevertheless, if there are significant differences in flood consequences
within the range of uncertainty, then the likely range of outcomes must be explicitly
considered in a risk management framework when developing flood management strategies.
The procedures presented herein have been reviewed by experienced designers and
academics from around Australia. They therefore constitute recommended best practice.
Innovation and trialling of new techniques based on additional research (with peer review) is
strongly encouraged for the estimation of Very Rare to Extreme floods but the pragmatic
nature of procedures requires an increasing level of prescription as estimates extend beyond
the credible limit of extrapolation. Details concerning the characteristics of each event class
are provided below.
The procedures relevant to the analysis of this type of information are largely covered by the
Books related to rainfall and Flood Frequency Analysis, and rainfall-runoff routing (Book 2;
Book 4; and Book 5). However, given that this range of events is often used to 'anchor' the
lower end of frequency curves used to extrapolate to extreme events, some mention of their
estimation is retained in this Book.
The analyses are based on deriving design flood estimates that lie within the upper range of
direct observations, and thus generally involve some degree of extrapolation. A large body of
experience and a great variety of procedures are available to help the practitioner derive
flood estimates within this range, and the associated degree of uncertainty in the estimates
can be readily quantified.
• 'credible' is used to represent the justifiable limit of extrapolation without the use of other
confirming information from an essentially independent source; and,
• 'extrapolation' is used to denote estimates that are made outside the range of
observations available at a single site.
The credible limit of extrapolation is dependent upon the nature of available data that can be
obtained at and/or transposed to the site of interest. Procedures are often used which are
based on “trading space for time”, in which data from several sites are used to help inform
the estimation of exceedance probabilities at a single site. The defensibility of this form of
extrapolation depends on the strength of the assumptions made, particularly those relating to
the assumed degree of similarity between the sites used in the regional pooling. It is
important to realise that in any given region the credible limit of rainfall extrapolation may
well differ from the limit applicable to floods.
The notional credible limits of extrapolation for a range of data types in Australia are shown
in Table 8.1.1 (modified after USBR (1999)). This table indicates the lower AEP bound
corresponding to both typical and the most optimistic situations, though in most cases the
credible AEP limits are likely to be considerably closer to the typical estimates than the most
optimistic bounds. At present in Australia rainfall regionalisation procedures yield credible
limits of extrapolation of around 1 in 2000 AEP (Green et al., 2016), though when larger
regions are considered within a joint probability framework, the limit can be extended (with
considerable uncertainty) out to limits that are one to two orders of magnitude rarer (Nathan
et al., 2015).
The analyses required to extrapolate estimates to the credible limit require substantial
resources and a high level of specialist expertise, and they are thus generally beyond the
level of resources available to a single study. Practitioners will usually need to rely on
processed information prepared specifically for the region of interest. There is considerable
scope for innovation and trialling of new estimation techniques for this class of events to
reduce the uncertainty of the estimates and perhaps extend the limit of extrapolation.
However, adoption of new estimation approaches will depend on the outcome of detailed
peer review.
Table 8.1.1. Limit of Credible Extrapolation for Different Types of Data in Australia (modified
after USBR (1999))
Any extensions beyond the credible limit of extrapolation should employ a consensus
approach that provides consistent and reasonable values for pragmatic design. The
procedures relating to this range of estimates should be regarded as inherently prescriptive,
as without empirical evidence or scientific justification there can be no rational basis for
departing from the consensus approach.
The level of uncertainty of these estimates can only be reduced by long-term fundamental
research. Accordingly, it is important that the procedures related to this class of floods be
reviewed periodically to ensure that any advances in our understanding of extreme
hydrological and hydrometeorological processes are incorporated into design practice.
1.4.2. Terminology
The terminology used in this book is generally in accordance with Book 1, Chapter 1,
Section 2. However, for convenience and consistency of expression in this Book, all AEPs in
the range of Very Rare to Extreme events are expressed in the form of 1 in Y.
1.5. References
ANCOLD (Australian National Committee on Large Dams) (2000), Selection of acceptable
flood capacity for dams, March
Green, J.H., Beesley, C., The, C., Podger, S. and Frost, A. (2016), Comparing CRC-FORGE
estimates and the New Rare Design Rainfalls. Proc. ANCOLD Conference, Adelaide.
Nathan, R.J. and Weinmann, P.E. (2000), Book VI: Estimation of large and extreme floods, In
Australian Rainfall and Runoff: A Guide to Flood Estimation, Engineers Australia, Canberra.
Nathan, R.J., Scorah, M., Jordan, P., Lang, S., Kuczera, M., Schaefer, M. and Weinmann,
P.E. (2015), A Tail of Two Models: Estimating the Annual Exceedance Probability of
Probable Maximum Precipitation. Proc. Hydrology and Water Resources Symposium,
Hobart.
Pilgrim, D.H. and Rowbottom, I.A. (1987), Chapter 13 - Estimation of large and extreme
floods, In Pilgrim, D.H. (ed.) Australian Rainfall and Runoff: A Guide to Flood Estimation, I.E.
Aust., Canberra.
Standards Australia (2004), Bridge Design, Part 2 - Design Loads. AS 5100.2 AP-G15.2/04.
Standards Australia, Sydney.
U.S. Bureau of Reclamation (USBR) (1999), A framework for developing extreme flood
inputs for dam safety risk assessments, Part I - Framework, Draft, February 1999.
Chapter 2. Procedures for Estimating
Very Rare to Extreme Floods
Rory Nathan, Erwin Weinmann
The procedures are generally based on the assumption that final design estimates should
incorporate the best and most relevant information available. As emphasised in Book 1 the
use of new procedures and design information is encouraged, especially where these can
be shown to be more appropriate than the guidance provided here.
The adoption of a simplified probability neutral approach is accepted practice for Frequent to
Rare floods, where it is usually possible to derive independent estimates of the design floods
to check that no bias has been introduced into the transformation between rainfall and
runoff. Where such independent information is available, simple event based approaches,
i.e. those involving the deterministic application of models based on linear and non-linear
routing with representative values of inputs and model parameters, should be adequate for
many practical purposes. However, it becomes increasingly difficult to obtain independent
estimates of floods in the Very Rare to Extreme range, and thus there is an increased need
to explicitly consider the joint probabilities involved. This is particularly so when considering
the impacts of changing factors (such as revised operating conditions on reservoir levels)
whose variability may be characterised with reasonable confidence but whose influences
may not be reflected in the observed record. Ensemble event approaches have the potential
to mitigate this bias, but these are only likely to be defensible for those problems (linearly)
influenced by a single dominant factor in addition to rainfall. Monte Carlo simulation methods
provide a more flexible and rigorous means of resolving these difficulties, but the
defensibility of these estimates rests upon the representativeness of the inputs and the
correct treatment of correlations which may be present.
While there is scope for considering the use of continuous simulation approaches, their
use for estimation of Very Rare to Extreme events should only be considered for systems
which are strongly dependent on flood volume in a manner not easily handled by event-
based procedures; this might be the case for the design of tailings dams with small
catchment areas, or cascade systems of storages involving complex interaction of joint
probabilities. The advantages and limitations of continuous simulation approaches are
broadly discussed in Book 1, Chapter 3, but in the context of the estimation of extreme
floods, it is worth noting that their use will require careful generation of stochastic rainfall
inputs that are consistent with design rainfall information provided in Book 2. If the
exceedance probabilities of interest lie in the Very Rare to Extreme event range then
there is little point using a different approach for estimation of Rare floods.
ii. Floods with AEPs beyond 1 in 100 AEP to the credible limit of extrapolation (Very Rare
floods):
defensibility will partly depend on the ease with which different estimates of Rare floods
can be reconciled.
iii. Floods with AEPs beyond the credible limit of extrapolation (Extreme floods, including the
Probable Maximum Precipitation Flood):
These flood estimates may be required for direct use in design situations of high risk,
either in terms of risk to human life or economic losses, or where social or political
considerations require a very high level of safety. Estimates should be based on the use
of a flood event model with design rainfalls obtained by interpolation between the credible
limit of extrapolated rainfalls and the Probable Maximum Precipitation (PMP). To avoid
confusion with the Probable Maximum Flood (refer to iv), the flood derived from the PMP
using probability neutral assumptions is here termed the “PMP Flood”.
An additional category of procedures differs from the above in its design objective:
iv. Probable Maximum Flood (the limiting value of flood that could reasonably be expected to
occur):
This may be required for comparison with estimates derived from previous studies or for
some other design objective that usually requires a notional upper limiting value of flood
without an associated AEP. In practice, the magnitude of the PMF will generally be
greater than the magnitude of the flood derived from the PMP using probability neutral
assumptions (the PMP Flood).
A brief summary of the recommended procedures and references to the relevant sections
are presented in Table 8.2.1 and Table 8.2.2. It should be recognised that these tables
represent a summary of procedures that are described in detail in later sections; they are not
intended to be self-explanatory.
Table 8.2.1 (Summary of Procedures to Derive Design Rainfalls) includes, among other
entries, areal design rainfall depths up to the PMP: these are derived from point design
rainfalls by application of Areal Reduction Factors, with interpolation to the areal PMP
estimate (see Book 2, Chapter 4; for information on seasonal estimates see Book 8,
Chapter 3, Section 7).
Table 8.2.2 (Summary of Procedures to Derive Design Floods) covers, among other entries,
concurrent flooding, seasonal floods, snowmelt floods and long duration events, together
with preliminary design flood estimates: regional procedures and Flood Frequency Analysis
(Book 4), with simple log-Normal interpolation, may be used to determine preliminary Very
Rare estimates, and regional information may be used to estimate the PMP Flood
(conservatively assumed to equal the PMF) (see Book 8, Chapter 6, Section 2).
Figure 8.2.1 gives a qualitative indication of the range of flood magnitudes and the relative
degree of reliability required for different applications. An extension of the flood range of
interest is associated with a greater degree of extrapolation and thus larger uncertainty
(lower reliability). The level of expertise and effort required for deriving design floods
increases with increasing level of reliability required.
Figure 8.2.1. Qualitative Indication of the Range of Flood Magnitudes and the Relative
Degree of Reliability Required for Different Applications
As shown in Figure 8.2.1, the qualitative indication of the range of flood magnitudes and
associated relative degree of reliability can be divided into four groups of applications. For
the first three groups of applications, approximate or simplified flood estimation procedures
may be applicable; however, the practitioner has to apply engineering judgement in deciding
on the degree of detail and accuracy required for a specific application. The fourth group of
applications demands the most accurate estimates and hence the greatest level of effort.
(i) Planning and feasibility studies, initial screening of options, preliminary designs:
For these types of applications, where decisions based on the flood estimates are only
moderately sensitive to estimation uncertainties, approximate design flood estimates can be
derived by preliminary methods (refer to Book 8, Chapter 6, Section 2).
Many design codes require safety checks for conditions exceeding the design objective.
Approximate estimates of floods from 1 in 100 AEP to an absolute limit of 1 in 2000 AEP can
be obtained by use of design rainfalls (Book 2, Chapter 3) in combination with a flood event
model configured using regional estimates of losses and routing parameters. The rainfall
frequency curve may be extended to the AEP of the PMP by deriving site-specific estimates
of the PMP. Flood estimates based on use of regional parameters without calibration or
additional confirmatory estimates are subject to considerable uncertainty, and
correspondingly greater responsibility then rests with the practitioner to ensure that the
estimates are consistent with any relevant flood estimates for the region.
While risk-based design requires the assessment of the contribution of Rare to Extreme
floods to the total expected flood damage figure, the low probability of floods in this range
means that the contribution from this range of floods to the total expected flood damage is
relatively low in most situations. A lower degree of reliability of flood estimates in the Rare to
Extreme range is therefore acceptable for these applications.
(iv) Final design of major works and assessment of the adequacy of existing infrastructure,
where failure would result in serious consequences or possible loss of life:
For this group of applications, efforts should be made to reduce uncertainties in design flood
estimates to the minimum possible. The further the AEP range of interest extends beyond
the Rare floods, the more important it is for the practitioner to consider in detail the
guidelines in Book 8, Chapter 4 and Book 8, Chapter 5 on extrapolation of hydrograph
model characteristics. For all applications where the range of interest extends to extreme
floods, and where large uncertainty in flood estimates would impact significantly on design
decisions, detailed flood studies are justified.
Chapter 3. Estimation of Very Rare to
Extreme Rainfalls
Rory Nathan, Erwin Weinmann
3.1. General
3.1.1. Overview of Requirements and Sources of Design
Rainfall Information
In general, estimates of Very Rare to Extreme floods are derived using rainfall-based flood
estimation methods (possible exceptions to this are discussed in Book 8, Chapter 2, Section
2). Information is required on the average depth of rainfall over the catchment for a range of
rainfall event durations, its distribution in space (spatial pattern) and its distribution in time
during the event (temporal pattern). Design floods are generally calculated separately for
each duration, including routing through any reservoirs or other storages, to determine the
critical rainfall durations that produce the maxima for the flood characteristics of interest
(peak inflow/outflow, flood volume or possibly duration of flooding). Short duration design
rainfalls may be required even on large catchments to check that their occurrence on only
part of the catchment area does not produce a critical flood, and to check that the
magnitudes of the calculated floods vary in a regular manner as the duration of the rainfall
increases.
Book 2, Chapter 3 provides details of design rainfall depths at a grid of points over the whole
of Australia for the range of AEPs and durations of interest for the estimation of Very Rare to
Extreme floods. Except for the PMP, these design rainfall depths are point rainfalls at the
grid point location; they need to be converted to average catchment rainfalls by application
of the areal reduction factors (ARFs) provided in Book 2, Chapter 4.
General guidance on design spatial rainfall patterns is provided in Book 2, Chapter 6. This
guidance applies to design spatial patterns for the estimation of Very Rare to Extreme floods,
with some more specific guidance provided in Book 8, Chapter 3, Section 9. The limited
information available on spatial patterns of extreme storm events generally precludes the
application of an ensemble of spatial patterns; spatial patterns can be sampled in an
ensemble fashion in stochastic simulation frameworks, but generally a single representative
pattern derived from design rainfall fields (or observed storms) in a larger region is sufficient
for most design purposes.
The guidance provided in Book 2, Chapter 5 for the selection of design temporal rainfall
patterns also generally applies to the range of Very Rare to Extreme floods, with more
specific guidance provided in Book 8, Chapter 3, Section 8. Given the sensitivity of flood
estimates to the high degree of natural variability in the temporal patterns of actual storms, it
is recommended that an ensemble of temporal patterns rather than a single ‘representative’
temporal pattern is applied.
The AEP range covered by regional estimates, referred to as the ‘credible range of
extrapolation’, depends on the number of stations in a region and the length and quality of
their records. For the relatively well gauged parts of Australia this range has been taken as
extending to the 1 in 2000 AEP.
Practitioners should recognise that making available design rainfall estimates for a dense
grid covering the whole of Australia has been achieved at the cost of potentially reduced
accuracy at locations for which long and reliable rainfall records are available. However, the
results of frequency analysis of local records should only be used to fine-tune regional
design rainfall estimates if there is strong evidence confirmed by peer review. In such a
situation the shape of the rainfall frequency curve in the range of Very Rare events should
closely follow the shape indicated by the regional estimate.
Estimates of PMP rainfall have been developed by the Bureau of Meteorology. There
are three generalised methods appropriate for different locations and durations (a simple
screening sketch of their duration and area limits follows this list):
i. the Generalised Short Duration Method (GSDM) is applicable for durations up to six hours
and areas up to 1000 km² (Bureau of Meteorology, 2003a);
ii. the Generalised Tropical Storm Method (GTSMR) is used to estimate PMPs for durations
up to 120 hours and areas up to 150 000 km² in the region of Australia where tropical
storms are the source of the greatest depths of rainfall (Bureau of Meteorology, 2003b);
and
iii. the Generalised Southeast Australia Method (GSAM) is used for durations up to 96 hours
and areas up to 100 000 km² for the region of Australia where tropical storms are not the
source of the greatest depths of rainfall (Bureau of Meteorology, 2006).
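As a screening aid, the duration and area limits quoted in (i) to (iii) can be expressed as a simple function. The Python sketch below encodes only those limits; it is illustrative only, since the choice of method also depends on the zones shown in Figure 8.3.1 and, for areas such as western Tasmania, on site-specific advice from the Bureau of Meteorology. The function name and the zone flag are assumptions.

```python
def candidate_pmp_methods(duration_hours, area_km2, in_gtsmr_zone):
    """Screen the generalised PMP methods using the duration and area limits quoted above.

    in_gtsmr_zone: True if the catchment lies in the GTSMR (tropical storm) zone,
    False if it lies in the GSAM zone. Zone boundaries are shown in Figure 8.3.1
    and are not encoded here.
    """
    methods = []
    if duration_hours <= 6 and area_km2 <= 1000:
        methods.append("GSDM")
    if in_gtsmr_zone and duration_hours <= 120 and area_km2 <= 150000:
        methods.append("GTSMR")
    if not in_gtsmr_zone and duration_hours <= 96 and area_km2 <= 100000:
        methods.append("GSAM")
    return methods

# Example: 36 hour duration on a 5000 km^2 catchment in the GSAM zone
print(candidate_pmp_methods(36, 5000, in_gtsmr_zone=False))  # ['GSAM']
```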
The zones of application for the GTSM and GSAM methods are shown in Figure 8.3.1. For
the west coast of Tasmania, data constraints and the size of region have prevented the
development of a generalised method, and thus site-specific advice should be sought from
the Bureau of Meteorology. It should be noted that the PMP estimates provided by the
Bureau of Meteorology are for design rainfall bursts rather than complete storm events,
though these can be adjusted to include likely pre-burst rainfalls using information provided
by Minty and Meighen (1999) and Book 2, Chapter 5.
All PMP estimates are based on a set of simplifying assumptions applied when extrapolating
from the hydrometeorological conditions of observed large events to “maximised conditions”.
They thus represent operational estimates of the PMP and should not be interpreted as
being equivalent to a theoretical upper limit on rainfall for that location i.e., there is a very
small, but finite, probability that the estimates may be exceeded (Book 8, Chapter 3, Section
4).
The method proposed to assign an AEP to the PMP is based on the review by Laurenson
and Kuczera (1999) of the procedures recommended in the 1987 edition of ARR and
subsequent work conducted in both Australia and overseas. More recent research into
regional estimates for the inland zone of south-east Australia (Nathan et al., 2015) provides
some evidence to suggest that the Laurenson-Kuczera recommendations might be slightly
conservative, though the authors concluded that there is insufficient justification to consider
changing either the best estimate or the inferred width of the confidence intervals.
• the recommended AEP values plus or minus two orders of magnitude of AEP should be
regarded as the notional upper and lower limits for the true AEP;
• the recommended AEP values plus or minus one order of magnitude of AEP should be
regarded as the confidence limits with about 75% subjective probability that the true AEP
lies within these limits; and
• the recommended AEP values should be regarded as the best estimates of the AEPs.
The notional 75% confidence and upper and lower limits are shown on Figure 8.3.2. While
the recommended error bands are undoubtedly wider than is desirable, they are regarded as
a realistic assessment of the true uncertainty.
Table 8.3.1. Subjective Probability Mass Function for Describing Uncertainty in Regional
Estimate of the AEP of PMP (Adapted from Laurenson and Kuczera (1999))

Class interval (log10(AEP) − log10(Recommended AEP))
Class bounds        Mid-point    Subjective probability mass in class interval
−2.00 to −1.75      −1.875       0.010
−1.75 to −1.50      −1.625       0.022
−1.50 to −1.25      −1.375       0.038
−1.25 to −1.00      −1.125       0.055
−1.00 to −0.75      −0.875       0.073
−0.75 to −0.50      −0.625       0.090
−0.50 to −0.25      −0.375       0.102
−0.25 to 0.00       −0.125       0.110
0.00 to +0.25       +0.125       0.110
+0.25 to +0.50      +0.375       0.102
+0.50 to +0.75      +0.625       0.090
+0.75 to +1.00      +0.875       0.073
+1.00 to +1.25      +1.125       0.055
+1.25 to +1.50      +1.375       0.038
+1.50 to +1.75      +1.625       0.022
+1.75 to +2.00      +1.875       0.010
In order to incorporate this uncertainty into a risk analysis, Laurenson and Kuczera (1999)
recommend the construction of a probability mass function that provides a 75% chance that
the true AEP lies within one-order-of-magnitude of the recommended AEP, and a 100%
chance that the true AEP lies within two-orders-of-magnitude of the recommended AEP.
Table 8.3.1 presents an example of a probability mass function which meets these
requirements. For example, if the recommended AEP were 1 in 10^6, then there is an 11.0%
chance that the true AEP lies between 1 in 10^6 and 1 in 10^5.75, and there is a 42.4% chance
that it lies between 1 in 10^5.5 and 1 in 10^6.5; the first example corresponds simply to a single
probability interval adjacent to the mid-point of 0.00 in Table 8.3.1, and the second example
corresponds to the central four probability intervals. Although the probabilities are subjective,
they do reflect the considerable uncertainty in the AEP estimates. The uncertainty can be
directly incorporated into a risk analysis by performing an assessment for each of the AEPs
in Table 8.3.1 and weighting the results using the associated subjective probability.
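The weighting of risk analysis results over the subjective distribution of the AEP of the PMP can be implemented directly, as sketched below in Python. The mid-points and probability masses are those of Table 8.3.1; the recommended AEP of 1 in 10^6 and the risk function are hypothetical placeholders.

```python
import math

# Class-interval mid-points (log10(AEP) - log10(recommended AEP)) and subjective
# probability masses from Table 8.3.1.
MIDPOINTS = [-1.875, -1.625, -1.375, -1.125, -0.875, -0.625, -0.375, -0.125,
              0.125,  0.375,  0.625,  0.875,  1.125,  1.375,  1.625,  1.875]
MASSES =    [ 0.010,  0.022,  0.038,  0.055,  0.073,  0.090,  0.102,  0.110,
              0.110,  0.102,  0.090,  0.073,  0.055,  0.038,  0.022,  0.010]

def weighted_risk(recommended_aep, risk_given_aep):
    """Weight a risk measure over the subjective distribution of the AEP of the PMP."""
    total = 0.0
    for midpoint, mass in zip(MIDPOINTS, MASSES):
        aep = 10.0 ** (math.log10(recommended_aep) + midpoint)
        total += mass * risk_given_aep(aep)
    return total

# Hypothetical placeholder: annualised damage assumed proportional to the AEP of the PMP
result = weighted_risk(recommended_aep=1.0e-6,
                       risk_given_aep=lambda aep: 5.0e9 * aep)
print(f"Probability-weighted risk measure: {result:.2f}")
```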
the joint distributions of the independent components that combined to produce PMP, and
this was applied to the coastal GSAM methodology by Pearse and Laurenson (1997).
More recently, Nathan et al. (2015) described the development and application of two largely
independent methods for deriving site-specific estimates of the AEP of PMP. One method
uses the total probability theorem to combine the probabilities of extreme storms occurring in
the transposition region with the likelihood that they were positioned in a manner that would
equal or exceed the estimated target depth on the catchment for the specified duration. The
other method involved the development of a stochastic regression model to estimate
catchment rainfalls from point rainfalls at the key sites, and is based on an approach
developed and applied by Schaefer over a number of years (e.g. MGS Engineering and
Applied Climate Services (2014)).
These studies are mentioned to make the point that methods are available to derive site-
specific estimates that are potentially more defensible than the regional recommendations
described in the preceding section. While there remain a number of research questions
which, if resolved, may increase confidence in such estimates, the undertaking of site-
specific studies does merit practical consideration. Until the required methodology is more
mature such studies would need to be undertaken by specialists with peer review. It is
expected that this option is of most relevance to a minority of cases which involve the design
of infrastructure on large catchments (> 1000 km²) with high potential consequences of
failure.
Estimates of rainfalls for Extreme events beyond the credible limit of extrapolation are
predicated on the following two design rainfall characteristics, namely:
(ii) the rainfall depth and slope of the rainfall frequency curve at the credible limit of
extrapolation.
As discussed above, estimates of the AEP of the PMP are subject to a high degree of
uncertainty and are based on the interpretation of the PMP values as operational estimates
that can be exceeded, rather than upper limiting values of rainfall. This interpretation of the
PMP implies that the frequency curve should not be asymptotic to the horizontal at the
estimated PMP, but rather extend through the PMP at a slope consistent with the shape of
the lower sections of the frequency curve.
• at the starting point of interpolation, the slope of the interpolated curve matches the slope
defined by design estimates from the upper segment of the frequency curve bounded at
the upper end by the credible limit of extrapolation; and,
• the slope of the interpolated curve through the PMP estimate is not constrained to the
horizontal but is determined by the shape of the frequency curve at AEPs more frequent
than that assigned to the PMP.
With reference to Figure 8.3.3, the AEP of 1 in Y2 represents the starting point of the
interpolation (the credible limit of extrapolation), and the AEP of 1 in Y1 represents a lower
value such that between 1 in Y1 and 1 in Y2 the frequency curve can be assumed to be
linear in the log-log domain. XY1 and XY2 represent the design rainfalls with AEPs of 1 in Y1
and 1 in Y2. The slope of the frequency curve at the commencement of the transition, Sgc, is
determined by the slope between the two design values at AEPs of 1 in Y1 and 1 in Y2.
The end point of the interpolation is the AEP of the PMP, which is denoted 1 in YPMP. For
consistency of nomenclature, the magnitude of the PMP is here denoted as XPMP.
X_Y = 10^{(\log X_{Y2} + R_Y \cdot X_G)}    (8.3.1)

where R_Y is defined by the parabola fitted to the coordinates of the two end points (i.e.
between (X_{Y2}, Y_2) and (X_{PMP}, Y_{PMP})) and the slope of the lower end of the
frequency curve (i.e. the straight line between (X_{Y1}, Y_1) and (X_{Y2}, Y_2)).

X_G = \log(X_{PMP} / X_{Y2})    (8.3.3)

y = \log(Y / Y_2) / \log(Y_{PMP} / Y_2)    (8.3.4)

S_{gc} = \log(X_{Y1} / X_{Y2}) / \log(Y_1 / Y_2)    (8.3.5)

S_{gap} = \log(X_{PMP} / X_{Y2}) / \log(Y_{PMP} / Y_2)    (8.3.6)
Siriwardena and Weinmann (1998) recommend that the slope of the frequency curve at the
commencement of the interpolation should be defined by the 1 in 1000 AEP and 1 in 2000
AEP events, i.e. Y1 = 1000 and Y2 = 2000. Thus the start point of interpolation is the credible
limit of extrapolation obtained from the upper limit of design rainfalls obtained from the
regional LH-moments approach (Book 2, Chapter 3).
• different starting points for interpolation (i.e. the AEP of the credible limit of extrapolation
may vary);
• different AEPs assigned to the PMP (ranging from 10^-4 to 10^-7, as discussed in Book 8,
Chapter 3, Section 4); and,
• different ‘shape parameters’ defined by the ratio of the slope of the upper end of the
directly determined frequency growth curve, Sgc, and the slope between the two end
points of the ‘gap’, Sgap (the ‘shape parameter’ Sgc/Sgap ranges between 0.25 and 2.0).
The above concepts are schematically illustrated in Figure 8.3.3. Siriwardena and
Weinmann (1998) have tested the above interpolation procedure on 25 catchments ranging
in size from 25 to 15 000 km² with diverse characteristics. The resultant frequency curves
were shown to be plausible and well behaved for all test catchments. However, it is worth
noting that Hill et al. (2000) reported that the above interpolation approach did not yield
plausible results for GSAM-derived storms for 13 small catchments in South Australia. They
observed that inconsistencies in the relationship between rainfall depth and catchment size
for short- and long-duration events resulted in physically infeasible frequency curves (i.e.
values of Sgc/Sgap exceeded 2.0). This problem was largely obviated by undertaking the
above interpolation procedure in the log-Normal domain (i.e. using the standard normal
variate of the exceedance probabilities rather than the log of the inverse of AEP); in a few
cases it was also necessary to slightly increase the estimate of the AEP of the PMP, but the
degree of change was well less than the notional uncertainty involved. Thus, while the
recommended interpolation procedure has been found to generally yield plausible results, it
may be necessary to make pragmatic adjustments to the method where dictated by
circumstances.
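A minimal Python sketch of the interpolation concept is given below, assuming the quadratic is fitted in the log-rainfall versus log(1/AEP) domain so that it passes through the design rainfall at the credible limit of extrapolation and the PMP, and matches the slope Sgc at the start of the transition. The numerical values are hypothetical, the code is a plain reading of the procedure described above rather than a reproduction of the Siriwardena and Weinmann (1998) method, and the log-Normal variant used by Hill et al. (2000) is not shown.

```python
import math

def interpolated_rainfall(Y, X_Y1, X_Y2, X_PMP, Y1=1000.0, Y2=2000.0, Y_PMP=1.0e7):
    """Design rainfall for AEP 1 in Y, interpolated between the credible limit (1 in Y2)
    and the PMP (assigned AEP 1 in Y_PMP), using a parabola in the log-log domain."""
    S_gc = math.log10(X_Y2 / X_Y1) / math.log10(Y2 / Y1)  # slope at start of transition
    X_G = math.log10(X_PMP / X_Y2)                        # rainfall 'gap' in log space
    Y_G = math.log10(Y_PMP / Y2)                          # AEP 'gap' in log space
    shape = S_gc / (X_G / Y_G)                            # shape parameter Sgc/Sgap
    y = math.log10(Y / Y2) / Y_G                          # normalised position: 0 at Y2, 1 at PMP
    R_Y = shape * y + (1.0 - shape) * y ** 2              # parabola through both end points
    return X_Y2 * 10.0 ** (R_Y * X_G)

# Hypothetical example: catchment rainfalls of 320 mm (1 in 1000) and 350 mm (1 in 2000),
# with a PMP of 800 mm assigned an AEP of 1 in 10^7
for Y in (2.0e3, 1.0e4, 1.0e5, 1.0e6, 1.0e7):
    print(f"1 in {Y:.0e}: {interpolated_rainfall(Y, 320.0, 350.0, 800.0):.0f} mm")
```

With these example inputs the shape parameter Sgc/Sgap is about 1.3, which lies within the 0.25 to 2.0 range noted above; if it fell outside that range, a pragmatic adjustment such as interpolation in the log-Normal domain would be indicated.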
There are three broad design categories for which non-standard durations may be required:
• Very Rare event rainfalls for durations intermediate to multiples of 24 hour periods;
• Very Rare event design rainfalls for durations less than 24 hours; and
• Design rainfalls for very long durations (ie durations longer than those obtainable from any
design rainfall database).
The Bureau of Meteorology is scheduled to produce very rare rainfalls for durations less than
24 hours and when available these should be used to estimate rainfall up to the credible
limit. Until this information is released, the growth factors derived by Jordan et al. (2005)
should be used. These estimates are based on the analysis of data from ten pluviograph
sites around Australia. Melbourne had the longest period of record at 130 years. Five of the
stations used (Darwin, Sydney, Hobart, Adelaide, and Perth) had over 80 years of record
each. The frequency analysis was undertaken using the simple “station year” method as the
data satisfied the required assumptions of independence and homogeneity (the storms were
largely derived from thunderstorm or deeply convective events). For the ten stations
analysed, this pooled data set represents a sequence of around 800 station years. Non-
dimensional frequency curves were derived for eight durations varying between 0.5 and 12
hours. The mean growth curve obtained from these distributions fell well within the 90%
confidence limits (refer to Table 8.3.2).
Pending the outcome of more comprehensive analyses, it is recommended that the growth
curve factors in Table 8.3.2 be used for design purposes. Rainfall depths for durations
between 0.5 and 12 hours can be obtained by simply multiplying the relevant 1 in 100 AEP
design rainfall by the frequency factors shown in Table 8.3.2. It should be noted that these
factors represent the characteristics of events that are associated with thunderstorms, or
deeply convective, storm activity and are derived from analyses that are largely independent
of the data and procedures described in Book 2, Chapter 3. Accordingly, in some locations
there may be the potential for significant discontinuity in growth factors between the values
in Table 8.3.2 and those for longer duration events (24 hours and longer). If this is the case
then it may be necessary to smooth the growth factors to ensure that the tails of the
frequency curves do not cross, and that the rainfall depths vary in a consistent manner
across storm duration and exceedance probability.
Table 8.3.2. Growth Curve Factors for Derivation of Sub-Daily Design Rainfalls Standardised
by the 1 in 100 AEP Rainfall Depth
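A minimal sketch of how such growth factors are applied is shown below; the factor values are hypothetical placeholders rather than the values of Table 8.3.2, and the comparison with a 24 hour depth simply illustrates the consistency check described in the preceding paragraph.

```python
# Hypothetical growth curve factors (ratio to the 1 in 100 AEP depth) for a 6 hour
# duration; the actual factors should be taken from Table 8.3.2.
GROWTH_FACTORS_6H = {2000: 1.25, 10000: 1.60, 100000: 2.10}  # keyed by 1 in Y AEP

def design_rainfall_6h(depth_100_mm, one_in_Y):
    """Scale the 1 in 100 AEP 6 hour design rainfall by the relevant growth factor."""
    return depth_100_mm * GROWTH_FACTORS_6H[one_in_Y]

# Consistency check against a (hypothetical) 24 hour depth at the same AEP: the shorter
# duration depth should not exceed the longer duration depth, otherwise the growth
# factors should be smoothed so that the frequency curves do not cross.
depth_6h = design_rainfall_6h(85.0, 10000)
depth_24h = 180.0
if depth_6h > depth_24h:
    print("Smooth the growth factors: 6 hour depth exceeds the 24 hour depth")
print(f"1 in 10000 AEP, 6 hour depth: {depth_6h:.0f} mm")
```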
The approach to solving design problems involving long critical durations is in essence a
joint probability problem. In special circumstances the problem may involve the assessment
of joint probabilities of extreme storm sequences, but when considering issues associated
with reservoir outflow floods, the issue of storm sequences over extended periods may be
implicitly solved by undertaking a joint probability analysis of inflow floods and initial reservoir
volume (Book 8, Chapter 7, Section 2).
In addition, Scorah et al. (2015) undertook an analysis of areal antecedent rainfalls in the
inland GSAM region for periods ranging between 7 days and 24 months using 113 years of
gridded data. The analysis was undertaken for storm areas of 750 km² and 1860 km². They
concluded that there is no correlation between pre-storm rainfalls and storm severity for the
extremes considered, and thus the two processes could be treated independently in joint
probability analyses.
Book 8, Chapter 3, Section 7 describes typical design situations where seasonal estimates
of design rainfalls for Rare to Extreme events may be required. While it would appear
sensible in these design situations to deal explicitly with seasonal effects, there are a
number of practical and theoretical issues that are not easily resolved. Some of the issues
related to the derivation of seasonal design rainfalls are discussed below.
et al., 2006). Similar analyses could be undertaken using seasonally censored data for other
hydrometeorologically homogeneous regions, where the likelihood of rainfalls occurring in
different seasons could then be applied to the Very Rare design rainfalls which have been
derived on an annual basis (as described in Book 8, Chapter 3, Section 7).
The first approach is based on the assumption that factors other than dew point (and factors
deriving from that) affecting the value of PMP do not significantly vary with season. This is
consistent with the Bureau of Meteorology’s assumption that each season has its own PMP;
in other words, the magnitude of the seasonal PMP is different for different months of the
year. It also follows that the probability of experiencing a PMP event of different magnitude in
any month of the year is equal.
This interpretation means that the exceedance probability of a PMP event occurring in a
specific season of the year is proportional to the fraction of the year occupied by that
season, but it does not yield directly an estimate of the exceedance probability of a seasonal
PMP. The additional constraint to be considered follows from an argument based on extreme
value theory, namely that the sum of the exceedance probabilities of events of the same
magnitude for the different seasons should add to the AEP of the annual rainfall event of that
magnitude.
In practice, an iterative approach needs to be adopted, using the product of the AEP of the
annual PMP and the fraction of the year occupied by the season as an initial (lower bound)
estimate of the exceedance probability of the seasonal PMP estimates. These initial
estimates are shown as hollow circles in Figure 8.3.4. A segment of the complete design
rainfall frequency curve for each season then needs to be drawn between the rainfall depths
of the largest and smallest seasonal PMP estimates (indicated by broken lines in
Figure 8.3.4). Over the upper range of the seasonal rainfall magnitudes, the curve segments
can be assumed to be parallel to the annual frequency curve. The addition of the AEPs
corresponding to the annual PMP estimated from each seasonal rainfall curve will generally
yield an AEP less than the AEP assigned to the annual PMP. The ratio of these two AEP
estimates defines the correction factor (> 1.0) that needs to be applied to each of the initially
estimated AEPs of the seasonal PMP. This correction is indicated by arrows in Figure 8.3.4,
and the final AEP estimates of seasonal PMPs are shown as filled in circles.
Figure 8.3.4. Hypothetical Frequency Curves for Seasonal and Annual Design Rainfalls
Based on the AEP Assigned to the Annual PMP (adapted from Laurenson and Kuczera
(1998), using Four Seasons of Relative Lengths 0.33, 0.17, 0.33, 0.17 and Relative Seasonal
PMP Depths of 1.0, 0.85, 0.6, 0.85, for Summer, Autumn, Winter, and Spring, respectively)
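The correction of the initial seasonal AEP estimates can be illustrated with a simple calculation. The relative season lengths (0.33, 0.17, 0.33, 0.17) and relative seasonal PMP depths (1.0, 0.85, 0.6, 0.85) below are those quoted in the caption of Figure 8.3.4; the assigned AEP of the annual PMP, the assumed log-log slope of the annual frequency curve near the PMP, and the single-pass correction are illustrative assumptions only.

```python
# Relative season lengths and relative seasonal PMP depths from the Figure 8.3.4 caption
SEASONS = {"summer": (0.33, 1.00), "autumn": (0.17, 0.85),
           "winter": (0.33, 0.60), "spring": (0.17, 0.85)}

AEP_ANNUAL_PMP = 1.0e-6   # assumed AEP assigned to the annual PMP (illustrative)
LOGLOG_SLOPE = -10.0      # assumed slope d(log10 AEP)/d(log10 depth) near the PMP

# Initial (lower bound) estimates: AEP of the annual PMP times the fraction of the year
initial = {s: AEP_ANNUAL_PMP * frac for s, (frac, _) in SEASONS.items()}

# AEP of the annual PMP depth read from each seasonal curve segment, with the segments
# assumed parallel to the annual frequency curve in the log-log domain
aep_at_annual_pmp = {s: initial[s] * (1.0 / depth) ** LOGLOG_SLOPE
                     for s, (_, depth) in SEASONS.items()}

# Correction factor (> 1.0) so that the seasonal AEPs of the annual PMP depth sum to the
# AEP assigned to the annual PMP
correction = AEP_ANNUAL_PMP / sum(aep_at_annual_pmp.values())

for season, aep in initial.items():
    print(f"{season}: corrected AEP of seasonal PMP = 1 in {1.0 / (aep * correction):,.0f}")
```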
The second approach proposed by Laurenson and Kuczera (1998), which is not fully
developed at this stage, does not use the upper limit concept, but recommends the
derivation of separate extreme rainfall frequency curves for each season, using the joint
probability method (Pearse and Laurenson, 1997).
In the absence of better design information, and noting the foregoing discussion, the
following recommendations should prove adequate for most design problems where
seasonal effects are important.
expressed as fractions of the annual estimates. The seasonal fractions can then be
converted to design rainfall depths by multiplying by the (annual) design rainfall values
obtained from the standard information provided in Book 2. Note that the inherent
uncertainties in fitting the tails of the distributions to observed seasonal data may mean that
for a given rainfall magnitude the sum of the seasonal exceedance probabilities does not equal
the annual exceedance probability. If this problem occurs, one or more of the seasonal
frequency curves will need to be adjusted to ensure that the seasonal and annual
exceedance probabilities are consistent.
The concept of a single ‘representative’ temporal pattern that allows a probability neutral
transformation of design rainfall inputs to flood outputs of the same AEP is basically flawed,
as this transformation is quite sensitive to the routing characteristics of the catchment. This
sensitivity can best be allowed for by applying an ensemble of typical temporal patterns
rather than a single design temporal pattern, as can be done in the Ensemble Event and
Monte Carlo Event approaches.
• short duration point rainfall patterns from GSDM (Bureau of Meteorology, 2003a; Jordan et
al., 2005);
• long duration rainfall patterns for use across Australia (Book 2, Chapter 5); and
• areal temporal patterns developed for the generalised GSAM and GTSM-R PMP methods.
Ensemble sets of areal temporal patterns are available for the latter two sources of data and
there are advantages and limitations to using both sets. The prime advantage to using areal
temporal patterns derived for use with PMP estimates is that they are based on careful
hydrometeorological analysis of storms that are known to be the most extreme in the
historical record. The disadvantage of them is that more extreme storms may have occurred
since development of the methods, and – particularly in the inland GSAM region – the
number of patterns available differs considerably between the various combinations of storm areas and durations.
Conversely, as described by Podger et al. (2016) the areal patterns provided in Book 2,
Chapter 5 are based on the largest storms that have occurred in eleven regions across
Australia. The limitation of these patterns with respect to extreme events is that they were
selected on the basis of the depth (rather than rarity) of their associated rainfalls, and also
that they were derived for smaller regions than used in development of the PMP methods.
As such, it is likely that these patterns are associated with events that are less severe than
those considered in the PMP analyses. Conversely, their main advantage is that they may
have included extreme events that have occurred since completion of the PMP analyses,
and also that they provide a consistent set of ten patterns for a more comprehensive range
of storm area and duration combinations.
Further analysis of the efficacy of these different data sets for application to Very Rare and
Extreme events is warranted, but at present it is recommended that the PMP method areal
temporal patterns be used to derive all design events rarer than 1 in 100 AEP. That said, it
may be appropriate to use the Book 2, Chapter 5 areal patterns in lieu of the PMP patterns
to reconcile flood estimates with independently derived design flood estimates (as described
in Book 5, Chapter 3, Section 4). If so, it may be prudent to adopt a changing mix of areal
temporal patterns from both sources of data over the Very Rare range to ensure a smooth
transition in flood response over this range of exceedance probabilities. Also, where there is
a paucity of information on areal temporal patterns – such as for some durations in the
inland GSAM zone – it may be necessary to supplement the adopted ensembles using the
patterns provided in Book 5, Chapter 3, Section 4. At present, the best source of ensemble
temporal patterns for use with short duration Very Rare to Extreme events are those derived
by Jordan et al. (2005); these patterns were derived specifically from storms associated with
thunderstorm or deeply convective events.
weighting all the ordinates of the hydrograph is not recommended as the resulting
hydrograph may exhibit a lower peak than either of the individual hydrographs.
In the transition zone between the GTSMR and GSAM regions, temporal patterns from both
the GSAM and GTSMR methods should be applied separately (in conjunction with the
corresponding spatial patterns), and the largest flood adopted.
Table 8.3.3. Selection of Design Burst Temporal Patterns for Different Regions, Durations
and AEPs
Except for catchments with marked rainfall gradients, the spatial distribution of rainfall
generally has less influence on the shape and size of the resulting hydrograph than temporal
patterns. Thunderstorm and tropical patterns can have an appreciable effect on flood
magnitude, particularly if the catchment contains extensive drowned reaches resulting from
reservoir inundation. For such catchments, small variations in the spatial distribution of
design rainfall may have a marked impact on the magnitude of the flood peak. It is worth
assessing the sensitivity of the catchment floods to variations in spatial patterns, and if this is
not easily resolved then it would be necessary to include spatial patterns as an ensemble in
Monte Carlo analyses.
ii. Extreme short duration events: The GSDM thunderstorm patterns (Bureau of
Meteorology, 2003a) should be used. The spatial pattern should generally be centred over
the catchment and orientated so that the smallest possible ellipse covers the catchment
boundary.
iii. Extreme long duration events in south-eastern Australia: The spatial patterns provided
with GSAM estimates (Minty et al., 1996) should be applied to all Very Rare to Extreme
events. The spatial patterns are based on modified 72 hour 50 year ARI intensity fields of
design rainfalls from Book 2, and they incorporate the combined effect of variations in
elevation, slope, aspect and geographical location. These patterns should not be rotated
or translated.
iv. Extreme long duration events in tropical regions: The spatial patterns provided with
GTSMR estimates (Bureau of Meteorology, 2003b) should be applied to all Very Rare to
Extreme events. The spatial pattern should be positioned to maximise the rainfall depth
within the catchment.
v. Extreme long duration events in the transition zone: In the transition zone between the
GSAM and GTSMR regions, both sets of spatial patterns should be used (in conjunction
with the corresponding temporal patterns) and the highest resulting flood should be
adopted.
Table 8.3.4. Selection of Design Spatial Patterns for Different Regions, Durations and AEPs
Descriptive event class: Very Rare
Range of AEP: Beyond 1 in 100 to the credible limit of extrapolation
All durations and regions: Based on design rainfalls for Very Rare events derived separately for each sub-catchment

Descriptive event class: Extreme
Range of AEP: Beyond the credible limit of extrapolation
Short durations for the whole of Australia, up to 3 or 6 hours (GSDM): GSDM spatial patterns
Intermediate durations between GSDM and GSAM: both GSAM and GSDM spatial patterns
Long durations (24 hours and longer) in southeast Australia (GSAM method): GSAM spatial patterns
Long durations (longer than 6 hours) in tropical regions (GTSMR method): GTSMR spatial patterns
3.10. References
Bureau of Meteorology (2006), Guidebook to the Estimation of Probable Maximum
Precipitation: Generalised Southeast Australia Method. Hydrometeorological Advisory
Service, Bureau of Meteorology.
Durrant, J.M. and Bowman, S. (2004), Estimation of Rare Design Rainfalls for Western
Australia: Application of the CRC-FORGE Method, Surface Water Hydrology Report Series
Report No. HY17, Department of Environment, Government of Western Australia.
Durrant, J.M, Nandakumar, N. and Weinmann, P.E. (2006), Forging Ahead: Incorporating
Seasonality into Extreme Rainfall Estimation for Western Australia. Australian Journal of
Water Resources, 10(2), 195-205.
Fontaine, T.A. and Potter, K.W. (1989), Estimating Probabilities of Extreme Rainfalls, Journal
of Hydraulic Engineering, Volume 115, pp. 1562-1575, American Society of Civil Engineers.
Green, J.H., Beesley, C., The, C., Podger, S. and Frost, A. (2016), Comparing CRC-FORGE
estimates and the New Rare Design Rainfalls. Proc. ANCOLD Conference, Adelaide.
Herron, A., Stephens, D., Nathan, R. and Jayatilaka, L. (2011), Monte Carlo Temporal
Patterns for Melbourne. In: Proc 34th World Congress of the International Association for
Hydro- Environment Research and Engineering: 33rd Hydrology and Water Resources
Symposium and 10th Conference on Hydraulics in Water Engineering. Barton, A.C.T.:
Engineers Australia, pp: 186-193.
Hill, P.I., Nathan, R.J., Rahman, A., Lee, B.C., Crowe, P. and Weinmann, P.E. (2000),
Estimation of extreme design rainfalls for South Australia using the CRC-FORGE method.
In: Proceedings of 3rd international hydrology and water resources symposium interactive
hydrology, IE Aust., Perth, Western Australia, 1: 558-563, 20-23 Nov 2000.
Jordan, P., Nathan, R., Mittiga, L. and Taylor, B. (2005), 'Growth Curves and Temporal
Patterns for Application to Short Duration Extreme Events'. Aust J Water Resour, 9(1),
69-80.
Kennedy, M.R. and Hart, T.L. (1984), The estimation of probable maximum precipitation in
Australia. Civ. Engg Trans, Inst Engrs Aust, CE26: 29-36.
Kyselý, J. (2008), A Cautionary Note on the Use of Nonparametric Bootstrap for Estimating
Uncertainties in Extreme-Value Models. J. Appl. Meteor. Climatol., 47: 3236-3251.
Lang, S., Zhang, J., Scorah, M., and Nathan, R. (2016), Characterising the location and
rarity of annual maxima rainfall in the GTSM-R coastal zone. Proc. Hydrology and Water
Resources Symposium, Melbourne.
Laurenson, E.M. and Kuczera, G.A. (1999), Annual exceedance probability of probable
maximum precipitation, Aus. J. Water Resour., 3(2), 189-198.
Laurenson, E.M. and Kuczera, G.A. (1998), Annual exceedance probability of probable
maximum precipitation - report on a review and recommendations for practice. Report
prepared for the NSW Department of Land and Water Conservation and the Snowy
Mountains Hydro-Electric Authority.
MGS Engineering and Applied Climate Services (2014), Stochastic modelling of floods for
the Bridge River system. Consulting report prepared for BC Hydro.
McConachy, F.L.N., Weinmann, P.E., Nathan, R.J. and Mein, R.G. (1997), Confidence limits
for rainfall frequency curves in the extreme rainfall range. Proceedings, 24th Hydrology and
Water Resources Symposium, Auckland, New Zealand, pp: 333-338.
Minty, L., Meighen, J. and Kennedy, M.R. (1996), Development of the Generalised
Southeast Australia Method for estimating Probable Maximum Precipitation, Hydrology
Report Series No.4, Hydrology Unit, Bureau of Meteorology, Melbourne.
Nandakumar, N., Weinmann, P.E., Mein, R.G. and Nathan, R.J. (1997), Estimation of
extreme rainfalls for Victoria using the CRC-FORGE method (for rainfall durations 24 to 72
hours). CRC Research Report 97/4, Melbourne, Cooperative Research Centre for
Catchment Hydrology.
Nathan, R.J., Scorah, M., Jordan, P., Lang, S., Kuczera, M., Schaefer, M. and Weinmann,
P.E. (2015), A Tail of Two Models: Estimating the Annual Exceedance Probability of
Probable Maximum Precipitation. Proc. Hydrology and Water Resources Symposium,
Hobart.
Nathan, R.J., Weinmann, P.E. and Minty, L. (1999), Estimation of the Annual Exceedance
Probability of PMP Events in Southeast Australia, Aus. J. Water Resour., 3(1), 143-154.
Pearse, M.A. and Laurenson, E.M. (1997), Real probabilities for real floods. Proc., ANCOLD
1997 Conference on Dams. ANCOLD.
Podger, S., Babister, M. and Brady, P. (2016), Deriving temporal patterns for areal rainfall
bursts. Proc. 37th Hydrology and Water Resources Symposium 2016: Water, Infrastructure
and the Environment. Barton, ACT: Engineers Australia.
Scorah, M., Lang, S. and Nathan, R. (2015), Utilising AWAP gridded rainfall dataset to
enhance hydrology studies. Proc. Hydrology and Water Resources Symposium, Hobart.
Weinmann, P.E., Rahman, A., Hoang, T., Laurenson, E.M. and Nathan, R.J. (1998), A new
modelling framework for design flood estimation. 'HydraStorm'98', Adelaide, Australia,
Institution of Engineers, Australia, pp: 393-398.
Wilson, L.L. and Foufoula-Georgiou, E. (1990), Regional Rainfall Frequency Analysis via
Stochastic Storm Transposition, J Hydraul. Engin., 116(7), 859-880, American Society of
Civil Engineers.
Chapter 4. Estimation of Rainfall
Excess for Very Rare to Extreme
Events
Rory Nathan, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
The specific recommendations in Book 8, Chapter 4, Section 3 apply to loss parameters for
the Initial Loss – Continuing Loss (IL-CL) model, as a large body of relevant experience has
been accumulated over many years. However, other loss models may be used if they can be
shown to be more appropriate in the specific situation.
For extreme rains and floods, a much greater proportion of a catchment may become
saturated during the event than is the case for most floods in the observed range. Also,
during extreme rainfalls, vegetation may be stripped from the catchment, thus resulting in an
increase in the volume and speed of the overland flow component of runoff (Kemp and
Daniell, 1997). Any evidence relevant to the changed behaviour of the catchment under
extreme rainfall conditions should be considered when estimating design losses and the
resulting design floods.
4.1.2. Losses Associated With Design Storms and Design
Bursts
When considering the adoption of design losses it is necessary to understand the distinction
between design bursts of rainfall, and design storms. The difference between the two
concepts and the implications of the two concepts for the estimation of design losses are
explained in Book 5, Chapter 3, Section 3. The selection of design loss values must take into
consideration the manner in which the design information was derived, and whether the
losses are to be applied to design storms or design bursts. More specifically, there is a
significant difference between the initial loss values applicable to design storms and design
bursts, and how these loss values can be expected to vary with event magnitude (see Book
8, Chapter 4, Section 1).
The discussion in Book 5, Chapter 3, Section 4 makes it clear that Storm Initial Loss (ILs)
and Burst Initial Loss (ILb) are expected to show a different degree of variation with event
magnitude. The two types of initial loss for rural catchments are therefore treated separately
in Book 8, Chapter 4, Section 1 and Book 8, Chapter 4, Section 1.
The interpretation of Proportional Loss (PL) as the unsaturated proportion of the catchment
implies that with larger storm events the unsaturated proportion of the catchment is reducing
and thus the proportional loss also reduces. As it is difficult to extrapolate the rate of this
reduction to Extreme events, the proportional loss model is generally considered less
appropriate for estimating Very Rare to Extreme floods. On the other hand, the Continuing
Loss (CL) is expected to approach a limiting value for saturated catchment conditions, and
this limiting value is the appropriate design loss rate for all events for which the saturation
threshold has been exceeded. More detailed discussion of the variation of CL with event
magnitude for rural catchments is given in Book 8, Chapter 4, Section 1.
An analysis of the rainfall conditions prior to the largest storms on record in the GSAM region
of south-eastern Australia (Minty and Meighen, 1999) indicated qualitatively no propensity
for “greater than normal” rainfall in the 15 days immediately preceding these large storms.
The analysis by Minty and Meighen (1999) shows that about 75% of the largest storms on
record in south-eastern Australia were preceded by rainfall totals of less than 10% of the
depth of the storm. Further, the analysis revealed that the average length of the dry period
between pre-storm rainfall and the storms was about 8 days.
The available evidence thus suggests that there is no need to vary ILs with event
magnitudes up to the largest event on record. Further research is desirable to confirm the
applicability of these findings of little or no variation of ILs with event magnitude to regions
outside south-eastern Australia.
4.1.5. Consideration of Joint Probabilities
Where losses are considered to have an important influence on the design floods of interest,
it is recommended that they be simulated using joint probability approaches to minimise bias
in the transformation of rainfalls to floods. In the extreme range of floods it would be
expected that losses are generally less important than temporal patterns, and hence where
volume is not important it may well be sufficient to model losses in a deterministic fashion.
The selection of loss parameters for Very Rare to Extreme design events should allow for
the following factors:
• type of design rainfall data, i.e. design storm or design burst (Book 8, Chapter 4, Section
1);
Specific recommendations are given for the selection of design initial loss in (Book 8,
Chapter 4, Section 3 for design bursts and Book 8, Chapter 4, Section 3 for design storms)
and design continuing loss (Book 8, Chapter 4, Section 3). Except for storm initial loss,
different design situations are distinguished depending on the event magnitude. Where
appropriate, different recommendations are given for specific geographic regions, consistent
with the availability of design information for different parts of Australia.
Beyond the credible limit of flood extrapolation, it is not possible to check the
appropriateness of the adopted loss values against independent flood estimates, and thus it
is necessary to adopt a more prescriptive, conservative approach. The recommendations in
Book 8, Chapter 4, Section 3 and Book 8, Chapter 4, Section 3 reflect this philosophy.
4.3.2. Rural Initial Loss Values for Use with Design Bursts
The selection of initial loss for use with design bursts of rainfall is problematic as the depth of
rainfall antecedent to the burst varies with both storm duration and event magnitude.
Traditionally, it has been assumed that the net bias resulting when storm losses obtained
from calibration are applied with design bursts is negligible. However, the available evidence
for flood events more frequent than the 1 in 100 AEP event suggests that the losses
obtained from calibration to large historic floods are too low (e.g. Walsh et al. (1991) and
Hill et al. (1996b)).
The expected reduction of ILb with reducing burst duration and increasing event magnitude
means that the following recommendations have to differentiate between event magnitudes.
Where possible, reconciliation with independently derived design flood estimates should also
be attempted, as described in Book 5, Chapter 3, Section 3.
Alternatively, it may be assumed that the losses vary linearly on a log-log plot of losses
versus AEP; this assumption is more consistent with the interpolation procedure used for
design rainfalls (Book 8, Chapter 3, Section 5), and is also more amenable to calculation.
For example, if initial loss values L1 and L2 were assigned, respectively, to events of 1 in Y1
and 1 in Y2 AEP, then the loss value to be used in conjunction with a design burst of
intermediate 1 in Y AEP could be interpolated using the following equation:
\[
\log L_Y = \log L_1 + \frac{\log L_2 - \log L_1}{\log Y_2 - \log Y_1}\left(\log Y - \log Y_1\right) \tag{8.4.1}
\]
A zero loss value is again to be approximated by a small value, say 0.1 mm. The practical
difference between the use of Equation (8.4.1) and the assumption of log-Normal variation is
negligible given the uncertainty of loss rate variation.
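As a worked illustration, Equation (8.4.1) can be applied as in the following sketch; the loss values and AEPs shown are purely illustrative.

```python
import math

def interpolate_initial_loss(y, y1, l1, y2, l2):
    """Interpolate an initial loss for a 1 in y AEP design burst using
    Equation (8.4.1), i.e. linear variation of log(loss) with log(Y).
    A zero loss should be entered as a small value (e.g. 0.1 mm)."""
    log_l = (math.log10(l1)
             + (math.log10(l2) - math.log10(l1))
             / (math.log10(y2) - math.log10(y1))
             * (math.log10(y) - math.log10(y1)))
    return 10 ** log_l

# Illustrative values only: 15 mm at 1 in 100 AEP and 0.1 mm at 1 in 10 000 AEP
il_1_in_1000 = interpolate_initial_loss(1000, 100, 15.0, 10000, 0.1)
```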
In this context of selecting a design loss, some care and interpretation may be required in
assessing the minimum value in observed floods. Sometimes an apparently anomalous
value occurs that is appreciably lower than all other derived values. As this could have
resulted from the effects of data errors, it may be desirable to neglect the anomalously low
value in selecting the minimum value.
• Humid and sub-humid regions of south-eastern Australia: For long duration rainfalls in this
region, temporal patterns of pre-burst rainfall are available (Jordan et al., 2005; Minty et al.,
1999), and thus the procedures provided in Book 8, Chapter 4, Section 3 for design
storms should be used. If PMP design bursts are used directly, and for shorter duration
design bursts, an ILb value of zero should be selected.
• Tasmania: For western Tasmania, catchments are likely to be saturated, and 100% runoff
(i.e. ILb=0) is appropriate for design. Loss values for south-eastern Australia should apply
to eastern Tasmania.
• Arid and Semi-Arid regions: The few data available indicate that no initial loss should be
deducted from the PMP.
• Western Australia: For the forested south-west region, the following values of ILb are
recommended:
• Winter: ILb = 0
• Summer: ILb = 200 mm from the high absorbing gravels and sands of the lateritic
uplands and zero from the remainder of the catchment.
4.3.3. Rural Initial Loss Values for Use with Design Storms (ILs)
Pre-burst temporal patterns are available for the whole of Australia, and their use to
construct complete design storm events provides a more logical basis for the derivation of
hyetographs of rainfall excess.
Unless specific evidence of significant variation of initial loss with event magnitude or
duration has been found in the region of interest, the storm initial loss values derived by the
procedures in Book 5, Chapter 3, as representative (median) values from large events, are
applicable to flood estimation over the whole range, from Infrequent floods to the PMP
Flood, and for all durations.
4.3.4. Rural Continuing Loss Values (CL) for use with Design
Bursts and Design Storms
4.3.4.1. Rare to Very Rare Events
The CL values derived by the procedures in Book 8, Chapter 4, Section 2 are based on the
analysis of moderate to large events and are thus directly applicable to events in that range.
For CL values determined by reconciliation with independently derived flood estimates
(Book 5, Chapter 3, Section 4), the range of application depends on the credible limit of
extrapolation of floods for the particular design situation.
For short duration events, losses are very small compared with depths of precipitation, and
variations in the value adopted will have little effect on the magnitude of the resulting flood.
For longer storms, the rate of loss and the form of loss adopted can have a considerable
effect on estimated floods, particularly on flood volumes, and greater care is needed in their
selection. An example of the variation of maximum pond level with loss values is given by
Brown (1982).
• Humid and sub-humid regions of north-eastern and northern Australia: higher CL values
than for south-eastern Australia may be appropriate, but values greater than 3 mm/h
would be unusual.
• Tasmania: For western Tasmania, catchments are likely to be saturated, and zero
continuing loss is considered appropriate for design. Loss rates for south-eastern Australia
should apply to eastern Tasmania.
• Arid and semi-arid regions: The few data available indicate that a slightly higher value of
loss rate may be appropriate than for more humid regions in the south-east of the
continent. It is unlikely that this value would be greater than 3 mm/h.
• Western Australia: For the forested south west region, losses should be estimated using a
variable proportional loss model based on catchment storage, as described in Book 5,
Chapter 3 and Pearce (2011). For the remainder of the State, it is considered unlikely that
CL would be greater than 3 mm/h.
4.4. References
Brown, J.A.H. (1982), A review of flood estimation procedures. Proc. of the Workshop on
Spillway Design, Dept of Natl Devel. and Energy, AWRC Conf. Series No.6, pp: 84-108.
Hill, P.I., Mein, R.G. and Siriwardena, L. (1998), How much rainfall becomes runoff? Loss
modelling for flood estimation. CRC for Catchment Hydrology Industry Report 98/5, Dept. of
Civil Eng., Monash University.
Hill, P.I., Maheepala, U.K., Mein, R.G., Weinmann, P.E. (1996a), Empirical analysis of data to
derive losses for design flood estimation in south-eastern Australia. CRC for Catchment
Hydrology Report 96/5, Dept. of Civil Eng., Monash University.
Hill, P.I., Mein, R.G., Weinmann, P.E. (1996b), Testing of Improved Inputs for Design Flood
Estimation in South-Eastern Australia. CRC for Catchment Hydrology Report 96/6, Dept. of
Civil Eng., Monash University.
Jordan, P., Nathan, R., Mittiga, L. and Taylor, B. (2005), 'Growth Curves and Temporal Patterns
for Application to Short Duration Extreme Events'. Aust J Water Resour, 9(1), 69-80.
Kemp, D. and Daniell, T.M. (1997), The Olary Floods - February 1997, Aqua Australis, 1(3),
3-7.
Laurenson, E.M. and Pilgrim, D.H. (1963), Loss rates for Australian catchments and their
significance. I.E. Aust., The Journal, Jan.-Feb. 1963, pp: 9-24.
Pearce, L. J. (2011), Regional runoff coefficients for summer and winter design flood events
in south-west Western Australia. In Proceedings of the 33rd Hydrology and Water Resources
Symposium, Barton, A.C.T., Engineers Australia, pp: 331-338.
Phillips, B., Goyen, A., Thomson, R., Pathiraja, S. and Pomeroy, L. (2014), Project 6 - Loss
models for catchment simulation - urban catchments, Stage 2 Report. Report prepared for
Australian Rainfall and Runoff Revision, Engineers Australia.
Walsh, M.A., Pilgrim, D.H. and Cordery, I. (1991), Initial losses for design flood estimation in
New South Wales. Proc. International Hydrology and Water Resources Symposium, Perth,
I.E. Aust., Nat. Conf. Publ. (91/19), 283-288.
Weinmann, P.E., Rahman, A., Hoang, T., Laurenson, E.M. and Nathan, R.J. (1998), A new
modelling framework for design flood estimation. 'HydraStorm'98', Adelaide, Australia,
Institution of Engineers, Australia, pp: 393-398.
Chapter 5. Selection, Configuration,
and Calibration of Hydrograph
Models
Rory Nathan, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
5.1. General
In Australia, both unit hydrograph models and runoff-routing models have traditionally been
applied for event-based flood hydrograph estimation but over the last decade there has been
a shift to almost exclusive use of runoff-routing models. In recent times attention has also
been given to the use of “rain-on-grid” approaches with two dimensional (2D) hydraulic
models. Discussion on the general catchment modelling concepts and the application of
hydrograph and catchment models to the estimation of design floods is provided in Book 4,
Book 5 and Book 7.
Guidance on the use of rain-on-grid approaches for estimation of Very Rare to Extreme
floods is not provided here for two reasons: firstly, as discussed in Book 5, Book 6 and Book
7, the techniques have not been well researched or validated at this point in time and their
use to simulate overland flow routing raises a number of difficulties which are likely to be
exacerbated under extreme event conditions; secondly, such models are generally focussed
on applications in floodplain management where the design risks of interest are at the lower
range of events relevant to the guidance presented in this Book. However, the use of
hydraulic models to simulate extreme floods does have some theoretical merit, and it is
hoped that with further research guidelines can be developed that better integrate the
benefits of these two approaches.
For event-based models, the quality of the modelled flood hydrographs depends on three
components of the modelling process: (i) the basic model capabilities and constraints, (ii) the
quality of the catchment representation in the model, and (iii) the appropriateness of the
selected parameter values in the flood range of interest. General recommendations for these
three components in the context of estimating Very Rare to Extreme floods are provided
separately in Book 8, Chapter 5, Section 2 to Book 8, Chapter 5, Section 5, but it should be
recognised that the components are closely linked. The theoretical advantages of a more
flexible model that allows a more accurate representation of the important catchment
features can only be realised if suitable data or design information exists to identify
appropriate model parameter values.
In the application of runoff-routing models, a distinction needs to be made between
essentially rural catchments and substantially urbanised catchments. Book 8, Chapter 5,
Section 4 deals with the determination of model parameters for essentially rural catchments,
while the special aspects of model parameterisation for urban catchments are dealt with in
Book 8, Chapter 5, Section 5.
While there is evidence of non-linear routing response over the range of observed floods in
most natural catchments, it is unclear to what extent this effect persists to the range of Very
Rare to Extreme floods. In this range of flood magnitudes the routing response depends on
how the efficiency of flow and the available flood storage change with increasing flood
magnitude (or reducing flood frequency). The recommended procedures in Book 8, Chapter
5, Section 4 are based on these considerations. The degree of non-linearity of catchment
behaviour and its effects are discussed by Pilgrim (1986), together with the background to
the recommended procedures.
catchment, i.e. it should be internally consistent to allow matching of observed hydrographs
at the catchment outlet and at required internal points. It should be noted that some of the
simple hydrograph models in current use only provide an adequate representation of internal
flows for locations near the catchment outlet. For other internal locations it may be
necessary to increase the degree of catchment sub-division (and re-calibrate the model) to
conform with the recommendations for the minimum number of upstream sub-areas (Book
5).
Extreme floods, the variation in parameter sets for the different subcatchments should, as far
as possible, be directly related to differences in measurable catchment characteristics.
In general, allowance for different catchment conditions can be made more easily by
runoff-routing than by unit hydrograph models. In runoff-routing models the different routing
characteristics of existing or future catchment conditions can be incorporated by the
judicious selection of parameters and the types of routing elements.
experience available at that time (Pilgrim, 1986). There has been limited research since then
to resolve the issue of the appropriate degree of model non-linearity for the estimation of
Extreme flood events. Given the lack of strong research evidence, specific design
recommendations for this range of events must be based on a consensus approach. The
following considerations and recommendations are based on the broad range of views
expressed by different groups of practitioners and current practice in Australia.
The key factor to be considered when selecting parameters for modelling Very Rare to
Extreme events is that the parameters found from calibration to a relatively narrow range of
observed flood events cannot be assumed to apply to the range of more extreme events. As
the magnitude of the event to be modelled increases significantly beyond the range of the
largest observed events, the parameter selection process has to be guided more strongly by
physical/hydraulic consideration of how the response of the catchment is expected to
change when exposed to more extreme rainfall events. This will depend on the physical
characteristics of hillslopes and on-stream and floodplain characteristics such as breakout
points, threshold levels and the availability of significant off-stream storage areas in the
lower part of the catchment.
The model should thus be calibrated to events over a range of flood magnitudes up to the
largest observed event, and the results analysed for the presence of any trends. If
appropriate data are not available at the site of interest, consideration should be given to
transferring parameters from a calibrated model of a nearby catchment with similar
characteristics, with appropriate adjustments for differences in catchment size and
characteristics.
The examination of log-log plots of storage versus discharge for particular routing elements
may be helpful in the assessment of calibration results and in identifying parameter variation
with flood magnitude. In assessing the calibration results, it should be borne in mind that the
calibrated parameter values for individual events reflect not only the catchment response to
actual rainfall, but also any errors in the estimated catchment rainfall, in the rating curve
used to establish the observed flood hydrograph, and in the adopted baseflow separation
procedure. The first two types of errors tend to increase with event magnitude.
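Where storage and discharge values can be extracted from calibrated routing elements for a range of events, an indicative power-law fit of the form S = kQ^m can be obtained in log-log space along the following lines. This is a minimal sketch for exploring trends in calibration results; it is not part of any particular routing package.

```python
import numpy as np

def fit_storage_discharge(storage, discharge):
    """Fit S = k * Q**m by linear regression in log-log space.

    storage, discharge : arrays of corresponding S and Q values extracted
    from calibrated routing elements over a range of observed events.
    Returns (k, m); departures of individual events from the fitted line
    may indicate parameter variation with flood magnitude."""
    log_q = np.log10(np.asarray(discharge, dtype=float))
    log_s = np.log10(np.asarray(storage, dtype=float))
    m, log_k = np.polyfit(log_q, log_s, 1)   # slope = m, intercept = log10(k)
    return 10 ** log_k, m
```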
Before applying any calibrated parameter values to modelling of more extreme events, they
should be checked for consistency with the recommendations as discussed Book 8, Chapter
5, Section 4.
The approach is particularly suited to catchments with a good flood peaks record but only
limited hydrograph information. It can also be applied to reconcile rainfall based flood
estimates with flood estimates obtained from paleohydrological procedures (Book 8, Chapter
6, Section 2). Before applying any adjusted parameter values to modelling of more extreme
events, they should be checked for consistency with the recommendations in Book 8,
Chapter 5, Section 4.
There are relatively few published hydrograph modelling studies of very large Australian
flood events (Wong and Laurenson, 1983; Pilgrim, 1986; Sriwongsitanon et al., 1998). The
available evidence points towards reducing non-linearity of catchment response for very
large events (Pilgrim, 1986; Zhang and Cordery, 1999), indicating relatively more catchment
storage for increasing discharge and thus greater attenuation of flood peaks. However, this
tendency may not continue to the range of Extreme events, if flow efficiency also increases
substantially for these events. The conclusion from these studies might also be affected by
the high degree of uncertainty in estimated flow rates for Very Rare to Extreme events: the
apparent tendency towards linearity could alternatively be explained by underestimation of
the true peak flow rate.
The available studies cover only a limited range of catchment conditions, and care should
thus be taken in applying the study results to other catchments. Detailed analysis of other
large flood events and publication of results is highly desirable.
gauged to ungauged catchments for modelling of Very Rare to Extreme events, particular
emphasis should be placed on assessing the similarity of catchment characteristics relevant
to this flood range.
Before applying any regional parameter values to modelling of more extreme events, they
should be checked for consistency with the recommendations provided in the following
section.
While this simplified representation of the relationship between storage and discharge has
been shown to produce satisfactory results over a limited range of flood magnitudes, it is
well known that it fails to adequately represent the variations of flow conditions over a much
wider range of flood magnitudes. As an example, it has been shown that the S-Q
relationship for the transitional stages between in-bank and fully developed floodplain flow is
much more complex (Wong and Laurenson, 1983; Bates and Pilgrim, 1983; Pilgrim, 1986).
The failure of the power-function relationship between S and Q to account for these
complexities expresses itself in different calibrated pairs of k and m values for different flow
ranges.
The available flood data provide good evidence for the nature of non-linearity in stream-
channel and floodplain flow for Rare floods and possibly even Very Rare events. However,
relatively little evidence is available to assess the nature of the S-Q relationship for flows on
hillslopes beyond the range of relatively frequent events, or for Extreme floods in stream-
channel and floodplain systems.
In this situation of limited reliable evidence from very large flood events, the extrapolation of
model parameter values for application to extreme events must be guided by the
consideration of specific catchment topography and hydraulic factors. These factors are
further discussed in Book 8, Chapter 5, Section 4.
Hydraulic models may be used to better define the representation of flow behaviour in
complex environments, and their use for this purposes is discussed in Book 8, Chapter 5,
Section 5 .
factors for the different parts of the catchment may thus provide valuable information on the
general form of the S-Q relationship. As an example, the hydraulic analysis of channel and
floodplain flow characteristics may shed some light on the nature of non-linearity in the
streamflow routing elements in the extreme flood range. Similar analyses may be
undertaken for hillslope segments but the results will necessarily be associated with a
greater degree of uncertainty.
The following factors are considered to be responsible for variations of actual S-Q
relationships between the above special cases:
i. Factors increasing the relative efficiency of flow with increasing event magnitude (and
thus decreasing the effective value of m): With increasing event magnitude, there is a
tendency in hill-flow segments for concentration of flow in relatively efficient flow paths.
The increasing depth of flow may reduce the effective roughness of vegetation and other
flow resistance elements. Similarly, the removal and stripping of vegetation during rare
flood flows will tend to decrease the effective value of m. Some short-circuiting of the
more sinuous flow path taken during more frequent flood events is also likely to occur.
Fully developed floodplain flow during Very Rare to Extreme events can be expected to be
more efficient than the transitional flow between in-bank and fully developed floodplain
conditions.
ii. Factors reducing the relative efficiency of flow with increasing event magnitude (and thus
increasing the effective value of m): Extreme flood events can be expected to produce
significant changes to the catchment, stream and floodplain morphology. The erosive
surface changes and sediment transport processes require significant inputs of flow
energy, resulting in an increase of effective flow resistance. In stream/floodplain systems,
an increase in flood magnitude is generally associated with more complex flow patterns
and increased turbulence, also resulting in an increase of effective flow resistance. The
question to be resolved for extrapolation to Extreme events is to what extent the
increasing resistance will be offset by more efficient flow paths.
iii. Factors increasing or reducing the effects of catchment storage (and thus increasing or
reducing, respectively, the value of m compared to calibration events): In catchments with
extensive floodplains, increasingly larger flood events will mobilise additional storage
areas that may not contribute significantly to flood flow conveyance. The question to be
addressed in extrapolation of calibration results to Extreme events is, to what extent these
areas will still contribute mainly to storage, and to what extent they will become effective
conveyance areas. In heavily vegetated catchments, flood debris may create temporary
pondage areas and thus additional catchment storage.
In extrapolating model parameter calibration results to Very Rare and Extreme events, the
above factors should be carefully balanced.
It is recognised that in many cases the constraints on the study budget will limit the extent to
which the above factors can be evaluated. It will thus be necessary to place a greater
reliance on experience gained from earlier studies and to introduce a margin of conservatism
into the selection of parameter values. Book 8, Chapter 5, Section 4 gives recommendations
for parameter selection based on these considerations.
i. Where most of the valleys in the catchment are V-shaped with only small floodplains:
• if the available model calibration results for the catchment of interest include Very Rare
events and the calibrated m is in the range from 0.8 to 0.9 inclusive, adopt the
calibrated value;
• if the available model calibration results for the catchment of interest include Very Rare
events and the calibrated m is outside the range from 0.8 to 0.9, select an appropriate
value, guided by the additional information and considerations in Book 8, Chapter 5,
Section 4;
• if the range of available model calibration results for the catchment of interest is limited
to Rare events, select an appropriate value of m in the range from 0.8 to 0.85, guided
by the additional information and considerations in Book 8, Chapter 5, Section 4;
• if neither Very Rare event calibration data nor the appropriate expertise for the
considerations in Book 8, Chapter 5, Section 4 are available, adopt a conservatively low
value of m = 0.8.
ii. Where many of the valleys in the catchment have appreciable floodplains:
• if the available model calibration results for the catchment of interest include Rare
events and the calibrated m is in the range from 0.85 to 0.9 inclusive, adopt the
calibrated value;
• if the available model calibration results for the catchment of interest include Very Rare
events and the calibrated m is outside the range from 0.85 to 0.9, select an appropriate
value, guided by the additional information and considerations in Book 8, Chapter 5,
Section 4;
• if the range of available model calibration results for the catchment of interest is limited
to Rare events, select an appropriate value of m in the range from 0.85 to 0.9, guided
by the additional information and considerations in Book 8, Chapter 5, Section 4;
• if neither Very Rare event calibration data nor the appropriate expertise for the
considerations in Book 8, Chapter 5, Section 4 are available, adopt a conservatively low
value of m = 0.85.
It should be noted that in the context of the above recommendations the term “Very Rare
event” should be interpreted as floods that are clearly beyond the transition between within-
bank and floodplain flow, i.e. fully developed floodplain flows of appreciable depth.
The recommendations for m relate to all floods beyond the credible limit of extrapolation. If
the value of m selected for extreme floods differs from the value of m for floods of lesser
magnitude, then the coefficient k in the power law storage-discharge relation (Book 5) should
be adjusted to ensure that the magnitude of flow at the credible limit of extrapolation is
unchanged when used with the new value of m. An initial estimate of the required value of k
can be obtained by means of Equation (Book 5).
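Assuming the power law storage-discharge relation takes the form S = kQ^m (the exact form of the relation and of the relevant equation should be taken from Book 5), an initial estimate of the adjusted coefficient can be obtained by matching storage at the discharge corresponding to the credible limit of extrapolation, as sketched below with purely illustrative values.

```python
def adjust_k_for_new_m(k_calib, m_calib, m_extreme, q_limit):
    """Initial estimate of k for use with a revised exponent m_extreme.

    Assumes a storage-discharge relation S = k * Q**m and matches storage
    at q_limit, the peak discharge at the credible limit of extrapolation,
    so that the flood estimate at that point is unchanged:
        k_calib * q_limit**m_calib = k_extreme * q_limit**m_extreme
    """
    return k_calib * q_limit ** (m_calib - m_extreme)

# Illustrative only: m reduced from 0.9 (calibration) to 0.8 for extreme events
k_extreme = adjust_k_for_new_m(k_calib=25.0, m_calib=0.9, m_extreme=0.8, q_limit=2000.0)
```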
The degree of complexity required when modelling an urban system is largely dictated by
the design context. If the main focus is on sizing trunk drainage capacities then it may be
sufficient to use non-linear storage routing models, where appropriate attention is given to
characterising the shorter relative delay times associated with urbanisation of the natural
drainage paths. Many hydraulic controls that influence flood response in urban catchments
become drowned out under extreme conditions, and the complexities required to model the
performance of these systems under Very Frequent to Rare conditions may not be required
for more extreme events.
In complex systems it may not be possible to predict the changing nature of flow paths with
event magnitude, or adequately characterise the influence of major floodplain features. In
such cases it would be expected that flood behaviour is best assessed using hydraulic
models, as described in Book 6 and Book 7. However, while the use of such models better
resolves the influence of hydraulic controls, they introduce additional complexity associated
with the need to interface with the hydrologic models used to derive input hydrographs. The
need for such an interface might be avoided by inputting rainfall directly onto the hydraulic
model grid, but this is only possible for catchments where the model covers the whole
contributing area. While this potentially provides a more realistic representation of catchment
controls, the approach is not well validated at this point in time and is subject to additional
uncertainties, as discussed in Book 7.
The joint use of hydrologic and hydraulic routing models involves some explicit trade-offs in
modelling complexity. On one hand hydrologic models are easily run within a joint probability
framework and are thus able to explicitly solve the joint probabilities involved in the
production of flood runoff to yield unbiased estimates of flood risk. On the other, they are ill-
suited to representing the influence of complex hydraulic controls that might arise in an
urban environment under Extreme conditions.
One means of balancing this trade-off is to use a hydraulic model to define the
characteristics of a storage-discharge relationship. With this approach, a selection of flood
hydrographs spanning the range of conditions of interest are input into the hydraulic model,
and the outputs are then used to derive a relationship between storage, discharge and/or
level, as relevant to the design problem of interest. This relationship can then be
incorporated into a joint probability framework and then used to derive the flood
characteristics without further need for hydraulic modelling. The advantage of the approach
is that it combines the benefits of hydraulic modelling with stochastic simulation of flood
processes but without impractical computational burden. The limitation of the approach is
that it assumes that the derived storage-discharge relationship is adequate for all
combinations of inputs, a situation that is only likely to be valid when considering one or two
dominant mechanisms of flood loading. An example of this approach is provided by Sih et al.
(2012), who used the hydrologic model to resolve the joint probabilities involved in reservoir
drawdown and the concurrence of flood inflows from two major tributaries, and a hydraulic
model to relate tributary inflows and tide levels to peak water levels at locations within a
complex urbanised floodplain.
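A minimal sketch of this idea is given below, assuming that a small number of hydraulic model runs have been tabulated as peak level against peak inflow and a second loading variable (here a tide level, following the example above). The tabulated values and variable names are illustrative assumptions only; within a Monte Carlo event loop the interpolated lookup then replaces a full hydraulic model run for each realisation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Tabulated results from a limited number of hydraulic model runs
# (illustrative values): peak inflow (m3/s) x tide level (m) -> peak level (m)
inflows = np.array([200.0, 500.0, 1000.0, 2000.0])
tides = np.array([0.0, 0.5, 1.0])
peak_levels = np.array([[2.1, 2.2, 2.4],
                        [2.8, 2.9, 3.0],
                        [3.6, 3.6, 3.7],
                        [4.5, 4.5, 4.6]])

level_lookup = RegularGridInterpolator((inflows, tides), peak_levels,
                                        bounds_error=False, fill_value=None)

# Inside a Monte Carlo event loop, each realisation supplies a sampled peak
# inflow and tide level; the lookup is evaluated in place of a hydraulic run.
sampled = np.column_stack([np.array([650.0, 1500.0]),    # sampled inflows
                           np.array([0.3, 0.8])])         # sampled tide levels
levels = level_lookup(sampled)
```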
For more complex environments it will be necessary to rely directly on a hydraulic model to
provide a realistic representation of flow behaviour. At present it is usually impractical to
consider running complex hydraulic models in a stochastic simulation scheme, though it is
expected that this approach will become increasingly feasible as parallel and distributed
computing capabilities improve and become more easily implemented. The simplest way of
trading off the potential for bias associated with rainfall-runoff modelling and the need for
accurate representation of hydraulic behaviour is by careful selection of deterministic
hydrologic inputs. For example, estimates of the concurrent peak design floods may be
obtained through ensemble or Monte Carlo approaches, and these may be used to scale
representative hydrographs for input to deterministic simulation in a hydraulic model. At its
simplest, single runs of hydraulic models may be undertaken for each combination of storm
duration and event severity, but this can be extended to ensemble hydraulic runs for a more
representative range of flood inputs. The success of either approach rests on the selection of
inputs that minimise bias in the transformation between rainfall exceedance probability and
the flood level (or outflow) of interest, and sensitivity analyses will assist the identification of
dominant influences and the selection of representative scenarios to be modelled.
5.6. References
Bates, B.C. and Pilgrim, D.H. (1983), Investigation of storage-discharge relations for river
reaches and runoff-routing models. Civ. Eng. Trans., I.E. Aust., CE25: 153-161.
Brown, J.A.H. (1982), A review of flood estimation procedures. Proc. of the Workshop on
Spillway Design, Dept of Natl Devel. and Energy, AWRC Conf. Series No.6, pp: 84-108.
Dyer, B., Nathan, R.J., McMahon, T.A. and O'Neill, I.C. (1993), A cautionary note on
modelling baseflow in RORB. I.E. Aust. Civil Engin. Trans., CE35(4), 337-340.
Kemp, D. (1998), Flood hydrology modelling of Keswick Ck using the RRR model.
Hydrastorm 98, 3rd International Symposium on Stormwater Management, Adelaide,
September 1998, pp: 349-354.
Mein, R.G., Laurenson, E.M. and McMahon, T.A. (1974), Simple nonlinear model for flood
estimation. ASCE J. Hydraulics Div., 100(HY11), 1507-1518.
Pilgrim, D.H. (1986), Estimation of large and extreme floods. Civ. Engg Trans, Inst. Engrs
Aust., CE28: 62-73.
Pilgrim, D. (1998), Runoff routing methods, Book V Section 3, in Australian Rainfall and
Runoff, A Guide to Flood Estimation, Volume 1, D.H.Pilgrim (ed), National Committee of
Water Engineering, Institution of Engineers, Australia, reprint.
Sih, K., Hill, P.I., Nathan, R. and Mirfenderesk, H. (2012), Sampling in Time and Space -
Inclusion of Rainfall Spatial Patterns and Tidal Influences in a Joint Probability Framework.
Proceedings of the 34th Hydrology & Water Resources Symposium. ISBN
978-1-922107-62-6. Engineers Australia.
Watson, B. (1982), The estimation of spillway design floods - The Hydro Electric
Commission, Tasmania. Proc. of the Workshop on Spillway Design, Dept of Natl Devel. and
Energy, AWRC Conf. (6), 141-156.
Weeks, W.D. and Stewart, B.J. (1982), Aspects of design flood estimation using runoff
routing. Proc. of the Workshop on Spillway Design, Dept of Natl Devel. and Energy, AWRC
Conf. (6), 109-118.
Wong, T.H.F. and Laurenson, E.M. (1983), Wave speed-discharge relations in natural
channels. Water Resour. Res., 19(3), 701-706.
Chapter 6. Derivation of Design
Floods
Rory Nathan, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
6.1. Overview
6.1.1. Selection of Basic Procedure
The available procedures can be divided into two main groups: those based on fitting a
frequency curve to flood maxima, and those based on design rainfalls. Flood frequency
methods (Book 3, Chapter 2) are used to provide estimates of peak discharge, but perhaps
their most valuable role in the context of this Book is to provide information that can be used
to validate, or even calibrate, rainfall-based procedures. The limit of credible extrapolation for
Flood Frequency Analysis based on regional gauged data is perhaps 1 in 500 AEP
(Table 8.1.1), though paleoflood analysis (Book 8, Chapter 6, Section 2) can be used to
considerably extend this limit. The credible limit of flood frequency analysis that can be
typically obtained using at-site data is perhaps only 1 in 100 AEP (Table 8.1.1).
Rainfall-based procedures use loss models and hydrograph models to transform design
rainfall inputs into design flood estimates. Final design estimates of Very Rare to Extreme
floods, beyond the credible limit of extrapolation (of either rainfall or floods), should be
derived using rainfall-based procedures. The design details in the following sections relate
mainly to rainfall-based procedures. As discussed in Book 1, Chapter 3, Section 4, event-
based approaches are generally more applicable to the estimation of Very Rare to Extreme
floods than are approaches based on continuous simulation; accordingly, the procedures as
outlined in Book 4 are generally applicable to the estimation of design floods for Very Rare
and Extreme events.
Book 8, Chapter 6, Section 1 briefly discusses issues related to the specification of design
flood characteristics. Book 8, Chapter 6, Section 1 introduces a number of special design
considerations that are covered in more detail in Book 8, Chapter 7. Subsequent sections
(Book 8, Chapter 6, Section 2 to Book 8, Chapter 6, Section 4) provide guidance on final
design procedures, while Book 8, Chapter 6, Section 5 discusses the treatment of
uncertainties associated with flood estimates derived by these procedures.
It should be noted that use of the Total Probability Theorem to derive flood quantiles using
Monte Carlo Event procedures as described in Book 4 yields expected probability quantiles.
This approach ensures probability neutrality, at least for the set of hydrologic inputs used in
its application. If estimates are derived from a blend of approaches, comments on the likely
magnitude and importance of expected probability adjustments for different flood ranges are
as follows:
• Rare Floods – significant extrapolation may be involved where these floods are estimated
from frequency analysis of at-site data. The recommendations on expected probability
adjustment in Book 3, Chapter 2, Section 5 should thus be followed.
• Very Rare Floods – design rainfalls are estimated from regional analysis of large data
sets. Generally this does not involve extrapolation beyond the probability plotting positions
of the largest events, and any expected probability adjustment would thus be relatively
minor for the rainfall frequency distribution. However, there is usually significant
extrapolation of the rainfall-runoff model. In such cases, parameter uncertainty in rainfall-
runoff model parameters may lead to a significant expected probability adjustment. At
present, there are no accepted methods for making this adjustment.
• Extreme Floods – these are derived by methods of interpolation from other estimates for
which an adjustment may have been made. Therefore separate adjustment is not
required.
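For reference, the use of the Total Probability Theorem within a Monte Carlo Event framework noted above can be sketched as follows: rainfall depth is stratified into probability intervals, the conditional probability of exceeding a given flood magnitude is estimated by simulation within each interval, and the results are combined as a probability-weighted sum. This is a simplified illustration of the approach described in Book 4, not a complete implementation.

```python
import numpy as np

def exceedance_probability(flood_threshold, interval_probs, conditional_floods):
    """Combine stratified simulation results using the Total Probability Theorem.

    interval_probs     : probability mass of each rainfall depth interval
    conditional_floods : one array of simulated peak flows per interval, each
                         generated with sampled temporal patterns, losses, etc.
                         conditioned on rainfall falling within that interval
    Returns P(peak flow > flood_threshold) as the probability-weighted sum of
    the conditional exceedance probabilities."""
    p = 0.0
    for prob_i, floods_i in zip(interval_probs, conditional_floods):
        p += prob_i * np.mean(np.asarray(floods_i) > flood_threshold)
    return p
```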
In principle, it is also possible to derive design flood estimates for a site downstream of a
reservoir directly from Flood Frequency Analysis of flood data available at that site. However,
for reservoirs with large storage capacity compared to typical flood volumes, and with
significant inter-event variability of reservoir flood storage, a much longer data series is
required to adequately sample the combined effects of inflow and storage content variability.
In these cases, the scope for extrapolation of directly determined flood frequency curves to
the range of Rare to Very Rare events is severely limited. Book 8, Chapter 7, Section 2 gives
further guidance on the derivation of reservoir outflow frequency curves.
i. as direct basis for estimating Rare floods for final design applications: where the range of
AEPs is limited to 1 in 100 (perhaps 1 in 200 AEP for analysis of at-site and regional
data);
ii. as direct basis for estimating Rare to Very Rare floods for preliminary design or
performance checks: where the lowest AEP of interest is around 1 in 200 (for analysis of
at-site flood data only) or perhaps 1 in 500 (for analysis of at-site and regional flood data);
iii. as a basis for determining the lower end of a complete flood frequency curve: where an
estimate of the PMP Flood is available but no rainfall-based estimates of Rare to Very
Rare floods;
iv. as basis for independent checking of rainfall-based design flood estimates and possible
adjustment of model parameters: where rainfall-based design floods are to be determined
for the full range of design floods, from Rare to Extreme.
Cases (i) and (ii) only involve extension of the flood frequency curve to the credible limit of
extrapolation, but case (iii) requires the estimation of the PMP Flood and the application of
an interpolation procedure for intermediate events (Book 8, Chapter 6, Section 3). Case (iv)
requires detailed consideration of how the flood estimates from different sources can best be
reconciled (Book 8, Chapter 5, Section 4).
The overall requirement for these types of analyses is that estimates can be derived quickly,
and that given the large uncertainty the flood estimates should be biased towards
conservatively high values.
Preliminary estimates should not be used for final design purposes, nor should the results be
relied upon for making decisions about long term levels of acceptable risk. Practitioners are
encouraged to use any information and methods that they consider appropriate, and the
following recommendations are provided for general guidance only.
• Peak discharge: estimates of peak discharge are directly suitable for the preliminary
design of bridge waterways and spillways for those storages where it can be
conservatively assumed that only minor attenuation of the inflow hydrograph occurs;
Estimates of the volume of the hydrographs can easily be determined from estimates of
design rainfalls and losses. The volume of the hydrograph can simply be determined as the
average depth of rainfall excess over the catchment multiplied by the catchment area.
Appropriate hydrograph shapes can be derived by scaling hydrographs obtained from either
detailed studies on similar catchments or from suitable at-site records, though hydrographs
obtained from rainfall-based models (using regional parameters) can be scaled to suit the
preliminary peaks derived in Book 8, Chapter 6, Section 2.
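For example, the volume of direct runoff corresponding to an average depth of rainfall excess can be computed as follows (the depth and area shown are illustrative only).

```python
def hydrograph_volume_m3(rainfall_excess_mm, catchment_area_km2):
    """Volume of direct runoff as average rainfall excess times catchment area.
    A depth of 1 mm over 1 km2 corresponds to 1000 m3."""
    return rainfall_excess_mm * catchment_area_km2 * 1000.0

# Illustrative only: 250 mm of rainfall excess over a 150 km2 catchment
volume = hydrograph_volume_m3(250.0, 150.0)   # = 37 500 000 m3 (37.5 GL)
```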
Where suitable rainfall and runoff data are available, the model selected should be calibrated
using observed floods on the catchment of interest and, where appropriate, the parameter
values should be adjusted to help reconcile differences between design values derived from
Flood Frequency Analysis and rainfall-based methods (Book 8, Chapter 5, Section 4). In
other cases, design values for the model parameters must be estimated from calibration on
adjacent gauged catchments, regional relationships, or other relevant information. Where a
concentrated storage, such as a reservoir or lake, can have a significant impact on the
catchment response to rainfall, allowance must be made for its effect (Book 8, Chapter 5,
Section 3). Design hydrographs usually need to be estimated for a range of design rainfall
durations and AEPs in order to derive a complete flood frequency curve, and this is
discussed in Book 8, Chapter 6, Section 3.
The rainfall-based procedure described above provides estimates of design floods that are
comprised solely of direct runoff, i.e. that portion of the hydrograph that is derived from
event-based rainfall excess. To derive design floods that reflect the total volume of the
hydrographs, it is necessary to add baseflow (Book 8, Chapter 6, Section 3).
Baseflow estimates for Rare events should be based on procedures described in Book 5,
Chapter 4. Where there is clear evidence that initial baseflow increases with flood magnitude
a constant baseflow 20% to 50% greater than the maximum value estimated in observed
floods may be appropriate for Extreme events. If the difference between these two baseflow
values is of minor importance then a representative, fixed value could be used for all
intermediate AEPs. However, if deemed appropriate, the magnitude of the baseflow could be
varied linearly on a plot of baseflow versus log(AEP) between the value adopted for the 1 in
100 AEP event and that adopted for the flood resulting from the PMP (alternatively Equation
(8.4.1) could be used).
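If such a linear variation of baseflow with log(AEP) is adopted, it can be implemented along the following lines; the baseflow values and the notional AEP assigned to the PMP are illustrative assumptions only.

```python
import math

def baseflow_for_aep(y, bf_100, bf_pmp, y_pmp=1e7):
    """Baseflow varying linearly with log(AEP) between the 1 in 100 AEP value
    (bf_100) and the value adopted for the PMP design flood (bf_pmp), where y
    is the event AEP expressed as 1 in y and y_pmp is the AEP notionally
    assigned to the PMP (an assumption for illustration only)."""
    frac = (math.log10(y) - math.log10(100)) / (math.log10(y_pmp) - math.log10(100))
    frac = min(max(frac, 0.0), 1.0)
    return bf_100 + frac * (bf_pmp - bf_100)

# Illustrative only: 10 m3/s at 1 in 100 AEP, 14 m3/s for the PMP design flood
bf = baseflow_for_aep(10000, bf_100=10.0, bf_pmp=14.0)
```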
There seems little justification for use of simple event approaches for the estimation of Very
Rare to Extreme floods as the dominant source of natural variability that influences flood
magnitude for this class of events (other than rainfall depth) is typically the temporal pattern
of incident rainfall. The ensemble event method (Book 3, Chapter 3, Section 2) represents a
modest increase in computational requirements, whereby a representative sample of
temporal patterns is used to provide a central-tendency estimate (either the arithmetic mean
or the median) of the peak flow associated with the AEP of the input rainfall. A
representative hydrograph from the ensemble can be scaled to match the derived peak for
design purposes.
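A highly simplified sketch of the ensemble event calculation is given below. The loss model, routing kernel and randomly generated temporal patterns are placeholders for a calibrated rainfall-based model and the adopted pattern ensemble; only the structure (one run per pattern, a central estimate of the peaks, and rescaling of a representative hydrograph) reflects the method described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder for a calibrated rainfall-runoff model: returns a hydrograph (m^3/s)
# for a design rainfall depth (mm) distributed according to a temporal pattern.
def run_event_model(depth_mm, pattern):
    incremental_rain = depth_mm * pattern                 # rainfall per time step
    excess = np.maximum(incremental_rain - 2.0, 0.0)      # crude constant loss (mm)
    kernel = np.exp(-np.arange(24) / 6.0)                 # crude routing kernel
    return np.convolve(excess, kernel) * 5.0              # notional unit conversion

# Ensemble of ten temporal patterns (each sums to 1); random placeholders here
patterns = rng.dirichlet(np.ones(12), size=10)

peaks = np.array([run_event_model(250.0, p).max() for p in patterns])
design_peak = peaks.mean()                                # or np.median(peaks)

# Scale a representative hydrograph (e.g. the median-ranked one) to the design peak
rep = run_event_model(250.0, patterns[np.argsort(peaks)[len(peaks) // 2]])
design_hydrograph = rep * design_peak / rep.max()
print(f"Ensemble peaks {peaks.round(0)}, adopted design peak {design_peak:.0f} m3/s")
```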
Monte Carlo event approaches provide the additional attraction that losses can be sampled
(where designs are sensitive to long-duration events), along with other factors which may
have a significant influence on the design outcome (such as reservoir drawdown, or spatial
patterns of rainfall).
The general issues involved in the selection of the simulation framework are discussed in
Book 4, though it should be noted that the estimation of extreme events can involve more
significant degrees of non-linearity than present in the estimation of more frequent floods.
For example, use of an ensemble event method to assess the influence of initial reservoir
level on outflow floods is likely to provide highly biased estimates; this bias is avoided if a
Monte Carlo scheme based on the Total Probability Theorem (or similar) is used (Book 4).
While the focus of the guidance in this Book is on Very Rare to Extreme flood events, it is
important to check that the models yield estimates that are consistent with available
evidence. Estimates of Rare floods provide the “anchor point” for derivation of more extreme
events, and it is advisable to select a best estimate by weighting the estimates obtained from
different methods by their uncertainty. In practice, however, the information required to do
this is limited and it is recommended that where possible rainfall-based estimates are
reconciled with independent estimates from Flood Frequency Analysis or regional flood
method estimates (Book 3, Chapter 2 and Book 3, Chapter 3).
Figure 8.6.1. Illustration of Derived Frequency Curve Based on Reconciliation with Flood
Frequency Quantiles
The PMF is also used to define the extent of flood-prone land (AEMI, 2014). The extent,
nature and potential consequences of flooding associated with a range of events up to and
including the PMF event is considered in some floodplain management studies. The PMF
causes the largest scale of flood emergency and is also therefore often used for emergency
management planning (AEMI, 2014). Guidance relevant to these purposes is provided in
Book 8, Chapter 6, Section 4.
Pilgrim and Rowbottom (1987) defined the PMF as the limiting value of flood that could
reasonably be expected to occur. Superimposing risks of very low probabilities was not
considered to be reasonable.
Concerns around the difficulties of estimating the PMF in a consistent manner have been
recognised for a long time (e.g. Newton, 1982; Barker et al., 1996; Nathan et al., 2011).
While the notion of a “probable maximum” flood standard appears a simple enough concept,
in practice its estimation is confounded by a number of key problems (Nathan and
Weinmann, 2004), namely:
• The lack of established criteria to determine the “reasonableness” with which to combine
the various flood producing factors;
Accordingly, the intention of the recommendations herein is to retain the concept that the
PMF represents the limiting value of flood that could reasonably be expected to occur, but to
provide additional considerations that reduce the scope for inconsistency.
The temporal patterns used to derive the PMF should be selected from an ensemble of
patterns appropriate for use with the Generalised PMP (Book 8, Chapter 3, Section 8).
Rearrangement of rainfall intensities within the patterns to give the highest possible flood
peak may yield rainfall patterns with implausible serial correlation structure and is at variance
with the objective of deriving a limiting value of flood that could reasonably occur. An
estimate of a reasonable upper limiting value of floods may be derived by using the temporal
(or spatially varying, i.e. space-time) pattern from the available ensemble that yields
the maximum flood characteristic of interest. It should be recognised that temporal and
space-time patterns of rainfall based on historical events (Book 2, Chapter 4) are usually
based on a limited number of pluviometers; when scaled to PMP storms over large
catchments such patterns may yield embedded bursts of rainfall that are quite unrealistic.
Accordingly, the characteristics of the PMF derived using a single temporal (or space-time)
pattern should be checked against the results obtained from other patterns in the available
ensemble. If the difference between the maximum adopted pattern and other results is
anomalously large, then it may be appropriate to adopt a less severe pattern so as not to
superimpose inputs of very low probabilities.
The hydrograph models used to transform the PMP to the PMF should follow the general
recommendations provided in Book 8, Chapter 5, Section 2 and Book 8, Chapter 5, Section
3. Parameter values should be selected in accordance with the recommendations provided
in Book 8, Chapter 5, Section 4. The selection of other design inputs, such as initial reservoir
level or snowpack depth, should be representative of the more extreme conditions that could
reasonably be expected to occur.
Accordingly, it is recommended that an upper limiting estimate of the PMF be derived by
adopting:
• a 0 mm burst initial loss and a 1 mm/hr continuing loss rate (or higher as justified for such regions
as the south west of Western Australia);
• the temporal (or space-time) pattern from a sample of ten that yields the highest
magnitude flow;
• the storage levels in any upstream impoundments are assumed to be initially full; and
• other inputs influencing the design estimate should be set at their notional maximum.
It is considered reasonable that estimates required for such upper limiting checks be derived
for the critical duration at a single location representative of the planning focus of interest. It
is accepted that the above assumptions may be considered unreasonably conservative
compared to the more detailed assessment described in the following section, but this is
considered reasonable given the planning context for which such estimates are required.
The cost implications of upgrading dams may well be very sensitive to the degree of
conservatism adopted by practitioners when assessing the “reasonableness” of assumptions
used to derive the PMF. For example, as illustrated in Figure 8.6.2a (dashed curve), the
costs of providing additional flood capacity may increase monotonically with flood
magnitude; under such a scenario there is no obvious point where the upgrade costs
increase disproportionally with the degree of conservatism adopted. However, if there is a
step function involved in the relationship between flood capacity and cost – for example if an
additional spillway is required because the practical limit of extending a wave wall has been
reached (solid curve in Figure 8.6.2a) – then a small difference in subjective judgement may
have a significant impact on the costs and feasibility of an upgrade.
To this end, the following steps may be warranted when providing an estimate of the PMF:
• Derive the estimate of the PMP Flood under probability neutral assumptions.
• Derive the estimate of the PMF by adopting:
• a 0 mm burst initial loss and a 1 mm/hr continuing loss rate (or higher as justified for such
regions as the south west of Western Australia);
• the temporal (or space-time) pattern from a sample of ten that yields the highest
magnitude flow;
• if the design is for a dam, an initial storage at Full Supply Level.
• Estimate the shift in AEP associated with the difference in magnitude between the PMP
Flood and the PMF (by simple extrapolation as shown in Figure 8.6.2b).
• If a deterministic modelling framework is used to estimate the PMP Flood, then undertake
a number of simulations using inputs selected from a plausible range of values to
understand the catchment specific impacts of the PMF assumptions made.
• If a Monte Carlo framework is used to estimate the PMP Flood, then also calculate the
proportion of samples in which the PMF is exceeded given the PMP depth as input. If the
shift in AEP (as shown in Figure 8.6.2b) is greater than one order of magnitude, or the
conditional probability that the PMF is exceeded is less than around 10% to 1%, then
revisit assumptions used to derive the PMF and relax as appropriate.
• Finally, check the sensitivity of any decisions that are to be based on the PMF estimate – if
there is a marked difference in outcome within the range of estimates that could reasonably
be adopted, then further scrutiny of the assumptions used is warranted.
It is expected that the above steps will only be required in a small proportion of cases in
which design and/or mitigation costs increase disproportionately with the degree of
conservatism adopted.
The estimation of Very Rare to Extreme floods is a region where “the computation of
hydrologic probabilities is based on arbitrary assumptions about the probabilistic behaviour
of hydrologic processes rather than on empirical evidence or theoretical knowledge and
understanding of these processes” (Klemes, 1993). Improving the consistency of the manner
in which such assumptions are applied in practice will thus minimise the potential for
differences in the results obtained by different hydrologists. The main strategy available for
reducing the impact of this form of uncertainty is to ensure that the practitioners undertaking
the work are appropriately qualified and supervised. In addition, prescriptive procedures
relating to the estimation of floods beyond the credible limit of extrapolation are justifiable as
without empirical evidence or scientific justification there can be little rational basis for
departing from a consensus approach.
6.6. References
Australian Emergency Management Institute. (2014). Managing the floodplain: a guide to
best practice in flood risk management in Australia. Attorney-General's Department,
Canberra.
Barker, B., Schaefer, M., Mumford, J. and Swain, R. (1996), A Monte Carlo Approach to
Determine the Variability of PMF Estimates. In Proc. Conf. American Society Dam Safety
Officials.
Brown, J.A.H. (1982), A review of flood estimation procedures. Proc. of the Workshop on
Spillway Design, Dept of Natl Devel. and Energy, AWRC Conf. Series No.6, pp: 84-108.
Herschy, R. (2003), World catalogue of maximum observed floods. IAHS Publ. No. 284, p: 285.
Lam, D., Thompson, C., Croke, J., Sharma, A. and Macklin, M. (2017a), Reducing
uncertainty with flood frequency analysis: the contribution of palaeoflood and historical flood
information. Water Resour. Res. 53: 2312-2327.
Lam, D., Croke, J., Thompson, C. and Sharma, A. (2017b), Beyond the gorge: Palaeoflood
reconstruction from slackwater deposits in a range of physiographic settings in subtropical
Australia. Geomorphology, 292: 164-177.
Malone, T. (2011), Extreme design floods for Seqwater storages. In: Proc. 33rd Hydrology
and Water Resources Symposium (Brisbane). Barton, A.C.T.: Engineers Australia, pp:
323-330.
Myers, V.A. (1967), 'The Estimation of Extreme Precipitation as the Basis for Design Floods,
Resume of Practice in the United States', Proceedings of the Leningrad Symposium, Floods
and Their Computation, International Association of Scientific Hydrology.
Nathan, R.J. and Weinmann, P.E. (2004), Towards increasing objectivity in the Probable
Maximum Flood. Proc. ANCOLD/NZSOLD Conference, 14-17 Nov 2004.
Nathan, R.J., Weinmann, P.E., and Gato, S. (1994), A quick method for estimation of the
probable maximum flood in South East Australia. Int. Hydrology and Water Resources
Symposium: Water Down Under, November, Adelaide, I.E. Aust. Natl. Conf. Publ. No.94, pp:
229-234.
Nathan, R.J., Weinmann, P.E. and Minty, L. (1999), Estimation of the Annual Exceedance
Probability of PMP Events in Southeast Australia, Aus. J. Water Resour., 3(1), 143-154.
Nathan, R., Hill, P, and Weinmann, E. (2011), Achieving consistency in derivation of the
Probable Maximum Flood, ANCOLD 2011 Conference on Dams. Melbourne.
Pearce, L.J. (2011), Regional Flood Estimation for Small Catchments in South-West Western
Australia. In: Proc. 33rd Hydrology and Water Resources Symposium (Brisbane). Barton,
A.C.T., Engineers Australia, pp: 323-330.
Pilgrim, D.H. and Rowbottom, I.A. (1987), Chapter 13 - Estimation of large and extreme
floods, In Pilgrim, D.H. (ed.) Australian Rainfall and Runoff: A Guide to Flood Estimation, I.E.
Aust., Canberra.
Smythe C. and Cox G. (2006), A regional flood frequency method for Tasmania, Proc.
Hydrology and Water Resources Symposium, Launceston.
U.S. Bureau of Reclamation (USBR) (1999), A framework for developing extreme flood
inputs for dam safety risk assessments, Part I - Framework, Draft, February 1999.
Watt, S., Sciacca, D., Hughes, M., Pedruco, P. (2018), A quick method for estimating the
Probable Maximum Flood in the Coastal Zone. Proc. Hydrology and Water Resources
Symposium, Melbourne.
Zhirkevich, A., Asarin, A. (2010), Probable maximum flood (PMF): basic information and
problems with the procedure used for its calculation in Russia, Power Technology and
Engineering, 44(3), 195-201.
Chapter 7. Special Design
Considerations
Rory Nathan, Erwin Weinmann
Chapter Status Final
Date last updated 14/5/2019
7.1. General
There are a number of special considerations that are relevant to some design situations
and the following sections detail some of the more common issues that may need to be
considered. The importance of these considerations, and hence the complexity of the
techniques required to adequately address the issues, is very much dependent on the
characteristics of the specific design problem. For example, where the storage volume of a
reservoir is large compared to the volume of catchment runoff, the choice of initial starting
levels in the reservoir is likely to have a more significant impact on the outcome of the study
than the selection of runoff-routing parameter values.
One design objective of general importance is the derivation of floods of specified AEP.
Satisfying this objective generally requires the adoption of probability neutral inputs i.e. the
selection and/or treatment of design inputs to ensure that any bias in the AEP of the
transformation between rainfall and runoff is minimised. The issues considered in this
section are generally aimed at the more rigorous treatment of the joint probabilities involved
in the selection of design inputs. However, as discussed in Book 8, Chapter 2, Section 1, it
should be recognised that the defensibility of these estimates rests upon the
representativeness of the selected inputs and the correct treatment of correlations which
may be present.
The appropriate level of complexity to be adopted is dependent upon the sensitivity of the
design outcome to the input. Accordingly it is not possible to provide recommendations that
are applicable to all design situations. The procedures recommended here are relevant to
many situations, but they should be regarded as providing only a general guide to
recommended practice. The practitioner is thus encouraged to adopt different procedures if
they have a sound theoretical basis.
The analysis of the joint probabilities of storage volume and inflows is just one example of
the more generic solution offered by Monte Carlo methods. Accordingly, if the rainfall-runoff
modelling is undertaken in a Monte Carlo framework, then this is easily extended to consider
reservoir outflows.
Application of either method is straightforward as long as the probabilities of all the inputs
can be appropriately defined; some care is required to ensure that the distributions are
representative of the design conditions of interest, though in most situations where it is
worthwhile undertaking the analysis the required information can usually be derived. The
guidance in this section first covers specification of the input distributions as this is common
to both methods, and this is followed by a description of the different solution schemes.
Both of the input probability distributions need to be discretised into class intervals for the joint
probability computations. A total of around 20 to 30 class intervals is generally sufficient, but they need
to be well distributed over the range of possible variate values to ensure accuracy in the
most important part of the range. Each interval is then represented by the variate value at
the mid-point of the interval and by the width of the interval on the probability scale. The total
probability of all the intervals must add up to unity (a sketch of this discretisation is given after
the list below). It is worth considering the following issues when discretising the distributions:
• Probability distribution of initial storage volume. The analysis of a time series of storage
level or storage volume is used to define the probability distribution of initial storage
volume. The time series of reservoir storage volume could be derived directly from the
historical record, but in most cases a synthetic time series of storage volume, derived from
simulation (behaviour analysis) studies, would be more appropriate. In the latter approach
the current operating rules can be applied to the historic climatic sequence, thus providing
a long stationary series relevant to the system under consideration. The usual time interval
for behaviour analyses is one month, which allows the within-season variation of storage
volume to be taken into account in the frequency analysis.
• Dependence between flood inflows and storage volume. The historical (or synthetic) time
series should be checked to see if there is a strong dependence between initial reservoir
level and flood inflows. If such dependence exists, then it would be necessary to derive
conditional probabilities of initial storage volume that correspond to different ranges of
flood inflows. To this end, it would be necessary to divide the inflow magnitudes into a
small number, say, three flow ranges (corresponding to low, average and high flows), and
derive separate distributions of initial storage for each. Care needs to be taken when
inferring correlations for extreme conditions based on a short period of historic (or
simulated) record, and distributions based on empirical analyses may need pragmatic
adjustment to ensure that they are representative of extreme conditions. Analysis of
regional rainfall information for relevant critical durations within a meteorologically
homogeneous region can provide information to help condition such relationships, and an
example of this using standardisation to trade space for time is provided by Scorah et al.
(2015).
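The discretisation referred to above can be sketched as follows; the synthetic monthly storage series stands in for the output of a behaviour analysis, and the number of class intervals is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a long synthetic monthly series of storage volume (% of full storage)
storage_pct = np.clip(100 * rng.beta(4, 1.5, size=1200), 0, 100)

# Discretise the distribution into class intervals of equal probability width
n_intervals = 25
prob_width = 1.0 / n_intervals

# Mid-point probability of each interval, and the corresponding variate value
mid_probs = (np.arange(n_intervals) + 0.5) * prob_width
mid_values = np.quantile(storage_pct, mid_probs)

# The interval probabilities must add up to unity
assert abs(n_intervals * prob_width - 1.0) < 1e-9

for p, v in zip(mid_probs[:3], mid_values[:3]):
    print(f"interval mid-point prob {p:.3f} -> storage {v:.1f}% of full")
```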
The conditional probability of a specified outflow event occurring, given that the conditioning
event is in a specific class interval, can be determined using a deterministic relationship
between inflows, outflows, and storage volume (the I-S-Q relationship). The I-S-Q
relationship has to be determined for a range of peak inflows (corresponding to a range of
design rainfalls for selected exceedance probabilities) and for a set of initial storage values.
The process of computing the conditional probability of a specified outflow event is illustrated
in Figure 8.7.1 for the case where the reservoir inflow was chosen as the conditioning event
and the initial storage volume as the secondary variable. From Figure 8.7.1 it is seen that the
conditional probability of a specified outflow event is evaluated as the width of the storage
volume probability interval (P[Qj|Ii]) that translates an inflow in the interval Ii into an outflow in
the interval Qj.
Figure 8.7.1. Schematic Illustration of the Determination of the Probability Interval of Storage
Volume as a Function of Inflow and Outflow
As different design rainfall durations result in different I-S-Q relationships, the computed
value of the storage volume probability interval will also depend on the rainfall duration used.
The critical rainfall duration to be used in the analysis is the one that translates into the
highest outflow; this also produces the largest estimate of conditional outflow probability.
Unfortunately, the critical rainfall duration varies with reservoir drawdown, and in some cases
it is necessary to compute separate I-S-Q relationships for different durations, and to derive
an outflow frequency curve as the envelope of frequency curves derived for different
durations.
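A minimal sketch of the envelope operation, with made-up outflow quantiles for three durations, is:

```python
import numpy as np

# Outflow quantiles (m^3/s) for a common set of AEPs, one row per rainfall duration.
# Values are illustrative only.
aeps = np.array([1e-2, 1e-3, 1e-4, 1e-5, 1e-6])
outflows_by_duration = {
    24: np.array([400, 900, 1600, 2500, 3600]),
    48: np.array([450, 950, 1550, 2400, 3500]),
    72: np.array([430, 980, 1650, 2350, 3400]),
}

# Envelope frequency curve: the maximum outflow over all durations at each AEP
envelope = np.max(np.vstack(list(outflows_by_duration.values())), axis=0)
critical = {aep: max(outflows_by_duration, key=lambda d: outflows_by_duration[d][i])
            for i, aep in enumerate(aeps)}
print("Envelope:", envelope)
print("Critical duration by AEP:", critical)
```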
Another complication is that the above formulation assumes that the two distributions of
storage volume and inflows are independent. This may not be the case, and if such
correlation is found to be significant then the calculations must be based on the appropriate
conditional selection of input variables.
The evaluation of the I-S-Q relationship is the most time consuming element of the process.
Many tens of individual runs are required to define the I-S-Q relationship in sufficient detail,
though it is possible to automate the processing of different initial starting levels. The
computation of the conditional probabilities is readily undertaken using spreadsheet software
and is not resource intensive.
The derivation of the outflow frequency curve by the Laurenson (1974) joint probability approach
involves the calculation of a transition probability matrix. Each element in this matrix
represents the conditional probability of an inflow within the given inflow interval resulting in
an outflow in a specified interval. Depending on the degree of non-linearity of the spillway
rating curve, outflows may be discretised into class intervals of equal magnitude, or else
intervals can be selected to provide more accuracy in the region of interest (e.g. for flows
just above and below the spillway capacity). The total probability of an outflow in that interval
can then be obtained as the sum of the probabilities over all the inflow intervals, i.e. all the
inflow and initial storage combinations that produce an outflow in the specified range.
Outflow AEPs are then computed as the cumulative probability over all outflow ranges
exceeding the flood magnitude of interest.
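Structurally, the calculation reduces to a matrix-vector product followed by a cumulative sum, as sketched below with a small illustrative transition matrix (the class intervals, probabilities and matrix entries are placeholders, not values from any study):

```python
import numpy as np

# Probability of an inflow peak falling in each inflow class interval (sums to 1)
p_inflow = np.array([0.90, 0.07, 0.02, 0.009, 0.001])

# Transition matrix: rows = outflow class (smallest to largest), columns = inflow class.
# Each column sums to 1 (all possible outflows for that inflow class). Illustrative only.
trans = np.array([
    [1.00, 0.60, 0.20, 0.05, 0.00],
    [0.00, 0.40, 0.50, 0.25, 0.10],
    [0.00, 0.00, 0.30, 0.50, 0.40],
    [0.00, 0.00, 0.00, 0.20, 0.50],
])

# Total probability of an outflow in each class interval (Total Probability Theorem)
p_outflow = trans @ p_inflow

# Exceedance probability of the lower bound of each outflow class: cumulative
# probability over all classes at or above that class
exceedance = np.cumsum(p_outflow[::-1])[::-1]
print("P(outflow class):", p_outflow.round(4))
print("Exceedance probabilities:", exceedance.round(4))
```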
The outflow frequency curve can be derived either by direct frequency analysis of the outflow
peaks, or by application of the Total Probability Theorem. The latter approach is suited to
stratified sampling schemes, as would generally be required for estimation of Extreme
events. A description of the different methods available to derive a frequency curve based on
Monte Carlo sampling is provided in Book 4, and an example calculation for sampling from
an empirical frequency curve is provided in Book 4.
Figure 8.7.2. Illustration of Monte Carlo Framework to Derive Outflow Frequency Curve
The nature of dependence in storage contents is shown by the large diamond symbols in
Figure 8.7.3, which is derived from the behaviour of two reservoirs located in south-eastern
Australia. Such data is difficult to normalise or fit to (bivariate) probability distributions, and
thus an empirical sampling approach can be used. The approach to stochastically sample
from such a data set can be described as follows:
1. Identify the “primary” variable that is most important to the problem of interest, and
prepare a scatter plot of the two variables with the primary variable plotted on the x-axis
(as shown in Figure 8.7.3).
2. Divide the primary variable into a number of ranges such that variation of the dependent
variable (plotted on the y-axis) within each range is reasonably similar; in the example
shown in Figure 8.7.3 a total of seven intervals has been adopted as being adequate.
This provides samples of the secondary variable that are conditional on the value of the
primary variable.
3. Stochastically generate data for the primary variable using an empirical sampling
approach as described in Book 8, Chapter 8, Section 3.
4. Derive an empirical distribution of the dependent data for each of the conditional samples
identified in Step 2 above; thus, for the example shown in Figure 8.7.3 a total of seven
separate empirical distributions of upstream storage levels are prepared (these are shown
as separate curves on the inset panel in Figure 8.7.3).
5. For each generated value of the primary variable, stochastically sample from the
conditional distribution corresponding to the interval that it falls within; for example, if a
downstream storage level of 1500 ML was generated in Step 3 above, then the
corresponding conditional distribution (E) is used.
The results from application of the above procedure are illustrated in Figure 8.7.3 for 2000
stochastic samples (shown by the blue “+” symbols). It is seen that the correlation structure
in the observed data set is preserved reasonably well by this procedure.
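A sketch of Steps 1 to 5, using a synthetic data set in place of the observed storage record, is given below; the variable names, sample sizes and the seven ranges are assumptions chosen to mirror the example in Figure 8.7.3.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the observed record: downstream storage (primary variable, ML) and a
# correlated upstream storage (secondary variable, ML). Values are illustrative.
downstream = rng.uniform(0, 3000, size=400)
upstream = np.clip(0.4 * downstream + rng.normal(0, 300, size=400), 0, None)

# Step 2: divide the primary variable into ranges (here 7, as in Figure 8.7.3)
edges = np.linspace(downstream.min(), downstream.max(), 8)

# Step 3: stochastically generate the primary variable (empirical resampling)
sim_downstream = rng.choice(downstream, size=2000, replace=True)

# Steps 4-5: for each generated value, sample from the empirical distribution of the
# secondary variable conditioned on the range the primary value falls within
sim_upstream = np.empty_like(sim_downstream)
bin_of = np.clip(np.digitize(downstream, edges) - 1, 0, 6)
sim_bins = np.clip(np.digitize(sim_downstream, edges) - 1, 0, 6)
for b in range(7):
    pool = upstream[bin_of == b]                 # conditional empirical distribution
    n = int((sim_bins == b).sum())
    if n and pool.size:
        sim_upstream[sim_bins == b] = rng.choice(pool, size=n, replace=True)

print("Observed corr:", np.corrcoef(downstream, upstream)[0, 1].round(2),
      "Simulated corr:", np.corrcoef(sim_downstream, sim_upstream)[0, 1].round(2))
```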
While the above approach can be extended to multiple storages, obviously this becomes
progressively more tedious to implement. At some point the dependencies are better
modelled using continuous simulation as the system will be largely dependent on the
sequences of flood volumes.
Figure 8.7.3. Illustration of Conditional Empirical Sampling in Which the Storage Volume in
an Upstream Dam is Correlated with the Volume in a Downstream Dam
There are a number of methods available for the assessment of concurrent flows (refer, for
example, to Book 4, Chapter 4). In the context of risk analysis it is important to focus on those
methods that yield probability neutral estimates. In essence, the issue of concurrent flooding
is another joint probability problem, and the method of Laurenson (1974) described in Book
8, Chapter 7, Section 2 can be applied directly to the joint occurrence of floods in tributaries
and adjacent catchments. With the analysis of concurrent flows, the deterministic I-S-Q
relationship referred to in Book 8, Chapter 7, Section 2 is replaced by the relationship
between total flows downstream of the confluence and the joint occurrence of upstream
flows of differing magnitudes, and the marginal distribution of storage volume is replaced by
the probability distribution of flows in the adjacent tributary. Careful consideration needs to
be given to the specification of the marginal distribution of tributary inflows as the two flow
distributions will be correlated. Also, peak discharges are unlikely to coincide. The worked
example on this approach provided in Book 8, Chapter 8, Section 3 is directly
applicable to this situation and can be applied if desired.
Monte Carlo techniques also provide a rigorous solution to the problem. If space-time
patterns of rainfall are used in the modelling then an unbiased estimate of the frequency of
concurrent flows can be obtained. Alternatively, concurrent rainfalls (or flows) can be
generated as correlated random variates using the following simple scheme:
1. Independently generate two normal random variates with a mean of zero and a standard
deviation of 1: X ~ N(0,1) and Z ~ N(0,1)
2. Set Y = ρX + Z√(1 − ρ²)
3. Return:
x = μx + σx X
y = μy + σy Y
where μx and μy are the means of the two distributions and σx and σy are the required standard
deviations.
For application to catchment rainfalls, X and Y could represent the log-transformed values of
rainfall maxima, in which case the above scheme would represent the generation of a
bivariate log-Normal distribution of rainfalls which has been found to provide a satisfactory
approximation over the range of AEPs of interest (Nandakumar et al., 1997). The stochastic
rainfalls could be used in conjunction with a rainfall-based method to provide concurrent
flood hydrographs. Estimates of suitable correlations can be obtained from the analysis of
observed rainfall data, or else using the generalised correlation-distance relationships
reported in Nathan et al. (1999). Ideally, however, such correlations would be determined
using areal rainfall estimates based on site-specific analysis of gridded data (e.g. Jones et
al. (2009)).
Application of the above algorithm is illustrated in Figure 8.7.4. The input parameters to this
example are ρ=-0.7, μx=70 and σx=10, and μy=50 and σy=10, and as before a total of 2000
correlated variates are generated. Any distribution could be used in lieu of the Normal
distribution, or else the variates of interest could be transformed into the normal domain.
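The three-step scheme above translates directly into code; the sketch below uses the parameters quoted for the Figure 8.7.4 example and checks the sample correlation of the generated pairs.

```python
import numpy as np

rng = np.random.default_rng(11)

def correlated_normals(n, rho, mu_x, sd_x, mu_y, sd_y):
    """Generate n pairs (x, y) of correlated Normal variates (steps 1-3 above)."""
    x_std = rng.standard_normal(n)                       # X ~ N(0,1)
    z = rng.standard_normal(n)                           # Z ~ N(0,1)
    y_std = rho * x_std + z * np.sqrt(1 - rho ** 2)      # Y = rho*X + Z*sqrt(1-rho^2)
    return mu_x + sd_x * x_std, mu_y + sd_y * y_std

# Parameters quoted in the text for the Figure 8.7.4 example
x, y = correlated_normals(2000, rho=-0.7, mu_x=70, sd_x=10, mu_y=50, sd_y=10)
print("Sample correlation:", np.corrcoef(x, y)[0, 1].round(2))

# For catchment rainfalls, x and y would be log10-transformed rainfall maxima, in
# which case the pairs (10**x, 10**y) follow a bivariate log-Normal distribution.
```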
Figure 8.7.4. Illustration of the Generation of Variables with a Correlation of 0.7 Based on
Normal Distributions
The expected value of the tributary flow concurrent with a mainstream flow of magnitude x
can be estimated from the conditional mean of the bivariate Normal distribution:
E[y|x] = μy + ρ (σy / σx) (x − μx)
where μ and σ signify the mean and standard deviation of the marginal distributions, ρ is the
correlation between the two variates, and x and y represent design flows at the two sites;
note that all flows need to be transformed into the logarithmic domain.
The correlation ρ can be obtained from an analysis of large historic events, and the other
parameter values can be found by fitting log-Normal distributions to both the mainstream and
tributary streamflow data. The mean and standard deviation can be determined by fitting a
line of best fit (either graphically or analytically) through the available design flood estimates
in the log-Normal domain. Usually a number of design flood estimates will be available for
the mainstream flows as a complete frequency curve will have been derived (Book 8,
Chapter 6, Section 3), but design flood estimates for the tributary flow may be derived using
the approximate procedures provided in Book 8, Chapter 6, Section 2.
Given the uncertainty of the correlation structure over the range of magnitudes of interest, it
is considered that the above approximations are appropriate for those design situations in
which the magnitude of the tributary flows is minor compared to the mainstream flows, and
the correlation between the two flows is small or modest. It is worth noting that the
magnitudes of the tributary floods are very sensitive to the strength of the correlation, and
thus careful attention should be given to the nature and selection of the events used to
derive the correlation value. It is also perhaps worth noting that the tributary distribution of
interest is the flow value coinciding with the peak flows in the mainstream; the use of the
peak flow distribution for the tributaries is an additional approximation.
A worked example illustrating some of the above concepts is presented in Book 8, Chapter
8, Section 5.
Given a set of seasonal frequency curves, care needs to be given to converting the seasonal
exceedance probabilities to annual estimates. The AEP of a specific event (e.g. a dam
overtopping event, Q0) which is not conditional on the time of year can be approximated by
summing the seasonal exceedance probabilities of the selected event.
As an example, if the year was divided into two seasons, then two separate events could be
considered: a summer event Qs (Q>Q0) and a winter event Qw (Q>Q0). If these events are
regarded as being independent (and if their exceedance probabilities are less than, say, 1 in
10 AEP), then the unconditional AEP of an event Q>Q0, i.e. of Qs or Qw, can be computed
as:
where SEPs[Q0] and SEPw[Q0] are respectively the summer and winter Seasonal
Exceedance Probabilities (SEP) of the selected event, and AEP[Q0] represents the
probability of one or more events of magnitude Q ≥Q0 occurring in a single year. The
computation of the AEPs from seasonal distributions for more than two seasons is
analogous, and is illustrated in Figure 8.7.5. The SEPs can be simply added to give AEPs, if
the seasons are defined such as to form an exhaustive set of mutually exclusive events (i.e.
they are non-overlapping and cover the whole year).
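As a numerical illustration (with assumed seasonal exceedance probabilities), the simple sum of Equation (8.7.2) can be compared with the exact combination for independent, mutually exclusive seasons:

```python
import numpy as np

# Illustrative seasonal exceedance probabilities of Q >= Q0 for four non-overlapping
# seasons covering the whole year (assumed values)
sep = np.array([2e-4, 5e-5, 1e-5, 3e-5])

aep_sum = sep.sum()                     # Equation (8.7.2): simple addition
aep_exact = 1 - np.prod(1 - sep)        # exact result for independent seasons

print(f"AEP (sum)   = {aep_sum:.6e}  (about 1 in {1 / aep_sum:,.0f})")
print(f"AEP (exact) = {aep_exact:.6e}")
```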
It is important to note here that the event whose AEP is being analysed needs to be clearly
defined in terms of a magnitude (e.g. Q ≥100 m3/s) rather than in terms of a concept (e.g.
“PMP”) that does not directly relate to a magnitude. This means that Equation (8.7.2)
cannot be directly applied to PMPs for different seasons but only to rainfalls or floods of a
specified magnitude occurring in different seasons.
The snowmelt algorithms used in the established flood event models can be broadly divided
into two groups. One group of models is based on a temperature index approach in which
temperature alone is used as a surrogate for the energy available for snowmelt. Another
group of snowmelt algorithms is based on an energy balance approach in which energy
fluxes are calculated explicitly using physically-based process equations. The results of an
international comparison of snowmelt runoff models (World Meteorological Organisation,
1986) indicate that the temperature index approach has an accuracy comparable to more
complex energy budget formulations. Unfortunately, however, the method does not lend itself
to hourly computations (which are required for flood event estimation purposes) because it is
the radiation component which is mainly responsible for the hour-to-hour variations (Rango
and Martinec, 1995).
The most appropriate method to use for the derivation of snowmelt design floods will depend
largely on the nature of the available data. Practitioners are encouraged to review carefully
the type of data that can be obtained for the site of interest, and to select a model that is
commensurate with the complexity of the available data. A number of suitable models are
commercially available (e.g. USACE (1990)), though there is little documented experience
with their application to Australian conditions.
Nathan and Bowles (1997) provide one example of a study in which a joint probability
approach was adopted for the derivation of snowmelt design floods. They incorporated the
Snow Compaction Procedure (USBR, 1966) into a modified version of the RORB model.
This procedure uses a water budget approach which is based on the concept of snow
compaction and a threshold density, where the maximum potential rate of snowmelt is
derived using the sub-daily application of the US Corps of Engineers degree-day snowmelt
equations (USACE, 1960). A simplified approach was taken to sample antecedent snowpack
conditions, but this would be better implemented within a Monte Carlo framework.
While it may be necessary to consider the likelihood of storm sequences in tropical regions,
it is reasonably clear that long duration design events (one to several days) in south-eastern
Australia are unlikely to be preceded by significant antecedent rainfalls (Book 8, Chapter 3,
Section 6). Accordingly, the issue of storm sequences over extended periods may be
implicitly solved by undertaking a joint probability analysis of inflow floods and reservoir
volume, as described in Book 8, Chapter 7, Section 2.
There are other design situations (such as tailings dams) in which the design objective is to
ensure that the risk of spills from the storage is negligible. These types of problems can
generally be handled by undertaking mass balance calculations of all operational inflows and
outflows for very long hydroclimatic sequences. It is usually not necessary to use a
hydrograph model to route the rainfall excess as the surface area of the storage may be
large compared to the contributing catchment area; it thus may be sufficient to allow for a
freeboard in the storage that fully accommodates the volume of runoff corresponding to the
required AEP of rainfall. This type of problem does not lend itself to event-based joint
probability analyses but requires water balance computations over extended periods.
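A skeletal version of such a water balance is sketched below; the rainfall and evaporation series, storage dimensions and operational inflows and outflows are all placeholders, and a real assessment would use long stochastic hydroclimatic sequences as discussed next.

```python
import numpy as np

rng = np.random.default_rng(5)

months = 12 * 1000                       # a long (synthetic) monthly sequence
rain_mm = rng.gamma(2.0, 30.0, months)   # placeholder monthly rainfall (mm)
evap_mm = np.tile(np.array([180, 160, 140, 100, 70, 50, 50, 70, 100, 130, 160, 180]),
                  months // 12).astype(float)

area_km2 = 4.0                           # storage surface area (assumed >> catchment)
capacity_ml = 8000.0                     # storage capacity (ML)
process_inflow_ml = 250.0                # operational inflow per month (ML)
decant_ml = 300.0                        # operational outflow per month (ML)

volume = 4000.0                          # starting volume (ML); 1 mm over 1 km^2 = 1 ML
spills = 0
for r, e in zip(rain_mm, evap_mm):
    volume += r * area_km2 + process_inflow_ml - e * area_km2 - decant_ml
    volume = max(volume, 0.0)
    if volume > capacity_ml:             # freeboard exceeded -> spill
        spills += 1
        volume = capacity_ml

print(f"Months simulated: {months}, months with spill: {spills}")
```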
Generally, it is desirable to generate the long hydroclimatic sequences by stochastic data
generation techniques (refer to McMahon and Mein (1986)), and an example of this
approach used for spillway design is provided by Kinkela and Pearce (2014). The required
security against overtopping can be achieved by using sequences of different lengths, as
described, for example, in Grayson et al. (1996) and Book 8, Chapter 5, Section 2.
One of the major practical and theoretical problems with the application of stochastic data
generation techniques – particularly when used in the assessment of the Very Rare to
Extreme risks – is the characterisation of statistical extremes. This difficulty relates both to
the tail of the distribution, as well as to the definition of the correlation between the stochastic
inputs over a range of event magnitudes. These issues require careful consideration and
should only be undertaken by practitioners with specialist experience.
General guidance on assessing the impact of climate change is provided in Book 1, Chapter
6; however, it is worth noting that at present no allowance is made for climate change in
current estimates of the PMP.
There are other factors apart from rainfall intensities that can be considered when assessing
the impact of climate change. In the context of Very Rare to Extreme events, Fowler et al.
(2010) considered the impacts of two additional factors on the assessment of spillway
adequacy, namely catchment losses and the distribution of water levels. The change in
catchment losses was assessed by use of a continuous simulation model to derive
streamflow sequences corresponding to current-day and changed-climate conditions; design
losses were altered to achieve a match between quantiles of 4-day flood volumes obtained
from Monte Carlo analysis and the frequency analysis of the derived maxima. Similarly, an
altered distribution of drawdown conditions was obtained from a model that simulated altered
irrigation demands and streamflow sequences. While that study found an overall reduction in
flood risk due to the downward shift in distribution of initial storage levels, it would be
expected that outcomes will vary depending on the characteristics of the system being
modelled.
Until better information becomes available it is considered that assessments of the impact of
climate change on Very Rare to Extreme flood risks are likely to be speculative and most
suited to sensitivity analyses.
7.8. References
ANCOLD (Australian National Committee on Large Dams) (2000), Selection of acceptable
flood capacity for dams, March
Bergström, S., Harlin, J. and Lindström, G. (1996), Spillway design floods in Sweden: I. New
guidelines, Hydrol. Sci. J. 37(5), 505-519.
Fowler, K., Hill, P.I., Jordan, P.W., Nathan, R.J., Sih, K. (2010), Application of Available
Climate Science to Assess the Impact of Climate Change on Spillway Adequacy. ANCOLD
2010 Conference on Dams. Hobart.
Grayson, R.B., Argent, R.M., Nathan, R.J., McMahon, T.A. and Mein, R.G. (1996),
Hydrological recipes - Estimation techniques in Australian Hydrology, Cooperative Centre for
Catchment Hydrology, Monash University, pp.125.
Jakob, D., Smalley, R., Meighen, J., Taylor, B. and Xuereb, K. (2008), Climate change and
Probable Maximum Precipitation. In Proc. Hydrology and Water Resources Symposium,
Water Down Under, pp: 109-120.
Jones, D., Wang, W. and Fawcett, R. (2009), High-quality spatial climate data-sets for
Australia, Australian Meteorological and Oceanographic Journal, 58: 233-248.
Jordan, P., Nathan, R., Mittiga, L. and Taylor, B. (2005),'Growth Curves and Temporal
Patterns for Application to Short Duration Extreme Events'. Aust J Water Resour, 9(1),
69-80.
Kinkela, K. and Pearce, L. (2014), No ordinary dam - rainfall and flood hydrology for the Ord
River dam. In: Hydrology and Water Resources Symposium 2014. Barton, ACT: Engineers
Australia, pp: 342-349.
Kunkel, K., Karl, T.R., Easterling, D.R. and Redmond, K. (2013), Probable maximum
precipitation and climate change. Geophysical Research Letters, 40: 1402-1408.
McMahon, T.A. and Mein, R.G. (1986), River and reservoir yield. Water Resources
Publications, Colorado, U.S.A., p: 368
Micovic, Z., Schaefer, M. G., and Taylor, G. H. (2015). Uncertainty analysis for Probable
Maximum Precipitation estimates. J. Hydrology, 521, 360-373. doi:10.1016/j.jhydrol.2014.12.033
Natural Environment Research Council (NERC; 1975), Flood Studies Report, 5 Vols, Natural
Environment Research Council, Institute of Hydrology, Wallingford, UK.
Nandakumar, N., Weinmann, P.E., Mein, R.G. and Nathan, R.J. (1997), Estimation of
extreme rainfalls for Victoria using the CRC-FORGE method (for rainfall durations 24 to 72
hours). CRC Research Report 97/4, Melbourne, Cooperative Research Centre for
Catchment Hydrology.
Nathan, R.J. and Bowles, D.S. (1997), A probability-neutral approach to the estimation of
design snowmelt floods. Proceedings of the Hydrology and Water Resources Symposium:
Wai-Whenua, November 1997, Auckland, pp: 125-130.
Nathan, R.J., Weinmann, P.E. and Minty, L. (1999), Estimation of the Annual Exceedance
Probability of PMP Events in Southeast Australia, Aus. J. Water Resour., 3(1), 143-154.
Rango, A. and Martinec, J. (1995), Revisiting the degree-day method for snowmelt
computations. Water Resour. Bull., 31(4), 657-669.
Scorah, M., Lang, S. and Nathan, R. (2015), Utilising AWAP gridded rainfall dataset to
enhance hydrology studies. Proc. Hydrology and Water Resources Symposium, Hobart.
Stratz, S.M. and Hossain, F.M. (2014), Probable Maximum Precipitation in a Changing
Climate: Implications for Dam Design. Journal of Hydrologic Engineering, pp: 1-7. doi:
10.1061/(ASCE)HE.1943-5584.0001021.
United States Army Corps of Engineers (USACE) (1990), HEC-1 Flood hydrograph
Package, User's Manual. Report CPD-1A, Version 4.0., Hydrologic Engineering Centre, U.S.
Army Corps of Engineers.
United States Army Corps of Engineers (USACE) (1960), Runoff from Snowmelt, Manual
EM 1110-2-1406, U.S. Government Printing Office, Washington, D.C. 75pp.
U.S. Bureau of Reclamation (USBR) (1966), Effect of Snow Compaction on Runoff from
Rain on Snow, Engineering Monograph No.35, Denver, Colorado. P: 45.
Chapter 8. Worked Examples
Rory Nathan, Erwin Weinmann
Flood frequency curves are derived for both inflows to the reservoir, as well as for reservoir
outflows. As the volume of the reservoir is large compared to the volume of runoff, and it is
likely that the reservoir is drawn down below full supply level, the derivation of the outflow
frequency curve requires consideration of the joint probabilities of both inflows and storage
volume. A tributary enters the mainstream just below the reservoir, and estimates of
concurrent tributary flows are required for a range of AEPs in order to help determine the
component of incremental damages that could be attributed to dam failure.
The following information is available for the catchment:
• a set of calibration results obtained by fitting a flood event model to several large observed
floods;
• a synthetic monthly time series of reservoir volume obtained from a system simulation
model;
• design rainfalls between 1 in 50 and 1 in 2000 AEP from Book 2 procedures; and,
• GSAM estimates of the PMP for a range of standard durations obtained from the Bureau
of Meteorology.
For Very Rare rainfalls, point estimates for 24 and 48 hour durations are also obtained from
Book 2 procedures, as made available online at www.bom.gov.au.
Estimates of Very Rare rainfalls for the 12 hour event are obtained from the growth factors
provided in Table 8.3.2, multiplied by the 1 in 100 AEP point rainfall depth. For example, the
1 in 2000 AEP 12 hour depth is simply estimated as 111.4 × 1.698 = 189.2 mm.
To obtain areal design rainfalls, the point rainfall estimates are multiplied by the Areal
Reduction Factors (ARFs) provided in Book 2, Chapter 4. For the long duration rainfalls the
ARF for this location is estimated as a function of rainfall duration (D, hrs), catchment area
(A, km2) and 1 in Y AEP using the long duration ARF equation of Book 2, Chapter 4, where
the area of the catchment is 439 km2. For the short duration rainfalls, the appropriate
Areal Reduction Factor is independent of AEP and can be estimated from:
ARF = min[1.00, 1.00 − 0.1(A^0.14 − 0.879) − 0.029 A^0.233 (1.255 − log10 D)]   (8.8.2)
The areal rainfalls obtained by applying the above equations are shown in the last three
columns of Table 8.8.1.
Table 8.8.1. Calculation of Areal Design Rainfalls for Rare to Very Rare Events
AEP       Point Rainfall (mm)        Areal Reduction Factor     Areal Rainfall (mm)
(1 in Y)  12 hr   24 hr   48 hr      12 hr   24 hr   48 hr      12 hr   24 hr   48 hr
50        99.4    135.8   181.9      0.870   0.925   0.940      86.5    125.6   170.9
100       111.4   153.5   205.8      0.864   0.923   0.938      96.3    141.7   193.0
200       127.0   172.4   231.2      0.858   0.922   0.936      109.0   158.9   216.5
500       149.7   199.7   268.2      0.850   0.919   0.934      127.3   183.6   250.5
1000      168.6   222.1   298.7      0.844   0.918   0.932      142.3   203.8   278.5
2000      189.2   246.4   332.0      0.838   0.916   0.931      158.5   225.7   308.9
Y1 = 1000
Y2 = 2000
YPMP = 2.28 × 10^6
Zd, Sgc and Sgap are calculated from Equation (8.3.3), Equation (8.3.5) and Equation (8.3.6),
respectively, and their values are shown in the upper panel of Table 8.8.2. Parameter gY
varies with AEP and RY varies with both AEP and duration, and their values (using Equation
(8.3.4) and Equation (8.3.2)) are shown in the lower panel of Table 8.8.2. Design rainfalls
for intermediate AEPs are calculated using Equation (8.3.1), and these are shown in the last
three columns of the lower panel of Table 8.8.2.
Table 8.8.2. Parameters Calculated of Areal Design Rainfalls for Very Rare to Extreme
Events
Parameter 12 hour 24 hour 48 hour
zd 3.057 3.057 3.057
Sgap 0.071 0.062 0.060
Sgc 0.075 0.062 0.055
Table 8.8.3. Calculation of Areal Design Rainfalls for Very Rare to Extreme Events
To check that the derived rainfall frequency curves are well behaved, it is worth plotting the
results on Normal probability paper. Rainfall may be displayed on either arithmetic or
logarithmic scales, and if a suitable probability scale is not available then probabilities can be
expressed as a standard normal variate (i.e. the “z score”, the inverse of the standard normal
cumulative distribution) and then plotted on an arithmetic axis. The z scores for the rarer
AEPs are shown in the 2nd column of Table 8.8.2. The frequency plot of the derived rainfall
frequency curves is shown in Figure 8.8.1. Alternatively, the results could be plotted on log-
log scales: while this would not as clearly illustrate the behaviour of extremes, it would be
sufficient to check for inconsistencies.
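The z scores can be obtained from the inverse of the standard normal cumulative distribution; a short sketch using scipy is:

```python
from scipy.stats import norm

# Standard normal variate (z score) corresponding to a 1 in Y AEP
for y in [50, 100, 200, 500, 1000, 2000]:
    z = norm.ppf(1 - 1 / y)
    print(f"1 in {y:>5} AEP -> z = {z:.3f}")
```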
To illustrate the derivation of a 36 hour rainfall frequency curve, the rainfall depth for an
event of 1 in 1000 AEP is calculated from logarithmic interpolation as:
log(36 hour 1 in 1000 AEP depth) = (log 36 − log 24) / (log 48 − log 24) × (log 268.0 − log 190.4) + log 190.4
                                 = 2.367   (8.8.3)
where the values for the 24 and 48 hour rainfall depths are obtained from Table 8.8.2. The
resulting rainfall depth is computed as 10^2.367 = 232.5 mm.
the 1 in 2000 AEP and PMP depths, and the intermediate AEPs are then obtained from the
interpolation procedure as described in Book 8, Chapter 8, Section 2. The resulting 36 hour
rainfall frequency curve is shown as a dashed line in Figure 8.8.1.
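The interpolation of Equation (8.8.3) can be reproduced with a few lines; the 24 and 48 hour depths are those quoted above for the 1 in 1000 AEP event.

```python
import numpy as np

def interp_duration(d, d1, d2, depth1, depth2):
    """Logarithmic (log duration versus log depth) interpolation between two durations."""
    frac = (np.log10(d) - np.log10(d1)) / (np.log10(d2) - np.log10(d1))
    return 10 ** (np.log10(depth1) + frac * (np.log10(depth2) - np.log10(depth1)))

# 1 in 1000 AEP: 24 h depth = 190.4 mm, 48 h depth = 268.0 mm (Table 8.8.2)
print(round(interp_duration(36, 24, 48, 190.4, 268.0), 1))   # ~232.5 mm
```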
Figure 8.8.2 (b) illustrates the impact of successively introducing variability into the flood
estimates. The black curve represents the frequency curve obtained using a simple design
event approach in which all inputs (except for rainfall) are held at fixed values. The curve
represents the envelope of all durations trialled. When losses are allowed to vary
stochastically with season, it is seen from the light blue curve in Figure 8.8.2(b) that the flood
peaks beyond 1 in 50 AEP are lower; this result arises as the seasonal distribution of losses
is slightly out of phase with that of rainfalls. Next, when an ensemble of temporal patterns is
stochastically sampled, it is seen (darker blue curve) that the flood peaks are higher than if a
fixed temporal distribution of rainfall is adopted. This reflects the highly non-linear runoff
response to variability in temporal patterns. When all the inputs are allowed to vary
stochastically it is seen (red curve) that the final result is slightly higher than if deterministic
assumptions were adopted.
It should be stressed that the results shown in Figure 8.8.2(b) simply illustrate the manner in
which the probability neutral assumptions of flood producing factors can be examined and
combined. The magnitudes of differences between deterministic and joint probability
approaches are very site-specific, and depend largely on the sensitivity of the system to the
dominant hydrometeorological inputs.
Figure 8.8.2. (a) Simulation Framework used to Generate Floods for Selected Stochastic
Inputs and (b) Resulting Flood Frequency Curves
(ii) the relationship between inflows and outflows, for different initial reservoir levels; and,
The analytical approach is based on the method developed by Laurenson (1974), and the
numerical approach is based on Monte Carlo simulation.
For this example, the inflow frequency curve is that derived in Book 8, Chapter 8, Section 3
based on the stochastic sampling of seasonal losses and temporal patterns (red curve,
Figure 8.8.2). The relationship between inflows, outflows, and initial reservoir level (I-O-S
relationship) is shown in Figure 8.8.3. The frequency distribution of storage volume is
assumed to have been derived from the simulation results of long-term reservoir behaviour,
and is shown in Figure 8.8.4.
The whole range of peak outflows is divided into 20 class intervals, as indicated in the first
column of Figure 8.8.5. The elements of the table are then evaluated for each inflow class
interval. The numerical values represent the conditional probability (in percentage points)
that an inflow peak in the given class interval produces an outflow peak falling in the
selected outflow class interval. The sum of the values in each inflow class interval (i.e. the
sum of each column) is 100. It is worth noting that the values provided in Figure 8.8.5 have
been computed using specialist software; the numerical accuracy used in the calculations is
greater than that which could be achieved using graphical techniques, but the procedural
steps are identical.
The derivation of a particular element is described as follows. Consider the inflow class
interval of 2200 m3/s to 3000 m3/s and represent it by its mid-point of 2600 m3/s. Consider
the outflow class interval 1500 m3/s to 1690 m3/s. From Figure 8.8.3, the initial storage
volumes which produce peak outflows of 1500 m3/s and 1690 m3/s from a peak inflow of 2600
m3/s are respectively 89.5% and 92.5% of full storage. From Figure 8.8.4 the probabilities
that the actual storage volume will be greater than these are respectively 29.5% and
37.4%, so the probability that the initial volume will be between 89.5% and 92.5% of full
storage volume is (37.4 − 29.5) = 7.9%. Thus the probability that a peak inflow between 2200
m3/s and 3000 m3/s would lead to an outflow between 1500 m3/s and 1690 m3/s is 7.9%. This
value is inserted in the appropriate position in the table and other values are computed in a
similar manner.
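This element can be reproduced as sketched below. The storage exceedance curve is a placeholder standing in for Figure 8.8.4 (anchored to the two probabilities quoted in the text), and the storage volumes of 89.5% and 92.5% are taken directly from the worked figures rather than recomputed from the I-S-Q relationship.

```python
import numpy as np

# Exceedance probabilities (%) of initial storage volume (% of full storage),
# standing in for Figure 8.8.4; only the two quoted points matter for this element.
storage_pct = np.array([80.0, 85.0, 89.5, 92.5, 96.0, 100.0])
exceed_pct = np.array([60.0, 45.0, 37.4, 29.5, 15.0, 0.0])

def prob_storage_between(lo, hi):
    """P(lo <= initial storage <= hi), read off the exceedance curve."""
    p_lo = np.interp(lo, storage_pct, exceed_pct)
    p_hi = np.interp(hi, storage_pct, exceed_pct)
    return p_lo - p_hi

# Initial storages that map a 2600 m3/s inflow peak to outflow peaks of 1500 and
# 1690 m3/s (read from the I-S-Q relationship, Figure 8.8.3): 89.5% and 92.5% of full
p_element = prob_storage_between(89.5, 92.5)
print(f"P(outflow 1500-1690 | inflow 2200-3000) = {p_element:.1f}%")   # 7.9%
```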
The distribution of peak outflows is evaluated by multiplying each element of the table by the
corresponding probability of occurrence of the inflow interval, and the resulting products are
summed horizontally (and divided by 100) to give the values in the second last column of
Figure 8.8.5. For example, the outflow element corresponding to the outflow range of 1500
m3/s to 1690 m3/s is obtained from the following calculation:
Finally, the values are added for all outflow intervals which exceed the outflow magnitude of
interest to give the probabilities of exceedance, as listed in the last column. For this
example, the AEP of Q=1500 m3/s is found to be 0.000757% or about 1 in 130 000.
The calculated outflow points from Figure 8.8.5 are plotted and a curve fitted to define the
frequency distribution of peak outflows, as shown in Figure 8.8.6. Note that if a sufficient
number of intervals are used to discretise the inflow and outflow frequency curves then it is
probably not necessary to fit a curve as the points generally follow a smooth curve in the log-
Normal probability domain.
For comparison purposes, outflows are also derived for an initial storage volume fixed at the
median level of drawdown, which is 81.3% of the full supply storage (Figure 8.8.4). The
corresponding outflow curve is plotted in Figure 8.8.6, and it is seen that this simplistic
approach yields an outflow frequency curve that is significantly lower than that obtained
using the more accurate joint probability approach.
Figure 8.8.5. Transition Probabilities between Reservoir Inflow and Outflow Classes
Figure 8.8.6. Outflow Frequency Curves Obtained using Joint Probability Analysis and a
Median Level of Drawdown
If the distribution of initial reservoir levels is found to vary with event severity (as illustrated
by the insert in Figure 8.7.3) then the same framework as shown in Figure 8.8.7 can still be
used, the only difference being that a different drawdown distribution is selected depending
on the magnitude of the inflow flood (or causative rainfall). The drawdown distribution
selected can also vary with season to account for marked differences in seasonal water
levels, and again the framework as shown in Figure 8.8.7 is directly applicable if distributions
are selected according to season.
Figure 8.8.7. Simulation Framework to Derive Outflow Frequency Curve Based on Variable
Initial Starting Level in Reservoir
The parameters of the log-Normal distribution can then most easily be calculated by simply
fitting a linear regression line through the transformed data (i.e. columns 3 and 5, and
columns 3 and 7). The intercept of the fitted line is equivalent to the mean of the distribution
(as the standardised variate of the mean of a log-Normal distribution is zero), and the slope
is equivalent to the standard deviation. The fitted parameters are listed below columns 5 and
7, and may be obtained either graphically, or by using standard spreadsheet functions. The
design flood estimates and the fitted log-Normal distributions are shown in Figure 8.8.9. The
log-Normal estimate (x) may be calculated from the relevant sample mean (m), standard
deviation (s), and standardised variate (z) as follows:
x = m + s z   (8.8.5)
For example, to calculate the 1 in 100 AEP design flood in the mainstream:
The computed design flood estimates from the fitted distribution are shown in columns 8 and
10; these are then back-transformed into the arithmetic domain, as shown in columns 9 and
11 of Figure 8.8.8.
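A sketch of the fitting and quantile steps is given below; the five mainstream design flood estimates are illustrative stand-ins for the values tabulated in Figure 8.8.8, chosen to be consistent with the parameters m = 1.797 and s = 0.362 that appear in Equation (8.8.7).

```python
import numpy as np
from scipy.stats import norm

# Illustrative mainstream design flood estimates (AEP 1 in Y, peak flow in m3/s)
years = np.array([50, 100, 200, 500, 1000])
flows = np.array([348, 436, 537, 690, 824])

# Transform: standardised variate z from the AEP, and log10 of the flows
z = norm.ppf(1 - 1 / years)
log_q = np.log10(flows)

# Linear regression of log-flow on z: intercept = mean (m), slope = standard deviation (s)
s, m = np.polyfit(z, log_q, 1)
print(f"fitted m = {m:.3f}, s = {s:.3f}")

# Equation (8.8.5): design flood for any AEP, back-transformed to m3/s
z_100 = norm.ppf(1 - 1 / 100)
q_100 = 10 ** (m + s * z_100)
print(f"1 in 100 AEP design flood ~ {q_100:.0f} m3/s")
```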
Figure 8.8.9. Fitted log-Normal Flood Frequency Curves for Mainstream and Tributary
Design Flows
E[y|x] = 1.251 + 0.5 × (0.376 / 0.362) × (x − 1.797)   (8.8.7)
where 0.5 represents the correlation between the log-transformed flows calculated for the
largest floods on record. The average concurrent flow in the tributary corresponding to a 1 in
50000 AEP event in the mainstream is thus calculated by:
E[y|x] = 1.251 + 0.5 × (0.376 / 0.362) × (3.465 − 1.797)   (8.8.8)
       = 2.117 (log m3/s)
       = 131 m3/s
The computed figures for all AEPs are shown in column 12, and the back-transformed
values are shown in column 13. It is of interest to calculate the AEPs of the concurrent
tributary flows, and these may be calculated by first calculating the standard normal variate
using:
z = (x − m) / s   (8.8.9)
For example, to calculate the AEP of the 74 m3/s design flood estimate in the tributary:
z = (log 74 − 1.251) / 0.376 = 1.644   (8.8.10)
The corresponding standard normal cumulative distribution for this value of z is 0.95, which
corresponds to a 1 in 20 AEP. Values for the other estimates are shown in column 14.
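The calculations in Equations (8.8.8) and (8.8.10) can be verified with a few lines (scipy's normal distribution is used for the AEP conversion):

```python
from math import log10
from scipy.stats import norm

# Fitted log-Normal parameters (log10 m3/s): mainstream and tributary
m_x, s_x = 1.797, 0.362      # mainstream
m_y, s_y = 1.251, 0.376      # tributary
rho = 0.5                    # correlation of log-transformed flows

# Equation (8.8.8): expected tributary flow concurrent with a mainstream flow of
# 10**3.465 m3/s (the 1 in 50 000 AEP mainstream event)
log_q_trib = m_y + rho * (s_y / s_x) * (3.465 - m_x)
print(f"Concurrent tributary flow ~ {10 ** log_q_trib:.0f} m3/s")     # ~131 m3/s

# Equation (8.8.10): AEP of a 74 m3/s tributary flow from its marginal distribution
z = (log10(74) - m_y) / s_y
print(f"z = {z:.3f}, AEP ~ 1 in {1 / (1 - norm.cdf(z)):.0f}")          # ~1 in 20
```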
BOOK 9
cix
Runoff in Urban Areas
cx
Runoff in Urban Areas
cxi
List of Figures
9.2.1. Simple Model of Water Inputs, Storage and Flows in an Urban Catchment ............... 6
9.2.2. Spatial Distribution of Average Annual Rainfall Depths for the Greater
Melbourne Region (Coombes, 2012) .......................................................................... 9
9.2.3. Spatial Distribution of Average Annual Frequency of Rainfall for the Greater
Melbourne Region (Coombes, 2012) ........................................................................ 10
9.2.4. Spatial Distribution of Average Annual Maximum Temperatures for the Greater Melbourne
Region (Coombes, 2012) .......................................................................................... 11
9.2.5. Total Water Input to Curtin in ACT ............................................................................. 13
9.2.6. Total Water Output from Curtin in ACT ...................................................................... 13
9.2.7. Average Annual Water Balances from Households in Adelaide, Ballarat, Brisbane,
Canberra, Melbourne, Perth and Sydney .................................................................. 14
9.2.8. Water Cycle Processes in the Ballarat Water District from 1999 to 2012 .................. 15
9.3.1. Schematic of Traditional Urban Catchments and Cumulative Stormwater Runoff
Processes .................................................................................................................... 29
9.3.2. The Full Spectrum of Flood Events (Adapted from
Weinmann (2007)) ....................................................................................................... 30
9.3.3. Cumulative Impacts of Distributed Management ....................................................... 32
9.3.4. Schematic of the Connectivity of Urban Water Networks .......................................... 33
9.3.5. Example Overland Flow Path Map Generated Using a Two Dimensional Model. .... 35
9.3.6. Example of Fluvial Flow Path with Interface with Overland Flow Path ...................... 38
9.3.7. Typical Configurations of Major Minor Conveyance Systems Deployed in Australian
Practice ..................................................................................................................... 41
9.4.1. Typical Hydrograph Change Generated by a Temporary Storage (Without
Harvesting) .................................................................................................................. 53
9.4.2. Rural and Developed Catchment Hydrographs ......................................................... 54
9.4.3. Developed Catchment with Retention as Compared to Detention and Slow Drainage
Strategies .................................................................................................................... 54
9.4.4. Potential Overlapping Volume Management Design Objectives ............................... 58
9.4.5. Detention Basin Typical Section ................................................................................ 69
9.4.6. Typical Detention Basin Primary Outlets ................................................................... 72
9.4.7. Typical Section Through a Below Ground On-Site Detention ................................... 73
9.4.8. Frequency Staged Below Ground On-Site Detention System (adapted from
Upper Parramatta River Trust (2005)) ..................................................................... 75
9.4.9. Rain Water Tank with Dedicated Air Space (adapted from
Coombes et al. (2001)) ............................................................................................ 82
9.4.10. Components of a Bioretention Basin (Healthy Waterways by Design, 2014) .......... 83
9.4.11. Schematic Layout of a Typical Constructed Wetland ............................................. 85
9.4.12. Photo of a Typical Constructed Wetland (Source: Steve Roso) ............................. 86
9.4.13. Example of a Managed Aquifer Recharge Scheme in an Unconfined Aquifer
(Adapted from (Natural Resource Management Ministerial Council et al., 2009)) .... 88
9.6.3. Example of a Box and Whisker Plot of Peak Stormwater Runoff Utilised to Select the
Critical Storm Burst Ensemble and Other Design Information ................................ 180
9.6.4. Rainfall Runoff Processes in Urban Catchments .................................................... 181
9.6.5. Stormwater Runoff from Roofs and Properties – Lot Scale Effects ......................... 184
9.6.6. Stormwater Management System for Larger Properties with Complex Land Uses 186
9.6.7. Example Distribution of Available Storage Prior to Storm Events versus Annual
Exceedance Probability (AEP) of Storm Events ...................................................... 188
9.6.8. Example of a Simple Urban Stormwater Sub-Catchment ....................................... 189
9.6.9. Changes in Design Modelling Techniques for Urban Areas .................................... 192
9.6.10. Design Process that Utilises Rainfall Ensembles in Hydrology to Select the Rainfall
Pattern Closest to Mean Peak Flows for use in Hydraulic Analysis ........................ 193
9.6.11. Design Process that Utilises Rainfall Ensembles in Hydrology and Hydraulic
Simulations to Select the Mean Pattern for Analysis of Flooding ............................ 193
9.6.12. The Woolloomooloo Catchment in Sydney ........................................................... 195
9.6.13. Use of Ensembles of Storm Bursts (1% AEP) in the Hydraulic Model to Select Critical
Duration ................................................................................................................... 198
9.6.14. Example of Runoff from ARR 1987 Single Storm Burst and Ensembles of Storm Bursts from
this guideline (1% AEP) ........................................................................................... 199
9.6.15. Upper Catchment Volume Check for Direct Rainfall Model (Prior to Corrections) 200
9.6.16. Comparison of Treatment of Initial Conditions in Overland Flows Generated by Coupled
Direct Rainfall Models Near the Top of the Catchment (Top Pane: Riley Street) and Near the
Bottom of the Catchment ......................................................................................... 202
9.6.17. Outflow Hydrographs Catchment Showing the Significance of Surface Flows ..... 203
9.6.18. Catchment prior to development ........................................................................... 204
9.6.19. Developed Catchment ........................................................................................... 205
9.6.20. Estimated Rural Peak Flows using the RFFE ....................................................... 206
9.6.21. Regional Flow Gauges used in the RFFE Estimate of Rural Peak Flows ............. 206
9.6.22. Calibration of Rural Flows to RFFE Flow Estimates ............................................. 207
9.6.23. Pre-Development Peak Flows for 50% AEP Events ........................... 207
9.6.24. Pre-Development Peak Flows for 10% AEP Events ............................................. 208
9.6.25. Pre-Development Peak flows for 1% AEP Events ................................................ 208
9.6.26. Selection of the 10% AEP Design Storm for Preliminary Infrastructure Design .... 209
9.6.27. Development Peak Flows for 50% AEP Events .................................................... 209
9.6.28. Development Peak Flows for 10% AEP Events .................................. 210
9.6.29. Development Peak Flows for 1% AEP Events ...................................................... 210
9.6.30. Overview of the Planned Conveyance Network in the Urban Development ......... 211
9.6.31. Overview of the Planned Storage Basin Below the Urban Development .............. 212
9.6.32. Peak flows from the Basin for 50% AEP Events ................................................... 212
9.6.33. Peak Flows from the Basin for 10% AEP Events .................................................. 213
9.6.34. Peak Flows from the Basin for 1% AEP Events .................................................... 213
9.6.35. Peak Water Levels in the Basin for 1% AEP Events ................................. 214
9.6.36. Peak Water Levels in the Basin and on Roads for 1% AEP Events Subject to Climate
Change .................................................................................................................... 215
List of Tables
9.2.1. Annual Water Balance Data from Suburbs of Australian Cities. .................................. 7
9.2.2. Water Balances for Selected Regions ....................................................................... 11
9.2.3. Water Balance for Curtin Catchment in Canberra for the Period 1979 – 1995.
(Adapted from (Mitchell et al., 2003)) ........................................................................ 12
9.3.1. Factors Influencing the Balance between Capacities of Major and Minor Systems in
Design ..................................................................................................................... 39
9.4.1. Summary of Volume Management Design Objectives .............................................. 55
9.4.2. Volume Management Facility Components ............................................................... 58
9.4.3. Common Volume Management Facility Configurations in Australian Practice .......... 60
9.4.4. Indicative Suitability of Common Volume Management Design Solutions ................ 65
9.4.5. Detention Basin Freeboard Requirements (Adapted from
Queensland Department of Energy and Water Supply (2013)) ........................................... 71
9.4.6. Above Ground On-Site Detention Storage Design Considerations ........................... 76
9.4.7. Below Ground On-Site Detention Storage Design Considerations
(Department of Irrigation and Drainage, 2000),
(Department of Irrigation and Drainage, 2012) .................................................................... 77
9.4.8. Interim Relationship between Annual Exceedance Probability and ‘Emptying Time’
(Argue, 2017) ...................................................................................................................... 95
9.5.1. Suggested Design and Severe Blockage Conditions for Inlet Pits (Book 6, Chapter 6) ............................................................................................................. 120
9.5.2. Correction Factors for ku and kw for Submergence Ratios (S/Do) not Equal to 2.5
(QUDM, 2013) ................................................................................................................... 129
9.6.1. Common Types of Urban Models ............................................................................ 165
9.6.2. Generic Classification of Model Estimation Capability ........................................... 167
9.6.3. Typical Urban Model Scales ................................................................................... 169
9.6.4. Example Flood Generation Processes at Different Model Spatial Scales ............... 170
9.6.5. Summary of Updated Design Rainfall Processes ................................................... 181
9.6.6. ARR 1987 Rainfall Loss Parameters ....................................................................... 196
9.6.7. ARR 2016 Rainfall Loss Parameters for Urban Areas ............................................ 197
9.6.8. Adopted ARR 2016 Rainfall Loss Parameters ........................................................ 197
9.6.9. Assumed Capacity of Inlet Pits for 1% AEP Rain Events ........................................ 198
9.6.10. Characteristics of the Upper Catchment used in the Volume Check ..................... 200
Chapter 1. Introduction
Peter Coombes, Steve Roso
1.1. Introduction
There have been profound changes to the science and practice of urban hydrology and
stormwater management since the last edition of Australian Rainfall and Runoff (ARR)
was published in 1987 (Pilgrim, 1987). During this period, analysis methods have evolved from
use of the slide rule to the computer age and beyond. The revision of ARR has aimed for an
evidence based approach that incorporates 30 years of additional data, science and
knowledge. This includes a move away from simple design rainfall burst event methods
towards Ensemble and Monte Carlo approaches to better capture variability. There is less
reliance on the rational method, more data available, new Intensity Frequency Duration (IFD)
data and better flow estimates for ungauged catchments (refer to Book 3, Chapter 3).
There are new challenges and gaps in knowledge about urban hydrology, which is now part of an
increasingly complex urban water cycle and of town planning processes. The designer now
aims to retain stormwater within urban landscapes, manage stormwater quality, maximise
the potential of the stormwater resource and slow flows into receiving waterways.
Australian Rainfall and Runoff now employs Australian data which ensures that urban
designers can better represent real local systems and address these new challenges.
Wherever possible this version of ARR provides information about the uncertainty of
methods and inputs. This will better equip urban designers to understand risks in the urban
environment. The Urban Book (Book 9 – Runoff in Urban Areas) has been constructed to
utilise and complement the broader set of tools in ARR used to manage the water cycle. The
over-arching objective of this book is to provide revised and up-to-date guidance for analysis
and management of urban stormwater runoff.
Two classes of stormwater management infrastructure are described in this book: volume
management (Book 9, Chapter 4) and conveyance systems (Book 9, Chapter 5).
Volume management includes measures that can store runoff for a period of time, promote
infiltration and store harvested stormwater for beneficial uses. Modern best practice aims to
achieve a range of hydrologic and water quality objectives within these facilities. Volume
management is a key element of stormwater management and flood control which has
increased in importance and will continue to evolve into the future. Stormwater volume
controls have been subject to substantially increased research effort since 1987.
Conveyance systems allow runoff to pass through urban areas and provide connections
through the catchment. Conveyance systems can be classified in different ways, for example
underground versus surface and trunk versus non-trunk. The traditional description of urban
stormwater management involves a minor and major event management philosophy where
the minor concept involves pipe drainage networks and the major concept addresses flood
events that are conveyed as surface flows. A minor versus major design concept is also still
relevant in order to efficiently convey urban runoff while mitigating nuisance, damage and
disaster. Regardless, the focus for conveyance should be careful management of surface
flows and restoration of natural flow behaviour wherever possible.
Volume management facilities and conveyance systems are interlinked to form a network
with volume management most often at discrete locations connected by more linear
conveyance systems. Both conveyance and volume management can exist at multiple
scales from lot scale (source control) to regional scale (end of pipe). In the context of Book
9, natural and semi-natural urban waterways are considered part of the network of
conveyance and storage infrastructure.
1.1.3. Modelling
The unique characteristics of urban modelling include measurement and assessment of the
hydrologic and hydraulic effect of impervious surfaces, conveyance systems and hydraulic
structures including volume management facilities. Analysis of urban areas involves data
intensive and complex processes. There is a need for complex computing tasks aided by
software to assist with modern investigation. A wide range of computer software is available
to the designer. Hand calculations are generally unsuitable for most urban applications other
than basic checks and approximations.
Choice of computer software such as urban hydrology and hydraulic models depends on a
number of factors including the spatial scale of the investigation area and the magnitude of
the floods of interest. Book 9, Chapter 6 provides guidance on how to pick a short list of
suitable models based on these factors. The aim should be to best match the selected
model with the type of investigation being undertaken.
Once a suitable model has been selected, the challenge is to ensure the model is applied
correctly. Book 9, Chapter 6 does not provide guidance on how to use specific modelling
software and instead describes the urban modelling process in a software independent
manner. Some models can be simplified and the physical resolution reduced, depending on
the spatial scale of the investigation and experience of the modelling team. Urban modelling
frameworks are described providing guidance for key segments of urban catchments from
the behaviour of land uses within sub-catchments that flow to inlet structures, through urban
stormwater networks, and into the receiving waterway.
This book focusses on the entire spectrum of runoff events and potential flooding outcomes.
Book 9, Chapter 2 provides an overview of the characteristics of urban hydrology. Book 9,
Chapter 3 introduces some of the key concepts in urban stormwater management as part of
an urban water cycle and urban systems. The book is built around Book 9, Chapter 4 and Book 9,
Chapter 5, which describe the key stormwater design elements of volume management and
conveyance. Book 9, Chapter 6 provides guidance on urban modelling including model
selection and application. Two case studies are also provided in Book 9, Chapter 6.
In some jurisdictions, there has been disproportionate focus on mitigating nuisance in the
minor system at the expense of a proper analysis of the major system. Replacement of the
minor or major drainage approach with the relativity of mitigating nuisance or disaster may
be a future innovation of stormwater management. Allowing space for a major system can
help manage large events and provides flexibility for adapting stormwater management to
incorporate integrated systems and better management of nuisance.
It is expected that policy frameworks will evolve to further integrate land and water
management with design processes at all spatial scales from local to regional, which also
applies to urban renewal and asset renewal or replacement choices. Future design
methods for integrated solutions are likely to include most of the variability of real rainfall
events by using continuous simulation, Monte Carlo frameworks and techniques that
consider complete storms, frequency of rainfall volumes and the spatial variability of events.
Good urban runoff management will only be achieved when it is integrated with the complete
management of the urban water cycle and includes proper consideration of runoff quality.
The guidance in the Urban Book must be linked with Australian Runoff Quality (ARQ)
(Engineers Australia, 2006) and other water quality guidelines so that urban stormwater
management is an integrated part of the urban water cycle and avoids duplication of
infrastructure. An integrated approach to stormwater management should avoid installation
of infrastructure to meet separate objectives that, in combination, create unexpected
diminished performance. There is a need to consider integrated approaches for future urban
water management. Integrated systems have the capacity to produce solutions that respond
to multiple objectives including economic, social and environmental criteria.
This Book on Runoff in Urban Areas is part of the evolving story of stormwater management
and aims to encourage innovation into the future.
1.2. References
Engineers Australia (2006), Australian Runoff Quality - a guide to Water Sensitive Urban
Design, Wong, T.H.F. (Editor-in-Chief), Barton.
Pilgrim, DH (ed) (1987) Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers, Australia, Barton, ACT, 1987.
Chapter 2. Aspects of Urban
Hydrology
Anthony Ladson
With contributions from the Book 9 editors (Peter Coombes and Steve Roso)
• Inputs increase as mains water is supplied to urban catchments along with rainfall.
• Less water is stored within urban catchments. Paved surfaces replace much of the
landscape to diminish infiltration of rainfall into soil profiles. Hydraulically efficient
conveyance networks rapidly remove surface water from urban areas.
• There are dramatic changes in quantity, quality and timing of water leaving the catchment.
Runoff volumes are often substantially increased. Wastewater networks provide an
alternative flow path that interacts with stormwater and groundwater. There may be fewer
opportunities for water to evaporate if it can quickly drain from a catchment.
The change in the rate and volume of inputs, outputs and storage explains the hydrologic
behaviour in urban areas: the rapid response to rainfall and increased flood magnitude and
frequency that correlates with development. This chapter explores aspects of urban
hydrology, the impact of development and urban stormwater conveyance networks,
focussing on areas where the effect of urbanisation needs to be considered for estimation of
floods.
Figure 9.2.1. Simple Model of Water Inputs, Storage and Flows in an Urban Catchment
The water balance for an urban catchment, during a selected time period, can be expressed
by equating the change in the amount of water stored to the sum of catchment inputs
minus the sum of catchment outputs (Mitchell et al., 2003).
ΔS = P + I − (Ea + Rs + Rw) (9.2.1)
Where:
ΔS is the change in the volume of water stored in the catchment
P is precipitation
I is imported water
Ea is actual evapotranspiration
Rs is stormwater runoff
Rw is wastewater discharge
There have been several studies of water balances in the urban areas of Australia including
Canberra, Melbourne, Perth, Sydney and South-East Queensland (Table 9.2.1). Although
there are substantial differences in the climates of these study areas, and the number of selected
examples is small, the data provides some insights.
• Wastewater leaving a catchment is about 59% to 86% of the amount of water imported.
This means that imported water contributes to stormwater and/or evapotranspiration. As a
result, stormwater plus evaporation exceeds precipitation in all of the case studies.
• Imported water is about 30% to 39% of precipitation. This means imported water
substantially increases catchment inflows.
• The volume of imported water is about the same as, or less than, wastewater plus
stormwater. This suggests the potential for augmentation of water supply by some
combination of rainwater harvest, stormwater harvest and wastewater reuse.
Table 9.2.1. Annual Water Balance Data from Suburbs of Australian Cities
a … information on water use in regions that include the urban areas of Adelaide, Canberra, Melbourne, Perth, South East Queensland and Sydney.
b See original studies for details.
c Includes imported water and use of groundwater.
d Inflow of stormwater from upstream area.
e Adjusted for change in groundwater storage.
Assessment of water balances for cities or urban regions also needs to account for the spatial
and temporal variation of parameters throughout an area. For example, the spatial
distribution of rainfall depth, frequency (rain days per year) and maximum temperatures are
shown for the Greater Melbourne region in Figure 9.2.2, Figure 9.2.3 and Figure 9.2.4.
Figure 9.2.2. Spatial Distribution of Average Annual Rainfall Depths for the Greater
Melbourne Region (Coombes, 2012)
Figure 9.2.2 demonstrates that average annual rainfall depths range from less than 470 mm
to greater than 1640 mm across the Greater Melbourne region. The spatial distribution of
rainfall will impact on the assessment of the water balance for the region and also impact on
selection of stormwater management strategies. The spatial distribution of the frequency of
rainfall will also impact on the determination of a water balance (Figure 9.2.3) (Walsh et al.,
2012).
Figure 9.2.3. Spatial Distribution of Average Annual Frequency of Rainfall for the Greater
Melbourne Region (Coombes, 2012)
Figure 9.2.4. Spatial Distribution of Average Annual Maximum Temperatures for the Greater
Melbourne Region (Coombes, 2012)
A range of recent detailed investigations that also considered the spatial and temporal
variation of parameters was used to define water balances for Greater Melbourne, Greater
Sydney, Greater Perth, and South-East Queensland regions. Water balances for 2013 were
extracted from these studies to provide the examples presented in Table 9.2.2.
Table 9.2.2 demonstrates that each region is subject to substantially greater volumes of
stormwater runoff than demands for mains water. In addition, the volumes of wastewater
discharges are similar to water demands. However, this result may be misleading as there
are fewer wastewater connections (especially for Perth) than water supply connections in
each region. Households in some areas are reliant on local wastewater management
measures (such as septic tanks) and receive mains water supplies.
Table 9.2.3. Water Balance for Curtin Catchment in Canberra for the Period 1979 – 1995.
(Adapted from (Mitchell et al., 2003))
The average annual input and output of the catchment was about 830 mm. Approximately
24% (200 mm) of water was imported to the catchment via the supply system. Precipitation
(rainfall) contributed the remaining 630 mm. Outputs included actual evapotranspiration
(61%, 508 mm), stormwater runoff (24%, 203 mm) and wastewater discharge (14%, 118
mm).
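As a simple check, the components quoted above can be substituted into Equation (9.2.1); a minimal sketch (assuming Python, with annual average depths in mm) confirms that inputs and outputs for Curtin approximately balance:

```python
# Average annual water balance components for the Curtin catchment (mm), as quoted above
P = 630.0    # precipitation
I = 200.0    # imported (mains) water
Ea = 508.0   # actual evapotranspiration
Rs = 203.0   # stormwater runoff
Rw = 118.0   # wastewater discharge

# Equation (9.2.1): change in storage = inputs - outputs
dS = (P + I) - (Ea + Rs + Rw)
print(f"inputs = {P + I:.0f} mm, outputs = {Ea + Rs + Rw:.0f} mm, change in storage = {dS:.0f} mm")
# Inputs and outputs are both about 830 mm, so the change in storage is negligible at this scale.
```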
The volume of imported water exceeded the volume of wastewater in all years and thus
contributed to stormwater runoff, and at least in the driest years, to evapotranspiration. More
water left the catchment as evapotranspiration and as stormwater runoff than was input via
precipitation. In addition, in all but the driest years, wastewater and stormwater were greater
than imported water, indicating the potential for harvest of suburban discharges to meet
water demands. This highlighted the requirements for water imports under drought
conditions.
Climate had a substantial influence on several of the water fluxes. Annual precipitation was
highly variable, ranging from 214 mm to 914 mm. On average, there was three times as
much rainfall as water imports but in the driest year, more water was imported to the
catchment than fell as rainfall. In the wettest year, imported water made up only 13% of
water input. Figure 9.2.5 shows the relative amounts of precipitation and imported water for
the driest, average and wettest years. The areas of the pie charts are proportional to total input.
The proportion of imported water increases in drier years.
Considering outputs, the largest term is evapotranspiration, which represents 59% or more
for each year. Although the total evapotranspiration varies between 347 mm and 605 mm for
dry and wet years, the proportion of water lost as evapotranspiration is reasonably constant
(59% to 66%). Figure 9.2.6 shows the relative amounts of actual evapotranspiration,
stormwater and wastewater for the driest, average and wettest years (the areas of the pie charts
are proportional to total output). The proportion of stormwater increases in wetter years.
The total volume and percentage of …
In summary, at the annual scale the urban water balance indicates the human impact on the
hydrologic cycle. Water is imported into urban catchments and this exceeds the amount of
wastewater exported, therefore there must be a net increase in outputs. Data from Curtin in
the ACT shows that in dry years more than half of water inputs are via the mains supply
system.
The volumes of stormwater runoff from urban areas exceed the water demand in many
cities.
The water balance in cities includes stormwater runoff, wastewater discharges and imported
reticulated mains water. To illustrate this, Figure 9.2.7 presents the average annual water
balance from the perspective of households in a range of Australian cities (Coombes, 2015).
Figure 9.2.7. Average Annual Water Balances from Households in Adelaide, Ballarat,
Brisbane, Canberra, Melbourne, Perth and Sydney
Figure 9.2.7 reveals how the combined volumes of stormwater runoff and wastewater
discharging from households (and their properties) in each of the cities are greater than the
volume of imported reticulated water supply at each location. Indeed, the average annual
volumes of stormwater runoff from residential properties are similar to or greater than the
average reticulated water demand from most of the properties. Improving stormwater
management provides an opportunity to supplement urban water supplies as well as
enhancing the amenity of urban areas and protecting the health of waterways in most cities.
The timing of water balances (rainfall, local and imported surface water supplies,
groundwater, metered water use, sewage collected and stormwater runoff from urban
surfaces) in the Ballarat Water District during the recent drought is provided in Figure 9.2.8 as an
example of water cycle processes (Coombes, 2015).
Figure 9.2.8. Water Cycle Processes in the Ballarat Water District from 1999 to 2012
Figure 9.2.8 indicates how the Ballarat Water District was dependent on surface water from
nearby dams on local waterways [surface water (local)], until the worst of the drought in
2006. The reduced flows into local dams were supplemented using local ground water and
the surface water imported from the Goulburn River (Murray-Darling Basin). Citizens' actions
to reduce water use in response to water restrictions, installation of water efficient
appliances and rainwater harvesting also halved the demands for utility water supply [water
use (metered)] of the Ballarat Water District. The Council and the Water Authority also
implemented stormwater harvesting and wastewater reuse solutions. In combination with the
availability of ground water and imported surface water from the Goulburn River, these
actions ensured that the City of Ballarat did not exhaust water supplies during drought.
Despite rainwater and stormwater harvesting, there were still substantial stormwater runoff
events suggesting that additional water was available albeit at additional cost.
The integrated action across the water cycle by the entire Ballarat community was a success
from a water supply perspective that demonstrates the value of integrated solutions and
understanding urban water balances. Nevertheless, this example also highlights the variable
and temporal nature of urban water balances and connectivity with surrounding systems.
Some of the key insights highlighted in Figure 9.2.8 are that substantial stormwater runoff
events occur during drought, and that annual volumes of wastewater discharges were similar to
water demands during water restrictions. Increases in stormwater runoff also drive
wastewater discharges to exceed water demands. The integrated solution for
Ballarat was able to overcome the jurisdictional and institutional boundary conditions that
limit opportunities for catchment based solutions in many cases.
The partitioning of outflow between evaporation and stormwater runoff depends on water
availability, conveyance infrastructure, storage in the catchment and the extent of irrigated
parkland and gardens. There are a few examples, other than for Curtin, where this has
been looked at in detail in an Australian context. In Melbourne, during a time of highly
restricted water use for irrigation, Coutts et al. (2009) found that rapid stormwater runoff
resulted in much reduced water availability and decreased evapotranspiration in urban areas
compared to neighbouring rural sites. The result was a very dry urban landscape with energy
partitioned into heating the atmosphere (which drove hot dry conditions) or into heat storage
(which increased overnight temperature).
• Decreases the storage of water within soil profiles and on the ground surface and so
increases the proportion of rain that runs off;
Additionally, the natural stream network is augmented by conveyance networks (pipes and
channels) that directly collect water from roofs and roads throughout the urban catchment.
The expanded conveyance (drainage) network:
• Increases flow velocity because constructed drains are smoother and straighter than
natural channels or overland flow paths;
• Reduces the storage of water in the channel system and on the catchment;
• Decreases the amount of water lost to evaporation because the water is quickly removed
by the drainage network; and
• Means that almost all areas will contribute flow to a stream because the piped drainage
network often extends to the furthest reaches of the catchment.
As a result, although the exact effect of urbanisation on stream hydrology depends on the
specific circumstances, there are some general comments that apply to many urban
waterways in Australia. Urbanisation results in:
• Reduced seasonality of high flows – high flow events occur year round rather than being
mainly concentrated in a wet season;
Hydrologic changes caused by urbanisation occur at the same time as, and partly cause,
changes to sediment loads, stream ecology and water quality (Walsh et al, 2005). Key
hydrologic changes are considered in more detail in the following sections.
More rainfall is converted to runoff in urban catchments from impervious surfaces and from
pervious areas that are commonly compacted or irrigated by imported water (Harris and
Rantz, 1964; Cordery, 1976; Hollis and Ovenden, 1988a; Hollis and Ovenden, 1988b; Hollis,
1988; Ferguson and Suckling, 1990; Boyd et al., 1994; Walsh et al., 2012; Askarizadeh et
al., 2015).
The increase in magnitude of flooding because of urbanisation has been recognised for
many decades (Leopold, 1968). Urbanisation causes up to a 10-fold increase in peak flood
flows in the range 4 EY to 1 EY with diminishing impacts on larger floods (Tholin and Keifer,
1959; ASCE, 1975; Espey and Winslow, 1974; Hollis, 1975; Cordery, 1976; Packman, 1981;
Mein and Goyen, 1988; Ferguson and Suckling, 1990; Wong et al, 2000; Beighley and
Moglen, 2002; Brath et al., 2006; Prosdocimi et al., 2015).
Increased flood magnitudes have been confirmed by analysis of paired catchment data in
Australia as demonstrated by the comparison of urban Giralang and rural Gungahlin
catchments in Canberra (Codner et al., 1988) as well as numerous modelling studies
(Carroll, 1995). The impact of this increased flooding is substantial and makes up a large
proportion of overall average annual flood damage estimates (Ronan, 2009).
Runoff in urban streams responds more rapidly to rainfall in comparison to rural catchments
and recedes more quickly. The quick response means there are more flow peaks in urban
streams (Mein and Goyen, 1988; McMahon et al., 2003; Baker et al., 2004; Heejun, 2007;
Walsh et al., 2012). Urbanisation was found to reduce the volume of channel storage by a
factor of 30 in Canberra (Codner et al., 1988). This contributes to the rapid response of
urban streams and increased flood flows.
The lag time – the time between the centre of mass of effective rainfall and the centre of
mass of a flood hydrograph – decreases by 1.5 to 10 times in response to urbanisation
(Packman, 1981; Bufill and Boyd, 1989).
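A minimal sketch of this definition is given below, assuming Python with NumPy and entirely hypothetical effective rainfall and flow series at a fixed time step; the lag time is the difference between the centroids of the two series.

```python
import numpy as np

# Hypothetical effective rainfall (mm) and flood hydrograph (m^3/s) at a 5-minute time step;
# these are illustrative values only, not data from any catchment discussed in this chapter.
dt_min = 5.0
rain = np.array([0.0, 4.0, 9.0, 6.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0])
flow = np.array([0.0, 0.0, 0.5, 2.0, 4.5, 5.5, 4.0, 2.5, 1.0, 0.3])
t = np.arange(len(rain)) * dt_min

# Centre of mass (centroid) of each series
t_rain = np.sum(t * rain) / np.sum(rain)
t_flow = np.sum(t * flow) / np.sum(flow)

# Lag time is the time between the two centroids
print(f"lag time = {t_flow - t_rain:.1f} minutes")
```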
summer when, in an equivalent rural catchment, there is little flow (Codner et al., 1988; Smith et
al., 2013).
Urbanisation has complex impacts on groundwater and base flow in
streams. Various features of urbanisation have confounding effects and their relative
magnitude will determine the overall influence on base flow in streams. These features
include:
• Increases in impervious surfaces that limit infiltration and reduce evaporation of shallow
groundwater;
• Drainage of groundwater into pipes or the gravel-filled trenches that surround pipes.
The most common response to urbanisation is that base flow in urban streams is decreased.
More impervious area means less opportunity for water to infiltrate, so groundwater storage,
storage in soil profiles and groundwater discharges are reduced (Simmons and Reynolds, 1982;
Lerner, 2002; Brandes et al., 2005). Less commonly, there may be increased base flow,
particularly where stormwater is deliberately infiltrated (Ku et al., 1992; Al-Rashed and
Sherif, 2001; Barron et al., 2013).
2.3.2. Conveyance
Urbanisation changes the processes of conveying water. The network of urban stormwater
conveyance infrastructure is denser and more extensive than the natural stream system it
replaces. This means that water is conveyed rapidly from both pervious and impervious
surfaces throughout an urban catchment. Resistance to flows is lower in straight and smooth
drainage paths of urban waterways, as compared to their natural counterparts.
The way water is conveyed from impervious areas can enhance or mitigate the influence of
impervious areas. Modelling by Wong et al. (2000) suggests that the condition of the waterways
also influences peak discharges that follow urbanisation. The largest impacts occur when
urban streams are lined and made hydraulically efficient.
Understanding the conveyance of water in urban areas during times of overland flooding is a
critical part of the analysis and design of urban stormwater management strategies. The
major/minor principle requires that overland flow paths must be considered once the
capacity of conveyance conduits is exceeded. This behaviour can be complex. Modelling of
overland flow paths is used in many areas to guide zoning of land to control development
and so reduce flood risk (Baker et al., 2005).
The catchment boundary for overland flows will often differ from boundaries of flows in
conduits. This means that the behaviour of large floods may be substantially different from
smaller events and has the potential to produce unexpected behaviours. An example is a
suburb protected from riverine flooding by a levee. Stormwater is usually discharged under
the levee into the river. If overland stormwater flooding cannot reach the river because of the
levee it may, instead, back up and cause flooding. This type of unexpected and rapid
flooding can be dangerous, as people are unlikely to be prepared for these types of events.
Major rivers flowing through urban centres are also receivers of urban stormwater. These
rivers will determine the base level to be used for modelling and additional analysis of the
river system may be required to ensure flood risks are adequately considered.
The impact of urbanisation on major rivers can be contrasted with the effect on urban
stormwater conveyance systems. Much of the water that is used in cities is harvested from
the rivers that flow through them, for example, the Yarra River in Melbourne, the
Hawkesbury-Nepean in Sydney and the Brisbane River. This results in lower flows and
reduces flooding in main streams. There is a paradox here. The main rivers in urban areas
have much reduced flow while in urban waterways flows are increased. For example, in
Melbourne, there is about 125 km of streams and estuaries where flow has been
substantially decreased by harvesting for urban water supply, and 1700 km of urban streams
with substantially increased flow from urban catchments. From a citywide perspective,
stormwater management needs to consider both of these impacts.
• Increased influence of the spatial pattern of rainfall because catchments respond to short
rainfall events which are more spatially variable;
• Flooding from both riverine (fluvial) and stormwater (pluvial) overflows; and
• Floods can occur at any time of the year and may be most severe when triggered by
summer thunderstorms - there is often no requirement for antecedent rainfall to wet the
catchment to generate flooding.
In reviewing the components of average annual flood damage, Ronan (2009) suggested
that, in general, risks from riverine flooding were reasonably well addressed but that
stormwater flooding was a major issue that was yet to be adequately considered.
2.4. References
ASCE (American Society of Civil Engineers) (1975), Aspects of hydrological effects of
urbanization. ASCE Task Committee on the effects of urbanization on low flow, total runoff,
infiltration, and groundwater recharge of the Committee on surface water hydrology of the
Hydraulics Division. Journal of the Hydraulics Division, (5), 449-468.
Al-Rashed, M.F. and Sherif, M.M. (2001), Hydrogeological aspects of groundwater drainage
of the urban areas in Kuwait City. Hydrological Processes, 15: 777-795.
Argueso D., Evans, J.P., Fita, L. and Bormann, K.J. (2014), Temperature response to future
urbanisation and climate change. Climate Dynamics. 42(7), 2183-2199.
Askarizadeh, A., Rippy, M.A., Fletcher, T.D., Feldman, D.L., Peng, J., Bowler, P., Mehring,
A.S., Winfrey, B.K., Vrugt, J.A., AghaKouchak, A., Jiang, S.C., Sanders, B.F., Levin, L.A.,
Taylor, S. and Grant, S.B. (2015), From rain tanks to catchments: use of low-impact
development to address hydrologic symptoms of the urban stream syndrome. Environmental
Science and Technology, 49(19), 11264-11280.
Baker, D.B., Richards, R.P., Loftus, T.T. and Kramer, J.W. (2004), A new flashiness index:
characteristics and applications to Midwestern rivers and streams. Journal of the American
Water Resources Association, 40: 503-522.
Baker, A., Rasmussen, P., Parkyn, K., Catchlove, R. and Kazazic, E. (2005), A Case Study
of the December 2003 Melbourne Storm: the Meteorology, Rainfall Intensity, and Impacts of
Flash Flooding. 29th Hydrology and Water Resources Symposium. Canberra. Engineers
Australia.
Barron, O.V., Barr, A.D. and Donn, M.J. (2013), Effect of urbanization on the water balance
of a catchment with shallow groundwater. Journal of Hydrology, 485: 162-176.
Beighley, R.E. and Moglen, G.E (2002), Trend assessment in rainfall-runoff behaviour in
urbanising watersheds. Journal of Hydrologic Engineering, 7(1), 27-34
Bell, F.C. (1972), The acquisition, consumption and elimination of water by the Sydney
Urban System. Proceedings of the Ecology Society of Australia, 7: 161-176.
Boyd, M.J., Bufill, M.C. and Knee, R.M. (1994), Predicting pervious and impervious storm
runoff from urban drainage basins. Hydrological Sciences Journal, 39(4), 321-332.
Boyd, M.J., Bufill, M.C. and Knee, R. (1993), Pervious and impervious runoff in urban
catchments. Hydrological Sciences Journal, 38(6), 463-478.
Brandes, D., Cavallo, G.J. and Nilson, M.L. (2005), Base flow trends in urbanizing
watersheds of the Delaware river basin. Journal of the American Water Resources
Association, 41(6), 1377-1391.
Brath, A., Montanari, A., and Moretti, G. (2006), Assessing the effect on flood frequency of
land use change via hydrological simulation (with uncertainty), Journal of Hydrology,
324(1-4), 141-153.
Bufill, M.C. and Boyd, M.J. (1989), Effect of urbanization on catchment lag parameters.
Hydrology and Water Resources Symposium. Christchurch, New Zealand, 23-30 November.
pp: 171-178.
Carroll, D.G. (1995), Assessment of the effects of urbanisation on peak flow rates. Second
International Symposium on Urban Stormwater Management 1995: Integrated Management
of Urban Environments. Engineers Australia, Canberra. pp: 201-208.
Codner, G.P., Laurenson, E.M. and Mein, R.G. (1988), Hydrologic Effects of Urbanisation: A
case study. Hydrology and Water Resources Symposium, ANU Canberra, 1-3 Feb 1988,
Institution of Engineers Australia, pp: 201-205.
Coombes P.J. (2012), Effectiveness of rainwater harvesting for management of the urban
water cycle in South East Queensland. Report by Urban Water Cycle Solutions for the
Rainwater Harvesting Association of Australia and the Association of Rotomoulders
Australasia, Carrington, NSW. Available at http://urbanwatercyclesolutions.com.
Coutts, A.M., Beringer, J., Jimi, S. and Tapper, N.J. (2009), The urban heat island in
Melbourne: drivers, spatial and temporal variability, and the vital role of stormwater.
Stormwater 2009. Stormwater Industry Association.
Delleur, J.W. (2003), The evolution of urban hydrology: past, present and future. Journal of
Hydraulic Engineering, 129: 563-573.
Espey, W.H. and Winslow, D.E. (1974), Urban flood frequency characteristics. Journal of the
Hydraulics Division. American Society of Civil Engineers, 100(HY2), 279-293.
Ferguson, B.K. and Suckling, P.W. (1990), Changing rainfall-runoff relationships in the
urbanizing Peachtree Creek Watershed, Atlanta, Georgia. Water Resources Bulletin, 26(2),
313-322.
Grimmond, C.S.B. and Oke, T.R. (1986), Urban Water Balance: 2. Results From a Suburb of
Vancouver, British Columbia. Water Resources Research, 22(10), 1404-1412.
Hardy, M.J., Coombes, P.J. and Kuczera, G.A. (2004), An investigation of estate level
impacts of spatially distributed rainwater tanks, 2004 International Conference on Water
Sensitive Urban Design, Engineers Australia, Adelaide, Australia.
Harris, E.E. and Rantz, S.E. (1964) Effect of urban growth on streamflow regimen of
Permanente Creek Santa Clara County, California: Hydrological effects of urban growth.
Geological survey water-supply paper 1591-B. US Department of the Interior.
Hill, P.I., Mein, R.G. and Siriwardena, L. (1998), How much rainfall becomes runoff: loss
modelling for flood estimation. Cooperative Research Centre for Catchment Hydrology,
Melbourne.
Hill, P.I., Graszkiewicz, Z., Taylor, M. and Nathan, R. (2014), Loss models for catchment
simulation: Phase 4. Analysis of Rural Catchments. Australian Rainfall and Runoff Report
No. P6/S3/016B, Engineers Australia.
Hollis, G.E. (1988), Rain, Roads, Roofs and Runoff: Hydrology in Cities. Geography 73(1),
9-18.
Hollis, G.E. (1975), The effect of urbanization on floods of different recurrence interval Water
Resources Research, 11: 431-435.
Hollis, G.E. and Ovenden, J.C. (1988a), One year irrigation experiment to assess losses and
runoff volume relationships for a residential road in Hertfordshire, England. Hydrological
Processes, 2: 61-74.
Hollis, G.E. and Ovenden, J.C. (1988b), The quantity of stormwater runoff from ten stretches
of road, a car park and eight roofs in Hertfordshire, England during 1983. Hydrological
Processes, 2: 227-243.
Ku, H.F.H., Hagelin, N.W. and Buxton, H.T. (1992), Effects of urban storm-runoff control on
Ground-water recharge in Nassau County, New York, Ground Water, 30(4), 507-514.
Leopold, L.B. (1968), Hydrology for urban land planning - A guidebook on the hydrologic
effect of urban land use. U.S Geological Survey, Circular. 554: 18, available at http://
pubs.usgs.gov/circ/1968/0554/report.pdf
Lerner, D.N. (2002), Identifying and quantifying urban recharge: a review. Hydrogeology
Journal, 10: 143-152.
McMahon, G., Bales, J.D., Coles, J.F., Giddings, E.M.P. and Zappia, H. (2003), Use of stage
data to Characterize hydrologic conditions in an urbanizing environment. Journal of the
American Water Resources Association, 39(6), 1529-1546.
Mein, R.G. and Goyen, A.G. (1988), Urban runoff. Civil Engineering Transactions, CE30(4),
225-238.
Melbourne Water (2012), Planning for sea level rise. 5 June 2012. Available at http://
www.melbournewater.com.au/Planning-and-building/Forms-guidelines-and-standard-
drawings/Documents/Planning-for-sea-level-rise-guidelines.pdf
Mitchell, V.G., McMahon, T.A and Mein, R.G. (2003), Components of the Total Water
Balance of an Urban Catchment. Environmental Management, 32(6), 735-746.
Packman, J.C. (1981), Effects of catchment urbanization on flood flows. ICE, Flood studies
report - Five Years On. Proceedings of a conference organized by the Institution of Civil
Engineers, held in Manchester, 22-24 July. Thomas Telford Ltd, London, pp: 121-129.
Prosdocimi, I., Kjeldsen, T.R. and Miller, J.D. (2015), Detection and attribution of
urbanization effect on flood extremes using nonstationary flood-frequency models. Water
Resources Research, 51(6), 4244-4262.
Ronan, N.M. (2009), Future flood resilience - Victoria's next strategy. Albury, Floodplain
Management Association.
Simmons, D.L. and Reynolds, R.J. (1982), Effects of urbanisation on base flow of selected
south-shore streams, Long Island, New York. Water Resources Bulletin, 18(5), 797-805.
Smith, B. K., Smith, J.A., Baeck, M.L., Villarini, G. and Wright, D.B. (2013), Spectrum of
storm event hydrologic response in urban watersheds. Water Resources Research, 49:
2649-2663.
Tarr, J.A. (1979), The separate vs. combined sewer problem: a case study in urban
technology design choice. Journal of Urban History, 5(3), 308-339.
Tholin, A.L. and Keifer, C.J. (1959), The hydrology of urban runoff. Journal of the Sanitary
Engineering Division, 85(SA2), 47-106.
Walsh C.J., (2018), The water we should be using first - stormwater runoff from roofs and
roads, https://urbanstreams.net/tools/melbrunoff/.
Walsh, C.J., Roy, H., Feminella, J.W., Cottingham, P., Groffman, P.M. and Morgan, R.
(2005), The urban stream syndrome: current knowledge and the search for a cure. Journal
North American Benthological Society, 24(3), 706-723.
Walsh, C.J., Fletcher, T.D. and Burns, M.J. (2012), Urban stormwater runoff: a new class of
environmental flow problem. PLOSone DOI: 10.1371/journal.pone.0045814.
Western, A.W. and Grayson, R.B. (2000), Soil moisture and runoff processes at Tarrawarra.
In: Grayson, R. G. and Blöschl, G. (eds) Spatial Patterns in Catchment Hydrology:
Observations and Modelling. Cambridge University Press, pp: 209-46.
Wong, T., Breen, P.F. and Lloyd, S. (2000), Water sensitive road design - design options for
improving stormwater quality of road surfaces. Cooperative Research Centre for Catchment
Hydrology, Melbourne.
Woolmington, E. and Burgess, J.S. (1983), Hedonistic water use and low-flow runoff in
Australia's national capital, Urban Ecology, 7(3), 215-227.
Chapter 3. Philosophy of Urban
Stormwater Management
Peter Coombes, Steve Roso
3.1. Introduction
Urban stormwater management has historically been described as the hydraulic design of urban
drainage networks that safely convey stormwater runoff to receiving environments. The
industry’s approach to urban water management in Australia has changed significantly since
the establishment of centralised and separate water supply, stormwater and wastewater
paradigm in the 1800s.
Urban water management evolved over time to include waterways protection, mitigation of
stormwater quality, use of Water Sensitive Urban Design (WSUD), Integrated Water Cycle
Management (IWCM), Water Sensitive Cities (WSC), Integrated Water Management (IWM),
and many other approaches. Although these approaches are relatively new, they have wide
adoption and support in legislation and policies for water management throughout Australia.
Similar changes in approach to urban stormwater management in other countries include
Sustainable Urban Drainage Systems (SuDS) (Bozovic et al., 2017) and Low Impact Design
(LID) (USEPA, 2008). Consequently, the approach to urban stormwater management
includes water supply and is based on retention and conveyance of stormwater runoff to
meet multi-purpose design objectives that enhance livability of urban areas, mitigate
nuisance, and avoid damage to property and loss of life.
The ARR 1987 guideline focused on collection and conveyance of peak stormwater flows in
drainage networks. The guideline’s advice on hydrologic and hydraulic analysis was
consistent with the emerging computer age: hand calculation methods were presented, while
programmable calculator and computer methods were also discussed. The increasing complexity
of the different methods, and the associated requirement for use of computers, was highlighted.
Use of statistical design rainfall bursts was recommended to calculate inflows to drainage
networks and the Rational Method was described as the best known method for estimation
of urban stormwater runoff. The main objective of urban drainage was to convey stormwater
from streets and adjoining properties without nuisance from minor rain events, and to avoid
property flooding and associated damage from major rain events (the minor/major design
approach).
Drainage solutions focused solely on the developed catchment and were mostly designed by
engineers. The simplicity of methods for estimating stormwater runoff implied accuracy and
certainty of design performance for many users. Urban water management further evolved in
the mid-1990s to cover protection of waterways, mitigation of urban stormwater quality,
WSUD (Whelans and Maunsell, 1994), IWM and IWCM (Coombes and Kuczera, 2002)
approaches. Nevertheless, urban stormwater runoff creates complex impacts on urban
stream ecosystems and receiving waterways (Walsh et al., 2005; Paul and Meyer, 2001).
Increases in runoff volumes and rates from urban areas (flow regimes) contribute to
degradation of riparian ecosystems and promote geomorphic changes within urban streams
(Walsh et al., 2012). Although these approaches are relatively new, they have subsequently
gained widespread adoption and support throughout Australia. To support this evolution,
Engineers Australia published 'Australian Runoff Quality – A Guide To Water Sensitive
Urban Design' in 2006 (Engineers Australia, 2006).
The acceptance of WSUD, IWCM and related approaches is manifested in three significant
ways:
• the development of benchmark projects (e.g. Lynbrook estate (Lloyd et al., 2002), Fig Tree
Place (Coombes et al., 2000) and Little Stringy Bark Creek (Walsh et al., 2015)) that
provided evidence that these new approaches were successful;
• the creation of local policies and plans for integrated water management; and
• the adoption of policies for sustainable water management by state and federal
governments.
Recent droughts, such as the 'millennium drought', also triggered many other changes in the
urban water sector, largely associated with water conservation, harvesting, recycling and
reuse (Aishett and Steinhauser, 2011).
Urban areas are complex systems that are subject to dynamic interaction of economic,
social, physical and environmental processes across time and space (Forrester, 1969;
Coombes and Kuczera, 2003; Beven and Alcock, 2012). Continuous intervention is required
to renew urban economic, technical and social structures to maintain human welfare and
protect ecosystem services (Forrester, 1969; Meadows, 1999). Understanding these
processes into the future also encounters the uncertainty created by non-stationary data that
describes past processes. Design and analysis processes should include distributed
approaches to account for the time based dynamics of essential data. The integrated nature
of contemporary water management approaches is different to the objectives and design
solutions envisaged in 1987. Urban water management is now required to consider multiple
objectives (e.g. resilience, livability, sustainability and affordability) and the perspective of
many disciplines. Advances in computing power, more available data and associated
research also allows the analysis of increasingly complex systems to understand the trade-
offs between multiple objectives (Coombes and Barry, 2014). Design of urban water
management seeks to integrate land and water planning. Use of more comprehensive
datasets revealed a greater range of potential outcomes that needs understanding to
develop integrated solutions.
According to Argue (2017), the urban designer aims at managing the impact of urban
stormwater runoff ‘at source’ and at multiple scales by retaining stormwater in landscapes
and soil profiles, rainwater harvesting and disconnecting impervious surfaces from drainage
networks (Poelsma et al., 2013). Consistent with the philosophy of source control and
systems analysis, stormwater runoff is now seen as an opportunity and is valued as a
resource (Clarke, 1990; Mitchell et al., 2003; McAlister et al., 2004). Modern design criteria
may include analysis of the volumes, timing and frequency of stormwater runoff to determine
peak flow rates, water quality and requirements to mimic natural flow regimes to protect
waterway health (Walsh, 2004).
The impervious surfaces and hydraulically efficient infrastructure associated with urban
catchments increase the magnitude and frequency of stormwater runoff whilst reducing
infiltration to soil profiles and subsequent baseflows in waterways. The accumulation of
stormwater flows within urban catchments is highlighted in Figure 9.3.1. The first response at A is the
(undisturbed) ecosystem upstream from urban impacts, the second response at B includes
the impact of water extraction to supply the urban area (changed flow regime in rivers
created by water supply) and the third response at C includes water discharges from the
urban catchment (changed flow and water quality regime from both stormwater runoff and
wastewater discharges) into the river basin.
Figure 9.3.1. Schematic of Traditional Urban Catchments and Cumulative Stormwater Runoff
Processes
Figure 9.3.1 demonstrates that analysis and solutions focused at point D, at the bottom of the
urban catchment, can exclude understanding of impacts within the urban catchment (sub-
catchments a-h) and of external impacts on the river basin at B and C. Traditional analysis of
urban catchments is from the perspective of rapid discharge and accumulation of stormwater
via drainage networks (in sub-catchments a-h) with flow and water quality management at
the bottom of the urban catchment (D) using retarding basins, constructed wetlands, and
stormwater harvesting. However, the benefits for flood protection, improved stormwater
quality, and protection of the health of waterways from this approach do not occur within the
urban catchment upstream of point D.
Figure 9.3.1 also highlights how distributed land uses (allotments or properties) produce
hydrographs of stormwater runoff into the street drainage system. This system accumulates
stormwater runoff from multiple inputs, creating progressively larger volumes of stormwater
runoff, which ultimately flows into urban waterways or adjoining catchments (Pezzaniti et al.,
2002). This process results in significant changes in volume and timing of stormwater
discharge to downstream environments.
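As an illustration of this accumulation process, the following sketch sums lagged allotment-scale hydrographs to form the flow at the downstream end of a street drainage line. It is purely illustrative (the hydrograph values and lags are assumptions, and Python is used only for convenience); it is not an ARR design procedure.

    import numpy as np

    def accumulate_hydrographs(sub_hydrographs, lags_steps):
        """Sum sub-catchment hydrographs at a downstream point, each shifted by
        its travel time (in whole time steps), to represent accumulation along
        the street drainage system."""
        length = max(len(q) + lag for q, lag in zip(sub_hydrographs, lags_steps))
        outlet = np.zeros(length)
        for q, lag in zip(sub_hydrographs, lags_steps):
            outlet[lag:lag + len(q)] += q  # shift by travel time, then superimpose
        return outlet

    # Purely illustrative allotment-scale hydrographs (m^3/s) at a fixed time step
    q_a = np.array([0.00, 0.02, 0.05, 0.03, 0.01, 0.00])
    q_b = np.array([0.00, 0.03, 0.06, 0.04, 0.01, 0.00])
    q_c = np.array([0.00, 0.01, 0.04, 0.02, 0.01, 0.00])
    print(accumulate_hydrographs([q_a, q_b, q_c], lags_steps=[0, 1, 2]))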
There has been an emerging understanding that this issue can be solved by viewing urban
stormwater as an opportunity to supplement urban water supplies and enhance the amenity
of urban areas (Mitchell et al., 2003; Barry and Coombes, 2006; Wong, 2006). This includes
development of green infrastructure and microclimates that reduce urban heat island effects.
Urban catchments with impervious surfaces are substantially more efficient than
conventional water supply catchments in translating rainfall into surface runoff. Rainwater
and stormwater harvesting can therefore extend supplies from regional reservoirs and
contribute to the restoration of environmental flows in rivers subject to extractions for water supply (Coombes, 2007).
These insights are consistent with earlier applied research by Goyen (1981) showing that both
volumes and peak flows of stormwater runoff are required to design stormwater
infrastructure, and the local property scale is the building block of cumulative rainfall runoff
processes (Goyen, 2000). Reducing urban stormwater runoff volumes via harvesting and
retention in upstream catchments can also decrease stormwater driven peak discharges and
surcharges in wastewater infrastructure (Coombes and Barry, 2014).
Changes in land use, climate, increased density of urban areas and decline in hydraulic
capacity of aging drainage networks can result in local flooding and damage to property.
Climate change is expected to reduce annual rainfall and generate more intense rainfall
events in a warming climate (PMSEIC, 2007; Wasko and Sharma, 2015). This will intensify
the challenges of providing secure water supplies and mitigating urban stormwater runoff.
There may also be the need to replace stormwater conveyance networks installed during
post-war urban redevelopment that are nearing the end of their useful life. In this situation, the
capacity of an aging network can be supplemented by source control measures and integrated
solutions to manage increased runoff from increasing development density (Barton et al., 2007).
Integrated solutions and flexible approaches to design can avoid costly replacement of
existing infrastructure.
Flood management issues for many urban areas are driven by runoff discharged towards
waterways (overland flooding) rather than from flood flows originating at waterways (fluvial
flooding). There is a need to consider a more extensive range of stormwater runoff events,
from frequent to rare or extreme, and the associated impacts on urban environments
(Weinmann, 2007). Management of these flood related impacts requires integrated
management of the full spectrum of flood events (Figure 9.3.2).
Figure 9.3.2. The Full Spectrum of Flood Events (Adapted from Weinmann (2007))
Figure 9.3.2 highlights the evolving methods of analysis, including continuous simulation and
Monte Carlo simulation of full storm volumes that are likely to be required to account for the
full spectrum of rainfall events as defined by Exceedance per Year (EY) or Annual
Exceedance Probability (AEP). The definition of rain events is currently a mix of assumptions
regarding frequency and magnitude that is clarified in this version of ARR to allow effective
advice on design of stormwater management schemes.
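For readers converting between these two frequency descriptors, a commonly used relationship, assuming event occurrences can be approximated as a Poisson process, is AEP = 1 - exp(-EY). The short sketch below simply applies that relationship; the code itself is illustrative only.

    import math

    def aep_from_ey(ey):
        """AEP (as a fraction) implied by an average number of exceedances per
        year, assuming event occurrences approximate a Poisson process."""
        return 1.0 - math.exp(-ey)

    def ey_from_aep(aep):
        """Exceedances per year implied by an AEP expressed as a fraction."""
        return -math.log(1.0 - aep)

    print(f"1 EY corresponds to an AEP of about {aep_from_ey(1.0):.1%}")
    print(f"10% AEP corresponds to about {ey_from_aep(0.10):.3f} EY")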
Strategic use of water efficiency, rainwater, stormwater and wastewater at multiple scales
can supplement the performance of centralised water supply systems to provide more
sustainable and affordable outcomes (Victorian Government, 2013). These integrated
strategies diminish the requirement to transport water, stormwater and wastewater across
regions with associated reductions in costs of extension, renewal and operation of
infrastructure (Coombes and Barry, 2014). This leads to a decreased requirement to augment
regional water supplies and to long run economic benefits. These strategies also focus on
restoring more natural flow regimes in waterways, which reduces the need for remedial works
in waterways and the size or footprint of water quality treatment measures (Poelsma et al., 2013).
It also follows that historical ‘top down’ design processes may not evaluate distributed
processes because only a small proportion of the available data may be used, simplified as
whole of system averages or fixed inputs (such as a runoff coefficient and average rainfall intensity).
Thus, the signals of linked distributed performance (such as local volume management
measures) in a system are smoothed or completely lost by partial use of data as averages
and by the scale of analysis. Therefore, there is no direct mechanism to capture cascading
changes in behaviours throughout a system. This can lead to competing objectives (for
example, local versus regional), inappropriate solutions and information disparity, such as
provision of a wetland and retarding or detention basin downstream of an urban area when
management is required within the urban area to protect urban amenity, stream health and
avoid local flooding. This paradox can only be resolved through a broader analysis
framework which recognises location based principles of proportionality and efficient
intervention.
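A minimal example of such a 'top down' calculation is the Rational Method, in which a single runoff coefficient and a single design rainfall intensity stand in for the whole catchment. The sketch below uses assumed, purely illustrative parameter values and shows how all distributed detail collapses into three numbers.

    def rational_method_peak(runoff_coeff, intensity_mm_per_h, area_ha):
        """Peak flow (m^3/s) from the Rational Method, Q = C.i.A / 360,
        with intensity in mm/h and catchment area in hectares."""
        return runoff_coeff * intensity_mm_per_h * area_ha / 360.0

    # A single averaged C and design intensity stand in for every allotment,
    # device and flow path in the catchment (values are illustrative only).
    print(rational_method_peak(runoff_coeff=0.7, intensity_mm_per_h=60.0, area_ha=25.0))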
For example, consider the connectivity of contemporary water cycle networks presented in
Figure 9.3.4.
Figure 9.3.4 shows that an input, or extraction at any point α or β, or an increase in water
storage in a reservoir, at location A, will have some influence on flows and capacities at
many other points in the system. These, in turn, will translate into changes in performance
and costs across the linked networks of infrastructure. Similarly, changes in behaviour
(demand) at any point in the system will generate different linked impacts a, b and c on
water, wastewater and stormwater networks respectively. Analysis and design of integrated
solutions needs to account for the linked dynamic nature of the urban water cycle and
demography. The inclusion of rainwater and stormwater harvesting, and wastewater reuse
further increases the level of connectivity of urban water networks.
The historical practice for estimation of stormwater runoff rates and the design of drainage
(conveyance) infrastructure is based on a methodology where all inputs, other than rainfall,
are fixed variables. The fixed values of the input variables are selected to ensure that the
exceedance probability of stormwater runoff is similar to that of regional rainfall statistics.
However, catchments that contain cascading integrated solutions involving retention, slow
drainage, harvesting of stormwater and disconnection of impervious surfaces require
enhanced design methods (Kuczera et al., 2006; Wong et al., 2008; Coombes and Barry,
2008). These emerging methods for analysis and design of integrated solutions include the
following considerations:
• Long sequences of rainfall that include full volumes of storm events are required to
generate probabilistic designs of integrated solutions;
• Peak rainfall events may not generate peak stormwater runoff from projects with
integrated solutions;
• The frequency of peak rainfall may not be equal to the frequency of peak stormwater
runoff from integrated solutions;
• Stormwater runoff from urban catchments is influenced by land use planning, and the
connectivity and sequencing of integrated solutions across scales;
• The probability distribution of the parameters that influence the performance of the
integrated solutions (for example human behaviour, rainfall and soil processes) and the
ultimate stormwater runoff behaviour are unknown for each project;
• Integrated solutions often meet multiple objectives (for example water supply, stormwater
drainage, management of stormwater quality, provision of amenity and protection of
waterways) and are dependent on linked interactions with surrounding infrastructure; and
• We should be mindful that the limitations of design processes are not always apparent and
diligence is required to ensure that substantial problems are avoided.
A combination of event based estimation techniques, applied directly or indirectly, may not
reliably produce probabilistic designs of drainage, water quality, and water or wastewater
infrastructure within integrated strategies. While the use of best available event based design
approximations is an accepted default or deemed-to-comply approach for design of
infrastructure, there is a need for more advanced methods for the design of integrated solutions.
The definition and purpose of minor and major drainage systems is unclear in the context of
modern approaches to water cycle management. Replacement of minor or major drainage
descriptions with a definition of managing nuisance or disaster respectively, would provide a
clearer focus on the relative importance of both concepts. A focus on avoiding nuisance can
lead to an overly prescriptive drainage approach to the minor system. A well-designed major
system to avoid disaster is likely to allow more opportunity for integrated solutions that will
also mitigate nuisance. We also need to be cognisant that water supply and stormwater
quality options can also assist in avoiding disaster and mitigating nuisance.
Overland flooding can be responsible for significant damage. Adequate major flow paths
must be provided or retained to manage these events. A stormwater management strategy
is required that includes systematic identification of overland flow paths and design practices
that recognise and respond to overland flood risks. Simple design practices such as slightly
elevating property and floor levels above the surrounding terrain can effectively eliminate
most overland flood risks.
Figure 9.3.5. Example Overland Flow Path Map Generated Using a Two Dimensional Model.
Approaches to analysis with less complexity may be more practical for smaller areas or
simpler stormwater management strategies. This may involve the capture of detailed ground
survey and inspection of the data by a suitably experienced designer to manually estimate
the location of low points and likely flow paths. Simple hydrologic and hydraulic calculations
(refer to Book 9, Chapter 5) could then be applied to estimate the depth and width of
stormwater at regular intervals throughout overland flow paths.
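As a hedged illustration of the kind of simple hydraulic calculation referred to above, the sketch below estimates flow depth and velocity in a wide, shallow overland flow path using Manning's equation with a wide-channel approximation. The roughness, slope and flow values are assumptions only; any real assessment should follow the methods in Book 9, Chapter 5.

    def overland_flow_depth(q_m3s, width_m, slope, manning_n=0.02):
        """Approximate normal flow depth (m) in a wide, shallow flow path using
        Manning's equation with the wide-channel assumption (hydraulic radius
        taken as the flow depth): Q = (1/n) * b * y^(5/3) * S^(1/2)."""
        return (q_m3s * manning_n / (width_m * slope ** 0.5)) ** 0.6

    # Illustrative check of a road-reserve flow path (all values assumed)
    depth = overland_flow_depth(q_m3s=1.5, width_m=8.0, slope=0.01, manning_n=0.02)
    velocity = 1.5 / (8.0 * depth)
    print(f"depth ~ {depth:.2f} m, velocity ~ {velocity:.2f} m/s")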
Caution should always be employed when interpreting mapped results for stormwater flows
and inundation, as the results may be subject to significant uncertainties.
The application of a two dimensional modelling approach produces results that reveal
hydrologic uncertainty due to the use of the hydraulic model to simulate the natural physical
processes of stormwater flows. These results may be in contrast to empirical or statistical
relationships between rainfall and runoff that are used to estimate stormwater runoff in some
traditional hydrologic modelling software.
Flood warning emergency systems are usually inappropriate for overland flooding, as the
potential warning times are too short. However, incorporation of overland flooding
information with radar rainfall forecasts may assist in providing emergency management
warnings.
Building and development controls should include provisions that prevent the erection of
new buildings within overland flow paths or set minimum floor levels that are deemed safe.
Other building controls may also require measures that minimise potential blockage and
obstruction to flows within affected building envelopes. Application of these controls to
particular sites may require detailed site-based flood investigations to more accurately
estimate flood levels and behaviours.
A freeboard allowance above a calculated flood level is applied to determine the minimum
level of infrastructure such as a habitable dwelling. Freeboard is required to account for the
uncertainties that are inherent in the calculation of flooding. A typical minimum value of 0.3
m above a flood surface is suggested. However, this value can be varied to account for local
factors such as the sensitivity of specific infrastructure to flood damage and expected
uncertainty in flood level estimates for a site. Uncertainty about flood levels is variable
and dependent on many factors, including the nature of the catchment and the
cross-sectional profile across the flow path.
Freeboard should not be used to protect against measurable uncertainties, for example the risk
of blockage and climate change. If these risks are a concern for the site, then they should be
explicitly incorporated into the basic flood level estimates before freeboard is applied.
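The approach described above can be expressed as a simple calculation: explicit allowances for measurable risks are added to the flood level estimate first, and freeboard is then applied to cover the residual uncertainty. The sketch below is illustrative only, and all values shown are assumptions.

    def minimum_floor_level(flood_level_m, freeboard_m=0.3,
                            blockage_allowance_m=0.0, climate_change_allowance_m=0.0):
        """Minimum floor level: measurable risks (blockage, climate change) are
        added to the flood level estimate first, then freeboard is applied to
        cover the residual, unquantified uncertainties."""
        adjusted_flood_level = (flood_level_m + blockage_allowance_m
                                + climate_change_allowance_m)
        return adjusted_flood_level + freeboard_m

    # Illustrative values only
    print(minimum_floor_level(12.40, freeboard_m=0.3, climate_change_allowance_m=0.15))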
Fluvial flooding is a natural process. However, careful urban planning is required to avoid
substantial damage to infrastructure and property. Fluvial flooding is recognised as one of
the most significant natural hazards in Australia that is responsible for a significant
proportion of economic losses and damage to property. Therefore, fluvial flooding has been
the target of significant government programs for mapping of flood hazards and
implementation of measures that mitigate potential economic losses and damage to
property.
The management of hazards created by fluvial flooding differs from overland flooding as the
quantity of floodwaters can be much greater and therefore more difficult to control and
contain using physical changes to the floodplain. It is often preferable and more cost
effective to avoid these hazards using a process of careful urban planning. This is best
achieved by the use of strategic plans and a suite of flood related building and development
controls.
Public flood awareness mapping, flood education, flood mitigation and flood warning
emergency systems become more important where development has already occurred
within parts of the floodplain subject to fluvial flooding. Catchments that generate fluvial
flooding are often large and the lag between rainfall and runoff can be sufficiently long, which
increases the feasibility of flood warning and emergency management strategies.
Figure 9.3.6. Example of Fluvial Flow Path with Interface with Overland Flow Path
Analysis of stormwater management strategies at the interface zone requires first principles
assessment of management techniques from the perspective of both overland and fluvial
flooding.
Both types of flooding can occur simultaneously. However, this is unlikely since the rainfall
mechanisms that typically cause each type of flood are different. It is more likely that
overland and fluvial flooding will occur at different times and possibly not during a single
rainfall event. This complex behaviour can confuse attempts to communicate flood risks and
implement management strategies. Confusion also arises when insurance claims are made
for loss and damage because the decision to pay a claim sometimes relies upon whether the
flooding was overland or fluvial in nature. In addition, the insurance industry has begun to
offer fluvial flood insurance cover, which may reduce this problem in future. Nevertheless, it
is important for practitioners to recognise the potential for both forms of flooding and
carefully assess flood behaviour at each site and for each flood event from first principles.
• The minor system manages nuisance. This runoff is conveyed in a manner that maintains
safety, minimises nuisance and damage to property. The infrastructure is also provided to
avoid potential maintenance problems for example ponding and saturation of designated
areas. Importantly, the minor system also includes volume management measures that
aim to hold water within urban landscapes and sub-catchments (refer to Book 9, Chapter
4) – these solutions may include ponding of stormwater within a defined area. The minor
system must withstand the effects of regular stormwater inundation.
• A major system is primarily intended to mitigate disaster. The major system typically includes
overland flow paths on roads and through open space, and trunk conveyance
infrastructure. This system conveys additional stormwater runoff produced during larger,
less probable and rarer storm events with the intent of managing the potential for flood
disaster. Overland conveyance of stormwater from large events is potentially hazardous
due to the velocity and depth of flows, and must be safely contained within a defined
corridor of major system flows.
However, there may be justification to deviate from this practice where a suitable risk
assessment identifies the need. For example where the consequences of flooding at a
particular location are high, it may be necessary to expand the overall system capacity to
cater for more extreme events. This is not commonly required and this type of decision must
have regard to the overall life-cycle cost and benefits that a larger capacity system may
deliver.
The threshold at which the capacity of the minor system is exceeded and the major system
begins to convey runoff is also a matter for consideration at the design stage or for policy
makers at the time when preparing local design standards for stormwater management. The
capacity of the minor system is typically established to manage stormwater events ranging from
50% AEP to 5% AEP. Documentation of these standards can be found in drainage design
guidelines prepared by local government and relevant state authorities. No single universally
appropriate capacity for minor systems can be applied in practice.
Some factors that may influence the balance between the capacity of major and minor
systems are described in Table 9.3.1. These factors may generate a number of different
capacity standards for minor systems that account for different locations and jurisdictions.
Table 9.3.1. Factors Influencing the Balance between Capacities of Major and Minor
Systems in Design
Land availability: Sufficient land may be available for major systems to safely convey
additional surface flows and reduce the proportion of flows conveyed by minor systems. The
use of volume management and WSUD approaches can also change the proportion of flows
assigned to minor and major systems.

Local rainfall patterns: In some areas, such as tropical northern Australia, runoff generated by
frequent storms may be too large to cost effectively convey using minor systems. Major flow
paths will need to be expanded accordingly to manage a proportion of these flows.

Likely level of exposure to the major flow path hazard: Major systems that are highly
frequented by people or vehicles, for example in city streets or major motorways, involve
greater exposure to floodwaters and corresponding risks. In these cases, it may be
appropriate for a greater proportion of runoff to be conveyed in minor systems.

Physical and downstream constraints: When new stormwater management systems are
required for an existing urban area, it may be impractical or cost prohibitive to achieve an
ideal capacity and compromise may be required.

Erosion: Natural or otherwise unlined minor systems may be subject to erosion when flow
durations and/or velocities are too high. If volume management options (as discussed in
Book 9, Chapter 4) are not available, then lowering the capacity of the minor systems and
forcing a greater proportion of flow into the major system may be one way to manage these
effects.

Blockage: Where the capacity of minor systems is reduced by a likelihood of potential
blockage with debris, resources should be directed towards safer and more durable surface
flow paths within major systems.

Climate change: The expected future increases in short duration rainfall intensities may
require appropriate design responses to increase the capacity of minor systems or change
the relationship with major systems to maintain current levels of service.
The depth and velocity of flows along any proposed surface flow paths are considered when
calculating the dimensions of stormwater conveyance corridors and must meet relevant
standards for design, safety and maintenance. A design should also ensure that operation of
a conveyance network during severe storms does not cause unexpected or catastrophic
consequences (for example, an unintended diversion of flows into an adjoining catchment
because of blockage or extreme events).
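One simple screening approach, sketched below, checks depth, velocity and their product against limiting values. The threshold values shown are illustrative assumptions only; the limits in the relevant local or state design standard, and the ARR guidance on flood hazard, should be adopted in practice.

    def flow_path_hazard_acceptable(depth_m, velocity_m_s,
                                    max_depth_m=0.3, max_velocity_m_s=2.0,
                                    max_depth_velocity_m2_s=0.4):
        """Screen a surface flow path against indicative limits on depth, velocity
        and their product. Threshold values here are assumptions for illustration;
        adopt the limits in the relevant design standard."""
        return (depth_m <= max_depth_m
                and velocity_m_s <= max_velocity_m_s
                and depth_m * velocity_m_s <= max_depth_velocity_m2_s)

    print(flow_path_hazard_acceptable(depth_m=0.14, velocity_m_s=1.35))  # True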
Wherever possible the width of the land corridor set aside for stormwater management
should be generous to improve the constructability of the system and reduce the costs of
any future renewal and maintenance activities. Opportunities for co-location of stormwater
management within urban parklands should be considered. The alignments of stormwater
conveyance networks typically follow natural low points to minimise earthworks. However,
some re-alignment away from the natural low points may occur to account for urban form
and limit conflicts with other urban infrastructure. In addition, the design of conveyance
networks should also consider minimising damage to existing ephemeral waterways.
Alignments of major systems are often parallel to minor systems and should be continuous
until intersection with a natural watercourse or receiving waters. The design should include
adequate management to avoid nuisance or risks at crossings, for example roadways or
footpaths.
Typical configurations of major and minor drainage systems are presented in Figure 9.3.7.
The most common configuration (shown in Figure 9.3.7) comprises an underground
conveyance network of inlet structures and pipes (minor system) combined with surface flow
paths on roads (major system).
The design of the major and minor systems should integrate smoothly with other urban
infrastructure and manage impacts on natural environments. In particular, innovative design
of urban parks can be used to achieve drainage objectives while also enhancing aesthetic
and environmental outcomes.
3.5.3. Analysis
Suitable hydrologic and hydraulic calculation methods, described in Book 9, Chapter 4, Book
9, Chapter 5 and Book 9, Chapter 6, are used to estimate depths and velocities of
stormwater flows with associated extents of flooding throughout major and minor systems,
which facilitates the design of various components. The methods selected for analysis or
design must be able to simulate the complexity of the stormwater management strategy. A
design problem may include complex flow behaviours, for example parallel underground and
surface flow paths, multiple inflows and the effects of storage and tail water conditions.
These methods must have the capacity to predict the hydraulic performance of the overall
system and of each different component within the system for example inlet structures, pipes
and channels. Hydraulic performance must be assessed using a range of storm events and
configurations. Ideally, a design should be challenged by ensembles of full volume storm
events to determine the critical storm duration and shape for each AEP.
The available software modelling tools can facilitate most of these complex calculations.
However, emerging engineering practice and software tools aim to seamlessly handle the full
range of linked hydrologic and hydraulic calculations required to account for surface flow
behaviours throughout complex conveyance networks. These complex scenarios may
require combinations of hydrologic models linked to hydraulic models with one dimensional
conveyance network and two dimensional surface flows.
Typically, this is achieved through the design and installation of volume management
facilities. Detailed aspects of these facilities are described in Book 9, Chapter 4; however, at
a philosophical level, the questions that need consideration when developing a catchment-
wide volume management strategy are discussed below.
Phillips and Yu (2015) suggest that, whilst undertaking these assessments, catchment managers
should also consider whether to use an ensemble of complete storms with a storm burst of
around the critical duration or a storm burst only to determine the benchmark condition(s).
The decision of what design to adopt can be informed through identifying the level of risk the
community is willing to accept within the catchment.
For example, a facility might only manage peak discharge from a site for a single probability
design flood event used for regional flood planning (e.g. 1% AEP). This might be achieved
by storing a proportion of the hydrograph volume and releasing it later during the storm
event through a constricted outlet. This is commonly called a detention basin or retarding
basin.
Separate facilities might be required to also meet other stormwater volume objectives for
example a rainwater tank for harvesting and a bio-retention basin for water quality
improvement.
A more comprehensive facility might aim to achieve a peak discharge control objective
alongside other volume objectives, by storing a proportion of the hydrograph volume and
releasing it well after the storm event has finished, or even store it for later use (i.e. not
released into the stormwater system at all). For example a constructed wetland (water
quality) with an extended detention storage compartment above (peak discharge control),
providing pre-treatment for a stormwater harvest facility (retention).
For each facility and objective it is necessary to determine whether the facility must achieve
a low or high level of performance.
For example, it may be sufficient to retain the hydrologic conditions equivalent to a pre-
developed condition, which might be considered a low level of performance.
The performance level sought will be related to the sensitivity of the downstream receiving
waterway and whether the local community aspires to achieve a high performance solution.
In some circumstances, there is opportunity to make broad strategic decisions about the
distribution of these facilities across a catchment. Some typical volume management
strategies that can be followed include:
• An ‘at source’ management strategy: this employs small facilities, widely distributed across
the catchment, many of which will only service a small catchment or single property.
Strategies of this type are most commonly part of a more comprehensive and integrated
urban water strategy.
• A ‘neighbourhood scale’ management strategy: this strategy employs larger facilities that
are less widely distributed than lot scale facilities but servicing larger catchments. These
facilities are normally publicly managed and co-located alongside a watercourse or
drainage reserve at the interface between underground and surface conveyance paths.
• A ‘regional scale’ management strategy: this strategy uses very large facilities that are
located at the catchment outlet and service all properties in the watershed. These are
normally publicly owned and co-located with major parkland. This is also referred to as an
‘end of pipe’ strategy.
Some typical types of urbanising catchments and their associated volume strategy
considerations are:
• Future growth areas where there is currently limited urban development (also commonly
referred to as ‘Greenfields’ development). For these catchments the over-riding strategic
objective commonly applied is to preserve the nature and amenity of their waterways in
terms of hydrology (flow and channel geometry) and aquatic communities. This can be
achieved using ‘source control’ measures applied throughout their contributing
catchments. These measures include rainwater tanks, bio-retention facilities, ‘rain
gardens’, infiltration trenches, ‘soakaways’ and access to aquifers where soil and
geological conditions are favourable.
Since there is often opportunity to forward plan in ‘greenfields’ catchments there may also
be opportunities for comprehensive ‘neighbourhood scale’ and ‘regional scale’ placement
strategies.
Every effort should be made in these catchments to encourage 'informal' drainage, green
spaces and to disconnect as much impervious surface as possible. The criterion for
successful design of these systems is keeping the volume discharged from each site the
same after development as before, for design flood events. Use of these practices, is
referred to by Argue (2017) as a ‘regime-in-balance’ strategy. It is suggested that adoption
of such a strategy can keep urban waterways operating as natural systems for many years
before increased urbanisation might then require the introduction of rectification strategies
such as increased channel lining.
• Highly urbanised catchments where the strategic objective is often to minimise the need
for further modification or upgrades to conveyance networks as development and re-
development continues. For these catchments land availability may constrain opportunities
for wide adoption of ‘neighbourhood scale’ and ‘regional scale’ placement strategies.
However volume management objectives can be achieved in a similar manner to a
‘greenfields’ catchment using ‘source control’ practices as re-development takes place. An
additional opportunity, ‘roof gardens’, is provided by the presence of multi-storey and high-
rise elements of this class of development.
The objective for successful design of these systems is keeping the volume discharged
from each site the same after development as before, for a design flood event. This
objective is more difficult to achieve than in ‘greenfields’ catchments giving rise to the
more common use of temporary on-site storages holding stormwater after flood peaks
have passed. This problem can be solved by ‘slow release’, infiltration or harvesting to
ensure storages are empty ahead of closely-spaced storm events. With such provisions in
place, the supporting infrastructure can continue to operate successfully without
enlargement.
The criterion for successful design of these systems is not just to match pre-development
conditions but to go further and minimise the volume discharged from each site after re-
development. This is referred to by Argue (2017) as the ‘yield-minimum’ strategy. The
nature of re-development in an already over-developed urban catchment is frequently
large-scale, for example urban renewal projects. These lend themselves to complete re-
organisation of local drainage infrastructure and, hence, opportunities for less discharge
during the ‘design’ runoff events. Every component of re-development incorporated under
the ‘yield-minimum’ strategy moves the catchment in the direction of a balance between
runoff being generated and infrastructure capacity.
Catchment managers will also need to take into account the local landscape and soil
conditions, which may limit the application of certain volume and quantity management
solutions. For example, heavy clay soils may limit the application of infiltration based
solutions, whereas sandy soils may promote such solutions.
• Different choices may be required depending on the nature of the catchment and the asset
policies of the local stormwater authority.
Typical catchment management strategies (as designed using bottom up or top down
methods or other analyses) can include a number of different approaches which reflect the
local authority’s commitment to WSUD principles, as well as commitment to restore
overloaded systems to balance.
Yield-maximum: maximise the quantity of storm runoff captured at the end of the catchment
and ensure that the floodwaters are contained within a defined floodplain. This strategy is
most suitable for local authorities with a desire to have large centrally controlled systems,
rather than distributed local solutions.
Yield-minimum: improve the performance of the urban flood control infrastructure through
minimisation of stormwater discharge from each development site (including redevelopment
sites). This strategy is most suitable for catchment or sub-catchments with already poorly
controlled urban development with a history of flood damage and ecosystem deterioration.
Large catchments, where urbanisation is actively occurring, and over an extended period,
may contain precincts where a mix of these strategies might be appropriate. Notably, all
strategies will benefit from urban planning that promotes rainfall infiltration, harvesting and
retains natural hydrologic function.
Tradeable permits for pollution control are attractive as they provide opportunities for
economic efficiency, flexibility and incentives for innovation (Kraemer et al., 2004; Haensch
et al., 2016). The international experience with water pollution emission trading is not
extensive but does include some successful examples (Shortle, 2013). Trading of pollution
abatement responsibilities can cause water quality to deteriorate at different times and rates
in some parts of a catchment. Therefore designing a tradeable permit or offset scheme
needs to take spatial, temporal and environmental equivalence effects into account.
At the time of writing, Melbourne Water (MWC, 2018) and Queensland Healthy Waterways
(Water by Design, 2014) (for example) operate stormwater quality offset schemes. These
schemes involve a financial contribution paid by developers for stormwater management
works to be undertaken in another location to meet catchment wide objectives for managing
stormwater and protecting waterway health. These schemes respond to the assumption that
regional stormwater management is more cost and time effective than distributed smaller
scale solutions. These off-set schemes can be useful for urban areas subject to infill
development that may have limited space for infrastructure.
There are limited examples of trading or offset schemes for management of stormwater
runoff volumes or peak flows. The District of Columbia Water and Sewer Authority (for
example: DCWater (2018)) provide an impervious area charge incentive program for
customers to reduce effective impervious surfaces and, therefore, stormwater runoff on their
properties which avoids regional works. Properties that use best management practices
such as rain gardens, rainwater harvesting, green spaces and pervious paving are
considered to reduce effective impervious surfaces and receive a reduced stormwater
charge. Similarly, the historical on-site detention (OSD) strategies by the Upper Parramatta
River Catchment Trust (for example: UPRCT (2005)) offset the need for regional stormwater
basins by use of detention storages (OSD) on properties.
The design of a stormwater off-set scheme should consider the following:
• The spatial, temporal and cumulative allocation of required treatment capacity must be
defined using a catchment management strategy. It is unlikely that transfer of local
stormwater management requirement to a downstream regional location will be a linear or
average process
• A scheme must result in the desired and measurable changes in flow (and water quality)
resulting from the infrastructure and stormwater strategies within the same catchment
• The funds obtained from stormwater off-sets must be tied to measurable deliverables in
the catchment
• The scheme must provide for regional infrastructure in a reasonable time period that is
consistent with the timing of upstream development.
• The relative financial contributions from upstream developers must be proportional to their
flow and pollutant loads that will be managed by the regional scheme
• The scheme must have the same life cycle or equivalent life cycle as the life cycle of the
upstream development (e.g. short-term mitigation strategy, such as flow and erosion
management, cannot be used for a long-term offset to a developed area stormwater
management)
• If water quality is part of the scheme, consideration should be given to the bio-availability
of pollutants removed through the different upstream and catchment wide management
methods
• Clear ownership and rules about the off-set scheme should be established and risk should
be mitigated through the adoption of appropriate ratios, and
• The ongoing maintenance and renewal costs associated with the regional infrastructure
must be allocated to ensure the performance of the scheme does not deteriorate over
time.
Stormwater off-set schemes that transfer management of stormwater volumes and peak flow
to other locations have the potential for ecological impacts in local waterways or downstream
receiving waters. The ultimate objectives of an off-set scheme should include performance
targets that also consider secondary effects (such as impacts on local waterways) and
monitoring strategies should be implemented to measure effects of strategies.
Chee (2015) highlights that there is limited evidence of the success of stormwater off-set
schemes, and that formal monitoring strategies would provide an opportunity to more critically
consider how well implemented schemes are performing in operation.
It is also emphasised that achieving equivalence in stream biodiversity and ecological
function is extremely difficult.
Coker et al. (2018) argue that stormwater off-sets should not result in avoided management
of stormwater runoff. They emphasise the substantial challenge of adequately considering
spatial, temporal and environmental influences of off-sets, and the importance of quantifying
the spatial extent of stormwater impacts from the development in question. It is highlighted
that unmitigated stormwater runoff from relatively small proportions of urban areas may
propagate severe impacts a long way downstream which can render the practice of
offsetting within a single catchment a difficult undertaking.
3.8. Acknowledgements
This chapter benefited from the support and guidance provided by Mark Babister and
Engineers Australia. The substantial contributions of the editorial panel of Andrew Allan,
Andrew King, Stephen Frost, Mike Mouritz, Tony McAlister and Tony Webber in shaping this
discussion are gratefully acknowledged. Additional review comments from James Ball, Bill
Weeks and Christopher Walsh also assisted in clarifying the content of this chapter.
3.9. References
Aishett, E. and Steinhauser, E. (2011), Does anybody give a dam? The importance of public
awareness for urban water conservation during drought. Submission by the Australian
National University to the Productivity Commission.
Argue, J.R. (2017), Water Sensitive Urban Design: basic procedures for 'source control' of
storm water - a handbook for Australian practice. Urban Water Resources Centre, University
of South Australia, Adelaide, South Australia, in collaboration with Stormwater Industry
Association and Australian Water Association. Adelaide.
Barry, M.E. and Coombes, P.J. (2006), 'Optimisation of Mains Trickle Topup Supply to
Rainwater Tanks in an Urban Setting', Australian Journal of Water Resources, 10(3),
269-276.
Barton A.F., Coombes, P.J., Sharma, A. (2007), Impacts of innovative WSUD intervention
strategies on infrastructure deterioration and evolving urban form. Rainwater and Urban
Design Conference 2007, Engineers Australia, Sydney.
Beven K.J. and Alcock R.E. (2012), Modelling everything everywhere: a new approach to
decision making for water management under uncertainty, Freshwater Biology, 57(1),
124-132.
Bozovic, R., Maksimovic, C., Mijic, A., Smith, K. M., Suter, I., Van Reeuwijk, M., (2017), Blue
Green Solutions. A Systems Approach to Sustainable and Cost-Effective Urban
Development, Imperial College, London, UK.
Chee, Y.E. (2015), Principles underpinning biodiversity offsets and guidance on their use. In
van der Ree R. Smith DJ and Grilo C (eds.) Handbook of Road Ecology. John Wiley and
Sons, Ltd, Chichester, West Sussex, pp: 51-59.
Clarke, R.D.S., (1990), Asset replacement: can we get it right? Water. pp.22-24
Coker, M.E., Bond, N.R., Chee, Y.E. and Walsh C.J. (2018), Alternatives to biodiversity
offsets for mitigating the effects of urbanization on stream ecosystems, Conservation
Biology.
Coombes, P.J. (2007), Energy and economic impacts of rainwater tanks on the operation of
regional water systems. Australian Journal of Water Resources, 11(2), 177-191.
Coombes, P.J. (2005), Integrated Water Cycle Management: Analysis of Resource Security.
Water, 32: 21-26.
Coombes, P.J. and Barry, M.E. (2014), A systems framework of big data driving policy
making - Melbourne's water future. OzWater14 Conference. Australian Water Association,
Brisbane.
Coombes, P.J. and Barry, M.E. (2008), The relative efficiency of water supply catchments
and rainwater tanks in cities subject to variable climate and the potential for climate change,
Australian Journal of Water Resources, 12: 85-100.
Coombes P.J. and Kuczera, G. (2002), Integrated Water Cycle Management - moving
towards systems understanding. WSUD02 Conference, Engineers Australia, Brisbane,
Australia.
Coombes, P.J., Argue, J.R. and Kuczera, G.A. (2000), Figtree Place: a case study in water
sensitive urban development (WSUD), Urban Water, 1(4), 335-343.
Daniell, K.A., Coombes, P.J. and White, I. (2014), Politics of innovation in multi-level water
governance systems. Journal of Hydrology, 519(C), 2415-2435.
Engineers Australia (2006), Australian Runoff Quality: A Guide to Water Sensitive Urban
Design. Editor in Chief: T.H.F. Wong. Engineers Australia, Canberra.
Goyen A.G. (1981), Determination of rainfall runoff model parameters. Masters Thesis, NSW
Institute of Technology, Sydney.
Goyen, A.G. (2000), Spatial and Temporal Effects on Urban Rainfall / Runoff Modelling. PhD
Thesis. University of Technology, Sydney. Faculty of Engineering.
Haensch, J., Wheeler, S.A., Zuo, A. and Bjornlund, H. (2016), The impact of water and soil
salinity on water market trading in the southern Murray-Darling Basin, Water Economics and
Policy, 2(1), article No.1650004, pp: 1-26.
Kraemer, R.A., Kampa, E., and Interwies, E., (2004), The role of tradable permits in water
pollution control, Inter-American Development Bank.
Kuczera G.A., Lambert, M., Heneker, T., Jennings, S., Frost, A. and Coombes, P.J. (2006),
Joint probability and design storms at the crossroads. Australian Journal of Water
Resources, 10: 63-79.
Lloyd, S.D., Wong, T.H.F. and Porter, B. (2002), The planning and construction of an urban
stormwater management scheme. Water Science and Technology. International Water
Association, 45(7), 1-10.
Lloyd, C.J., Troy, P. and Schreiner, S. (1992), For the Public Health, Longman Cheshire,
Melbourne.
McAlister, T., Coombes, P.J. and Barry, M.E. (2004), Recent South East Queensland
Developments in Integrated Water Cycle Management - Going Beyond WSUD. Cities as
Catchments:WSUD2004, Adelaide, SA, Australia.
Meadows D.H., (1999), Leverage points: places to intervene in the system, The
Sustainability Institute, Hartland, USA.
Mitchell, V.G., McMahon, T.A. and Mein, R.G. (2003), Components of the total water balance
of an urban catchment. Environmental Management, 32: 735-746.
PMSEIC. (2007), Water for Our Cities: building resilience in a climate of uncertainty. A report
of the Prime Minister's Science, Engineering and Innovation Council working group,
Australian Government, Canberra.
Paul, M.J. and Meyer, J.L. (2001), Streams in the urban landscape. Annual Review of
Ecology and Systematics, 32: 333-365.
Pezzaniti, D., Argue, J.R. and Johnston, L. (2002). Detention/retention storages for peak
flow reduction in urban catchments: effects of spatial deployment of storages. Australian
Journal of Water Resources, 7(2), 1-7.
Phillips, B.C. and Yu, S. (2015), How robust are OSD and OSR Systems?. Proceedings, 3rd
International Erosion Control Conference and 9th International Water Sensitive Urban
Design Conference, 20-22 October 2015, Darling Harbour.
Poelsma, P.J., Fletcher, T.D. and Burns, M.J. (2013). Restoring natural flow regimes: the
importance of multiple scales. Proceedings of Novatech Conference, Lyon, France, pp:
23-27.
Shortle, J., (2013), Economics and environmental markets: Lessons from water-quality
trading. Agricultural and Resource Economics Review, 42(1), 57-74.
UPRCT (2005), Onsite stormwater detention handbook, Upper Parramatta River Catchment
Trust.
Walsh, C.J. (2004), Protection of in-stream biota from urban impacts: minimise catchment
imperviousness or improve drainage design?. Marine and Freshwater Research, 55:
317-326.
Walsh, C.J., Roy, A.H., Feminella, J.W., Cottingham, P.D., Groffman, P.M. and Morgan, R.P.
(2005), The urban stream syndrome: current knowledge and the search for a cure. Journal
of the North American Benthological Society, 24: 706-723.
Walsh, C.J., Fletcher, T.D. and Burns, M.J. (2012), Urban stormwater runoff: a new class of
environmental flow problem, PLoS ONE 7(9), e45814.
Walsh, C.W., Fletcher, T.D., Bos, D.G. and Imburger, S.J., (2015), Restoring a stream
through retention of urban stormwater runoff: a catchment-scale experiment in a social-
ecological system. Freshwater Science, 34(3).
Wasko, C. and Sharma, A. (2015), Steeper temporal distribution of rain intensity at higher
temperatures within Australian storms, Nature Geoscience, 8(7), 527-529.
Water by Design (2014), Off-site stormwater quality offsets discussion paper, Healthy
Waterways, Brisbane.
Weinmann, E. (2007), Hydrology and Water Resources: the challenge of finding the water
balance. Australian Journal of Water Resources, 11(2), 121-132.
Whelans and Halpern Glick Maunsell (1994), Planning and Management Guidelines for
Water Sensitive Urban (Residential) Design, report prepared for the Department of Planning
and Urban Development the Water Authority of Western Australia and the Environmental
Protection Authority.
Wong, T.H.F. (2006), Water Sensitive Urban Design: the journey thus far. Australian Journal
of Water Resources, 10(3), 213-221.
Wong, T.H.F., Knights, D. and Lloyd, S.D. (2008), Hydrologic, water quality and geomorphic
indicators of catchment Effective Imperviousness, Australian Journal of Water Resources,
12(2), Engineers Australia, pp: 111-119.
Chapter 4. Stormwater Volume
Management
Steve Roso, Marlene van der Sterren
With contributions from John Argue, Brett Phillips and Urban Book Editor Peter Coombes
4.1. Introduction
Progressing from the urban stormwater philosophy discussed in Book 9, Chapter 3, this
chapter provides introductory guidance on the design of ‘volume management facilities’.
These are discrete infrastructure measures in various forms and configurations, each of
which are designed to store and release runoff volumes to manage the changes caused by
urbanisation. They are linked by conveyance infrastructure (refer to Book 9, Chapter 5) to
form an urban stormwater network.
This chapter focusses on the concept design phase of a volume management facility and
outlines the detailed design process. Before applying the content in this chapter it is
assumed that the general position of the facility within the catchment is already largely
understood, and preferably informed by a catchment strategy, as discussed in Book 9,
Chapter 3 and Book 9, Chapter 5.
Stormwater storages receive runoff volumes from the catchment via upstream conveyance
infrastructure. The manner in which these runoff volumes are managed depends on the
practice that is adopted. The storage and release of runoff changes the characteristics of the
runoff hydrograph and is a fundamentally important feature of all volume management
facilities.
There is considerable legacy terminology used to describe these facilities including detention
(or retarding), retention, extended detention or slow release. These terms derive from the
different outlet structures and operational strategies that change the behaviour of stormwater
storages.
Stormwater storages designed in accordance with ‘detention’ practices include those where
runoff is temporarily stored and simultaneously released via an outlet structure
(Figure 9.4.1). This process typically lowers peak discharge and attenuates the hydrograph
so that the average time of release is delayed. The storage volume and capacity of the outlet
must be determined by catchment wide modelling to achieve target outflow peak discharges
at the catchment outlet.
Assuming the stormwater storage is empty at the beginning of a storm, the potential
hydrograph change that can occur depends on:
• the outlet’s discharge capacity relative to the peak discharge of the storm;
• the size of the storage basin volume relative to the total runoff volume from the storm; and
As a general rule, the larger the storage volume relative to the total runoff volume, the
greater the potential hydrograph attenuation that can occur. This performance also depends
on the outlet capacity. A small outlet capacity relative to peak inflows will tend to favour
attenuation of small storms, but the storage will fill and overflow early during large storms; a
large outlet capacity will tend to favour attenuation of large storms, while small storms will pass
through the facility with little attenuation in storage. While the storage and outlet structure are
separate physical components of a volume management facility, they must be designed in an
integrated manner since the capacity of the storage will affect the performance and sizing of
the outlet structure and vice versa. This is a critical aspect of the design of a volume
management facility with detention characteristics that requires an iterative approach to
sizing.
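The iterative, integrated behaviour described above can be illustrated with a basic level-pool (storage) routing calculation. The sketch below assumes a prismatic basin with an orifice outlet and a simple explicit continuity step; it is not a design procedure, and the geometry, outlet rating, discharge coefficient and time step are illustrative assumptions only.

    import numpy as np

    def route_detention_basin(inflow, dt, basin_area_m2, orifice_area_m2,
                              cd=0.6, g=9.81):
        """Level-pool routing of a prismatic detention basin with an orifice
        outlet, using a simple explicit continuity step dS = (I - O) * dt."""
        depth = 0.0
        outflow = np.zeros_like(inflow)
        for i, q_in in enumerate(inflow):
            q_out = cd * orifice_area_m2 * np.sqrt(2.0 * g * max(depth, 0.0))
            depth = max(depth + (q_in - q_out) * dt / basin_area_m2, 0.0)
            outflow[i] = q_out
        return outflow

    # Illustrative triangular inflow hydrograph (m^3/s) at 60 s steps
    inflow = np.concatenate([np.linspace(0.0, 3.0, 30), np.linspace(3.0, 0.0, 60)])
    outflow = route_detention_basin(inflow, dt=60.0, basin_area_m2=5000.0,
                                    orifice_area_m2=0.3)
    print(f"peak inflow {inflow.max():.2f} m3/s -> peak outflow {outflow.max():.2f} m3/s")

Changing the assumed basin area or orifice size and re-running the routing demonstrates the trade-off described above between attenuating small and large storms.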
Figure 9.4.3. Developed Catchment with Retention as Compared to Detention and Slow
Drainage Strategies
The hydrographs in Figure 9.4.2 represent runoff from a rural catchment and from the urban
landscape developed on it. Ideal retention performance of the storage is reproduction of the
rural hydrograph followed by outflow of the remaining stored runoff via slow release over a
longer duration (typically greater than 24 hours). Argue (2017) outlines that it is difficult to
achieve this outcome and recommends a storage volume equal to the total additional runoff
expected from the development, with the emptying time of the volume management facility
being a function of the outlet infrastructure.
The outflow hydrograph resulting from this approach should be similar to that shown in
Figure 9.4.3 (developed catchment with retention). A first approximation solution is likely to
produce a different outflow hydrograph from the required result. Continuous simulation of the
volume management facility is recommended, with the aim of adjusting the design (i.e.
storage and outflow configuration) to produce the desired outflow hydrograph.
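A first approximation of the storage volume recommended by this approach can be obtained by estimating the additional runoff volume generated by development for a chosen design rainfall depth. The sketch below uses volumetric runoff coefficients and parameter values that are assumptions only; continuous simulation, as recommended above, should be used to refine the design.

    def additional_runoff_volume(rain_depth_mm, area_m2, c_pre, c_dev):
        """First approximation of a retention storage volume (m^3): the extra
        runoff generated by development for a chosen design rainfall depth,
        using volumetric runoff coefficients before and after development."""
        return (c_dev - c_pre) * rain_depth_mm / 1000.0 * area_m2

    # Illustrative values only: a 700 m^2 allotment and a 40 mm design storm
    print(additional_runoff_volume(rain_depth_mm=40.0, area_m2=700.0,
                                   c_pre=0.2, c_dev=0.8))  # ~16.8 m^3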
The concept design phase of volume management facilities commences with a thorough
understanding of the volume management objectives intended for the facility (refer to Book
9, Chapter 4, Section 2). Once these objectives are defined, consideration can be given to
the configuration of the facility and how its components might be sized and positioned to
best meet the objectives and local site conditions (refer Book 9, Chapter 4, Section 3 and
Book 9, Chapter 4, Section 4). Detailed design then follows to comprehensively define the
facility to permit construction (refer to Book 9, Chapter 4, Section 5).
An adequate number of facilities are required within catchments to ensure that the controls
will significantly affect peak discharges, volume targets and water quality targets at
catchment outlets. A key aspect of the design of storage based measures is to ensure that
the storages are empty or nearly empty at the commencement of a flood producing rain
event. It is essential to determine the spectrum of design flood events that these facilities will
manage (refer to Book 9, Chapter 3).
This objective seeks to reduce concentrations and loads of contaminants within urban runoff
to pre-determined and acceptable levels. This is achieved by delaying some of the runoff
volume for a period of time (hours to days) (detention), or storing part of the stormwater
on-site (retention) and passing the retained water through treatment processes where
physical, chemical and biological processes reduce contaminants in the water column.
Storage of stormwater can also provide some limited water quality treatment through
settlement, even where this objective is not necessarily sought. Associated benefits include
maintaining visual amenity, and improved water quality prior to discharge or prior to
harvesting activities.
An early design task should examine the relevance of the objectives from Table 9.4.1 for a
design in the context of prior studies, investigations, catchment strategies and receiving
waterbody conditions. This process allows the designer to establish a preliminary
understanding of the behaviour of the site, the catchment and receiving system. Another
important task is to check local stormwater authority and state government policy
requirements and standards. In the absence of background studies and local authority
guidance, the designer should critically assess the relevance of the above-listed objectives
from first principles. The ‘associated benefits’ listed in Table 9.4.1 may assist.
If there are indeed multiple objectives sought for a design, it may be advantageous to design
a single facility that will meet all the desired objectives. However, current stormwater
management practice incorporates multiple solutions across scales to better manage risk
profiles (refer Book 9, Chapter 3). Figure 9.4.4 shows how more than one design objective
can be relevant to a site or a catchment, or an entire stormwater management strategy. For
example, design objectives for a facility or a strategy may include:
• control peak discharge, improve water quality and harvest (or infiltrate) stormwater.
Where possible the design process should pursue performance characteristics that target all
the desired objectives. This goal is most likely to be achieved when a particular management
strategy is selected as the primary objective, for example peak discharge reduction or water
quality improvement, and the subsidiary objectives are incorporated by exploiting
opportunities made available by the primary objective.
Each of these components can be configured and combined with the other components in
different ways to meet different design objectives. The size, shape and material of each
component can also be selected to respond to performance criteria and site constraints.
Some components can be omitted depending on the design objectives. For example, treatment processes are only required where the design seeks to improve water quality or where the water quality benefits of the storage need to be enhanced.
The volume management facility configurations that are in common use in Australia are listed and described in Table 9.4.3. Further guidance on selecting a specific design configuration is also provided in Book 9, Chapter 4, Section 4 and Book 9, Chapter 4, Section 5.
Table 9.4.3 (extract). Infiltration System: an infiltration zone in the floor of a drainage pit, swale, basin or trench, formed by a porous floor in the base of the structure; targets removal of contaminants and volumes of stormwater; applicable at the lot and site scales; normally requires pre-treatment.
a. Note: those devices without treatment processes may still provide water treatment benefits due to the effects of temporary storage and/or harvesting of runoff.
b. Scale definitions are taken from Book 9, Chapter 6.
The following sub-sections outline the concept design phase of a typical volume
management facility. Four concept design tasks are described:
• Choosing the best location for the facility (Book 9, Chapter 4, Section 4)
• Choosing the best design solution, having regard to the design objectives and site
variables (Book 9, Chapter 4, Section 4)
• Collaboration and integration with other relevant professional disciplines (Book 9, Chapter
4, Section 4)
While these tasks are presented in this sequence, they need not be completed in this
order nor in a linear fashion. There is often a need for iteration and
concurrent completion of design tasks. For example, collaboration and a preliminary sizing
may be required to inform the selection of a preferred location. Once the preferred location is
determined, the preliminary sizing must be updated.
Concept design can only commence once an overall catchment strategy has been
established (refer to Book 9, Chapter 3) and design objectives determined (refer to Book 9,
Chapter 4, Section 2). These foundational design aspects are assumed to have been
resolved prior to implementing the following guidance. In particular a decision must be made
as to the general position of the facility or strategy within the catchment. For example, it
should be decided prior to commencing concept design whether the facility will be
constructed to service a catchment comprising a single lot, a neighbourhood, or an urban
precinct that is large in scale. With this overall constraint in mind, the following concept
design tasks should be considered.
Where there is flexibility, it is best to choose a site that presents the smallest design
challenge and meets the objectives for the project. The following discussion is intended to
assist in this regard.
Topography
Volume management facilities may be located on or adjacent to the lowest point in the
catchment to be serviced. This maximises the catchment area to be managed. Similarly the
location may also need to capture flows from upstream conveyance infrastructure. If the site
cannot easily service the relevant upstream sub-catchment then performance against the
design objectives may be compromised.
While catchment hydrology (refer to Book 9, Chapter 6) is an integral part of the design
process, even before such calculations are undertaken, the concept design should be
informed by a general appreciation of the catchment draining to the proposed facility. As a
minimum, the size of the catchment area draining to the facility needs to be determined so
that preliminary sizing can be undertaken.
The location chosen may need to be adequately elevated (or able to be raised using an
embankment), so that hydraulic performance of the outlet structure is not adversely
influenced by backwater. This is a particular consideration for facilities that have treatment
processes and vegetation or where the storage is intended to be well drained.
Facilities in low-lying coastal districts must also consider the effects of high tide and possible
future changes to the tide level due to sea-level rise (refer to Book 4, Chapter 4 and Book 6,
Chapter 5). Frequent backwater flooding from regional flood events should also be avoided,
unless its impact can be assessed and proven acceptable.
The average ground slope in the location chosen should ideally be no steeper than 5%.
Steeper sites are not precluded, however they will require more careful consideration of the
type and shape of storage to avoid excessive earthworks. They may also introduce the need for
vertical retaining wall elements, which may be undesirable if they hamper access, introduce
safety risks, increase maintenance and increase longer-term facility replacement and
renewal costs.
Soils
Ideally the soils in the chosen location will be suitable for construction and sufficiently deep
to avoid excavation into rock.
Where an embankment is to be formed, the soil properties should allow tight compaction in
layers to form a cohesive matrix and stable slope within the range of 1 in 2 to 1 in 10.
The soils used to construct any embankments or spillways should also have a very low
permeability, particularly where significant volumes of water are to be stored or where long-
term water storage is intended. If the soil type is not suitable then other soil materials will
need to be imported for blending or replacement, or other materials considered such as clay
liners.
Sites with dispersive and acid sulphate soils will require careful selection of the storage
solution. If such soils are unavoidable, the design must include appropriate management measures.
Groundwater Characteristics
Where stormwater infiltration is one of the overall design objectives, the site selected must
be underlain by geologic strata that allow this infiltration to occur. Long-term groundwater
behaviour in the vicinity should also be profiled, and a site selected where the elevation of
the infiltration zone is not substantially below normal groundwater levels.
If infiltration is not required or desired, then a site should be chosen where the groundwater
profile is unlikely to intersect the storage profile. This will simplify construction and ensure
the storage can be more easily drained.
The stream baseflow, flow regimes and runoff water quality characteristics will also be
relevant where water quality improvement or stormwater harvesting or infiltration objectives
are targeted.
The quality of the groundwater store should also be investigated, and water quality criteria for
infiltration will need to be observed in accordance with local guidelines and those of the
Australian and New Zealand Environment and Conservation Council (ANZECC, 2000).
Vegetation
The selected site should not require the damage or removal of valuable trees or large stands
of native vegetation. If it is determined that this cannot be avoided then special approvals
may be required, and a flora and fauna specialist should be engaged to advise the design
team. An environmental offset planting may be necessary.
If the facility is intended to be vegetated then an appropriate depth and quality of surface soil
is required to support healthy plant growth.
This is a basic guide aimed at providing an indicative starting point for the inexperienced urban
stormwater designer and should not be interpreted as a barrier to innovative strategies or a
replacement for first principles analysis. Those with experience will recognise opportunities
for hybrid solutions that have broader application. For example, a hybrid facility involving a
detention (retarding) basin with managed aquifer recharge (retention) and stormwater
harvesting (retention) may provide a more comprehensive design solution to a volume
management problem and for protection of urban waterways.
It is noted that in Table 9.4.4 there are several solution and objective combinations that are
flagged as “suitable with limitations”. This means that the solution may not always perform
well with respect to the relevant objective, however it can in some circumstances. For
example, a particular managed aquifer recharge facility may not normally provide control of
peak discharge in large floods when the water levels in the aquifer are high. However, it may
still afford some benefits in small floods and greater benefits if aquifer levels are low. Some
further information about these possible limitations is provided in Book 9, Chapter 4, Section
5.
The size of the structure is the first aspect to investigate. Ultimately the size of the structure
is determined by detailed calculation and modelling, however in the very early stages of
planning it may be possible to use simple hand calculations and ‘rules of thumb’.
Preliminary sizing will depend on local rainfall conditions, climate patterns and performance
criteria. A value is often selected based on prior experience with the design of other nearby
facilities. For example, in the case of an infiltration measure, the estimated surface area can
then be combined with length and width limitations to estimate the total requirement for land
area at a preliminary level of accuracy.
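As an illustration only, the following Python sketch converts a hypothetical percent-of-catchment rule of thumb into an indicative footprint and plan dimensions. The area fraction, catchment area and maximum width are assumptions for demonstration and should be replaced with locally derived values before any reliance is placed on the result.

```python
# Hypothetical rule-of-thumb sizing sketch; the 2% area fraction, width limit
# and catchment area are illustrative assumptions, not prescribed values.

def preliminary_footprint(catchment_area_m2, area_fraction=0.02,
                          max_width_m=15.0):
    """Estimate an indicative surface area and plan dimensions for a
    storage/infiltration measure from a percent-of-catchment rule of thumb."""
    surface_area_m2 = catchment_area_m2 * area_fraction
    width_m = min(max_width_m, surface_area_m2 ** 0.5)
    length_m = surface_area_m2 / width_m
    return surface_area_m2, length_m, width_m

area, length, width = preliminary_footprint(2.0 * 10_000)  # 2 ha catchment
print(f"Indicative footprint: {area:.0f} m2 ({length:.0f} m x {width:.0f} m)")
```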
The shape of the facility must then be considered. The shape of the facility will be largely
governed by a combination of factors including:
• Minimising and balancing earthworks – to suit the site topography and drainage and
minimise the volume of earthworks relative to the volume of runoff stored. At the same
time have regard to the design of adjoining infrastructure such as stormwater conveyance,
roads and buildings.
• Visual and landscape objectives – there may be visual and landscape objectives sought
for the facility that might influence overall shape of the facility.
• Maintenance and safety objectives – Suitable allowance should be made for maintenance
access and safe batter slopes.
• Achieving suitable length to width ratios – where the facility targets water quality
improvement the length to width ratio must sit within a suitable range, typically between
3:1 and 10:1.
While determining the preliminary shape of the structure, consideration should also be given
to the need for any vertical wall elements, the location of outlet structures and the position
and alignment of any embankments.
The design of a volume management facility is a task best undertaken in close collaboration
with the client representative, relevant stakeholders and the overall urban design team
including:
• Urban Designers;
• Civil Engineers;
• Landscape Architects;
• Geotechnical Engineers.
This collaboration should occur early in the design process to minimise re-work and
maximise the potential for integrated outcomes. For example, good opportunities exist for co-
location of volume management facilities within areas that also perform recreation,
landscape and environmental functions.
Since the position of volume management facilities is often tightly controlled by site
topography and hydraulic constraints, it is also important that the design is undertaken in
conjunction with the overall bulk earthworks and stormwater conveyance solution to yield an
overall efficient and low cost design.
More recently, investigations have also focused on understanding the performance of entire
linked systems of water cycle management within urban catchments that can reveal the
cumulative impacts of integrated or combined strategies that better represent real systems
(Coombes et al., 2002b; Coombes, 2005; Walsh et al., 2012; Coombes and Barry, 2015).
These issues are discussed in Book 9, Chapter 3. This body of research and practice has
evolved since the previous version of ARR 1987 (Pilgrim, 1987) and represents significant
new thinking in the stormwater industry.
Many authors have established that volume management applied at a distributed scale need
not provide significant reductions in peak discharges at the property scale, because reducing
runoff volumes at the top of the catchment provides substantial reductions in peak flows
throughout the catchment (for example: Herrmann and Schmida (1999), Andoh and Declerck
(1999), Argue and Scott (2000), Vaes and Berlamont (2001)).
Argue and Scott (2000) used a large catchment scale model to conclude that distributed
peak discharge control (on-site detention) and volume management (rainwater harvesting)
systems produce similar hydrographs at the catchment outlet. It was acknowledged that the
peak discharges on a lot scale may be larger for volume management than for flow
management. However, it was found for medium to large catchments that the cumulative
effect of volume reductions obliterates the effect of peak discharges at individual sites. This
indicates that the cumulative effects of distributed reductions in stormwater runoff volumes
can be significant at a catchment scale due to the reduction in overall volume discharged to
the catchment outlet (refer to Book 9, Chapter 3). These results are consistent with the basic
elements of peak flows which are volume and time. Reducing either element must reduce
peak flows within the catchment.
Coombes et al. (2001) and Coombes et al. (2003) also found that flow management (detention)
systems reduced peak discharges at the lot scale, while volume management (rainwater
harvesting) produced smaller changes in peak discharges at the lot scale but significantly
reduced the volumes of stormwater runoff, which in turn reduced peak discharges at the
street and catchment scales. It was argued that flooding is a volume driven
process and peak discharges at the lot scale had little or no bearing on the floods at a
catchment scale. Use of first principles processes such as continuous simulation and
detailed systems analysis rather than empirical assumptions (for example antecedent
conditions associated with event based analysis) has also revealed that the shape of
catchment hydrographs may be significantly altered by distributed and integrated solutions
within catchments (for example: Coombes and Barry (2009), Coombes (2015)). van der
Sterren (2012), Burns et al. (2013) and Coombes (2015) highlight the benefits of replacing
the common design requirements with treatment trains on properties and throughout urban
areas to manage peak discharges and flow regimes throughout and at the outlet of urban
catchments.
The design and analysis of these facilities must include the interactions with other
stormwater management facilities and urban form in the catchment and catchment
behaviours. The adopted modelling approach should also use rainfall time series and
resolve full hydrographs of a total duration that is relevant to the objective being analysed.
For peak discharge control, this may only be minutes or hours. For water quality
improvement and stormwater harvesting applications, this may be years or decades. The
model must have sufficient catchment resolution and detail to adequately represent the
linked hydrologic processes in the catchment. Lumped models that simplify catchment
representation and behaviours should be used with caution.
The modelling approach should allow different storm scenarios to be tested since the
performance of a volume management facility may be highly sensitive to the selected storm
characteristics and volumes. For example, volume management facilities will have a greater
impact on peak discharges when the storm burst occurs at the front of a storm than when the
burst occurs towards the back of a storm, by which time the detention storage is already
partially full.
A designer may therefore need to consider using an ensemble of complete storms with a
storm burst of around the critical duration or a storm burst only to determine the benchmark
condition(s) (Phillips and Yu (2015); Book 9, Chapter 6). If a design approach adopts a storm
burst only approach, then for a given Annual Exceedance Probability (AEP) the peak flows
are assessed for a range of storm burst durations and the storm burst duration that gives the
highest peak flow is adopted as the critical storm.
If a design approach adopts an ensemble of complete storms of a given AEP, then the
designer will need to determine if the benchmark condition is to be based on the 50th
percentile peak flow or on a different percentile of peak flow. Preliminary testing indicates
that adopting the 50th percentile is a very good indicator of the results from more complex
Monte-Carlo approaches in most circumstances. Ultimately, the decision of what percentile
of peak flow to adopt can be informed through identifying the level of risk the community is
willing to accept within the catchment.
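The selection logic described above can be sketched in a few lines. In the following Python sketch the peak flow values are hypothetical model outputs, used only to show the bookkeeping for a burst-only critical duration search and for a 50th percentile benchmark drawn from an ensemble of complete storms.

```python
# Illustrative benchmark selection; all peak flow values below are hypothetical.
from statistics import median

# Storm-burst-only approach: modelled peak flow (m3/s) for each burst duration
# at the chosen AEP; the critical duration gives the highest peak flow.
burst_peaks = {30: 1.8, 60: 2.4, 120: 2.9, 180: 2.6, 360: 2.1}  # minutes -> m3/s
critical_duration = max(burst_peaks, key=burst_peaks.get)
print("Critical burst duration:", critical_duration, "min,",
      "benchmark peak:", burst_peaks[critical_duration], "m3/s")

# Ensemble-of-complete-storms approach: adopt a chosen percentile of the
# ensemble peaks (here the 50th percentile, i.e. the median).
ensemble_peaks = [2.2, 2.5, 2.7, 3.0, 2.4, 2.6, 2.8, 2.3, 2.9, 2.5]  # m3/s
print("Ensemble 50th percentile peak:", median(ensemble_peaks), "m3/s")
```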
Once a base model is established, which includes the proposed facility, the model should be
capable of iterative changes to the dimensions of the storage and the outlet structure. Using
a judgement driven and iterative approach, the model is used to determine an optimised
configuration that results in the required hydrologic performance for the selected range of
storms.
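As an illustration of this judgement driven, iterative process (and not a substitute for the modelling described in Book 9, Chapter 5 and Chapter 6), the Python sketch below routes an assumed triangular inflow hydrograph through a vertical-walled storage with a single orifice outlet using a simple explicit level-pool water balance, and reduces the orifice diameter until the routed peak outflow meets an assumed benchmark discharge. All dimensions, coefficients and hydrograph values are hypothetical.

```python
# Minimal, judgement-driven sizing sketch (assumed values throughout).
import math

G = 9.81
BASIN_AREA = 800.0        # m2, assumed plan area of a vertical-walled storage
CD = 0.6                  # assumed orifice discharge coefficient
DT = 60.0                 # s, computational time step

def inflow(t_s):
    """Hypothetical triangular inflow hydrograph: peak 1.2 m3/s at 30 min."""
    t_peak, t_end, q_peak = 1800.0, 5400.0, 1.2
    if t_s <= t_peak:
        return q_peak * t_s / t_peak
    if t_s <= t_end:
        return q_peak * (t_end - t_s) / (t_end - t_peak)
    return 0.0

def route(orifice_diam_m, duration_s=10800.0):
    """Explicit level-pool routing; returns peak outflow (m3/s) and depth (m)."""
    area_o = math.pi * orifice_diam_m ** 2 / 4.0
    depth, peak_q, peak_h, t = 0.0, 0.0, 0.0, 0.0
    while t < duration_s:
        q_out = CD * area_o * math.sqrt(2.0 * G * depth) if depth > 0 else 0.0
        depth = max(0.0, depth + (inflow(t) - q_out) * DT / BASIN_AREA)
        peak_q, peak_h = max(peak_q, q_out), max(peak_h, depth)
        t += DT
    return peak_q, peak_h

target_q = 0.35  # m3/s, assumed benchmark peak discharge
diam = 0.60      # m, starting orifice diameter
while route(diam)[0] > target_q and diam > 0.05:
    diam -= 0.025  # shrink the outlet until the routed peak meets the target
peak_q, peak_h = route(diam)
print(f"Adopt ~{diam:.3f} m orifice: peak outflow {peak_q:.2f} m3/s, "
      f"peak depth {peak_h:.2f} m")
```

In practice the inflow hydrographs would come from the adopted catchment model and the storage geometry from a stage-storage relationship rather than a constant plan area.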
For more detailed guidance regarding the use of computer modelling in urban stormwater
design refer to Book 9, Chapter 5 and Book 9, Chapter 6.
Detention basins can be designed to suit a range of catchment sizes. Community and
regional scale basins may have considerable community benefits as areas for recreation and
may be built around specific sizes and shapes of fields for sports such as football, netball
and cricket. The sides of basins are usually sloping earth embankments, suitable for
occasional spectator use. Basins used for passive recreation may include stands of trees
(within the basin but not on any fill embankment), lawns and other vegetation.
Basins may be placed directly across a watercourse, or located off-stream, with flows in
excess of a certain flow rate being diverted into them. They can be arranged in a widened
section of drainage easement zoned both for recreation and drainage purposes.
Detention basins themselves are not suited to the improvement of water quality or harvesting
and infiltration of stormwater. However other types of volume management facilities can be
nested inside. For example a constructed wetland can be located in the floor of a large
detention basin storage to also target water quality improvements.
Available Guidelines
There are many guidelines on community and regional detention including ACT Department
of Urban Services (1998), Hobart City Council (2006), Department of Water, Western
Australia (2007), Melbourne Water (2010), Queensland Department of Energy and Water
Supply (2013). These guidelines can be readily used for designing and modelling detention
systems, using the modelling and storm patterns as described in Book 9, Chapter 6.
Flood Capacity
The final sizing of any basin should be completed with the aid of a computer model. The
selected model must accurately simulate the hydraulic behaviour of the basin outlet,
especially when a partially full pipe flow or tailwater submergence occurs (Queensland
Department of Energy and Water Supply, 2013). When located in-stream, the hydraulic
modelling should also represent the stream conditions and the stream flows discharging
through the basin in addition to the urban areas directed to it.
Large community and regional basins can be considered dams, as they can store significant
volumes of stormwater, and therefore may pose a potential threat to communities residing
downstream of a basin. As a result, the design must have regard to the ANCOLD (Australian
National Committee on Large Dams, 2000) guidelines. A detailed risk assessment of a storm
exceeding the Dam Crest Flood should be considered in the design of a detention basin
within an urban area due to the potential severe consequences of the sudden failure of a
basin on any urban development located on the floodplain downstream.
Detention basins should be designed with a flood capacity to convey appropriate extreme
storms safely through the basin in accordance with the Hazard Category of the basin as
defined by ANCOLD, as is the case for conventional dams.
Depending on the findings of the ‘Initial Assessment’ a more detailed assessment (ANCOLD,
2000b) including a Dam Break analysis for both ‘flood failure’ and ‘sunny day’ scenarios may
be required.
With increasing urbanisation there are now many catchments which contain a series of
detention basins. Each basin within a catchment should be investigated not only individually
but also collectively within the catchment, including all basins modelled as a whole
(Melbourne Water, 2010):
• The consequences of one basin failure cascading downstream into lower basins should be
evaluated; and
• The effect of long period releases from upper basins superimposing on flows through
lower basins may require a revision of the basins’ operation throughout the catchment.
Embankment Design
and stabilisation should be specified and protection provided to cater for cracking or
dispersive soils. Impervious zones of an earthen embankment should take the form of a
centrally located ‘core’ rather than an upstream face zone to reduce the effects of drying
which may lead to cracking.
If the earth fill for any embankment is taken from borrow areas, these areas should be kept
as far away from the embankment(s) as practicable. Should the borrow area penetrate any
alluvial sand layers or lenses, the embankment’s cut-offs should be taken to at least one
metre below the estimated depth of such sand layers/lenses at the detention basin floor.
Chimney intercept filters and filter/drainage blankets should be used for all high and extreme
hazard category detention basins. Such filters may also be required for lower hazard
category detention basins. All earthen embankments constructed from dispersion soils must
have a chimney filter and downstream filter/drain (Melbourne Water, 2010).
Suggested basin freeboard requirements for a variety of basins are provided in Table 9.4.5.
External earthen embankment slopes and their protection should take into account long term
maintenance of the structure. The side slopes of a grassed earthen embankment and basin
storage area should not be steeper than 1(V):4(H) to prevent bank erosion and to facilitate
maintenance and mowing.
The surfaces of an earthen embankment and overflow spillway must be protected against
damage by scour. The degree of protection required is subject to the calculated flow velocity.
• 2 m/s < V < 7 m/s: a dense, well-knit turf cover incorporating a turf reinforcement system;
and
Practical maintenance access should be provided to the full length of the embankment and
any hydraulic structures passing through it.
Basin Floor
The floor of the basin should be designed with a suitable grade that provides positive drainage to
the basin outlet and prevents waterlogging. Detention basins may require underdrains to
positively drain the bottom of the detention facility for ease of maintenance. If there are
frequent trickle flows entering the basin then a low flow channel or pipe passing through the
basin should be considered.
Primary Outlets
The key function of primary outlets is to release flows from a detention basin at the designed
discharge rate. Some typical primary outlets are shown in Figure 9.4.6. Book 6 details how
these outlets can be hydraulically designed.
Pipe or box culverts are often used as outlet structures for detention basin facilities. The
design of these outlets can be for either single or multi-stage discharges. A single stage
discharge system typically consists of a single culvert entrance system, which is not
designed to carry emergency overflows (for example, when pipes are blocked). A multi-stage
inlet typically involves the placement of a control structure at the inlet to the culvert. In
particular, details on the hydraulics of rectangular weirs are given in Book 6, Chapter 3 and
Book 9, Chapter 5.
Secondary Outlets
In general, the capacity of secondary outlets (typically spillways) should be based on the
hazard rating of the structure as defined by the ANCOLD seven level rating system. The
hazard rating defines the required ‘Fall back’ Design Flood. In some cases where the
required ‘Fall back’ Design Flood is considered to be impractical, a full risk assessment of
the basin may allow a lesser capacity spillway in line with ALARP (As Low As Reasonably
Practicable) principles (Melbourne Water, 2010).
The design capacity of spillways should account for the possible reduced capacity of primary
outlets which have the potential to become blocked during a major storm. The assessment
of the possible blockage should be undertaken in accordance with the guidance provided in
Book 6.
Recommendations for the design of outlet structures are provided by ASCE (1985), while
Design of Small Dams (US Bureau of Reclamation, 1987) provides procedures for the sizing
and design of free overfall, ogee crest, side channel, labyrinth, chute, conduit, drop inlet
(morning glory), baffled chute and culvert spillways.
In New South Wales, OSD was developed and first implemented by Ku-ring-gai Council,
closely followed by Wollongong City Council (O'Loughlin et al., 1995). Since then many
councils in Greater Sydney and elsewhere have implemented OSD systems. Other Councils
outside of NSW have also adopted On-Site Detention, such as Hobart City Council (TAS),
City of Casey (VIC), Manningham City Council (VIC), Melton Shire Council (VIC) and the
City of Tea Tree Gully (SA).
It is important to note that the imposition of OSD requirements at the lot scale is often done
on the assumption that there are broader flood benefits at a catchment scale. However, in
some cases there may be little or no catchment wide benefit from OSD, as the overall
volume of runoff is not reduced, merely detained for a period of time. This effect is not
always sufficient to influence catchment scale floods. OSD performance is also sensitive to
the temporal pattern of rainfall.
Establishment of OSD policy therefore needs careful assessment at the outset using a
catchment wide strategy to ensure the overall catchments to which the policy is intended to
be applied are indeed suitable.
Available Guidelines
There are many guidelines on the sizing or design of OSD, for example Department of
Irrigation and Drainage (2000), Upper Parramatta River Trust (2005), Hobart City Council
(2006) and Derwent Estuary Program (2012). These guidelines can be readily used for
designing OSD systems, using the modelling approaches outlined in Book 9, Chapter 6.
These documents can assist in the design of OSD systems; however, designers are
encouraged to determine whether the methods identified in the guidelines are consistent
with, and suitable for use with, the contemporary flood estimation techniques identified in
Book 9, Chapter 6 and the issues identified in Book 9, Chapter 3.
Flood Capacity
Historically, the primary objective of OSD controls was to manage flooding in a 1% AEP
event only. Further implementation and development of OSD has resulted in many
authorities now requiring OSD systems to reduce post-development flows to adopted
benchmark peak discharges over a range of AEPs up to and including the 1% AEP event.
OSD requirements are typically specified in terms of:
• Permissible Site Discharge (PSD) or Site Reference Discharge (SRD), which are defined
as the maximum allowable discharge leaving the site (determined using catchment-based
assessment of lot-based measures), with PSD giving a single discharge rate and SRD
giving multiple discharge rates for different rainfall frequencies; and
• Site Storage Requirement (SSR), which is defined as the volume required for overall
storage.
It should be noted that if the objective of OSD control is to manage flooding in a 1% AEP
event only then typically only a single set of PSD and SSR values are defined. However,
where authorities require OSD systems to perform over a range of AEPs a nest of frequency
staged storages and outlets is required with multiple PSD and SSR values. An example of
an OSD design is provided in Figure 9.4.8.
Figure 9.4.8. Frequency Staged Below Ground On-Site Detention System (adapted from
Upper Parramatta River Trust (2005))
In the event that catchment wide assessments have not been conducted, one of the
following site controls can be applied to enable the design of OSD systems:
1. The post-development flows of the subject site should be controlled to meet the pre-
development flows for the site for a range of complete storms; or
2. Determine the capacity of the drainage system and divide by the area of lots that drain to
the system. This gives an indicative estimate of the permissible unit discharge rate (i.e. the
PSD).
Neither of these approaches is as effective as a design based on a holistic catchment
assessment, but they may assist in the short term in managing nuisance flows in existing systems
immediately downstream of sites.
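A minimal worked example of the second interim approach (the unit-rate estimate) is sketched below; the system capacity and areas are hypothetical.

```python
# Hypothetical worked example of the unit-rate estimate in item 2: divide the
# downstream drainage system capacity by the contributing lot area to get an
# indicative permissible site discharge (PSD) per hectare.
pipe_capacity_L_s = 2400.0     # assumed capacity of the existing drainage system
contributing_area_ha = 12.0    # assumed total area of lots draining to it
unit_psd = pipe_capacity_L_s / contributing_area_ha   # L/s per ha

site_area_ha = 0.085           # e.g. an 850 m2 lot
print(f"Unit rate {unit_psd:.0f} L/s/ha -> PSD for this lot "
      f"{unit_psd * site_area_ha:.1f} L/s")
```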
OSD systems may comprise above ground storage or underground storage or a combination
of both. It is critical to select an appropriate storage type by considering the site layout, costs
and effectiveness of OSD.
Above ground storage has advantages in terms of flexible configuration of site levels to
achieve the required storage volume, capacity to incorporate water quality treatment through
infiltration and treatment media, low construction cost and potentially low maintenance.
The main types of above ground storages include landscaped storages, parking and paved
storages, and rain water tanks with dedicated airspace for detention.
Where storage is not provided by a rain water tank the typical requirements listed in
Table 9.4.6 should be considered.
Table 9.4.6 (extract) – typical requirements include:
• Any direct inflow point into a vegetated system (e.g. roof drainage or driveway runoff)
should include a small energy dissipation device to reduce velocity and prevent erosion of
the basin floor.
• Mulch utilised in above ground storages should not be able to float, and plants should be
capable of withstanding frequent inundation as per the design depth and frequency.
Below ground storage tanks may be considered under the following conditions:
• Infeasible to construct above ground storages due to site constraints or topography; and
Below ground OSD storage tanks are usually made of reinforced concrete and can be pre-
cast or cast in-situ to meet individual site requirements. When designing below ground tanks,
typical design considerations include those listed in Table 9.4.7.
Table 9.4.7. Below Ground On-Site Detention Storage Design Considerations (Department
of Irrigation and Drainage, 2000), (Department of Irrigation and Drainage, 2012)
Below ground storage could be provided by a modular system, which could include one or
more parallel rows of pipes connected by a common inlet and outlet chamber. The size of a
modular unit is determined by the storage volume requirements, site constraints and the
number of conduits or modular units which can be installed. When designing modular
storage systems, typical design considerations are similar to those for below ground storages
as outlined above. Further guidance on conduit storage systems is provided by Department
of Irrigation and Drainage (2000) and Department of Irrigation and Drainage (2012).
The designer of an OSD system faces a challenging task to achieve a balance between
creating sufficient storages that are attractive and complementary to the architectural design,
minimising personal inconvenience for property owners/residents and limiting costs.
These demands can be balanced by providing storage with a frequency staged storage
approach. Under this approach, the design of OSD adopts a combined storage, multiple outlet
approach, which can consist of an above ground storage and a below ground storage.
Underground storage is designed to store runoff for more frequent storm events, whilst the
remainder of the required storage, up to the design storm event, is provided as above-
ground storage.
This approach is likely to limit the depth of inundation and extent of area inundated in the
above ground storage so that the greatest inconvenience to property owners or occupiers
occurs very infrequently. It recognises that people are generally prepared to accept flooding
which causes inconvenience as long as it does not cause significant damage and does not
happen too often. Conversely, the less the personal inconvenience, the more frequently the
inundation can be tolerated.
Outlet Structures
The outflows from OSD systems are typically controlled by orifices. Details on the hydraulics
of orifices are discussed in Steward (1908); Medaugh and Johnson (1940); Lea (1942);
Brater et al. (1996); Bryant et al. (2008) and USBR (2001).
The orifice outlets should have a minimum internal diameter of at least 25 mm and need to
be protected by a mesh screen to reduce the likelihood of the primary or secondary outlets
being blocked by debris.
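As a simple aid, the standard sharp-edged orifice relationship Q = Cd·A·√(2gH) can be rearranged to indicate the orifice diameter needed to pass a target discharge at the design head. The discharge coefficient, target flow and head in the sketch below are assumed values only.

```python
# Standard orifice relationship Q = Cd*A*sqrt(2*g*H), rearranged for diameter;
# the numbers below are illustrative only.
import math

def orifice_diameter(q_m3s, head_m, cd=0.6, g=9.81):
    area = q_m3s / (cd * math.sqrt(2.0 * g * head_m))
    return math.sqrt(4.0 * area / math.pi)

d = orifice_diameter(q_m3s=0.008, head_m=1.2)   # e.g. an 8 L/s PSD at 1.2 m head
d = max(d, 0.025)  # respect the 25 mm minimum internal diameter noted above
print(f"Required orifice diameter: {d * 1000:.0f} mm")
```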
Upstream Drainage
The stormwater drainage system (including surface gradings, gutters, pipes, surface drains
and overland flowpaths) for the property must:
• be able to collectively convey all runoff to the OSD system in a 1% AEP event with a
duration equal to the time of concentration of the site; and
• ensure that the OSD storage is by-passed by all runoff from neighbouring properties and
any part of the site not being directed to the OSD storage, for events up to and including
the 1% AEP event.
Maintenance
While Councils are ultimately responsible for ensuring these systems are maintained through
field inspections and enforcing the terms of any positive covenant covering OSD systems,
the designer’s task is to minimise the frequency of maintenance and make the job as simple
as possible (Upper Parramatta River Trust, 2005).
Available Guidelines
Further guidance on rain water harvesting can be found in the following documents:
• Rain water Tank Design and Installation Handbook HB 230 refer to http://
www.rainwaterharvesting.org.au
• Interim Rain water Harvesting System Guidelines (NSW Department of Planning and
Environment, 2015)
Modelling
Rain water harvesting systems were historically designed and considered as stand-alone
facilities. This approach leads to the assumption that rain water harvesting systems do not
contribute to the control of the quantity or quality of stormwater discharges from a site or
throughout a catchment. It was often argued that rain water harvesting does not provide
these benefits due to the uncertainty associated with antecedent conditions of storm events
(how full is the storage prior to the design storm?). Methods to determine the antecedent
conditions in rain water storages prior to storm events, and to design rain water harvesting
systems, were developed and demonstrated by Coombes et al. (2001), Coombes et al.
(2002b), Hardy et al. (2004), Coombes (2005), Coombes and Barry (2007), Coombes and
Barry (2009) and Coombes (2009). This applied research and monitoring has provided a
design process for rain water harvesting systems that requires continuous simulation at sub-
daily intervals (preferably six minute time steps) to determine the dynamic airspace
(drawdown of storages by water demands). This process can also determine any detention
airspace requirements of rain water storages prior to given storm events, to allow integration
with surrounding stormwater management strategies and use in catchment models reliant on
design storms. These concepts have been applied by Phillips et al. (2005), for example, and
can be used to address concerns about antecedent conditions in linked stormwater designs.
The design process for rain water harvesting systems has been enhanced by many authors
including Burns et al. (2013) and van der Sterren (2012) (for example) to also account for
flow regimes to protect urban waterways.
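A minimal continuous-simulation sketch of the tank water balance at six-minute steps is given below; the roof area, tank size, constant demand and rainfall series are hypothetical stand-ins for locally observed rainfall data and measured demand patterns.

```python
# Minimal continuous-simulation sketch of a rain water tank water balance at
# six-minute steps; all inputs are assumed values for illustration only.
ROOF_AREA_M2 = 200.0
TANK_VOLUME_L = 5000.0
DEMAND_L_PER_STEP = 150.0 * 6.0 / (24 * 60)   # 150 L/day drawn evenly per 6 min

def simulate(rain_mm_per_step):
    """Return a time series of (storage, overflow, available airspace) in litres."""
    storage, records = 0.5 * TANK_VOLUME_L, []
    for rain_mm in rain_mm_per_step:
        inflow = rain_mm * ROOF_AREA_M2                   # 1 mm on 1 m2 = 1 L
        storage = max(0.0, storage - DEMAND_L_PER_STEP)   # demand drawdown
        overflow = max(0.0, storage + inflow - TANK_VOLUME_L)
        storage = min(TANK_VOLUME_L, storage + inflow)
        records.append((storage, overflow, TANK_VOLUME_L - storage))
    return records

# e.g. a ten-day dry spell followed by a burst of 2 mm per 6 minutes for one hour
series = [0.0] * 2400 + [2.0] * 10
result = simulate(series)
print(f"Airspace available just before the burst: {result[2399][2]:.0f} L")
```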
The rain water or stormwater harvesting system should be designed using continuous
simulation (as identified in Coombes and Barry (2007)) and should consider the following:
• Potential magnitude and frequency of rain water use and any rate of leakage from a leaky
tank;
• Size of inlet configuration, overflow and use (e.g. can the rate of flow be discharged into the
tank and out of the storage without surcharging); and
Upstream Drainage
The design of rain water harvesting systems can include gutter guards, leaf diverters, first
flush devices and filter socks to limit the transfer of sediment and debris into rain water
storages. Mesh screens on inlets, outlets and overflow devices will exclude animals,
mosquitoes and other insects from entering storages, thereby minimising the risk of harmful
microorganisms and disease-carrying mosquitoes entering the tanks.
Runoff that is not collected in the storage and overflows from the storage should be diverted
away from storage foundations, buildings or other structures (enHealth, 2012).
Storage Location
The location of the storage infrastructure will be dependent on aesthetic and space
requirements for the chosen device. The tank must also be located where sufficient roof area
can be drained by gravity to the top of the tank.
If the storage system is below-ground, site soil characteristics and surface flows will need to
be considered. Surface flows should be prevented from entering the tank and soil conditions
are particularly important if there are salinity or acid sulphate soil concerns which would
affect the integrity of the structure (Department of Water, Western Australia, 2007).
The tanks should be connected to internal domestic demands, typically toilet flushing.
Appropriate flow rates need to be maintained for the occupant and therefore the majority of
rain water supply systems will require a pump to distribute water to internal and external
plumbing fixtures. A pump should be sized to balance the required flow and pressure for the
intended uses of the rain water from the storage while minimising energy use. Generally
flows of less than 30 L/min are suitable for most residential applications (NSW Department
of Planning and Environment, 2015).
Local government or State Government policy requirements may exist with regard to pumps
and connections.
Outlet
Runoff that is not collected in the storage and overflows should be diverted away from
storage foundations, buildings or other structures (enHealth, 2012). This water should be
directed into gardens, infiltration systems or the public stormwater management network.
The overflow water should not be allowed to cause nuisance to neighbouring properties or to
areas of public access.
The increased uptake of rain water harvesting also creates an opportunity to adopt an
integrated approach to lot scale stormwater management by designing the facility to control
peak discharge and harvest runoff volume. This approach may result in rain water tanks
with three outlets: one for use of rain water (e.g. connected to selected indoor plumbing or
garden irrigation) at the bottom of the tank, one for orifice discharge (i.e. the OSD outlet)
half way up the tank, and a third for overflow at the top of the tank (as per
Figure 9.4.9), as originally proposed by Coombes et al. (2001). The dedicated airspace
above the minimum level outlet provides additional attenuation of peak discharges.
Figure 9.4.9. Rain Water Tank with Dedicated Air Space (adapted from Coombes et al.
(2001))
Bioretention basins primarily target water quality treatment objectives for small to medium
catchments. In some circumstances they may also contribute to peak discharge control. Certain
design types can also be used to promote the infiltration of stormwater into the groundwater
system.
Available Guidelines
• Facility for Advanced Water Biofiltration (2009) “Guidelines for Filter Media in Biofiltration
Systems”
Design Considerations
The core elements of a bioretention facility include a basin with filter media, an inlet
structure and an outlet structure.
In practice a typical basin filter area requirement is between 1% and 2% of the catchment it
serves. However the overall size of the basin will vary depending on its catchment and the
treatment performance sought.
The shape of the basin is flexible but needs to facilitate even distribution of inflows across
the filter media’s surface. The shape factor should therefore ideally approach a length to
width ratio of 1 (i.e. square), though rectangular layouts are acceptable and common.
An indicative maximum catchment area constraint of about 10 hectares applies since areas
greater than this normally produce trickle flows which can compromise the performance of
the vegetation and filter media. It also becomes more difficult to evenly distribute inflows
across a large filter area and manage scour velocities. This catchment area constraint will
vary depending on local climate and soils.
Designs can be scaled down to lot scale and street scale sub-catchments. These facilities
are sometimes referred to as bio-pods, rain gardens and tree pits.
The basin is designed to be frequently inundated for a short period of time, however the
volume temporarily stored and the release rate are not normally effective at controlling peak
discharge in large floods. Hybrid design opportunities exist where the bioretention basin is
nested inside a larger detention basin facility to target peak flood discharge as well as water
quality.
The floor of the basin comprises a carefully blended soil filter media, minimum 400 mm
depth, with a prescribed hydraulic conductivity of between 100 mm/hr and 300 mm/hr. Over
time the conductivity changes as the media settles and plants establish. The plant root zone
enhances the water quality treatment performance of the filter and also helps to maintain an
equilibrium level of hydraulic conductivity in the media.
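An indicative first-pass check combining the filter area rule of thumb and the prescribed hydraulic conductivity is sketched below. The inputs are assumptions, and the treatable flow estimate applies a Darcy flux at roughly unit gradient, ignoring ponding head over the filter.

```python
# Indicative bioretention check (assumed inputs): filter area taken as 1-2% of
# the contributing catchment; treatable flow from the filter media conductivity
# assuming roughly unit hydraulic gradient and no ponding head.
catchment_area_m2 = 3.0 * 10_000        # 3 ha contributing catchment (assumed)
filter_fraction = 0.015                 # within the 1-2% rule of thumb
k_mm_per_hr = 200.0                     # within the 100-300 mm/hr range

filter_area_m2 = catchment_area_m2 * filter_fraction
filter_flow_m3s = (k_mm_per_hr / 1000.0 / 3600.0) * filter_area_m2
print(f"Filter area ~{filter_area_m2:.0f} m2, "
      f"treatable flow ~{filter_flow_m3s * 1000:.0f} L/s before overflow engages")
```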
Beneath the filter media are a sand transition layer and then a gravel drainage layer. The
sand transition layer limits progressive migration of the filter media into the drainage layer.
The drainage layer includes a network of slotted pipes that collect treated stormwater for
discharge. This drainage layer can be designed as a saturated sump to sustain plant growth
during extended dry seasons.
Bioretention basins are normally lined with low permeability clay or a plastic membrane. It is
possible to design the system without a liner to encourage infiltration into the local
groundwater table, however success with this approach will heavily depend on plant choice
and climate.
Inlet Structures
The inlet structure receives flow from the upstream conveyance network. Typically the inlet
comprises a small headwall pipe outlet, roadside kerb and gutter or an open channel swale.
For large catchments a high-flow bypass is required to limit velocities within the basin and
avoid scour of plants and filter media. For large catchments a coarse sediment capture zone
(sometimes referred to as a ‘sediment forebay’) is also required to capture sediment and
prevent smothering of the filter media. Regular clean-out of the coarse sediment capture
zone is required. Maintenance access is therefore important.
Outlet Structures
The primary outlet is the filter media underdrainage system described previously. This is
collected into an outlet pit before discharge into the downstream conveyance system. The
secondary outlet normally comprises an overflow pit or weir that is engaged once the
hydraulic conductivity of the filter media is exceeded. The level of the weir is normally
between 0.1 m and 0.3 m above the filter surface level. For larger systems a small armoured
spillway or weir may also be provided to augment outlet capacity during a large storm.
The outlet discharge level should be sufficiently elevated above local backwater and tide
levels to ensure the overall facility is free-draining. Emptying time for these systems can be
critical and should be checked.
Bioretention basins should be thickly vegetated to encourage water treatment, enhance the
long-term performance of the filter media and suppress weed growth. A wide range of plant
species may be suitable, but those that tolerate dry conditions, can be periodically inundated
and have fibrous root systems are preferred. Native sedges, rushes, grasses, tea tree, paper
bark and swamp oak have all been found to perform well. The planting scheme that is
chosen should blend with the surrounding landscape and habitat.
A constructed wetland is most suitable for water quality improvement on catchments larger
than approximately 10 hectares (indicative only and subject to local climate and design
features). Subject to design and location it may also provide some peak discharge control. It
is not directly suitable for harvesting or infiltration of stormwater as this can compromise the
sustainability of vegetation. If this objective is sought a separate downstream pond facility
should be provided.
Available Guidelines
• Laurenson and Kuczera (1998) “The Constructed wetlands manual” Sydney, New South
Wales
Design Considerations
The inlet pond receives stormwater inflow from the upstream conveyance system. The depth
and size of the pond should be sufficient to lower flow velocities and promote settling of
coarse sediment particles. Regular clean-out of sediment from this area is required. Reliable
maintenance access for machinery should therefore be considered in the design.
The inlet zone contains drainage structures that direct low flows out of the inlet pond and into
a downstream wetland area.
During a storm the wetland area fills to a depth of about 0.5 m above the normal operating
level. Once this threshold is reached, high flows are directed around the wetland area via a
high flow bypass. This flow split is necessary to avoid re-suspension of sediment and plant
damage in the wetland area.
Wetland Area
The wetland area is designed with a range of different ponding depths up to 1.5 m,
perpendicular to the flowpath. These different depth zones promote a diversity of
macrophytes and semi-aquatic plants and enhance the wetland's treatment capacity. The
majority of the wetland area should comprise emergent macrophytes, however deeper zones
are important for diversity and to sustain the ecosystem during drier periods. The overall
shape of the wetland should fall within a length to width ratio of between 3 and 10. Typically
the total wetland area represents about 5% of the catchment area treated however this
varies depending on the climate and treatment performance that is sought.
Outlet Structure
The stormwater that is temporarily held in the wetland after rain is progressively released via
a restricted outlet. A typical residence time of 48 hours is sought, however this can vary
depending on the site constraints and plant selection.
A secondary outlet is also required to limit the depth of submergence over the wetland.
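The sizing figures quoted above can be combined into a simple indicative check, sketched below with assumed inputs; detailed design should rely on the referenced guidelines and the modelling approaches in Book 9, Chapter 6.

```python
# Indicative constructed wetland check (assumed inputs): wetland area of about
# 5% of the treated catchment, extended detention of about 0.5 m, and an outlet
# restricted to release that volume over roughly a 48 hour residence time.
catchment_area_m2 = 20.0 * 10_000     # 20 ha treated catchment (assumed)
wetland_area_m2 = 0.05 * catchment_area_m2
extended_detention_m = 0.5
residence_time_s = 48 * 3600

ed_volume_m3 = wetland_area_m2 * extended_detention_m
outlet_flow_m3s = ed_volume_m3 / residence_time_s
print(f"Wetland area {wetland_area_m2:.0f} m2, ED volume {ed_volume_m3:.0f} m3, "
      f"restricted outlet ~{outlet_flow_m3s * 1000:.0f} L/s")
```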
Plant selection requires specialist input to design a planting scheme suited to the hydrologic
regime and climate and therefore likely to establish and maintain a thick vegetation cover.
The majority of the wetland footprint should be designed to support emergent macrophytes,
however deeper zones are important for diversity and to sustain the ecosystem during drier
periods. Regional biodiversity guidelines should be consulted for selection of appropriate
plant species.
Opportunities should also be sought to integrate the wetland into passive open space
recreation and/or local natural habitat.
The wetland should be well sealed with low permeability material to ensure water retention
during dry periods. The bed of the wetland should also be lined with topsoil as a growth
medium for the selected plants.
The initial establishment period is critical; careful maintenance is required, including weeding
and replacement of losses. Progressive flooding of the wetland is also needed to avoid
drowning of small plants. Predation by birds is also sometimes a challenge that needs to be
managed, particularly during the establishment phase.
prevent saltwater or other contaminants from entering the aquifer (Department of Water,
Western Australia, 2007).
MAR may be used as a means of managing water from a number of sources, including
stormwater. MAR schemes can range in complexity and scale from the precinct scale,
through local authority infiltration systems for road runoff and public open space irrigation
bores, to the regional scale, which involves infiltration or well injection of stormwater
and provision of a third pipe non-potable water supply for domestic use.
Available Guidelines
• Natural Resource Management Ministerial Council et al. (2009) “Australian Guidelines for
Water Recycling - Managed Aquifer Recharge - National Water Quality Management
Strategy - Document No 24”, Canberra.
Design Considerations
System Components
As an example, a MAR scheme for infiltration of treated stormwater into a shallow aquifer
could contain the following structural elements (Melbourne Water, 2005; Department of
Water, Western Australia, 2007):
• soakwells, swales or infiltration basins used to detain runoff and preferentially recharge
the superficial aquifer with harvested stormwater;
• an abstraction bore to recover water from the superficial aquifer for reuse;
• a reticulation system (in the case of irrigation reuse) (will require physical separation from
potable water supply);
• a water quality treatment system for recovered water depending on its intended use (e.g.
removal of iron staining minerals);
An MAR system may also incorporate the following additional elements (Melbourne Water,
2005; Department of Water, Western Australia, 2007):
• a control unit to stop diversions when flows are outside an acceptable range of flows or
quality;
• a constructed wetland, detention pond, dam or tank, part or all of which acts as a
temporary storage measure (and which may also be used as a buffer storage during
recovery and reuse);
• well(s) into which the water is injected (may require extraction equipment for periodic
purging);
• an equipped well to recover water from the aquifer (injection and recovery may occur in
the same well);
Site Suitability
Factors to consider in evaluating the suitability of an aquifer for a MAR scheme include
(Melbourne Water, 2005; Department of Water, Western Australia, 2007):
• adverse impacts on the environment and other aquifer users (e.g. reduced pumping
pressure for nearby irrigators);
These facilities assist in managing stormwater volume through infiltration of stormwater into
the groundwater system. They may also contribute to peak discharge control where
rainfall intensities are low relative to the permeability of the infiltration zone. They are not
intended to provide standalone water quality treatment and should ideally be accompanied
by a treatment facility to prevent groundwater contamination.
Available Guidelines
• Healthy Waterways by Design (2006) “Water Sensitive Urban Design Technical Design
Guidelines for South-east Queensland”
• Argue and Pezzaniti (2012) “WSUD: basic procedures for ‘source control’ of stormwater –
a Handbook for Australian practice”
Design Considerations
System Types
There are several different types of infiltration systems that are available to the designer,
each of which suit different sites and applications. These are:
• Infiltration Trenches;
• Infiltration Basins;
• Soakage Well;
• Infiltration Swales.
Infiltration Trenches
An infiltration trench is a trench filled with gravel or other aggregate (e.g. blue metal), lined
with geotextile and covered with topsoil. Often a perforated pipe runs across the media to
ensure effective distribution of the stormwater along the system. Recharge storages can also
be formed using modular plastic open crates or cells which can be laid in a trench or in
rectangular formation. Such systems are typically 0.5 m to 1.5 m deep, surrounded by
geotextile and covered with topsoil. Stormwater discharged into these systems is often pre-
treated to reduce ongoing maintenance of such systems. Systems usually have an overflow
pipe for larger storm events. There are a range of products which have various weight-
bearing capacities so that the surface of the system can be used for parkland or vehicle
parking areas. These systems can be combined to treat a large area (Department of Water,
Western Australia, 2007).
Infiltration Basins
Community and regional infiltration basins are typically installed within public open space
parklands. They can consist of a natural or constructed depression designed to capture and
store the stormwater runoff on the surface prior to infiltrating into the soils. Basins are best
suited to sandy soils and can be planted out with a range of vegetation to blend into the local
landscape. The vegetation provides some water quality treatment and the root network
assists in preventing the basin floor from clogging. Pre-treatment of inflows may be required
in catchments with high sediment flows (Department of Water, Western Australia, 2007).
Soakage Wells
One method for infiltration of urban runoff into suitable soils is using soakage wells (for soils
with hydraulic conductivity values > 1 × 10⁻⁶ m/s). These systems are used widely in
Western Australia as an at-source stormwater management control, typically in small scale
residential and commercial applications, or as road side entry pits at the beginning of a
stormwater system. Soakage wells can be applied in retrofitting scenarios and existing road
side entry pits/gullies can be retrofitted to perform an infiltration function (Department of
Water, Western Australia, 2007).
Soakage wells consist of a vertical perforated liner with stormwater entering the system via
an inlet pipe at the top of the device (refer Figure 9.4.14). The base of the soakwell is open
or perforated and usually covered with a geotextile. Alternatively, pervious material, such as
gravel or porous pavement, can be used to form the base of the soakwell. Where source
water may have a high sediment load, there should be pre-treatment, such as filtering, as
soakage wells are susceptible to clogging.
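A deliberately crude screening sketch is given below: it checks the quoted conductivity threshold and estimates an emptying time assuming steady infiltration through the base only, at unit hydraulic gradient, which overstates the emptying time where sidewall infiltration is significant. All inputs are assumptions, and detailed design should follow methods such as Argue (2017).

```python
# Simplified soakage well screening check; all inputs are assumed values.
import math

k_m_s = 5.0e-5          # assumed host soil hydraulic conductivity
diameter_m = 1.2        # assumed soakwell diameter
depth_m = 1.5           # assumed soakwell depth (open storage, no fill)

base_area = math.pi * diameter_m ** 2 / 4.0
if k_m_s <= 1.0e-6:
    print("Soil below the 1e-6 m/s threshold: soakage wells unlikely to suit")
else:
    stored_volume = base_area * depth_m
    # Steady Darcy outflow through the base at unit gradient (crude assumption)
    emptying_time_hr = stored_volume / (k_m_s * base_area) / 3600.0
    print(f"Indicative emptying time ~{emptying_time_hr:.1f} hours")
```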
Figure 9.4.14. Leaky Well Infiltration System (adapted from Argue (2017))
Permeable Pavement
There are two types of pervious pavements that are effective in intercepting and diverting
surface runoff into the host soil body:
• Porous paving: grassed surface integrated with a sandy-loam and plastic ring-matrix layer
laid above a substructure of sand/gravel mix placed under optimum moisture content
conditions.
The abstraction capabilities of permeable paving system slots and gravel-filled tubes can be
as high as 4,000 mm/h when new – a performance which can show little deterioration over
time where surface sediment loads are “light” or where the supply is pre-treated. Pre-
treatment in a typical urban street context would require the insertion of a simple sediment
trap (2.0 m2 capacity) immediately upstream of the paving (requiring annual clean-out). The
alternative to pre-treatment is regular (five-year intervals) cleaning of the paved surface.
Grassed surface paving shows infiltration capacity of, typically, at least 100 mm/h when new
and, like permeable paving, shows little deterioration over time where supply sediment loads
are relatively “light”. Porous paving is unsuited to the urban street context where permeable
paving is used, but can be relied upon for many decades of low maintenance service
receiving runoff from, for example, a conventional paved carpark surface. “Low
maintenance” in this context involves little more than regular mowing. The continued
impressive performance of a porous paved surface is accounted for by the dynamic
interaction between the grass roots and the host soil, which maintains infiltration capacity.
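An indicative loading check for a permeable or porous pavement is sketched below; the areas, capacity and design rainfall intensity are assumed values used only to show the comparison between inflow and abstraction capacity.

```python
# Rough abstraction check for a permeable pavement (assumed inputs): compare
# the quoted infiltration capacity with the design rainfall intensity plus any
# run-on from adjacent impervious surfaces.
paving_area_m2 = 150.0          # assumed pavement area
runon_area_m2 = 300.0           # assumed connected impervious area draining onto it
capacity_mm_hr = 4000.0         # "as new" capacity quoted above for permeable paving
design_intensity_mm_hr = 60.0   # assumed design rainfall intensity

inflow_mm_hr = design_intensity_mm_hr * (1.0 + runon_area_m2 / paving_area_m2)
print(f"Equivalent loading {inflow_mm_hr:.0f} mm/hr vs capacity {capacity_mm_hr:.0f} mm/hr:",
      "abstracts all inflow" if inflow_mm_hr <= capacity_mm_hr else "overflow expected")
```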
Infiltration Swales
Infiltration swales are shallow grassed channels – typically 0.3 m to 0.5 m (maximum) deep,
5 m to 6 m wide in residential streets – with longitudinal slopes, preferably, less than 3%.
They have wide application in stormwater retention systems for three main reasons:
• They can be effective in retaining pollutants conveyed in stormwater (Breen et al., 1997;
Lee et al., 2008) and
• They can fulfil a role in stormwater harvesting through soil moisture enhancement and,
possibly, aquifer recharge and recovery.
Figure 9.4.15. Main Components of an Infiltration Swale (with Filter Strip) (adapted from Argue (2017) and Argue (2013))
Swales only abstract flows up to a limit set by the infiltration capacity of the near-carriageway
“filter strip” and channel bed. All exceedances above this capacity pass as open channel
flow conveyed downstream within the boundaries of the swale. Another practice is to
terminate such a swale in a “dry pond” perhaps in the vicinity of a major road intersection.
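The partitioning of swale inflows into abstracted and conveyed components can be illustrated by a simple calculation. The following sketch (in Python) assumes a constant abstraction rate equal to an assumed saturated hydraulic conductivity multiplied by the wetted area; the parameter values are illustrative only and are not design values.

    # Indicative partitioning of swale inflow into infiltrated and conveyed components.
    # Parameter values are illustrative assumptions, not design values.
    k_sat = 1.0e-5        # saturated hydraulic conductivity of the host soil (m/s)
    wetted_area = 300.0   # wetted area of the swale bed and filter strip (m^2)
    inflow = 0.025        # inflow to the swale (m^3/s)

    abstraction_capacity = k_sat * wetted_area     # maximum infiltration rate (m^3/s)
    abstracted = min(inflow, abstraction_capacity)
    conveyed = inflow - abstracted                 # excess passes downstream as open channel flow

    print(f"Abstracted by infiltration: {abstracted:.4f} m^3/s")
    print(f"Conveyed downstream:        {conveyed:.4f} m^3/s")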
Site Selection
Due to their flexibility in shape, infiltration systems can be located in a relatively unusable
portion of the site. However, design will need to consider clearance distances from adjacent
building footings or boundaries to protect against cracking of walls and footings.
Identification of suitable sites for infiltration systems should also avoid steep terrain and
areas of shallow soils overlying largely impervious rock (non-sedimentary rock and some
sedimentary rock such as shale). An understanding of the seasonal and inter-annual
variation of the groundwater table is also an essential element in the design of infiltration
systems.
Soils
Soil types, surface geological conditions and groundwater levels determine the suitability of
infiltration systems. Infiltration techniques can be implemented in a range of soil types, and
are typically used in soils ranging from sands to clayey sands. While well-compacted sands
are suitable, these measures should not be installed in loose aeolian (wind-blown) sands.
Soils with lower hydraulic conductivities do not necessarily preclude the use of infiltration
systems, but the size of the required system may typically become prohibitively large, or a
more complex design approach may be required, such as including a slow drainage outlet
system. Care should also be taken at sites with shallow soil overlying impervious bedrock,
as water stored on the bedrock can create a flow path along the soil/rock interface
(Department of Water, Western Australia, 2007).
Groundwater
The presence of a high groundwater table limits the potential use of infiltration systems in
some areas, but does not preclude them. There are many instances of the successful
application of infiltration basins on the Swan Coastal Plain where the basin base is located
within 0.5 m of the average annual maximum groundwater level. The seasonal nature of
local rainfall and variability in groundwater level should also be considered. Infiltration in
areas with rising groundwater tables should be avoided where infiltration may accelerate the
development of problems such as waterlogging and rising salinity (Department of Water,
Western Australia, 2007).
Pre-treatment
In general, stormwater runoff should not be conveyed directly into an infiltration system, but
the requirement for pre-treatment will depend on the catchment type (e.g. residential or industrial).
Pre-treatment measures include the provision of leaf and roof litter guards along roof gutters,
vegetated strips or swales, litter and sediment traps, sand filters and bioretention systems.
To prevent infiltration systems from being clogged with sediment/litter during road and
housing/building construction, temporary bunding or sediment controls need to be installed.
It may also be necessary to achieve a prescribed water quality standard before stormwater
can be discharged into groundwater (Department of Water, Western Australia, 2007).
Emptying Time
Emptying time is defined as the time taken to completely empty a storage associated with an
infiltration system following the cessation of rainfall. This is an important design
consideration as the computation procedures typically assume that the storage is empty
prior to the commencement of the design storm event.
Table 9.4.8. Interim Relationship between Annual Exceedance Probability and Emptying Time (Argue, 2017)

Design frequency        1 EY    0.5 EY    0.2 EY    10% AEP    5% AEP    2% AEP    1% AEP
Emptying Time (days)    0.5     1.0       1.5       2.0        2.5       3.0       3.5
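A preliminary check of emptying time against these interim targets can be made by dividing the stored depth by a moderated drainage rate through the basin floor. The sketch below assumes constant-rate drainage through the base only, with a moderation factor allowing for partial clogging; all values are illustrative assumptions.

    # Indicative emptying-time check against the Table 9.4.8 targets (illustrative values only).
    k_sat = 5.0e-6             # saturated hydraulic conductivity of the host soil (m/s)
    moderation_factor = 0.5    # allowance for partial clogging of the basin floor
    storage_depth = 0.4        # maximum ponded depth above the basin floor (m)

    drainage_rate = k_sat * moderation_factor                    # effective drainage rate (m/s)
    emptying_time_days = storage_depth / drainage_rate / 86400.0

    targets = {"1 EY": 0.5, "0.5 EY": 1.0, "0.2 EY": 1.5,
               "10% AEP": 2.0, "5% AEP": 2.5, "2% AEP": 3.0, "1% AEP": 3.5}
    design_event = "10% AEP"
    print(f"Estimated emptying time: {emptying_time_days:.1f} days "
          f"(target for {design_event}: {targets[design_event]} days)")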
Stormwater Harvest Ponds
A stormwater harvest pond is best suited to applications that target the harvesting and re-
use of larger quantities of stormwater for non-potable use. Below a certain size threshold a
pond may not be an economic way of storing water, in which case an alternative may be an
underground tank.
A stormwater harvest pond does not directly target the improvement of water quality,
however it can provide a minor contribution to this outcome in some circumstances where
there is suitable irrigation demand. Similarly a stormwater harvest pond does not directly
target the control of peak discharge but it may contribute to minor reductions in peak flow
downstream for smaller storms.
Available Guidelines
Design Considerations
Embankment Design
The size of the pond relative to the estimated catchment yield is a balance between capital
cost of construction and the reliability of supply. This sizing must be undertaken using a
water balance of the site with realistic estimates of rainfall, runoff and demand.
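Such a water balance can be set up as a simple daily simulation that tracks pond storage under inflow, direct rainfall, evaporation and demand, and reports the reliability of supply. The sketch below is indicative only; the constant input series, pond capacity, surface area and demand are placeholder assumptions that would be replaced with site data.

    # Minimal daily water balance for sizing a stormwater harvest pond (illustrative only).
    def pond_reliability(inflow, rain, evap, demand, capacity, area):
        """Fraction of days on which the full daily demand (m^3) can be supplied.
        inflow and demand are daily volumes (m^3); rain and evap are daily depths (m)."""
        storage, supplied_days = 0.0, 0
        for q_in, p, e, d in zip(inflow, rain, evap, demand):
            storage += q_in + p * area - e * area   # catchment inflow and rainfall less evaporation
            storage = max(storage, 0.0)
            take = min(d, storage)                  # supply what is available
            storage -= take
            storage = min(storage, capacity)        # excess spills via the outlet structure
            supplied_days += take >= d
        return supplied_days / len(demand)

    # Placeholder series (one year of constant values) - replace with measured or modelled data.
    days = 365
    rel = pond_reliability(inflow=[120.0] * days, rain=[0.002] * days, evap=[0.004] * days,
                           demand=[100.0] * days, capacity=5000.0, area=2000.0)
    print(f"Estimated supply reliability: {rel:.0%}")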
Liners
A stormwater harvest pond requires a low permeability liner with freeboard above the normal
maximum operating level. This can comprise a non-dispersive compacted clay liner or a
synthetic membrane.
A well utilised stormwater harvest pond is not normally full and will experience significant
fluctuations in water level. The liner may therefore need underdrainage to prevent excessive
groundwater pressures developing on the outer wall of the membrane.
Treatment
Depending on the anticipated end-use of the stored water, the water extracted from the
facility may require treatment to improve water quality to the required standard. This may
involve filtration using a graded sand filter or similar.
Drainage Structures
A stormwater harvest pond requires a suitable inlet structure armoured against erosion and
designed to accommodate potential inflows when the pond is fully drawn down.
The outlet structure typically comprises an enclosed conduit or spillway, with its invert set at
the maximum operating level. The capacity of this spillway should be designed with the
same level of consideration given to a detention basin spillway, with a capacity and
freeboard matched to the level of accepted risk.
4.6. References
ACT Department of Urban Services (1998), Design standards for urban infrastructure,
section 1 : stormwater, 1.9 Retarding Basins, p: 4.
Australian and New Zealand Environment and Conservation Council (ANZECC) (2000),
Australian guidelines for urban stormwater management, National Water Quality
Management Strategy Report No.10, Canberra, p: 72.
American Society of Civil Engineers (1985), Stormwater detention outlet control structures,
New York: ASCE.
Andoh, R.Y.G. and Declerck, C. (1999), Source control and distributed storage - a cost
effective approach to urban drainage for the new millennium?, Proceedings of the 8th
International Conference on Urban Storm Drainage, Engineers Australia, 30 August - 3
September, Sydney.
Argue, J.R. (2017), Water Sensitive Urban Design: basic procedures for 'source control' of
storm water - a handbook for Australian practice. Urban Water Resources Centre, University
of South Australia, Adelaide, South Australia, in collaboration with Stormwater Industry
Association and Australian Water Association. Adelaide.
Argue, J.R. (2013), WSUD and green infrastructure: a cost-effective and sustainable
strategy for urban re-development, Proceedings of the 8th International WSUD Conference,
25-29 November 2013, Gold Coast.
Argue, J.R. and Pezzaniti, D. (2007), WSUD (Stormwater) Practices - Learning from nature:
A discussion, Proceedings of the Rainwater and Urban Design Conference, IEAust. Sydney.
Argue, J.R. and Pezzaniti, D. (2009), The need for quantity-related criteria in regional WSUD
guidelines: A demonstration, Proceedings of the 2009 SIA NSW and VIC Annual
Conference, SIA NSW & SIA VIC, 7-10 July, Albury.
Argue, J.R. and Pezzaniti, D. (2010), The quantity domain of WSUD: Flooding and
environmental flows, Proceedings of Stormwater 2010: SIA National Conference, 9-12
November, Sydney.
Argue, J.R. and Pezzaniti, D. (2012), Use of WSUD 'source control' practices to manage
floodwaters in urbanising landscapes: Developed and ultra-developed catchments,
Proceedings of Stormwater 2012, National Conference of the Stormwater Industry
Association, 15-19 October, Melbourne.
Argue, J.R. and Scott, P. (2000), On-site Stormwater Retention (OSR) in residential
catchments: A better option?, Proceedings of the 40th Floodplain Management Conference,
NSW Floodplain Management Authorities, 9-12 May, Parramatta.
Brater, E.F., King, H.W. and Lindell, J.E. (1996), Handbook of hydraulics for the solution of
hydraulic engineering problems, 7th ed. New York: Mc-Graw-Hill.
Breen, L., Dyer, F., Hairsine, P., Riddiford, J., Sirriwardhena, V. and Zierholz, C. (1997),
Controlling sediment and nutrient movement within catchments, Industry report 97/9.
Melbourne: Cooperative Research Centre for Catchment Hydrology.
Bryant, D.B., Wilden, K.A. and Socolofsky, S.A. (2008), Laboratory measurements of tidal jet
vortex through inlet with developed boundary layer, Proceedings of the 2nd International
Symposium on Shallow Flows, p: 6.
Burns, M.J., Fletcher, T.D., Walsh, C.J., Ladson, A.R. and Hatt, B. (2013), Setting objectives
for hydrologic restoration: from site-scale to catchment-scale, Proceedings of the Novatech
8th International Conference, Lyon, France.
Coombes, P.J. (2005), Integrated Water Cycle Management: Analysis of Resource Security.
Water, 32: 21-26.
Coombes, P.J. (2009), The use of rainwater tanks as a supplement or replacement for onsite
stormwater detention (OSD) in the Knox Area of Victoria, Proceedings of the 32nd Hydrology
and Water Resources Symposium, 30 November to 3 December, pp: 616-627.
Coombes, P.J. and Barry, M.E. (2015), A systems framework of big data for analysis of
policy and strategy, Proceedings of the 2015 WSUD and IECA Conference, Sydney.
Coombes, P.J. and Barry, M.E. (2009), The spatial variation of climate, household water use
and the performance of rainwater tanks across Greater Melbourne, WSUD09 Conference,
Engineers Australia, Perth
Coombes, P.J. and Barry, M.E. (2007), Optimisation of mains trickle top-up volumes and
rates supplying rainwater tanks in the Australian urban setting, Proceedings of the 13th
International Conference on Rain Water Catchment Systems 2007, 21-23 August, Sydney,
Australia.
Coombes, P.J., Kuczera, G. and Kalma, J.D. (2000), Economic benefits arising from use of
Water Sensitive Urban Development source control measures, Proceedings of the 3rd
International Hydrology and Water Resources Symposium, IEAust. 20-23 November, Perth.
Coombes, P.J., Frost, A. and Kuczera, G. (2001), Impact of rainwater tank and on site
detention options on stormwater management in the Upper Parramatta River Catchment.
Newcastle: Department of Civil, Surveying and Environmental Engineering, University of
Newcastle.
Coombes, P.J., Babister, M. and McAlister, T. (2015), Is the science and data underpinning
the rational method robust for use in evolving urban catchments, Proceedings of the 36th
Hydrology and Water Resources Symposium, Engineers Australia, Hobart.
Coombes P.J., Boubli, D. and Argue, J.R. (2003), Integrated water cycle management at the
Heritage Mews development in Western Sydney, Proceedings of the 28th International
Hydrology and Water Resources Symposium, 10-13 November, Wollongong.
Coombes, P.J., Kuczera, G., Kalma, J.D. and Argue, J.R. (2002a), An evaluation of benefits
of source control measures at the regional scale, Urban Water, 4(4), 307-320.
Coombes, P.J., Frost, A., Kuczera, G., O'Loughlin, G. and Lees, S. (2002b), Rainwater tank
options for stormwater management in the Upper Parramatta River Catchment, Proceedings
of the 27th Hydrology and Water Resources Symposium, 20-23 May, Melbourne, pp:
474-482.
Department of Irrigation and Drainage (2000), Urban stormwater management manual for
Malaysia (Manual Saliran Mesra Alam Malaysia), volume 7, Detention (Chapter 18, 19, 20),
volume 8 Retention (Chapters 21, 22).
Department of Irrigation and Drainage (2012), Urban stormwater management manual for
Malaysia (Manual Saliran Mesra Alam Malaysia), ed. 2, Chapter 6: Rainwater Harvesting,
p.18.
Derwent Estuary Program (2012), Water sensitive urban design: Engineering procedures for
stormwater management in Tasmania, Hobart.
Facility for advancing water biofiltration (2009), Guidelines for filter media in biofiltration
systems, Melbourne: FAWB.
Hardy, M.J., Coombes, P.J. and Kuczera, G.A. (2004), An investigation of estate level
impacts of spatially distributed rainwater tanks, 2004 International Conference on Water
Sensitive Urban Design, Engineers Australia, Adelaide, Australia.
Healthy Waterways Water By Design (2006), Water sensitive urban design technical design
guidelines for South-east Queensland, Brisbane.
Hobart City Council (2006), Water sensitive urban design site development guidelines and
practice notes, Hobart: Hobart City Council.
Institution of Engineers, Australia (1985), Guidelines for the design of detention basins &
grassed waterways for urban drainage systems, April , pp: 11.
Joliffe, I.B. (1997), Roof tanks - A role in stormwater management, Proceedings of Future
Directions for Australian Soil and Water Management, 9-12 September, Brisbane.
Laurenson, E.M. and Kuczera, G.A. (1998), Annual exceedance probability of probable
maximum precipitation - report on a review and recommendations for practice. Report
prepared for the NSW Department of Land and Water Conservation and the Snowy
Mountains Hydro-Electric Authority.
Lea, F. C. (1942), Hydraulics, 6th ed. New York: Longmans, Green and Co.
Lee, A., Hewa, G., Pezzaniti, D. and Argue, J.R. (2008), Improving stream low flow regimes
in urbanised catchments using water sensitive urban design techniques, Australian Journal
of Water Resources, 12(2), 121-132.
Medaugh, F.W. and Johnson, G.D. (1940), Investigation of the discharge coefficients of small
circular orifices, Civil Engineer, 7(7), 422-424.
Melbourne Water (2010), Guidelines for the design and assessment of flood retarding
basins, p: 9.
New South Wales Department of Planning and Environment (2015), Interim rainwater
harvesting system guidelines (Building Sustainability Index: BASIX), Sydney: NSW
Government.
New South Wales Government (2004), Managing urban stormwater: Soils and construction,
1(4), Sydney: NSW Government.
O'Loughlin, G., Beecham, S., Lees, S., Rose, L. and Nicholas, D. (1995), On-site stormwater
detention systems in Sydney, Water Science and Technology, 32: 169-175.
Phillips, B.C. and Yu, S. (2015), How robust are OSD and OSR Systems?, Proceedings of
the 3rd International Erosion Control Conference and 9th International Water Sensitive
Urban Design Conference, 20-22 October 2015, Darling Harbour.
Phillips, B.C., Goyen, A.G. and Lees, S.J. (2005), Improving the sustainability of on-site
detention in urban catchments, Proceedings of the 10th International Conference on Urban
Drainage, IAHR/IWA, 21-26 August, Copenhagen.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation, Institution of Engineers, Australia, Barton, ACT.
Queensland Department of Energy and Water Supply (2013), Queensland urban drainage
manual, Provisional Third Edition, Chapter 5: Detention/retention systems, p: 17.
Steward, C.B. (1908), Investigation of flow through large submerged orifices and tubes,
Bulletin, No.216. Madison: University of Wisconsin.
United States Department of Interior: Bureau of Reclamation (1987), Design of small dams,
Washington: USBR.
Upper Parramatta River Catchment Trust (2005), On-site stormwater detention handbook,
4th edition, Parramatta: UPRCT, p: 212.
Vaes, G. and Berlamont, J. (2001), The effect of rainwater storage tanks on design storms.
Urban Water, 3(4), 303-307.
Walsh, C.J., Fletcher, T.D. and Burns, M.J. (2012), Urban stormwater runoff: a new class of
environmental flow problem. PLOSone DOI: 10.1371/journal.pone.0045814.
van der Sterren, M. (2012), Assessment of the impact of rainwater tanks and on-site
detention on urban run-off quantity and quality characteristics. PhD Thesis, University of
Western Sydney.
van der Sterren, M., Rahman, A. and Dennis, G.R. (2013), Quality and quantity monitoring of
five rainwater tanks in Western Sydney, Australia, Journal of Environmental Engineering,
139(3), 332-340.
van der Sterren, M., Rahman, A. and Ryan G. (2014), Modelling of a lot scale rainwater tank
system in XP-SWMM: A case study in Western Sydney, Australia, Journal of Environmental
Management, 141: 177-189.
Chapter 5. Stormwater Conveyance
Benjamin Kus
With contributions from the Book 9 editors (Peter Coombes and Steve Roso)
5.1. Introduction
Stormwater conveyance combines hydrological and hydraulic methods to safely convey
stormwater generated by rain falling on urban surfaces to an outlet. Analysis of conveyance
infrastructure typically includes the hydrology of sub-catchments that transfer rainfall runoff
to inlet structures feeding a network of other conveyance infrastructure including pipes, open
channels, roadways and open space.
Conveyance infrastructure is one of the many tools available to the designer for urban
stormwater management which is part of the process of managing the water cycle. For
example, a stormwater management strategy for an urban area will include a wide range of measures to manage stormwater runoff volumes and flow rates, such as on-site detention, bio-retention, rainwater tanks, stormwater harvesting and infiltration systems. These
may alter the inflows to and the design of the stormwater conveyance network as shown in
Figure 9.5.1. These volume management measures (as described in Book 9, Chapter 4) can
operate at different scales such as source and neighbourhood controls that alter inputs to
conveyance networks and regional controls that mitigate outflows from conveyance
networks.
This chapter focuses on the design and analysis of stormwater conveyance networks.
This section provides an overview of the philosophy and objectives for design of stormwater
conveyance networks. The primary focus of this section is hydraulics and hydrology, and
design safety requirements. Nevertheless, there are other important aspects that should be
considered during the planning and design of conveyance networks. These include
constructability, aesthetics, future maintenance, direct costs, long term economic factors,
and the potential liability created by a conveyance network. The design should also account
for the practicality of replacing conveyance infrastructure at the end of its design life.
A key hydraulic criterion is to define a conveyance network that restricts surface flows to
safe limits. The primary design requirement is that stormwater depths should not be greater
than a threshold value above the top of an inlet pit or the invert of a road gutter. This prevents inlet pits from filling to the brim under design conditions, which would inhibit stormwater flows from entering the conveyance network. The threshold depth is typically set by the relevant approval authority
and is in the order of 150 mm. Approval authorities also typically specify maximum velocities
of surface flows and minimum velocities of flows in conveyance infrastructure.
In situations where surface flows are conveyed through public places, including footpaths,
and roads, it is important to ensure that unacceptable hazards to people are
not created (refer to Book 6, Chapter 7). Keeping the depth and velocity-depth attributes of
surface flow within acceptable limits will minimise these hazards. When the primary purpose
of a pathway is for conveyance of stormwater, it will usually be more efficient to convey flows
in a dedicated watercourse that can accept higher velocity and depths of flows. These types
of flow paths can be designed for dual uses (stormwater conveyance and public access)
provided that the design ensures that people cannot be trapped by stormwater flows.
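The depth and velocity-depth limits referred to above can be checked at each location of interest with a simple calculation. The threshold values in the sketch below are illustrative assumptions only; the limits applied in practice should be taken from Book 6, Chapter 7 and the relevant approval authority.

    # Indicative pedestrian hazard check for a surface flow path (threshold values are assumptions).
    def acceptable_for_pedestrians(depth, velocity,
                                   max_depth=0.3,   # assumed depth limit (m)
                                   max_dv=0.4):     # assumed velocity-depth product limit (m^2/s)
        return depth <= max_depth and depth * velocity <= max_dv

    print(acceptable_for_pedestrians(depth=0.15, velocity=1.5))   # True  (D x V = 0.225 m^2/s)
    print(acceptable_for_pedestrians(depth=0.25, velocity=2.0))   # False (D x V = 0.5 m^2/s)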
These limits are intended to ensure that stormwater conveyance networks operate at given
levels of service without causing flooding of properties, nuisance or hazard to pedestrians
and to traffic on streets. An approval authority typically specifies the design AEP of the minor
and major storm events required for different land uses. Designs usually involve minor
system capacity criteria for design of conveyance infrastructure and major system
assumptions to ensure the urban area can safely cope with larger storm events as shown in
Figure 9.5.2.
Figure 9.5.2 shows that the minor system is used to define the performance of the
conveyance networks which include overland and bypass flows on roads, and performance
of conveyance infrastructure (such as pipes and culverts). The major conveyance system
includes the road profile and overland flow paths, and aims to ensure the safety of
pedestrian and vehicle traffic whilst avoiding property damage and risk to life. In the absence
of guidance from a consent authority, the design AEP storm events are selected to reflect
the importance of a facility or urban area and the consequences of failure. Some examples
are:
• Conveyance networks in streets: 0.5 EY to 5% AEP for minor flows, 2% AEP or 1% AEP
for major flows (refer to Book 9, Chapter 3) (note that the street profile is part of the major
conveyance network);
• Trunk conveyance networks: 1% AEP or higher, with checks on effects created by PMP
storm events;
• Stormwater quality and sediment control devices: 4 EY to 1 EY but may address the full
spectrum of rainfall frequencies (refer to Book 9, Chapter 3);
• On-site detention (refer to Book 9, Chapter 4): the requirements vary but should aim to
improve the performance of stormwater management scheme at a sub-catchment scale;
and
• Large detention basins (refer to Book 9, Chapter 4 and Book 9, Chapter 6) that may
endanger lives if failure occurs: 1% AEP with checks using probable maximum
precipitation (PMP) storm events.
Both design and analysis processes involve modelling the operation of a conveyance
network that is subject to critical rare storm events that produce maximum flow rates for the
selected AEP events. The selection of critical storm events will involve finding storm
durations or particular storm patterns within ensembles or continuous sequences of rainfall
that create maximum outputs for a particular location. Typically, the design of a conveyance
network is shaped and sized to cater for critical storms for selected AEP events. This
approach recognises that:
• Failures can occur in response to rare or extreme storm events or other factors such as
blockages due to poor maintenance, and exacerbating circumstances such as high tide
levels in coastal areas;
• Ideally the acceptable level of risk should be set by community values and economic
analysis; and
• The effects of potentially rare failures should be limited by providing a ‘fail safe’ system
that does not fail disastrously.
Analysis techniques should include sensitivity checks to ensure that damage and risks to
lives due to failures are limited. Some failures of the network and overflows can be expected
during major storm events, as shown in Figure 9.5.2, but the network should operate without
causing safety hazards or large-scale property damage.
Figure 9.5.3 shows that the stormwater design process includes hydrological modelling of
rainfall runoff from urban surfaces to generate inputs to hydraulic modelling of the
conveyance network. This process usually incorporates a hydrological model that translates
design or real rainfall patterns into design flow rates and volumes of stormwater arriving at
inlet structures within a conveyance network. A hydraulic model then converts these inflows
into flow characteristics (depths, elevations, widths, velocities, and volumes) throughout the
network. The design analysis then determines attributes of the conveyance infrastructure,
including pipe diameters and invert levels. The steps in the conveyance design process
include:
1. Define the real world situation to be modelled. This will include land use, demographics,
topography, urban form, local climate, upstream and downstream conditions, and location
within a river basin or waterway catchment.
2. Determine the objectives and design standards that should apply to the drainage network.
3. Locate any available rainfall runoff data that can be used to calibrate models used to
design the drainage network or collate the most appropriate parameters for the
catchment.
4. Choose the rainfall inputs, hydrological and hydraulic modelling methods for design or
analysis:
a. Rainfall inputs may be design storm temporal patterns of storm bursts or full volume
storms, ensembles of peak burst or full storms, or long sequences of real rainfall (refer
to Book 2 and ARR Data Hub).
b. The hydrology and hydraulic models may be hand calculations but will typically be
some form of computer model (refer to Book 5 and Book 7).
5. Analyse land uses, road and open space networks, and topography to develop the
connectivity of stormwater runoff processes throughout the catchment. This includes
gathering information such as:
7. Using topography, rainfall, land uses, the spatial location of other urban infrastructure and
knowledge of the capacity of various drainage inlet structures, define the spacing of
nodes in the conveyance network and the routing processes. The routing processes can
include gutter flows, overland flows, bypass flows and pipe, culvert or channel flows.
8. Calibrate or validate the hydrology and hydraulics of the existing catchment to any
gauged data or nearby flood frequency information or accepted parameters for the area.
9. Use the model to design the capacity and spacing in inlet structures, and to size the
conveyance infrastructure. This design process will be guided by the objectives and
design standards that are applied to the project at Step 2. This process includes:
a. definition of a trial layout of a drainage system made up of inlets, pipes, open channels,
and storages; and
10. Determine the adequacy and safety of the design for all relevant storm events.
11. Prepare plans, specifications and design reports and provide essential instructions on how to build the conveyance network.
12. Review the design, obtain approval from the required authorities and proceed with construction or implementation.
Urban stormwater conveyance networks usually have a dendritic or tree-like structure that transports stormwater by gravity. Stormwater runoff is collected using inlet structures (pits) in different branches that converge at junctions along main lines and flow toward an outlet. Inlet structures are located at the top of and along network branches.
Examples of underground pipe conveyance networks used in New South Wales and
Queensland are provided in Figure 9.5.4.
Figure 9.5.4 shows different configurations of conveyance networks used in New South
Wales and Queensland which highlights that a range of configurations are favoured across
different jurisdictions. For example, in some Queensland jurisdictions, pipes are located
under road centrelines and manholes at junctions in the conveyance network are used as
collectors from inlet pits. Differences in terminology also occur across jurisdictions. For
example, in New South Wales ‘kerb and gutter’ is used, while in Victoria and Queensland the
term ‘kerb and channel’ is employed.
In some cases, maintenance holes, junctions or junction boxes (pits) are provided as nodes
linking branches in the conveyance network. Other pits that are intended to overflow are
called surcharge pits, overflow pits, or ‘bubble up’ pits. In established urban areas, looped
networks may occur where additional pipes are added to provide additional conveyance
capacity which can change the behaviour of the original conveyance network.
Conveyance infrastructure (for example, pipes, culverts, channels and swales) is mostly constructed as straight sections with constant slope. Pipe and culvert conveyance
infrastructure are available in standard dimensions supplied by the manufacturers. For
example, the diameters of PVC pipes range from 90 mm to about 600 mm, and the
diameters of reinforced concrete pipes start at 225 mm and increase to over 2 metres. Road
authorities usually specify a minimum pipe diameter of 300 mm to 375 mm within road
reserves to improve maintenance outcomes.
It is vital that conveyance networks include overland flow paths to control major stormwater
runoff events. These overland flow paths should be within road profiles or through open
space and pedestrian pathways. Flow paths through private property should be provided as
a last resort and will require an easement (a legal instrument providing a right to drain
stormwater through a property and permitting authorities to enter the site for maintenance).
Overland flows directed through private property can create hazards and inhibit the
development and value of the property as the required easement cannot be blocked or built
upon.
Conveyance infrastructure (pipes) is designed to limit surface flows on roads to avoid
nuisance to pedestrians and motorists. This process incorporates the design of roads
including profiles of high locations (most often along road centrelines) and low locations
(most often the inverts of gutters). The trapped low points in road networks require the
provision of sag pits which will usually inform the required network of conveyance
infrastructure (pipes) that can be realised by ‘joining up the dots’ between pits as shown in
Figure 9.5.5.
Figure 9.5.6 indicates the location of inlet pits at the top of a conveyance network. The street
gutters are part of the conveyance network and are utilised to transport stormwater towards
inlet pits that are situated at intervals to ensure that acceptable flows are carried in road
gutters. The width and depth of flows in gutters are limited to allow unimpeded access for
pedestrians and vehicles. A maximum width of gutter flow of 2 m to 2.5 m with a maximum
flow depth of 150 mm is generally acceptable. Local authorities typically provide guidance on
these values. Locations of inlet pits to ensure adequate conveyance of stormwater may also
be determined from percentages of stormwater runoff captured by each pit, and the depth of
flow and the velocity-depth product of flows in the road gutter.
A designer typically prefers collecting all stormwater runoff from the upper side of the street
as shown in Figure 9.5.6 with an inlet pit at Point D which avoids the need for a conveyance
branch in the street. This possibility is evaluated by establishing a trial location (A) where
stormwater runoff from the corresponding catchment is calculated and the corresponding
width of gutter flow is estimated. The width of flows will increase along the gutter length as
the areas of contributing catchments increase. A pit must be located whenever any of the limiting criteria (such as flow width, depth, or velocity-depth product) is reached. Note that
the design process is about limiting surface flows.
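The evaluation of gutter flow widths described above can be sketched using the HEC-22 (NHI, 2013) form of the Manning Equation for a triangular gutter section. The roughness, cross-fall, longitudinal slope, flow rates and allowable spread in the sketch below are illustrative assumptions.

    # Indicative gutter spread (flow width) check for a triangular gutter section,
    # using the HEC-22 form of the Manning Equation (SI units). Values are illustrative.
    def gutter_spread(q, n=0.013, crossfall=0.03, slope=0.01):
        """Return flow width T (m) for flow q (m^3/s) in a triangular gutter."""
        return (q * n / (0.376 * crossfall ** (5.0 / 3.0) * slope ** 0.5)) ** (3.0 / 8.0)

    max_width = 2.5   # assumed allowable spread (m) - set by the approval authority
    for q in (0.01, 0.03, 0.06, 0.10):
        t = gutter_spread(q)
        flag = "pit required upstream" if t > max_width else "acceptable"
        print(f"Q = {q:.2f} m^3/s -> spread = {t:.2f} m ({flag})")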
Capture of all stormwater runoff at inlet pit B reduces surface flow to zero just downstream of
the pit, with surface flows increasing again along the gutter due to lateral inflows from the
catchment. However, it is unlikely that on-grade pits will capture all stormwater runoff from
catchment areas during minor storm events that create bypass flows downstream of the pit.
This is shown at inlet pit C where the flow width increases along the gutter, reduces at the pit, and bypass flow leaves some width of flow just downstream of the pit. The flow widths along the gutter
will typically follow a saw-tooth pattern.
Figure 9.5.6 also highlights that an inlet pit must be located upstream of a tangent point at
an intersection to prevent excessive surface flows at the kerb return. Bypass flows from this
inlet pit are collected at inlet pit D. The other pits at the intersection are located along the
path of surface overflows to collect both minor and major overflows. This configuration of
inlet structures (pits) allows pedestrians to cross at street corners without being exposed to
large widths of flows.
The location of inlet pits in the conveyance network may also be driven by a need to provide
an inlet at a significant location, such as near a school with street crossings or at a change in
road alignment. Aspects of good design practice include location of inlet pits upstream of
driveways and avoidance of clashes with other services. A conveyance network also
includes additional pipe connections from private property that should be incorporated in the
design or analysis of the conveyance network for the street. This may include directly
connected pipes from sources such as inter-allotment drainage, on-site volume management
systems (such as onsite detention and rain water tanks), or major commercial
developments. The first inlet pit in the conveyance network for the street may be receiving
considerable pipe flow from upstream private property.
The designer needs to decide on the density of inlet pits in the conveyance network. This
decision will typically be guided by the local authority. For example, the two arrangements of
inlet pits at an intersection may be acceptable in two different scenarios as shown in
Figure 9.5.7.
The transition into the computer age saw urban conveyance networks designed and analysed using drainage software or spreadsheets that often implemented the Rational Method. The sizing of conveyance infrastructure was based
on estimates of peak flows. Increases in computing power allowed greater access to
software that integrated hydrology and hydraulics to more accurately analyse or design
conveyance networks using hydrograph methods. Additional details about urban modelling
approaches are provided in Book 9, Chapter 6.
Design and analysis methods have evolved well beyond the conveyance networks as envisaged in 1987 (Coombes, 2015). Since the 19th century, the Rational Method and hand calculations have evolved into modern rainfall runoff models (refer to Book 9, Chapter 3). The catchment area has been subdivided into sub-catchments. Average
rainfall intensity derived from storm bursts has been modernised to include temporal
patterns, spatial variation, relationships between different burst rainfall depths and durations,
and the capture of partial area effects. The runoff coefficient for estimation of stormwater
runoff has been replaced with processes that account for the degree of urbanisation and
spatial distribution of different land uses, addition of loss models to determine rainfall excess,
accounting for pervious and directly or indirectly connected impervious surfaces, and
inclusion of depression storages.
These evolving models account for modern urban features including distributed storages
such as rain water tanks, bio-retention and on-site detention; detention basins and the
spatial distribution of urban features (refer to Book 9, Chapter 4). Modern design criteria
include analysis of the volume, timing and frequency of stormwater runoff in addition to peak flow rates and water quality requirements. This is done to mimic natural regimes of
volumetric flows to protect waterway health (Walsh, 2004; Walsh et al., 2016). Management
of the volume of stormwater runoff and the frequency of runoff events from urban
catchments is now seen as a key design objective to mitigate downstream flooding and
protect the health of urban waterways.
Predictions of peak stormwater flows using the Rational Method may not adequately
represent the fundamental processes occurring within contemporary urban catchments
(Coombes et al., 2015). This concern is particularly relevant to modern stormwater
management methods, such as Water Sensitive Urban Design (WSUD), that include
cascading integrated solutions involving retention, slow drainage via vegetation, harvesting
and reuse of stormwater and the disconnection of impervious surfaces. These distributed
solutions within catchments alter runoff volume and timing in a variable manner throughout a
catchment (refer to Book 9, Chapter 3). These dynamics are more likely to be revealed by
advanced analysis methods. Importantly, provision of optimum designs for urban stormwater
management is dependent on testing solutions across the full range of urban dynamics. The
limited urban data available for characterising the parameters underpinning the urban
Rational Method for average urban conditions remains a challenge.
The design procedure using computer models is typically implemented more easily and
accurately than the simpler design methods. A main advantage is the ability of a computer
model to rapidly perform design procedures once a system is set up and the necessary data
is entered. In addition, use of computer software allows simultaneous analysis of both minor
and major storm events to adequately size inlet structures and conveyance infrastructure,
ensuring safe overland flow outcomes.
The procedures for design of conveyance networks have evolved from simplifying
assumptions required for hand calculations, such as assuming that pipes are flowing full but
not under pressure. Modern methods include more calculations and checks, and can apply
unsteady flow hydraulic simulations throughout conveyance networks. These complex
calculations are implemented using computers. The number of calculations is now so large that simple numerical checks using hand calculations are not practical. However, ‘sanity checks’ can (and should) be made to compare model results against simplified procedures such as estimating flow rates per unit area. These simple checks will produce estimates that differ from the results of computer models; however, the process should assist in avoiding gross errors.
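One such check is to compare a modelled peak flow with a simple per-unit-area estimate, for example a Rational-Method-style calculation. The runoff coefficient, rainfall intensity and catchment area below are illustrative assumptions rather than recommended values.

    # Indicative 'sanity check': compare a modelled peak flow with a simple per-unit-area estimate.
    def rational_peak(c, i_mm_per_h, area_km2):
        """Rational-Method-style estimate: Q (m^3/s) = C.i.A / 3.6 with i in mm/h and A in km^2."""
        return c * i_mm_per_h * area_km2 / 3.6

    model_peak = 1.9                                                      # peak flow from the model (m^3/s)
    check_peak = rational_peak(c=0.8, i_mm_per_h=60.0, area_km2=0.15)     # illustrative inputs
    ratio = model_peak / check_peak
    print(f"Simple check: {check_peak:.2f} m^3/s, model: {model_peak:.2f} m^3/s, ratio: {ratio:.2f}")
    if not 0.5 < ratio < 2.0:
        print("Large discrepancy - review model inputs for gross errors.")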
Peak flowrates and hydrographs calculated by rainfall-runoff models are inputs to hydraulic
models that determine the characteristics (elevations, depths, widths and velocities) of
stormwater flows throughout catchments. Hydraulic modelling is based on physics and
requires that the geometry of components of a conveyance network should be carefully
defined. Key hydraulic concepts such as continuity (conservation of mass), energy and Bernoulli's Equation are covered in Book 6, Chapter 2. The friction equations, including
Darcy-Weisbach, the Manning and the Colebrook-White Equations (Book 6, Chapter 2) are
all important considerations for the hydraulic design of conveyance networks.
A range of hydraulic models can be used to design conveyance networks. The performance
of the models can be illustrated by Hydraulic Grade Lines (HGL) and Energy Grade Lines
(EGL, also called Total Energy Line, TEL). These grade lines are described in books on fluid
mechanics and hydraulics and are useful for understanding flow phenomena.
Designers need to consider that the effectiveness of inflow structures is impacted by the
inflow of stormwater through grates or kerb inlets, and by the energy losses or pressure
changes that are created by inlet structures (refer to Figure 9.5.8). Historically, pit losses
were simplified as a single simple coefficient that approximates the reality of entry losses to
the pit, losses within the pit and exit losses from the pit. A simple single coefficient is
generally used for many different types of inlet structures.
It is desirable for an inlet structure to maximise collection of stormwater runoff. However, this objective must be balanced against the safety and convenience of pedestrians, cyclists and
motorists, and costs of infrastructure. Open pit structures that may provide the greatest inlet
capacity are unacceptable in most environments. The design of inlet structures must not
permit children to enter the pit or the conveyance network.
Grates and kerb inlet pits (also referred to as side entries or lintels) are typical inlet
structures that are deployed either separately or in combination. Capacities of inlet
structures can be improved by providing extensions to kerb inlets, deflectors (ribs or grooves
that direct water into an inlet), depressed grates and gutters, or clusters of inlet structures
that include adjacent installation of two or three standard pits. Grates and depressions of
inlet structures should not be hazardous to road users, including cyclists, and their use
should be avoided on busy narrow roads. Aspects of inlet structures for bicycle safety are
discussed by the U.S. Federal Highway Administration (Burgi and Gober, 1977).
There is limited information on simple relationships available for the capacity of many types
of inlet structures. Many investigations of pit entry capacities have utilised hydraulic models.
A range of significant historical studies were published by Burgi and Gober (1977), the
Australian Road Research Board (1979), NSW Department of Main Roads (1979) and
Marsalek (1982). More general information about capacities of inlet structure are provided by
Searcy (1969), Jens (1979), Marsalek (1982), Mills and O'Loughlin (1986), and Argue
(1986).
More recent laboratory experiments have examined capacities of different inlet structures at
the Manly Hydraulics Laboratory in NSW and at the University of South Australia. The
relationships obtained from laboratory tests do not extend to flow rates that may occur in
extreme flood events such as 1% AEP or probable maximum floods. However, these
relationships are still useful for most design problems as inlet structures in urban areas are
predominantly used to admit inflows from minor or more frequent events into conveyance
networks.
The US Federal Highway Administration (NHI, 2013) has published the general procedure
for determining inflow capacities of on-grade pits in their Hydraulic Engineering Circular No.
22 (HEC-22). The efficiency of various grate types and impacts on inlet capacities for a
range of approach grades and velocities are important considerations for urban conveyance
networks. In addition to grate and kerb inlets, the capacities of slotted drain inlet structures
are also relevant for locations where interception of wide sheet flow is desirable and low
sediment and debris is expected. The HEC-22 pit inlet procedures are a useful source of
information to aid design of inlet structures.
The capacities of sag pits are generally independent of upstream gutter slopes and are
governed by weir and orifice equations which are dependent on the depth of ponding. The
weir equations apply to flows that enter the pit at its edges or at the edges of bars in a grate.
Alternatively orifice equations are applied when water ponds above the inlet structure at
depths typically exceeding about 0.2 m. The depth of ponding increases to a threshold level
and stormwater will overflow as bypass flow by passing over a ‘weir‘ such as a road crown or
driveway hump or wall.
The approach and cross-fall grades of roads can affect the availability of storage volumes
surrounding sag pits which can indirectly affect the overall behaviour of sag pits. These
issues can be considered using hydrodynamic analysis of sag pits as small detention
structures. Sag pits must have sufficient inflow capacity to accept the total inflow of stormwater runoff and so avoid undesirable ponding of stormwater in intersections (where it obstructs turning traffic), onto footpaths, into adjacent private properties or basement car parks, or over the crown of a road during a minor storm.
Basic calculations for determining approximate inlet relationships for grated sag pits were
derived by Searcy (1969). However, it is preferable to utilise the HEC 22 procedures rather
than the sag pit Equation (9.5.1) and Equation (9.5.2) when side entry inlet relationships are
required.
For a grate, Equation (9.5.1) or Equation (9.5.2) applies, where
P is the perimeter length of the pit excluding the section against the kerb (m) (bars can be disregarded), and
A is the clear opening of the grate (m²), i.e. total area minus area of bars.
The relationship for inlet capacity between depths of 0.12 m and 0.43 m is described by ARR 1987 as indefinite and Equation (9.5.1) was recommended in that situation. For an inlet structure that is not located in a depression, corresponding weir and orifice relationships are recommended, in which
A is the clear opening of the grate (m²), i.e. total area minus area of bars.
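For illustration, the HEC-22 (NHI, 2013) weir and orifice relationships for a grated sag inlet can be evaluated as sketched below. The coefficients (1.66 and 0.67 in SI units) are the HEC-22 values and are not necessarily identical to those in Equation (9.5.1) to Equation (9.5.4); the pit dimensions are illustrative assumptions.

    # Indicative sag (grated) pit capacity using HEC-22 style weir and orifice relationships (SI units).
    # Coefficients are the HEC-22 values; pit dimensions are illustrative assumptions.
    from math import sqrt

    G = 9.81      # gravitational acceleration (m/s^2)
    CW = 1.66     # weir coefficient (SI)
    CO = 0.67     # orifice coefficient

    def sag_grate_capacity(depth, perimeter, clear_area):
        """Approximate inflow capacity (m^3/s) of a grated sag pit at a given ponded depth (m)."""
        q_weir = CW * perimeter * depth ** 1.5                 # shallow ponding: flow enters over the grate edges
        q_orifice = CO * clear_area * sqrt(2.0 * G * depth)    # deeper ponding: grate behaves as an orifice
        return min(q_weir, q_orifice)                          # governing capacity (a common conservative simplification)

    perimeter = 1.8    # grate perimeter excluding the side against the kerb (m)
    clear_area = 0.20  # clear opening of the grate (m^2)
    for d in (0.05, 0.10, 0.20, 0.30):
        print(f"depth {d:.2f} m -> capacity {sag_grate_capacity(d, perimeter, clear_area):.3f} m^3/s")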
Charts of the inlet capacity of depressed kerb inlets at sag points are provided by Searcy
(1969) and in (NHI, 2013).
Calculation of relationships for inlet capacity of on-grade pits is more complex than for sag
pits as several factors can change the capacity of inlets. These factors include:
• grade of the approach gutter (or channel) which will vary flow velocity;
• road cross-fall which impacts the flow width and consequently the maximum allowable
flow depth at the inlet;
• entry conditions leading into the pit chamber such as gutter depressions (Figure 9.5.10)
and the angle of the throat (inlet to the pit) (Figure 9.5.11).
Figure 9.5.10. Kerb Inlet Gutter Depressions from HEC-22 (NHI, 2013)
Figure 9.5.11. Kerb Inlet Throat Angles from HEC-22 (NHI, 2013)
The basic calculations to determine approximate relationships for inlet capacities of grate,
side entry and combination inlets are provided by Searcy (1969). However, Equation (9.5.3)
and Equation (9.5.4) should not be used in preference to HEC 22 procedures which have
been hydraulically tested, and where the efficiency of various grate types is provided along
with calculations for throat entry conditions. As an illustration, typical relationships for 1 m
and 2 m on-grade kerb inlets, derived using the HEC 22 procedures, are shown in Figure 9.5.12.
Additional Information
Many different types of inlet structures are used across Australia and this chapter has only
discussed some of the configurations. It is recommended that local capacity relationships,
knowledge and experience, and types of inlet structures should be employed in designs for
urban stormwater conveyance networks. In the absence of mandated design procedures
that may be provided by a local authority, preference should be given to local knowledge
and experience, and to laboratory based methods. The designer and local authority should
also accept first principles hydraulic analysis and evolving science in the selection of inlet
capacities.
Additional resources available to the designer include those provided by local and state authorities such as Vic Roads and the Queensland Urban Drainage Manual (QUDM, 2013), and older resources including the National Capital Development Commission (1981), the Victoria Country Roads Board (1982) and the New South Wales Department of Housing (1987).
The usual pit entry capacity relationships may not be adequate for analysis of conveyance
networks subject to major rainfall events. In these situations, larger depths of surface flows,
velocities and loads of debris may occur, and the inlet capacities of pits will be reduced by blockage (refer to Book 6, Chapter 6 for additional discussion). A blockage factor of 50% is generally applied for sag pits for minor
and major systems in situations when experimental results or observations are not available.
The blockage factor for on-grade pits can vary from 0% to 20% in response to local
conditions. Additional advice on blockage factors is provided by Weeks et al. (2013) as
shown in Table 9.5.1. Higher blockage factors are often applied for events rarer than the 1%
AEP.
Table 9.5.1. Suggested Design and Severe Blockage Conditions for Inlet Pits (Book 6, Chapter 6)

Type of structure       Inlet configuration                     Design blockage                                               Severe blockage
Sag kerb inlets         Kerb inlet only                         0-20%                                                         100% (all cases)
                        Grated inlet only                       0-50%
                        Combined inlets                         Capacity of kerb opening with 100% blockage of grate
On-grade kerb inlets    Kerb inlet only                         0-20%                                                         100% (all cases)
                        Grated inlet only (longitudinal bars)   0-40%
                        Grated inlet only (transverse bars)     0-50%
                        Combined inlets                         10% blockage of combined inlet capacity on continuous grade
Ultimately relationships for the capacity of inlet structures determine the magnitudes of
bypass flows and are an essential consideration in the design of conveyance networks.
Designers must ensure that flow widths, depths and the product of depth and velocity are within appropriate limits at locations upstream and downstream of an inlet structure.
These factors can be controlled by the careful location of inlet structures and by limiting
bypass flows using infrastructure with sufficient inlet capacities.
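The effect of a design blockage factor on captured and bypass flows can be expressed as a simple calculation, as sketched below. The unblocked inlet capacity and blockage factor are illustrative assumptions; in practice the capacity relationship would come from the sources discussed above.

    # Indicative effect of a design blockage factor on pit capture and bypass flow.
    # Unblocked capacity and blockage factor are illustrative assumptions.
    def capture_and_bypass(approach_flow, unblocked_capacity, blockage_factor):
        """Return (captured, bypass) flows in m^3/s for an on-grade pit."""
        effective_capacity = unblocked_capacity * (1.0 - blockage_factor)
        captured = min(approach_flow, effective_capacity)
        bypass = approach_flow - captured        # carried downstream to the next pit
        return captured, bypass

    captured, bypass = capture_and_bypass(approach_flow=0.120,
                                          unblocked_capacity=0.100,
                                          blockage_factor=0.20)   # within the on-grade design range in Table 9.5.1
    print(f"Captured: {captured:.3f} m^3/s, bypass to next pit: {bypass:.3f} m^3/s")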
Energy losses and pressure changes occur at inlet structures when the connected conduits (pipes) are full and surcharging in response to pressure flows. These losses at pits are offset
by the increased capacity of pressurised pipes and the entire pressurised conveyance
network may cope with greater flow rates. Energy losses at inlet structures are expressed as
a function of the velocity V0 in the outlet or downstream pipe, typically in a form such as Δh = k·V0²/(2g), where k is a dimensionless loss coefficient and g is the gravitational acceleration.
This energy loss at the inlet structure creates a change in the total energy line (TEL) as
shown in Figure 9.5.13. The associated change in the hydraulic grade line (HGL) is likely to
be different in response to different pipe diameters and flow rates upstream and downstream
of the structure. The position of the HGL is important to designers as it determines the
location of the water surface and the degree of surcharge or overflow which may occur at
that location in the conveyance network.
The associated pressure head change is typically expressed as ΔP/γ = ku·V0²/(2g), where ΔP/γ is the pressure head change (m) corresponding to a change of pressure of ΔP (kN/m²), γ is the specific weight of water (kN/m³), and ku is a dimensionless pressure change coefficient.
A similar relationship can be applied to water levels within inlet structures, which may be slightly higher than the HGL level due to the conversion of some kinetic energy to pressure energy when stormwater flows through a pit, typically expressed as WSE = kw·V0²/(2g), where WSE is the elevation of the pit water surface (m) relative to the downstream HGL elevation and kw is a dimensionless coefficient.
Studies using hydraulic models can be used to derive reliable values of energy losses and
pressure changes for different types of pits and junctions. A significant study by Sangster et
al. (1958) dealt with pipes flowing full and produced a set of design aids for a selected
configuration of inlet structures which are now called “Missouri Charts”. Hare (1980); Hare
(1983) produced information on other configurations. The charts are complex and provide
many possible geometric configurations of inlet structures. Careful judgement is required to
select the appropriate chart for a particular configuration of a structure, and in practice,
iterative calculations are required to converge to a suitable value of the pressure loss
coefficient.
This iterative process can be quite time consuming for large conveyance networks. Attempts
have been made to replace dependence on charts with semi-analytical methods. These
range from relatively simple methods suggested by Argue (1986), Hare et al. (1990) and Mills
and O'Loughlin (1998) to more in-depth methods suggested by Parsell (1992) and the US
FHWA HEC-22 procedure from which the algorithm described by GKY and Associates Inc
(1999) and Stein et al. (1999) has been developed. The FHWA HEC-22 procedure was
developed using research and laboratory efforts improving the methodologies of the
‘Corrective Coefficient Energy-Loss Method’ (Chang and Kilgore, 1989) and the ‘Composite
Energy Loss Method’ (Chang et al., 1994). It is also the only method which considers part-full
and full pipe flow, drops in pits and other situations.
A summary paper by O'Loughlin and Stack (2002) compared the different algorithms and
could not find significant differences which suggested that no single method was superior.
However, the information indicated that a viable algorithm can be developed, and that further
testing and development is required for the methods to acceptably match the full range of
configurations of inlet structures provided by Hare (1983). The FHWA algorithm appears
to provide a significant advance in the determination of head losses and pit pressure
changes in stormwater conveyance networks. Comparisons with alternative algorithms and
experimental data indicated that simpler methods may provide equivalent results for losses.
Determining pressure changes in practice is complex due to the many possible geometric
configurations of inlet structures. Geometric configurations of pits can vary according to:
• a number of secondary factors, including slopes of pipes, shape and size of inlet structure,
depths of sumps in the structure below the invert of the outgoing pipe, streamlining (or
benching) of the pit and the entrance to the outlet pipe, and location of the confluence of
the incoming pipes.
• ratios of grate flow entering the top of structures compared to the outflow; and
• tailwater levels.
The design calculations typically need to be repeated to achieve converging values. When
designing to satisfy a freeboard requirement, revised coefficients may lead to circular
alteration of pit and pipe inlet capacities which requires the designer to intervene.
An analysis of the hydraulic grade line of a pipe requires an estimated value of ku at each
inlet structure. Some government authorities may provide suggested values and
experienced designers are likely to have developed ‘rule-of-thumb’ methods for determining
these initial estimates of ku. Engineers are encouraged to use these methods in hydraulic
design wherever the methods have proven to be effective.
Guidance for initial estimates of ku is provided in Figure 9.5.14 for a range of common pit
configurations. These are not absolute or recommended values for final analysis of a
network and are only indicative starting points of iterations required to converge to a final
value. These estimations assume shallow pipes with typical minimum covers and no
increases in outlet pipe diameters. Deeper inlet structures may increase values of ku and
increases in outlet pipe diameters may reduce values of ku.
Figure 9.5.14. Approximate Pressure Change Coefficients, ku, for Inlet Structures
Simplified Approach
As discussed earlier, simplified design methods are available such as those presented by
Mills and O'Loughlin (1998), Hare et al. (1990), and Argue (1986). Although these simpler
methods may provide similar results to more complex semi-analytical methods, further
laboratory research and development was recommended to account for the full range of pit
configurations considered by the original Missouri Charts (Sangster et al., 1958) and by Hare
(1980). Whilst simplified design methods may be considered for use during simple, non-
critical pit and pipe network designs, use of Missouri Charts and Hare’s results is preferred.
Recommended Approach
The Missouri Charts (Sangster et al., 1958) and the results from Hare (1980) remain widely
accepted and are relevant to an estimated 85% of the possible configurations of inlet
structures. The example charts presented in Figure 9.5.15 and Figure 9.5.16 are based on
this information (QUDM, 2013). The first chart (Figure 9.5.15) was derived from the original
Missouri Chart 2 with modification from the Department of Transport (1992) for an inlet
structure with grate flow only. The pressure change coefficient ku depends on the
submergence ratio S/Do and iterative calculations are required.
The second example chart (Figure 9.5.16) was modified from the Missouri Chart 4 to include
the results from Hare (1980). The inlet structure accommodates flows straight through the pit
for a submergence ratio S/Do of 2.5 and also considers inflows through grates. Here ku
depends on the ratio Du/Do, with curves provided for flow ratios Qg/Qo ranging from 0 to 0.5. A
correction factor needs to be added from Table 9.5.2 when the submergence ratio S/Do does
not equal 2.5.
Table 9.5.2. Correction Factors for ku and kw for Submergence Ratios (S/Do) not Equal to 2.5
(QUDM, 2013)
S/Do    Qg/Qo = 0.00    0.10    0.20    0.30    0.40    0.50
1.5     0.00            0.11    0.22    0.33    0.44    0.55
2.0     0.00            0.04    0.08    0.12    0.16    0.20
2.5     0.00            0.00    0.00    0.00    0.00    0.00
3.0     0.00           -0.03   -0.06   -0.09   -0.12   -0.15
3.5     0.00           -0.04   -0.08   -0.12   -0.16   -0.20
4.0     0.00           -0.05   -0.10   -0.15   -0.20   -0.25
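As an illustration of how Table 9.5.2 could be applied within a calculation, the following Python sketch stores the tabulated corrections and linearly interpolates for intermediate values of S/Do and Qg/Qo. Interpolating between the tabulated values is an assumption made here for convenience rather than a requirement of QUDM (2013).

import numpy as np

S_DO = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
QG_QO = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
CORRECTION = np.array([
    [0.00, 0.11, 0.22, 0.33, 0.44, 0.55],
    [0.00, 0.04, 0.08, 0.12, 0.16, 0.20],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, -0.03, -0.06, -0.09, -0.12, -0.15],
    [0.00, -0.04, -0.08, -0.12, -0.16, -0.20],
    [0.00, -0.05, -0.10, -0.15, -0.20, -0.25],
])

def ku_correction(s_do, qg_qo):
    # Interpolate along Qg/Qo within each row, then along S/Do between rows.
    row_values = np.array([np.interp(qg_qo, QG_QO, row) for row in CORRECTION])
    return float(np.interp(s_do, S_DO, row_values))

# Example: S/Do = 3.2 and Qg/Qo = 0.25 gives a small negative correction to ku
print(round(ku_correction(3.2, 0.25), 3))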
Large energy losses and pressure changes can be avoided by attention to simple rules in
detailed design and construction. One principle is to ensure that jets of water emerging from
inlet pipes do not impinge directly on pit walls. Wherever possible the stormwater jets from
inflow should be directed into outlet pipes. Hare (1983) states that changes of flow direction
should generally occur on the downstream face of pits, rather than at the upstream face or
centre. Losses may be reduced by use of curved pipelines, precast bends and slope junction
fittings at changes of flow direction. Typical loss factors for these fittings are:
Benching
The recommended Missouri Charts do not include the effect of benching to reduce energy
losses. Potential decreases in pressure change coefficients as a result of benching are
provided in Figure 9.5.17, (Table 7.16.4 in QUDM (2013)).
Computer Models
Various procedures have been implemented in computer software. Some unsteady flow
computer programs allow for pressure losses in rather simplistic ways, such as increasing
pipe friction factors to include estimated pressure losses. Other complex procedures
employed by computer software include:
• numerical methods.
The necessary calculations combine these inflows and route them throughout a network by
determining the water depths and velocities in the conveyance infrastructure. Simpler
methods or models can do this for steady flows with unchanging flow rates whereas more
complex models are required for unsteady and time-varying flow rates. Hydraulic grade lines
(HGL) and energy grade lines (EGL) can be used to define flow depths, pressures and
energies in conveyance networks as shown in Figure 9.5.18. Hydraulic models must allow
for overflows when water levels exceed limits or pass over barriers. Additional information
about hydraulic models is provided in Book 6 and Book 9, Chapter 6.
Figure 9.5.18(a) demonstrates a simple model that accepts peak inflows derived from a
hydrological model. It is assumed that steady flows occur in each pipe reach or link.
Hydraulic grade lines are assumed to be located at the obvert (upper inside surface) of pipes
and the flow condition is described as “flowing full but not under pressure”. Allowances for
local losses are provided by a small drop (up to 90 mm, depending on change of flow
direction) within inlet structures. The capacity of pipes can be calculated easily by applying a
friction formula such as the Manning Equation and accounting for the grade of the pipe. The
conveyance network is assumed to behave as a network of open channels and no
allowance is made for upstream or downstream surcharges.
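The pipe capacity check implied by this first model can be sketched as follows, applying the Manning Equation to a circular pipe flowing just full at the grade of the pipe. The diameter, grade and roughness shown in the example call are illustrative values only.

import math

def full_pipe_capacity(diameter, slope, n):
    # Manning Equation for a circular pipe flowing just full (not under pressure):
    # Q = (1/n) * A * R^(2/3) * S^(1/2), with R = D/4 for a full circular section.
    area = math.pi * diameter**2 / 4.0
    hydraulic_radius = diameter / 4.0
    return (1.0 / n) * area * hydraulic_radius**(2.0 / 3.0) * math.sqrt(slope)

# Illustrative check: 375 mm concrete pipe (n = 0.013) laid at a 1% grade
print(round(full_pipe_capacity(0.375, 0.01, 0.013), 3), "m3/s")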
Figure 9.5.18(b) shows a second approach to hydraulic analysis that also assumes steady-
state conditions where peak flows occur as pressure flows in pipes and the HGL is located
above or along the pipe obvert. This method includes energy losses and pressure changes
at inlet structures that are likely to be greater than open channel flow assumptions where
water levels are below pipe obverts. Capacity of pipes is also dependent on downstream
water levels which may create backwater effects on flows in pipes.
These methods accept peak flow from hydrological models and assume that peak flows
occur simultaneously throughout the conveyance network. Flow rates are constant within
each link and the calculated HGLs and EGLs represent upper envelopes of these flows. This
process will usually estimate lower pipe capacities than unsteady flow assumptions.
Figure 9.5.18(c) presents unsteady flow processes that are created by the inflow to the
conveyance network of full hydrographs typically generated in computer models and real
rainfall depths and patterns. The simulations account for the changes in water levels and
flow characteristics in pipes and the network throughout storm events. These processes
include dynamic effects such as fast-travelling waves generated by changes in flow
conditions that can create shock losses in the conveyance network. This model is applied
using computers that process and solve finite difference computations. A steady flow system
is assumed to be independent of time and only requires one set of calculations. However
calculations in computer models of unsteady flows are repeated for many time steps and
pipe reaches are divided into several sections during the calculation processes.
All three hydraulic models can be utilised for design and analysis. The first and simplest
method can be used for design of small networks where downstream conditions may
ultimately be varied to account for the actual behaviour of the conveyance network. These
adjustments may not be possible for design of a fixed conveyance network, and the
estimated capacities and impacts of the conveyance network may be incorrect.
The second method, with its assumed steady-state flows and a connected hydraulic grade line throughout the network, is more suitable for basic design and analysis tasks. This method is
likely to provide more efficient designs as it more closely reproduces real hydraulic
behaviour and allows for surcharging of pits and pressure flows. This process may be used
as a checking procedure by working backwards from the receiving water level towards the
top of the catchment.
This model was presented by ARR 1987 (Pilgrim, 1987) as the preferred hand calculation
method for hydraulic design of simple pipe networks. Calculations typically involve two
iterations for a conveyance network. The first iteration commences at the top of the
catchment, accumulating the flows arriving at each inlet structure, and allows for possible
bypass flows at pits. The calculated flows through the conveyance network are used to
determine the sizes of pipes and the invert levels at their ends whilst ensuring that HGLs do
not rise above a limit, usually 0.15 m below the surface level of inlet structures. This design
procedure involves a series of trials with increasing pipe sizes selected from the
commercially available diameters. The smallest commercially available pipe diameters that
meet the design requirements are typically selected.
The second iteration of the calculations commences at the outlet using a set tailwater level and
project the HGLs upward towards the top of the catchment by considering the HGL slope
due to pipe friction and the local pressure changes at each inlet structure. The previously
calculated flow rates, pipe diameters and water levels in pits can be used in design charts
such as the Missouri and Hare charts to determine local pressure changes at pits (refer to
Book 9, Chapter 5, Section 5). When the upstream process of calculations reaches an inlet
structure with two or more pipe branches, the calculations progress separately and upstream
in each branch.
This projection process can be employed for part-full pipe flows, and for pressurised and full-
pipe flows. However, the straight water surface profiles assumed for part-full flows will not be
exact. A more accurate procedure is to project water surfaces upstream using the gradually-
varied flow methods commonly called backwater curve computations.
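A minimal Python sketch of this upstream projection is shown below for a single branch of full-flowing pipes. The link attributes and the pit pressure change term ku·V²/(2g) are simplifications of the chart-based procedure described above, and the 0.15 m freeboard check reflects the design limit noted earlier; the sketch is indicative only.

def project_hgl_upstream(tailwater_level, links, g=9.81):
    # Project the HGL from a set tailwater level towards the top of the catchment.
    # Each link (ordered from the outlet upstream) is a dict with: length (m),
    # friction_slope (m/m), velocity (m/s), ku and pit_surface_level (m).
    hgl = tailwater_level
    results = []
    for link in links:
        hgl += link["friction_slope"] * link["length"]        # pipe friction loss
        hgl += link["ku"] * link["velocity"]**2 / (2.0 * g)   # pit pressure change
        freeboard = link["pit_surface_level"] - hgl
        results.append({"hgl": hgl, "freeboard_ok": freeboard >= 0.15})
    return results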
Some designers of conveyance networks are still using the simple, steady flow procedures.
However the unsteady flow models produce more realistic behaviours in response to hydrographs and flow volumes, and are essential for analysis of volume-sensitive systems that
include volume management facilities (refer to Book 9, Chapter 4). Modelling using unsteady
hydraulic (and hydrology) assumptions is the preferred method for detailed analysis of
conveyance networks that need to respond to strict constraints and where realistic modelling
of network behaviour is needed. This approach is also essential for analysis of existing
conveyance networks to replicate an existing deficit in performance or to reproduce a known
flooding problem. There are many software products currently available that can be utilised
for these types of analysis and design (refer to Book 9, Chapter 6).
Overland flow paths typically convey stormwater when the capacity of the minor system
conveyance network is exceeded as bypass flows between inlet structures along a kerb and
gutter in a street, along swales in rural or grassed areas, or sometimes undesirably through
private property. Calculations for overland flows are similar to open channels in that they can
be defined as a number of different channel sections with constant cross-sections and
slopes. However a key difference between overland flow paths and open channels is that
overland flow paths are typically limited to shallower flow depths to meet safe design criteria.
Open channels typically convey stormwater at greater depths and flow rates.
The depths, widths and velocities of overland flows should be limited to meet objectives for
safety and erosion. A range of conditions may be applied when a cross-section of a road is
to be used to convey major and minor flows, and the limiting factor is deemed to be the most restrictive criterion. These criteria include risks to pedestrians, particularly children, and the
importance of the road for transport purposes. The following conditions should apply when
guidance from a consent authority is not available:
• The depth of stormwater flows at the kerb (dg) should be limited to the lower side of a
street to prevent uncontrolled overflows from entering properties. For streets with 150 mm
high kerbs and a footpath with a substantial slope towards the gutter, a suitable limiting
depth may be 200 mm or to the height of a water-excluding hump on a property driveway
plus an appropriate freeboard. In addition a maximum width of flow should not be
exceeded in the carriageway. Greater depths may be tolerated where a street is
significantly lower than the land on both sides and in tropical areas with greater intensity
rainfalls. A suitable freeboard should apply to floor levels of habitable rooms in properties
adjoining the road.
• The product of depth and velocity (dg.V), with V being the average velocity in the gutter,
should not exceed 0.4 m2/s for safety of pedestrians, 0.3 m2/s for stability of parked
vehicles (depending on size), or as directed by the consent authority (refer to Book 6,
Chapter 7).
• Depths of stormwater flows should not exceed the height of the crown of the road during
minor storms or where flows are to be contained on one side of a street. This includes
locations that include ponding of stormwater such as at sag pits. Depending on the
importance of the road (local, collector, arterial) and the importance of access, limits on
width of flow of 2 to 2.5 m are typical.
• Widths of flows may be limited to allow clear lanes in the centre of a road for passage of
vehicles. Flow depths should not exceed the height of the crown of a road by more than
50 mm for major overland flow paths not considered part of the trunk drainage system and
in new development areas.
Dimensions of Flow
The Manning Equation can be used to calculate flows in trapezoidal style overland flow
paths. Sheet flow is commonly estimated using a version of the kinematic wave equation for
flow distances up to 130 m, after which sheet flows are concentrated into some form of
gully or defined overland flow path (NHI, 2013).
Equations for road gutters can be extended to calculate flows along full road cross-sections
during major events. For a given flow rate, the normal depth corresponding to steady,
established flow can be found by simple iterative calculations using a friction formula such as
the Manning Equation. Although these assumptions may not be entirely valid, the errors
involved may be generally acceptable.
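The simple iterative calculation of normal depth referred to above can be sketched as a bisection on the Manning Equation. The triangular section with a vertical kerb face used here is an assumed simplification of a real road cross-section, and the values in the example call are illustrative only.

import math

def triangular_flow(depth, z, n, slope):
    # Manning flow for a triangular section with a vertical kerb face and a
    # pavement side slope of 1 (vertical) to z (horizontal).
    if depth <= 0.0:
        return 0.0
    area = 0.5 * z * depth**2
    wetted_perimeter = depth + depth * math.sqrt(1.0 + z**2)
    r = area / wetted_perimeter
    return (1.0 / n) * area * r**(2.0 / 3.0) * math.sqrt(slope)

def normal_depth(q_target, z, n, slope, d_max=0.5, tol=1e-4):
    # Bisection for the depth that carries the target flow rate.
    lo, hi = 0.0, d_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if triangular_flow(mid, z, n, slope) < q_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative: 0.05 m3/s on a 3% cross-fall (z = 33.3), asphalt (n = 0.014), 1% grade
print(round(normal_depth(0.05, 33.3, 0.014, 0.01), 3), "m")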
Design charts of the flow capacities of roadway cross-sections can be prepared using
Equation (9.5.8). Allowable zones are defined by various limiting conditions and criteria as
shown by the example in Figure 9.5.19.
Figure 9.5.19. Flow Capacity Chart for One Side of an 8 m Carriageway with 3% Cross-fall
The following general equation developed by the U.S. Bureau of Public Roads (Searcy,
1969) is recommended to determine flows in streets. With reference to Figure 9.5.20(a), the
equation is:
where Q (m3/s) is the total flow rate which is estimated by dividing the section as shown in
Figure 9.5.20(a) and applying the equation by Izzard (1946) for a triangular channel with a
single cross-fall:
Q = 0.375 Z d^(8/3) S0^(1/2) / n    (9.5.9)
Where:
Zg and Zp are the reciprocals of the gutter and pavement cross-slopes (m/m),
Equation (9.5.8) can be applied in simplified form when flows are contained in a gutter or on
one side of a road. Clarke et al. (1981) estimated values for F of about 0.9 for simple
triangular channels and 0.8 for gutter sections of the type shown in Figure 9.5.20(a). These
assumptions may be used in the absence of more precise information. Typical values of
Manning's n are 0.012 for concrete, 0.014 for asphalt, 0.018 for flush seal and 0.025 for
stone pitchers (Dowd et al., 1980).
Where the face of a kerb is relatively steep it may be treated as vertical. Equation (9.5.8) can be applied to “lay-back” kerbs with sloping faces by assuming that Zg is equal to w/dg as defined in Figure 9.5.20(b).
Open channel flow equations, such as the Manning Equation, can also be used to determine
flows in lined gutters or unlined drains or swales.
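Equation (9.5.9) can be expressed directly in code. The composite factor F applied below follows the values attributed to Clarke et al. (1981) above; the exact way F enters Equation (9.5.8) depends on the section geometry in Figure 9.5.20(a), so this is an indicative sketch only, with illustrative input values.

def izzard_gutter_flow(z, depth, slope, n, f=1.0):
    # Equation (9.5.9): Q = 0.375 * Z * d^(8/3) * S0^(1/2) / n, scaled by the
    # composite factor F where a composite section is being approximated.
    return f * 0.375 * z * depth**(8.0 / 3.0) * slope**0.5 / n

# Illustrative: 80 mm depth at the kerb, 3% cross-fall (z = 33.3), n = 0.014,
# 1% longitudinal grade, F = 0.9 for a simple triangular channel
print(round(izzard_gutter_flow(33.3, 0.08, 0.01, 0.014, f=0.9), 3), "m3/s")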
Flow depths and widths for a specified flow rate can be determined using Equation (9.5.8).
Velocities are estimated by dividing the flow rate by the corresponding flow area. Travel
times for stormwater conveyance can be derived by dividing gutter length by flow velocity.
Distributed lateral inflows as shown in Figure 9.5.20(c) can generate flow rates and
characteristics such as width, depth and velocity that vary along a gutter. In this situation the
average flow velocity occurs at about 60% of the distance along the gutter towards the inlet
structure. Gutter flow calculations that use the total flow arriving at an inlet structure will
overestimate velocities impacting on the structure.
Other Considerations
Gutter flow times depend on flow rates and it is necessary to specify a time in order to
estimate a flow rate. A set of iterative calculations is therefore required. In these calculations, a
velocity or time is assumed, and a flow rate calculated. Then a check is undertaken to
determine whether the total time of flow in the overland and gutter flow paths agrees with the
original assumption.
A precise calculation of gutter flows must allow for concentrated inflows such as bypass
flows from an upstream pit at the upper end of the gutter or an outflow from a large site at
some point along the gutter. A representative design flow rate must be estimated to permit
calculation of the average velocity and travel time.
Parked vehicles and driveways may interrupt and widen surface flows. The limited
experimental evidence available suggests that these effects are localised. Allowance for this
effect may be needed for streets where close parking of vehicles is likely but specific
allowance does not appear necessary at other locations. The design process should account
for possible future alterations to gutter and road profiles including resurfacing of roads.
Effects of possible pit blockages must be assessed at locations where overflows may cause
significant damage.
It is also important to consider the longevity of an overland flow path and this is especially
relevant for flow paths through private property. Blockages are likely to occur due to lack of
maintenance, or by post construction modifications such as from garden beds and mulch, or
by modifications designed to enclose domestic pets.
It is often necessary to locate structures within minor overland flow paths including property
fencing, sound-control barriers and above ground services. When designing overland flow
paths that may contain these types of structures it is important to consider the potential for
flows to be redirected by these barriers.
5.6.3. The Hydraulic Grade Line (HGL) and Energy Grade Line
(EGL)
The hydraulic (HGL) and energy grade line (EGL) concepts are derived from the Bernoulli
Equation and assist with the analysis of complex flow problems. The HGL is determined by
plotting the relationship for pressure head p/γ and height z above an arbitrary datum at key locations in a conveyance network using the following equation:

HGL = z + p/γ    (9.5.10)

Similarly, the EGL adds the velocity head V²/(2g) to the HGL to provide a relationship for the EGL that can be derived at key locations in the conveyance network:

EGL = z + p/γ + V²/(2g)    (9.5.11)
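These two relationships translate directly into code; the values in the example call below are arbitrary and shown only to demonstrate the calculation.

def hgl(z, pressure_head):
    # Hydraulic grade line elevation (m): HGL = z + p/gamma
    return z + pressure_head

def egl(z, pressure_head, velocity, g=9.81):
    # Energy grade line elevation (m): EGL = HGL + V^2/(2g)
    return hgl(z, pressure_head) + velocity**2 / (2.0 * g)

# Illustrative: pipe centreline at 12.0 m, pressure head 1.5 m, velocity 1.8 m/s
print(round(hgl(12.0, 1.5), 3), round(egl(12.0, 1.5, 1.8), 3))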
The vertical distance from a point (such as the centre of a conduit) up to the HGL represents the pressure head or pressure energy at that point. Negative heads or partial vacuums may occur at siphons, where the conduit rises above the HGL. The HGL coincides with the water surface for
open channel flows, except at points such as brinks of weirs where non-hydrostatic
conditions prevail. Water rises to the level of the HGL in an inlet structure (pit) that acts as a
vertical riser.
The EGL is located above the HGL and represents the total energy (velocity + pressure +
potential) available to the flow that is expressed as a height (metres) equivalent to flow
energy per unit weight in joules (or newton-metres) per newton.
Grade lines typically slope downwards in the direction of flow in conveyance networks and
slope represents energy losses due to pipe friction. The HGL and EGL are parallel for steady
flows. The grade lines generally have a different slope to the pipe in closed conduits under
pressure (with the HGL above the pipe). The grade lines are parallel to the channel bed in open channels subject to steady, uniform flow, since the friction loss equals the potential energy loss represented by the slope of the conduit.
Changes in the shape or direction of conduits create turbulence and local losses that are
represented as sharp drops in EGLs. Significant energy losses are typically assumed to act
at the centre of inlet structures in analysis of conveyance networks. The HGL is also
assumed to change at the centre of inlet structures as illustrated in Figure 9.5.21. These
assumptions differ from the actual location of losses at the entry and exit of inlet structures
(pits).
Figure 9.5.21. Flow Behaviour in a Surcharged Pipe System Showing Energy Grade Lines
and Hydraulic Grade Lines
Changes in the shape or direction of conduits can create turbulence and local losses that
are represented as sharp drops in the EGL. Losses occur at entrances and exits to pits, pipe
bends, and at contractions, expansions, junctions, and valves in conveyance networks.
Except for expansions and contractions of conduits, these losses have the following
relationship:
h = k·V²/(2g)    (9.5.12)

where h is the loss (m) and k is a loss coefficient applied to the velocity head of the downstream flow.
The losses at bends depend on the radius of the bend and have a typical value of kb = 0.5.
Contractions in conduits (decreases in pipe diameter) are subject to low levels of losses with
a typical factor kc of 0.05. Expansions in conduits (increases in diameter) generate higher
losses hL that are dependent on the upstream Vu and downstream Vd velocities:
hL = kexp·(Vu − Vd)²/(2g)    (9.5.13)
Valves have variable loss factors which can become very large as the valve closes.
The estimation of flow rates through conveyance networks that are flowing full is made by
relating the available energy or head to the losses as expressed by the velocity head. The
following calculation shows how flowrates can be determined from the available head and
the assumed energy losses along a 300 mm pipe discharging from a reservoir as shown in
Figure 9.5.22.
The pipe diameter reduces from 300 mm to 200 mm at the middle of the pipe branch. The
energy loss at the following expansion is assumed to be 0.5 times the velocity head in the
downstream pipe and all friction values (f) in the Darcy-Weisbach Equation are set at 0.02.
The water level in the reservoir is 57.0 m above a height datum and the total head available
is 57.0 - 46.0 = 11.0 m. The various losses are all functions of the velocity heads in the
pipes. Since V3 = V1 and V2 = V1·(A1/A2) = V1·(D1/D2)², the sum of the losses will be:

(ke + f·L1/D1 + kc + f·(L2/D2)·(D1/D2)^4 + kexp + f·L3/D3 + kexit)·V1²/(2g) = 11 m    (9.5.14)

(0.5 + 0.02·430/0.3 + 0.1 + 0.02·(55/0.2)·(0.3/0.2)^4 + 0.5 + 0.02·385/0.3 + 1.0)·V1²/(2g) = 11 m    (9.5.15)
Thus,

84.28·V1²/(2g) = 11 m

V1 = (11.0 × 19.60 / 84.28)^0.5 = 1.60 m/s    (9.5.16)

and

Q = A1·V1 = (π/4)·(0.3)²·1.60 = 0.113 m3/s
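The arithmetic of Equation (9.5.14) to Equation (9.5.16) can be reproduced with a short Python script. The lengths, diameters and loss coefficients below are taken directly from this worked example; nothing else is assumed.

import math

g = 9.81
f = 0.02                          # Darcy-Weisbach friction factor for all pipes
d1, d2, d3 = 0.3, 0.2, 0.3        # pipe diameters (m)
l1, l2, l3 = 430.0, 55.0, 385.0   # pipe lengths (m), as per the example
head = 57.0 - 46.0                # available head (m)

# All losses expressed against the velocity head V1^2/(2g), per Equation (9.5.14)
k_total = (0.5                              # entrance loss
           + f * l1 / d1                    # friction in the first 300 mm pipe
           + 0.1                            # contraction to 200 mm
           + f * l2 / d2 * (d1 / d2)**4     # friction in the 200 mm pipe (V2 in terms of V1)
           + 0.5                            # expansion back to 300 mm
           + f * l3 / d3                    # friction in the final 300 mm pipe
           + 1.0)                           # exit loss

v1 = math.sqrt(head * 2.0 * g / k_total)    # Equation (9.5.16)
q = math.pi / 4.0 * d1**2 * v1
print(round(k_total, 2), round(v1, 2), "m/s", round(q, 3), "m3/s")   # ~84.28, 1.60, 0.113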
The Manning Equation can also be used, with friction losses expressed as hf = (2g·n²·L / R^(4/3))·V²/(2g), since the slope of the energy grade line is S = hf/L.
Conduits that are flowing partially full in stormwater conveyance networks can exhibit
complex behaviours. A maximum flow capacity is achieved when conduits are operating at
less than full flows. However it is not good practice to design conduits with partial flows as
disturbances may eliminate free surfaces in conduits and cause a transition to pressurized
full flows that may lead to surcharges.
This assumes that flows in conduits are open channel flows with atmospheric pressure at the
surface. Submergence at the entrance and tailwater levels affecting the outlet of conduits
generates further complications. In addition, large air bubbles and air pockets can occur in
conduits that operate in partially full conditions resulting in pressures that can be above or
below atmospheric pressure. The theory of open channel hydraulics is addressed in Book 6.
Complex Procedures
A more complex and correct procedure for analysis of conveyance networks is to apply
partial differential equations of unsteady flow varying in space (the distance x along a conduit) and time t, both of which are discretised into steps or intervals. These numerical models divide river,
channel or pipe reaches into segments and define the transfer of mass and momentum
between adjacent segments using the Saint Venant Equations for conservation of mass and
momentum in unsteady flows as described in Book 6, Chapter 2. The equations must be
solved iteratively using finite difference or finite element models and matrix calculations that
may require longer computing times.
These more complex calculation processes are quite different from water surface projection
methods such as the ‘standard step’ procedure. Nevertheless the same outputs are
produced, such as the HGL levels at points along a conduit and at different times during a flow
event. The equations allow for pipe friction and local losses, and also incorporate pressure
changes at inlet pits and junctions.
Modelling of urban conveyance networks is typically carried out using a range of computer
software packages that provide different levels of rigour or precision which involve trade-offs
between speed and accuracy. However the designer should be aware of other important
considerations such as stability. Unrealistically high or low pressures, water levels, and flow rates are generated when iterative calculations become unstable. The usual way of achieving stable results with a computer model is to choose a shorter time step or to adjust factors affecting the relative discretisation in space and time. Small errors in volumes or flows
(typically < 1%) can be accepted in order to achieve faster running times.
Preissmann Slot
Methods of analysis must allow for flows that change from partially full to full conduit flows
and back again. Modelling procedures that account for unsteady flow regimes employ the Preissmann Slot assumption. This mixed flow problem is simplified by the addition of a hypothetical slot in the pipe which allows the depth of flow to exceed the pipe diameter and reproduces pressurised flow effects (Yen, 1986; Butler and Davies, 2004). The slot must not be so wide that it significantly affects continuity, and its width should be determined so that the gravity wave speed equals the pressure wave speed.
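As a sketch of that sizing rule, the slot top width can be set so that the open channel gravity wave speed √(gA/T) matches an adopted pressure wave celerity, giving T = gA/a². The celerity used in the example call is an assumed value for illustration only.

import math

def preissmann_slot_width(diameter, wave_celerity, g=9.81):
    # Slot top width that makes the gravity wave speed sqrt(g*A/T) equal an adopted
    # pressure wave celerity a, i.e. T = g*A/a^2, with A the full-pipe area.
    area = math.pi * diameter**2 / 4.0
    return g * area / wave_celerity**2

# Illustrative: 600 mm pipe with an assumed pressure wave celerity of 50 m/s
print(round(preissmann_slot_width(0.6, 50.0), 4), "m")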
The hypothetical slot allows the analysis of the conveyance network to be treated as an
open channel flow problem. However, a limitation of this approach is that it cannot accurately
simulate the formation and impact of air pockets or negative pressures resulting from shocks.
Outlet Structures
Regardless of whether flow within the conduit is full or part full, a suitable transition is required
at the end of a conveyance conduit, where flow discharges to the receiving environment.
The transition structure, or outlet structure should accommodate potential for high velocity
and/or turbulent flow. This can be achieved through armouring of the surface using material
such as rock or concrete, along with gradual transition of geometry from that of the conduit,
to that of the receiving channel or basin. Energy dissipation and/or flow dispersion can be
achieved at the same time using appropriate outlet structure design. This is particularly
necessary where stormwater settlement processes are expected in a receiving basin
structure such as a bio-retention basin or constructed wetland.
The outlet structure may also represent an opportunity for removal of gross-pollutants prior
to stormwater passing into a receiving channel or structure. This can be achieved using
various forms of screens, baskets and mechanical filters. The impact of these structures,
whether clear, partially blocked or fully blocked on the hydraulic performance of the
conveyance infrastructure needs careful assessment at the design stage.
5.6.5. Culverts
The simplest conveyance network is a single-pipe culvert which is a common component of
highway and railway networks that is located wherever an embankment crosses a waterway
or drainage path. These transport crossings may only involve a single pipe (or multiple
parallel pipes). However, the hydraulic calculations can be complicated. Culvert hydraulics
are comprehensively described by Normann et al. (2005). The treatment of culvert
hydraulics (and associated headwalls) is divided into two flow conditions:
1. Inlet controls – dependent on the orifice effect at the culvert entrance; and
2. Outlet controls – dependent on full, pressurised flow conditions through the pipe or on
high tailwater levels.
Inlet Control
Inlet conditions for culverts are created by the vena contracta effects shown in Figure 9.5.23.
The streamlines of flows entering a culvert cannot turn abruptly and the curvature of flows
continues into the culvert creating a jet with a diameter less than that of the culvert. This
process reduces the available cross-sectional area of flows and the overall flow rate. The
ratio between the jet and the pipe diameters is 0.6 for a square-edged entrance. Values for
other entrance types are shown in Figure 9.5.24.
The correction coefficient for the reduced area is Cc, and Cu is the factor for the velocity being less than the theoretical value of V = √(2gh), where h is the pressure head on the orifice (m) and g is the acceleration due to gravity (m/s2). The overall correction coefficient is C = Cc·Cu.
The general case of inlet control is presented in Figure 9.5.25 where it is observed that the
culvert barrel has a greater capacity than the entrance as it is flowing partially full. As
indicated, Figure 9.5.24 shows that the capacity of the culvert can be improved by modifying
the entrance by rounding sharp edges and changing the streamlines. These improvements
may be useful in situations when additional capacity is required.
The general equation governing orifice flow for a circular pipe is:
Q = C·(πD²/4)·(2gh)^0.5    (9.5.17)

where h is the head on the orifice, usually taken from the upstream water surface to the centre of the orifice (m), and C is the overall correction coefficient defined above.
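Equation (9.5.17) translates directly into code; an assumed overall coefficient of 0.6 is used in the example call purely for illustration.

import math

def inlet_control_orifice_flow(diameter, head, c=0.6, g=9.81):
    # Equation (9.5.17): Q = C * (pi*D^2/4) * sqrt(2*g*h), with h measured from the
    # upstream water surface to the centre of the orifice.
    area = math.pi * diameter**2 / 4.0
    return c * area * math.sqrt(2.0 * g * head)

# Illustrative: 900 mm culvert with 1.2 m of head above the orifice centre
print(round(inlet_control_orifice_flow(0.9, 1.2), 2), "m3/s")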
The hydraulics is more complicated when the entrance to the culvert is not completely
submerged. This may involve three different states depending on the headwater height
above the invert HW and the culvert diameter or height D:
• Partially full flow for HW < 0.8D is a weir type flow as water pours into the pipe;
• Partially full flow with 0.8D < HW < 1.2D is similar to weir flow; and
• Submerged flow for HW > 1.2D, where the entrance behaves as an orifice.
The stated limits of 0.8D and 1.2D are approximate. These three zones lead to the
behaviour demonstrated in Figure 9.5.26 where the inlet control relationship changes
depending on the headwater elevation. It is also possible to have two different flow rates at
the same water elevation which depends on whether the culvert is operating as an inlet or
outlet controlled system. These states can also depend on whether flows are increasing or
decreasing.
Figure 9.5.26. Inlet Control versus Elevation of Headwaters (U.S. Department of Transport,
2005)
A range of design aids are generally available in the form of nomographs used to calculate
headwater levels for various situations involving circular, box and other types of culverts. A
better approach is to use computer software to model culvert hydraulics.
Outlet Controls
Outlet control occurs when a culvert is not capable of conveying as much flow as the inlet
can accept. The controlling section is generally at the culvert exit where subcritical or
pressurised flow conditions are occurring or further downstream of the culvert due to
tailwater conditions. Two outlet-controlled situations are provided in Figure 9.5.27. The
difference between upstream headwater and the tailwater levels drives the flows through the
culvert. Energy losses are added and equated to the available head.
Figure 9.5.27. Example of Outlet Control Situations (U.S. Department of Transport, 2005)
These calculations involve backwards projection of the HGL, commencing at the tailwater level if this submerges the outlet. Different computer models make various assumptions for free outfalls. Some models assume that the starting level will be half way between the pipe obvert and the critical depth, and it is then necessary to determine that critical depth from nomographs or equations. Other computer models assume that the starting level is the lower of (a) the critical depth and (b) the normal depth.
Flow over the road embankment can be estimated using a weir equation:

Q = Cw·Lw·H^1.5    (9.5.18)

where Lw is the width or length of the weir perpendicular to the direction of flow (m), Cw is a weir coefficient and H is the head over the weir (m).
Culvert and overflow weir outflows can be combined into a composite relationship as shown
in Figure 9.5.26. This calculation should account for inlet and outlet controls and usually the
most conservative relationship that provides the lowest flow rate for a given depth is
accepted.
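A sketch of that composite relationship is shown below, taking the governing culvert flow as the lower of the inlet and outlet control estimates and adding weir overflow (Equation 9.5.18) once the road crest is exceeded. The inlet and outlet control functions are placeholders for the relationships presented earlier, and the default weir coefficient is indicative only.

def composite_discharge(headwater, crest_level, weir_length,
                        inlet_control_q, outlet_control_q, cw=1.7):
    # Governing culvert flow is taken as the lower (more conservative) of the inlet
    # and outlet control estimates; weir overflow (Equation 9.5.18) is added once
    # the headwater exceeds the road crest. cw is an indicative weir coefficient.
    culvert_q = min(inlet_control_q(headwater), outlet_control_q(headwater))
    overflow_depth = max(0.0, headwater - crest_level)
    weir_q = cw * weir_length * overflow_depth**1.5
    return culvert_q + weir_q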
The real behaviour of a culvert is more complex and involves a phenomenon called 'priming'.
As upstream water levels rise, culverts tend to remain under inlet control until they run full.
As upstream water levels decline, culverts tend to remain at full flows in an outlet control
configuration until there is a sudden reversion to inlet control and decline in headwater level.
Culverts are often used as outlets for detention basins and conveyance networks. The relationships presented above can therefore be applied to specify the elevation and discharge relationships needed for routing of flows through volume management facilities.
5.7. References
Argue, J.R. (1986), Storm drainage design in small urban catchments: a handbook for
Australian practice, Special Report No.34, Australian Road Research Board.
Australian Road Research Board (1979), The hydraulic capacity of a side entry pit
incorporating a deflector and grate, Pavement Surface Drainage Symposium, Sydney,
Australia
Austroads (2013), Guide to Road Design Part 5A: Drainage-Road Surface, Networks, Basins
and Subsurface, Austroads
Burgi, P.H. and Gober, D.E. (1977), Bicycle-safe grate inlets study, United States Federal
Highway Administration, Environmental Division, 77(24), The Division.
Butler, D., Davies. J.W. (2004), Urban Drainage, Ed.2, London: Taylor & Francis.
Chang and Kilgore (1989), HYDRAIN V5.0 Federal Highway Administration's Research
Report.
Chang, F.M., Kilgore, R.T., Woo, D.C. and Mistichelli, M.P. (1994), Energy losses through
junction manholes, Report No. FHWA-RD-94-080, Federal Highway Administration.
Clarke, W.P., Strods, P.J. and Argue, J.R. (1981), Gutter/pavement flow relationships for
roadway channels of moderate or steep grade, Proceedings of the First National Local
Government Engineering Conference, Adelaide, pp: 130-137.
Coombes P.J. (2015), Transitioning drainage into urban water cycle management,
Proceedings of the WSUD 2015 Conference, Engineers Australia, Sydney.
Coombes, P.J., Babister, M. and McAlister (2015), Is the Science and Data underpinning the
Rational Method Robust for use in Evolving Urban Catchments, Proceedings of the 36th
Hydrology and Water Resources Symposium, Engineers Australia, Hobart.
Dowd, B.P., Loakim, R. and Argue, J.R. (1980), The simulation of gutter/pavement flows on
South Australian urban roads, ARRB Proceedings, 10(2), 145-152.
GKY and Associates Inc, (1999), User's Manual for HYDRAIN, Integrated Drainage Design
Computer System: Version 6.1, Federal Highway Administration, US Department of
Transportation, Washington D.C., accessible at <www.fhwa.dot.gov/bridge/hyd.htm>
Hare, C.M. (1980), Energy losses and pressure head changes at storm drain junctions,
Thesis, University of Technology, Sydney.
Hare, C.M. (1983), Magnitude of Hydraulic Losses at Junctions in Piped Drainage Systems,
Civil Engineering Transactions, Institution of Engineers, CE 25 (1).
Hare, C.M., O'Loughlin, G.G. and Saul, A.J. (1990), Hydraulic Losses at Manholes in Piped
Drainage Systems, Proceedings of the International Symposium on Urban Planning and
Stormwater Management, Kuala Lumpur.
Izzard, C.F. (1946), Hydraulics of runoff from developed surfaces. In Proceedings of the 26th
Annual meeting of the Highway Research Board, National Research Council, Washington,
DC EEUUA, pp: 129-150.
Jens, S.W. (1979), Design of urban highway drainage, Publication No. TS-79-225, Federal
Highway Administration, Washington, D.C.
Laurenson, E.M., Mein, R.G. and Nathan, R. (2010), RORB - Version 6 Runoff Routing
Program, User Manual, Monash University Department of Civil Engineering and Sinclair
Knight Merz
Marsalek, J. (1982), Road and bridge deck drainage systems, Report RR228, Research and
Development Branch, Downsview: Ontario Ministry of Transportation and Communications.
Mills, S.J. and O'Loughlin, G.G. (1986) Workshop on Piped Urban Drainage Systems,
Swinburne Institute of Technology and University of Technology, Sydney (first version 1982,
latest 1998)
Mills, S.J. and O'Loughlin, G. (1998), Workshop on urban piped drainage systems,
Swinburne University of Technology and University of Technology, Sydney.
NHI (2013), Urban Drainage Design Manual, HEC-22, ed. 3, National Highway Institute, U.S.
Department of Transportation, Federal Highway Administration.
Department of Main Roads NSW (1979), Model analysis to determine hydraulic capacities of
kerb inlets and gully pit gratings, NSW: Department of Main Roads.
Normann, J.M., Houghtalen, R.J. and Johnston, W.J. (2005), Hydraulic Design of Highway
Culverts, U.S. Department of Transportation, Federal Highway Administration, FHWA-NHI-
01-020, Hydraulic Design Series No.5, Ed.2.
O'Loughlin, G. and Stack, B. (2002), Algorithms for pit pressure changes and head losses in
stormwater drainage systems, Global Solutions for Urban Drainage, pp: 1-16.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation, Institution of Engineers Australia, Barton, ACT.
QUDM (2013), Queensland Urban Drainage Manual, Department of Energy and Water Supply, Queensland Government.
Sangster, W.M., Wood, H.W., Smerdon, E.T. and Bossy, H.G. (1958), Pressure Changes at
Storm Drain Junctions, Engineering Series Bulletin No. 41, Engineering Experiment Station,
University of Missouri.
Stein, S.M., Dou, X., Umbrell, E.R. and Jones, J.S. (1999), Storm Sewer Junction Hydraulics
and Sediment Transport, US Federal Highway Administration Turner-Fairbanks Highway
Research Centre, VA
Vennard, J.K. and Street, R.L. (1982), Elementary Fluid Mechanics, 6th Edition, Wiley, New
York.
Victoria Country Roads Board (1982), Road design manual. Chapter 6., Drainage. Kew,
Victoria.
Walsh C.J. (2004), Protection of in-stream biota from urban impacts: minimise catchment
imperviousness or improve drainage design?, Marine and Freshwater Research, 55:
317-326.
Walsh, C.J., Booth, D.B., Burns, M.J., Fletcher, T.D., Hale, R.L., Hoang, L.N., Livingston, G.,
Rippy, M.A., Roy, A.H., Scoggins, M. and Wallace, A. (2016), Principles for urban stormwater
management to protect stream ecosystems, Freshwater Sci., 35: 398–411.
Weeks B., Witheridge G., Rigby E., Barthelmess A. and O'Loughlin G. (2013), Blockage of
hydraulic structures, Australian Rainfall and Runoff revision Project 11, Stage 2 Report,
Engineers Australia.
Yen, B.C. (1986), Hydraulics of sewers, Advances in Hydroscience, Urban Storm Drainage,
14: 1–115.
Chapter 6. Modelling Approaches
Peter Coombes, Steve Roso, Mark Babister
The authors collaborated with Mikayla Ward and Sophia Buchanan to produce the
Brownfield and Greenfield case studies.
6.1. Introduction
Urban stormwater management responds to an increasing number of performance
objectives including to mitigate property damage, avoid risks to human life, enhance the
amenity of urban settlements, and protect surrounding environments (refer to Book 9,
Chapter 3, Section 3 and Book 9, Chapter 5, Section 2). This involves consideration of the
full spectrum of rain events, from frequent to rare (refer to Book 9, Chapter 3), from the
perspective of flooding, water quality, provision of infrastructure, protection of environments,
and enhancing amenity of urban areas. The assessment of urban stormwater behaviour,
performance against objectives and associated design tasks, involves complex analytical
problems that are better resolved using a computer-based model system.
A computer model involves use of software or a complex spreadsheet. Compared with hand
calculation, computer models permit rapid numerical calculation across large spatial and
temporal domains, while facilitating testing of multiple suites of parameters and inputs (refer
Book 9, Chapter 3, Section 4). This in turn allows the model to be calibrated to best
represent the real world conditions that are under assessment. Models can be a useful tool
to assist our thinking, and can be readily documented and reviewed, ultimately leading to
better assessments and design outcomes.
Reliable estimates are nevertheless conditional upon best practice application of the
computer model. It is important to remember that models are only tools to guide our thinking
about design and management. The purpose of this chapter is to provide guidance on the
selection and application of modelling approaches within urban catchments, having regard to
the techniques described in other books of ARR. The chapter is structured as follows:
• Book 9, Chapter 6, Section 2 describes tasks that are characteristic of urban modelling.
• Book 9, Chapter 6, Section 3 discusses current trends in urban modelling. This may assist
with planning a long-term strategy for technology adoption, research, and training. A
general description of the types of computer models commonly applied in urban
stormwater practice is provided as an aid for model selection.
This chapter is not intended to duplicate content in other chapters. Where relevant detail is
available elsewhere, references to other books and chapters are provided.
In the context of this chapter, an ‘urban model’ can be defined as a conceptual or computer-based modelling system that performs hydrologic, hydraulic, water balance, or water quality calculations for an urban catchment.
Emerging urban stormwater analysis and solutions are based on a systems approach that
incorporates multiple linked scales (Book 9, Chapter 3). The USEPA (2008) highlights that
past practice of designing individual items of stormwater infrastructure at a single centralised
scale has been inadequate for managing urban flooding and water quality in waterways.
Stormwater management needs to be designed as a system that integrates structural and
non-structural attributes of design with site characteristics and performance objectives.
More recently, the USEPA (2008) established that green infrastructure solutions distributed
at multiple scales throughout urban catchments partially disconnected impervious surfaces.
They also contributed to improved stormwater quality and avoided flood damages (Atkins,
2015). These insights are consistent with earlier Australian applied research finding that both
the peak flows and volumes of stormwater runoff are required for the design of stormwater
infrastructure (Goyen, 1981), and the local scale was the basic building block of cumulative
urban rainfall runoff processes (Goyen, 2000).
Many methods for modelling stormwater runoff are based on regional scale assumptions and
processes. However, inclusion of local scale processes in analysis improves knowledge of
within catchment outcomes and whole of catchment responses.
It is suggested that a catchment with less than 10 percent impervious surfaces, or with less
than 10 percent of the natural conveyance areas modified, would not be considered an
‘urban catchment’, in which case the advice in this chapter may have less relevance. However, each catchment is different; some natural or rural catchments contain sub-catchments that are urbanised (for example, in semi-urban areas). The relevance of this
chapter to a specific modelling investigation needs to be determined by the reader through
application of judgement and experience.
There are some important differences between modelling of urban catchments compared
with modelling other types of catchments. Urban areas can include:
• Stormwater conveyance infrastructure. This includes a network of inlet structures and non-
natural flow paths that provide for greater concentrations and velocities of flow (refer Book
9, Chapter 6, Section 2).
• A greater variety of land uses at different scales with different connectivity to catchment
outlets (refer Book 9, Chapter 6, Section 4).
The density of land uses and associated infrastructure within an urban catchment also
changes with time. The urban modelling process must therefore consider the information
needs of the stakeholder and ensure the temporal scenarios being modelled are relevant.
There are also differences relating to the availability and use of model input data. Modelling
in urban areas has intensive requirements related to representation of urban form, land
uses, and stormwater infrastructure. Therefore, collection and collation of input data can
become a significant component of the overall urban modelling task (refer Book 9, Chapter
6, Section 2).
Impervious surfaces within an urban catchment can be divided into two configurations:

1. Impervious areas which are directly connected to the conveyance network or urban waterway – referred to as Directly Connected Impervious Areas (DCIA).
2. Impervious areas which are indirectly connected to the conveyance network, typically
where impervious surface runoff flows over pervious surfaces before reaching the
conveyance network (e.g. a roof that discharges onto a lawn). These are referred to as
Indirectly Connected Impervious Areas (ICIA). Alternatively, the responses of these
impervious surfaces are disconnected from sub-catchment outlets by volume
management measures (refer Book 9, Chapter 4).
These two configurations of impervious surfaces provide different hydrologic responses with
Directly Connected Impervious Areas contributing to runoff more quickly than Indirectly
Connected Impervious Areas (refer Book 5, Chapter 3, Section 4).
For large urban catchments, isolating the separate hydrologic effects of these two types of
impervious surfaces is challenging. Book 5, Chapter 3, Section 4 instead describes a
concept referred to as Effective Impervious Area (EIA) that encompasses the combined
hydrologic effect of both directly and indirectly connected impervious areas. The estimated
EIA value for a catchment is calculated and then applied to hydrologic calculations using the
adopted modelling software.
The approach described in Book 5, Chapter 3, Section 4 involves estimation of EIA via linear
regression of site stream flow gauge and rainfall data. In situations where there is insufficient
available data to allow this technique to be used, the ratio of EIA to Total Impervious Area
(TIA) has been established for a collection of gauged catchments that allows EIA to be
estimated based on an estimate of TIA (Refer Book 5, Chapter 3, Section 4).
TIA is a measurable catchment feature that is typically estimated using GIS methods (refer
Book 5, Chapter 3, Section 4). The selection of a technique for estimation of TIA will depend
on catchment scale, data availability, accuracy requirements, and whether the catchment
scenario being investigated relates to an existing or future condition.
From Book 5, the recommended ratio of EIA/TIA for the majority of urban catchments sits within the range of 50% to 70%. For example, if the TIA for an urban catchment was measured to be, say, 55%, then the EIA for that same catchment would be somewhere between 27.5% and 38.5% of the total catchment area.
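The arithmetic above can be sketched trivially in code, applying an adopted EIA/TIA ratio within the 50% to 70% range to an estimate of TIA; the values in the example call simply reproduce the example.

def effective_impervious_fraction(tia_fraction, eia_to_tia_ratio=0.6):
    # EIA as a fraction of total catchment area, from TIA and an adopted EIA/TIA
    # ratio (Book 5 suggests 50% to 70% for most urban catchments).
    return tia_fraction * eia_to_tia_ratio

# Illustrative: TIA of 55% of the catchment, tested at ratios of 0.5 and 0.7
print(round(effective_impervious_fraction(0.55, 0.5), 3),
      round(effective_impervious_fraction(0.55, 0.7), 3))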
However, when the EIA approach is used, it is important that the characteristics of the
catchment under investigation are compared to those of the catchments that have been
used to establish the recommended EIA/TIA ratio. Different catchments have different
stormwater management standards and land use patterns that may alter the overall degree
of connectivity between impervious surfaces and the drainage network serving the
catchment. Where there is higher connectivity, the EIA is also expected to be higher.
For some catchment investigations where there is strong connectivity between the
impervious surfaces and the downstream drainage system, the measured TIA value may be
the more suitable impervious surface value to be used for hydrologic modelling purposes.
For example, the analysis of a sealed carpark surface, where the entire impervious area is
directly connected to surface inlets, is more appropriately undertaken using a TIA estimate.
Also, where the scale of the catchment is small, for example an individual parcel of land or a
small development site, the use of TIA values in conjunction with a sub-catchment definition
that reflects actual stormwater connectivity may be more appropriate. To avoid overestimation, designers should only use TIA for small scale catchments when they are
satisfied that all the impervious flow is directly connected. The effect of any volume
management infrastructure should also be explicitly reflected in these model simulations.
Consideration also needs to be given to the overall need for accuracy when deriving
estimates of impervious cover. The majority of techniques applied by designers typically
under or over-estimate actual impervious cover by between 10 and 20 percent (Roso et al.,
2006).
Predicted peak discharges and runoff volumes are sensitive to error in impervious cover
when modelling low rainfall events with both event based and continuous simulation models.
Roso et al. (2006) observed that a difference in impervious surfaces of +/- 10 percent from
actual conditions can result in typical errors of 13% in peak discharge and 25% in runoff
volume. These errors decrease in situations where rainfall depths are higher and infiltration
losses less significant.
It is also noted that where a catchment has significant impervious cover, the variability of
runoff is reduced in comparison to a similar pervious catchment since infiltration losses have
less influence.
While natural waterway conveyance is increasingly sought as a design objective for new
urban areas, traditionally urban drainage systems have been designed to transfer runoff
quickly and within a minimum corridor, often partly underground. This containment of flows
within conduits that have artificial linings and unnatural slope leads to faster average flow
velocity, greater volume, and significantly altered flood hydrographs compared to those from
comparably sized rural and natural catchments.
In order to accurately represent the hydrologic and hydraulic behaviour of an urban area, the
influence of conveyance infrastructure on routing and flow behaviour should be included
within the adopted urban flood model. For most applications, the model should be capable of
describing the effect of conveyance infrastructure on flow characteristics such as flow depth,
velocity, direction, surface level, and the hydraulic grade line showing hydraulic losses
including their position and size. Other important information includes the split of flow
between the minor and major flow path, maximum flow widths in gutters, maximum allowable
flow velocity in pipes, the location and direction of any diversions and breakouts, and the
extent of property inundation.
The effect of conveyance infrastructure on these flood characteristics varies across the
different types of urban flood models.
Some models reflect the performance of conveyance infrastructure explicitly, which requires
that the designer input a detailed physical definition of conduits and their hydraulic
characteristics. The typical data that must be collected and input to these models include:
• Conduit type;
• cross-sectional dimensions (e.g. pipe diameter, channel width and depth, profile);
• length;
• slope, or sufficient elevation data to allow slope to be calculated using length; and
This information must be gathered for each relevant piece of conveyance infrastructure that
is part of the network being investigated. In some cases, this data may be readily accessible
in an asset database. In other circumstances this data may require collection via ground
survey. When data cannot be obtained due to inaccessible structures, assumptions
regarding the network geometry may be required.
These models can be data intensive, but they also have potential to provide detailed and
accurate descriptions of flood behaviours.
Depending on the type of user interface and pre-processor associated with the adopted
model software, some of these data requirements may be automatically harvested from
other raw input data. For example, a three dimensional surface model may be used to
establish roadside gutter profile and slope automatically. Even so, some information will still be required, such as (in this case) the plan position of the roadside gutter.
For large urban areas, particularly those that have become densely developed over a long
period of time, the task of collecting and collating all the dimensions of all conveyance
infrastructure can be a major undertaking. Further complexity and effort arises since each
inlet structure has a potential hydrologic sub-catchment that must be defined and input to the
model. It is also possible that the size of the sub-catchment may change as flow rate
increases. An exception is a rainfall-on-grid model approach where sub-catchment definition
may not be required, but even so, substantial effort is required to ensure each inlet structure
is capturing a realistic amount of runoff.
In some cases the burden of this infrastructure definition task can be reduced through use of
simplified models and assumptions that do not explicitly model the performance of all
conveyance infrastructure items. For example, the capacity of underground drainage may be
an assumed proportion of the total runoff hydrograph or in some cases totally ignored. This
approach can be acceptable if the capacity of the underground system is small relative to the
size of floods being investigated. In this case the model construction may instead focus on a
more accurate definition of surface-based conveyance infrastructure and overland flow
paths.
Other models can provide flood estimates using an even more implicit description of
conveyance infrastructure. For example, rating curves and stage hydrographs may be used
for selected locations in conjunction with run-off routing hydrologic estimates. In this case
less physical data needs to be collected.
Any decision to simplify the description of conveyance infrastructure within a model needs to
be made recognising the accuracy requirements of the investigation and the risks associated
with any limitations that may be introduced. It is important that the impacts of simplifying
models and associated assumptions are fully understood. This is further discussed in Book
9, Chapter 6, Section 3.
Waterway crossings can have considerable hydraulic impact for floods within the range
where the crossing structure causes the cross-sectional area of the waterway to be
substantially reduced. In these circumstances additional energy is required to pass flow
through and/or over the structure causing increased pressure head upstream of the crossing
(afflux). Afflux is flow dependent and will change across the range of potential flood
discharges. This afflux can cause a significant storage volume to be engaged upstream of
embankments which can therefore also heavily influence downstream flood behaviour.
As well as causing afflux locally around the structure, the hydraulic behaviour associated
with waterway crossings can also have an impact on:
These impacts are not necessarily confined to the immediate vicinity of the investigation site or study area and may extend to areas upstream or downstream. A
comprehensive urban flood investigation should therefore consider the impact of each
existing or proposed waterway crossing in the catchment (and adjoining catchment in the
case of cross-catchment diversion) and whether they could have an impact on local flood
behaviour.
Once the relevant waterway crossings have been identified, the urban modelling task is then
to suitably define the crossing structure within the model. This will normally include the
physical dimensions and shape of the waterway opening beneath the crossing deck and any
obstruction caused by associated railings, embankments, and utility services. Models may
also assist with identifying locations where bed shear stress increases are likely and the
design of scour protection measures (refer to Book 6, Chapter 3).
Consideration also needs to be given to blockage potential of the overall structure and which
blockage scenarios may be required in order to fully describe potential flood behaviour. Book
6, Chapter 6 provides further detail regarding blockage considerations.
As with conveyance infrastructure, some types of urban models may estimate the flood
behaviour impacts of the waterway crossing in an implicit manner through use of rating
curves and stage hydrographs. The impact of any such simplifications and assumptions on
model accuracy needs to be considered when selecting an appropriate model platform for
the investigation.
The hydrologic and hydraulic impact of these facilities can be significant and will vary
according to the design of the facility and size of the flood. For the urban designer, the task
associated with this infrastructure is the physical description and schematisation of the
facility within the model. This will normally include:
• Storage characteristics and how the volume stored varies with depth; and
• outlet characteristics and how the outlet influences the depth and volume of water stored in the facility.
The way this model task is completed will depend on the type of model being used, but most
commonly involves entering a form of definition table describing storage volume with depth
along with details regarding the physical dimensions and elevations of the outlet structure.
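As a minimal sketch of how such a definition table might be schematised and used, the following Python fragment interpolates a stage-storage and stage-discharge table within a simple explicit level-pool routing step. The table values and the routing formulation are illustrative assumptions only, not prescribed inputs.

import numpy as np

# Illustrative stage-storage and stage-discharge definition tables for a basin
stage = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                 # depth in basin (m)
storage = np.array([0.0, 400.0, 1100.0, 2100.0, 3400.0])    # stored volume (m3)
outflow = np.array([0.0, 0.15, 0.45, 0.85, 1.40])           # outlet discharge (m3/s)

def route_step(storage_now, inflow_now, inflow_next, dt):
    # One explicit level-pool routing step of dS/dt = I - O, with the outflow O
    # interpolated from the definition tables at the current storage.
    depth = np.interp(storage_now, storage, stage)
    outflow_now = np.interp(depth, stage, outflow)
    storage_next = storage_now + dt * (0.5 * (inflow_now + inflow_next) - outflow_now)
    return max(storage_next, 0.0), outflow_now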
Depending on the intended purpose of the urban modelling task, consideration should also
be given to antecedent conditions, whether the storage is partly utilised prior to the onset of
the storm burst and whether there is potential for blockage of the outlet structure at some
point in time and to what extent.
The hydrologic and hydraulic impact of a volume management facility may be distant from its
physical location (upstream or downstream). The designer must consider inclusion of all
volume management facilities that could potentially impact the investigation site. Also, as
proposed storage volumes increase, the critical storm duration and pattern may
correspondingly change, necessitating the inclusion of additional rainfall scenarios into the
suite of model tests.
The facilities that perform this function are often co-located with, or are an integral part of, a volume management infrastructure facility. Where this is the case, similar model inputs are required, such as the basin storage and outlet characteristics. However, a different model platform may be necessary since the treatment process targets smaller storms and occurs over longer time periods. For example, event based hydrologic models may not be a suitable basis for these assessments; a continuous simulation-based model would be more suitable.
It is important that the data management system properly documents the source of the data, the format, and the date of acquisition. Book 1, Chapter 4 provides comprehensive advice on the use of data. A key challenge in urban catchments is that many urban drainage components cannot be put directly into a model but need to be schematised. Examples include converting a basin drawing into a stage storage table or representing a complex pit system. It is important that the data management system also captures this process so that the interpretation and schematisation are properly documented and can be reviewed or refined later. While data can be classified in many ways, there are three broad types of data:
• Model inputs such as rainfall and temporal patterns that change between events;
• model components such as pipes, storages, terrain information and land use data; and
The digital age has changed many aspects of data collection: data is often easier to find, but the original data sources are frequently unclear, with merged data sets representing the largest part of this problem. The same problem exists in the model development process, where many data sets are interpreted and merged. While most urban catchments are ungauged, recent observed flood data can often be found on social media, and older historical flood information can be found in scanned historical records and newspapers.
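As one possible illustration of this documentation task, the sketch below shows a minimal provenance record that a data management system might keep for each data set and its schematisation; the field names and example entries are hypothetical.

    # Minimal, hypothetical sketch of a provenance record for one model input, recording
    # the source, format, acquisition date and the schematisation steps applied.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataRecord:
        name: str                     # e.g. a basin stage-storage table
        source: str                   # original supplier or document
        data_format: str              # format as received and as used
        date_acquired: str            # date of acquisition
        schematisation: List[str] = field(default_factory=list)  # interpretation steps

    record = DataRecord(
        name="Detention basin stage-storage table",
        source="Council works-as-executed drawing (scanned PDF)",
        data_format="PDF drawing, digitised to CSV",
        date_acquired="2018-05-14",
    )
    record.schematisation.append("Contours digitised and converted to a stage-storage table")
    record.schematisation.append("Outlet pit and pipe dimensions entered as an outlet rating")
    print(record)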
As these computing advances have occurred, urban modelling software platforms have been
adapted to harness some of the available computational speed increases. This permits the
modelling designer to consider:
• Increasing the physical size of the model domain, for example, modelling a larger urban catchment;
• increasing the spatial and temporal resolution of the model to allow for finer grained
numerical calculations that account for location and connectivity of different land uses;
• more model iterations to support improved calibration and sensitivity analysis; and
It can be expected that computational capabilities will continue to increase into the future
and that urban modelling software platforms will continue to be refined and improved to
harness more of the available capacity.
Currently, computing power is such that it is reasonable to expect that most urban hydrologic
model simulations, even relatively complex ones, can be undertaken within seconds or
minutes. It can therefore be assumed that pure hydrologic investigations are already
unconstrained by computing power regardless of the choice of model platform.
Computing power is still somewhat of a constraint for hydraulic simulations. Some of the
more complex finer resolution or larger domain hydraulic model simulations can take hours
or days per simulation. This may constrain the design of an urban hydraulic modelling
investigation and also means that due care must be taken when selecting a hydraulic model
platform. A hydraulic platform and method should be chosen that has computational
efficiency to match the problem at hand. Models with very long run times should be carefully
managed as they usually preclude comprehensive testing, checking or calibration.
The future will permit very large multi-catchment spatial domains to be modelled at the finest
level of temporal and spatial resolution necessary, with sufficient speed to allow
simultaneous and exhaustive exploration of hydrologic and hydraulic scenarios.
This trend may outpace our ability to improve the underlying science and gather sufficient
quality input data, and to respond with more informed design and management solutions.
Consideration will also need to be given to whether the ultimate outcomes of investigations
are improved by aggressively pursuing the full capabilities of available computing power.
In other words, at some point in the future, further improvements to computing power may
cease to provide any material value to urban modelling designers. Substantial further
research and data collection, for a range of urban catchment scales, is necessary to ensure
theory is able to keep pace with computing power.
• A move away from isolated storm bursts with a single pattern, towards consideration of
pre-burst rainfall and more complete storm bursts including an ensemble of equally likely
but different temporal patterns. This leads to more robust design and resolves some of the
issues that arise when trying to maintain probability neutrality between rainfall and flood
(refer Book 3 and Book 5). The future will see this trend continue with designs becoming
increasingly based on complete storms and continuous recorded or synthetic rainfall
sequences.
• The use of direct rainfall, also referred to as ‘rainfall-on-grid’ approaches, which attempt to explicitly resolve the accumulation of runoff progressively down the catchment, removing the need to pre-identify flow paths and sub-catchments. This is a useful way to ensure flow paths are not inadvertently omitted from an investigation. With further research and software development this approach may in time also eliminate the need for hydrologic models to undertake surface routing. At this stage, however, there is inadequate evidence that a direct rainfall approach should be relied upon for this purpose, with many parameters being scale and approach dependent (refer Book 5).
• The hydraulic models applied in practice have increasingly changed from one-dimensional
to two-dimensional representations of the floodplain surface. This allows a more realistic
definition of potential flow paths which in turn improves the representation of flood
behaviour (refer Book 5 and Book 7).
With continuation of this trend it can be anticipated that model platforms will eventually
converge on more accurate representations of rainfall runoff and flood processes, requiring
different model inputs, parameters, and application techniques. Again, this will only occur
with adequate research and software development effort and data collection for a range of
urban catchment scales.
These approaches should be considered where a better appreciation of natural variability and uncertainty is required. This may include sensitive urban areas, a major waterway crossing, a large flood mitigation proposal or the hydrologic design of regional scale water quality infrastructure.
Machine learning algorithms are also being used for the prediction of stream flow using
statistical information drawn from historic rainfall and stream gauge data, providing an
alternative approach to hydrologic modelling.
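A minimal sketch of such a data-driven approach is given below, assuming an hourly gauged record and the availability of the pandas and scikit-learn libraries; the file name, column names and choice of regressor are illustrative assumptions only and do not represent a recommended method.

    # Hypothetical sketch: predict streamflow from antecedent rainfall statistics using a
    # regression model trained on a gauged record. Assumes an hourly record in a CSV file
    # with columns "time", "rain_mm" and "flow_m3s" (names are assumptions).
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    obs = pd.read_csv("gauged_catchment.csv", parse_dates=["time"])

    # Antecedent rainfall totals over several windows as simple predictors.
    windows = (1, 6, 24, 72)
    for hours in windows:
        obs[f"rain_sum_{hours}h"] = obs["rain_mm"].rolling(hours).sum()
    obs = obs.dropna()

    features = [f"rain_sum_{h}h" for h in windows]
    X_train, X_test, y_train, y_test = train_test_split(
        obs[features], obs["flow_m3s"], test_size=0.3, shuffle=False)

    model = RandomForestRegressor(n_estimators=200, random_state=1)
    model.fit(X_train, y_train)
    print("Test R2:", model.score(X_test, y_test))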
Furthermore, modelling software platforms that have traditionally been tied to a single computer are now able to be offered as internet-based services. Into the future, other new applications will be found for the internet that cannot be fully anticipated at this time but will likely support further improvements in the application of urban models.
In parallel to the internet, an associated trend that has also emerged is a deeper interest and
reliance on Geographical Information Systems (GIS). These systems are used for the
storage, handling and display of physical catchment data, catchment parameters and
infrastructure data.
Spatial information systems have become an important support technology for the
application of urban models, with most platforms leveraging these tools for pre-processing
and post-processing of data, storage of data, data display, data enhancement and the
preparation of information products for stakeholders.
Urban modelling designers should consider how the model software platforms they use can
be used to accommodate these growing information needs.
Some of the more common types of models and methods are listed in Table 9.6.1. For each
type, a generic classification of its capability is also provided. This is a snapshot in time of
the capability of these models and will change with time. This classification is based on the
examples in Table 9.6.2. A subsequent section describes the performance of these models
at different spatial scales.
Recoverable entries from the capability classification (limited / moderate / strong) referred to above:
• Infiltration losses – ranging from cursory treatment of infiltration losses through to fuller representation of infiltration losses.
• Spatial distribution of rainfall.
• Surface characteristics – surface characteristics not fully represented (limited); surface characteristics partially represented (moderate); surface characteristics well represented, including surface wave speed (strong).
• Channel and storage routing – channel characteristics not represented (limited); channel characteristics partially represented (moderate); channel characteristics and flood wave speed well represented (strong).
When selecting a particular model or technique, the designer should in the first instance look to match the estimation capabilities of the model, whether they be ‘limited’, ‘moderate’ or ‘strong’, with the nature of the urban modelling problem that is being investigated.
For example, if channel routing and structure hydraulics are not aspects of the problem that need to be investigated, then the model selected need not have any capabilities in these areas. Equally, if it is expected that a particular problem will require significant capabilities in (for example) runoff generation, then a model with ‘strong’ capabilities in this area should be considered.
Where the estimation capabilities are identified in Table 9.6.1 as ‘limited’, significant caution must be adopted. As a minimum they should be applied by, or under the direct guidance of, a designer who fully understands the limitations of these approaches. The tolerance for error in the results should be considered and, if greater accuracy is required, an alternative, more capable model platform should be applied.
As always, the level of experience of the designer is a significant factor. Someone with significant experience and familiarity with a specific model may be able to extend its capabilities to achieve an acceptable level of estimation accuracy beyond what would be possible if the model were deployed by an average or less experienced user.
At the other end of the spectrum of potential scale, an urban model may be constructed to
represent all the stormwater catchments spanning an entire suburb or even a small city.
These larger models are often used for the purpose of regional flood mapping, establishing
flood levels for development purposes or the design of large-scale stormwater and road
crossing infrastructure.
When evaluating which type of model to adopt for a particular urban modelling project, the spatial scale of interest is an important factor to consider, since some models may not be capable of competently representing all the complexity that is encountered at the scale of interest.
Consider four example spatial scales with each physical footprint increasing by an
approximate order of magnitude as shown in Table 9.6.3 below.
As a model’s spatial scale increases from ‘lot’ through to ‘precinct’, the catchment being modelled is more likely to contain a greater range of features of relevance to stormwater behaviour such as:
• Urban waterways
In conjunction with this increase in the number of stormwater features, it follows that the
potential number of rainfall runoff processes encountered in a larger scale model will also
increase. In this context flood generation processes include damaging floods as well as
much smaller floods that are relevant to yield and water quality assessment.
Table 9.6.4 below provides a list of the flood generation processes encountered at each of the four spatial scales described above. This listing is non-exhaustive and is provided only to demonstrate that a larger number of potential flood generation processes can be expected to occur as spatial scale increases from ‘lot’ scale to ‘precinct’ scale (growing from approximately 8 to 32 in the example listing provided in Table 9.6.4).
Some degree of simplification of these flood generation processes normally occurs when
preparing an urban flood model. The flood generation processes listed in Table 9.6.4 have
different levels of importance and influence when trying to decide whether any simplifications
are possible. Each process has been indicated in Table 9.6.4 by one of two different symbols
as follows:
A flood generation process that is less important. This process may be omitted or simplified if accuracy of model estimates is not critical.
Table 9.6.4. Example Flood Generation Processes at Different Model Spatial Scales
Most model platforms have some limitations on which processes they can represent. A
decision will be required at the commencement of model preparation as to whether the
selected model and the available data are capable of achieving the required level of
accuracy and reliability.
As a result of the expected increase in the number of flood generation processes with scale, if a catchment investigation spans a large spatial scale, then the designer can expect that a model or method with ‘strong’ estimation capabilities across multiple flood process areas will be necessary (refer Table 9.6.1 and Table 9.6.2).
For example, the Rational Method, with ‘limited’ runoff generation and surface routing
capabilities, is not likely to be suitable for a ‘precinct’ scale estimate of peak flow as it cannot
adequately simulate the array of flood processes that are encountered, even in the simplest
of catchments. However, it may be suitable at a ‘lot’ scale in circumstances where storage
routing is not critical.
The resolution of model inputs and boundary conditions also needs to be considered. There
is little value in developing a high-resolution model with coarse lumped inflows or
considering the performance of a complex system using a single temporal pattern.
These capabilities are principally the domain of runoff-routing and continuous simulation models. Other processes that affect total runoff volume, such as harvesting and use of rainwater, may also be important considerations for smaller flood magnitudes. Figure 9.6.1 indicates the likely range of effectiveness for the different types of hydrologic models, with flood magnitude on the x-axis and model scale on the y-axis.
Figure 9.6.1. Types of Urban Hydrologic Models and their Likely Application Range
For the companion hydraulic calculations during small floods, and where flooding is confined to the pipe network or a simple channel, a pipe network model and/or 1D channel hydraulic model will normally be adequate. Some hydrologic model packages even have the capability to undertake basic hydraulic calculations.
For hydraulic calculations associated with large floods that exceed the normal capacity of a
channel, or where substantial overland flows develop, a 2D hydraulic model may have more
utility since the likelihood of complex flow patterns increases.
Where there are multiple options arising from this process, the simplest model capable of the necessary calculations should be favoured. Other model selection criteria include availability of sufficient input data and parameter research, output data capabilities, availability of other required functionality (e.g. water quality calculation), cost, and designer familiarity with the model. A hydraulic model involves a more explicit representation of flow routing and how storage is represented in the catchment. Generally, a hydraulic model will be required where there is a need to understand both flows and flood levels. One-dimensional pipe and channel models only provide this information at key locations but are well suited to ‘greenfields’ subdivision design, while a two-dimensional model provides a detailed spatial representation of surface stormwater processes and may be more suited to brownfields investigations.
When modelling at small spatial scales it is simpler to closely represent each flood process
and its associated physical features and drainage connections explicitly. As spatial scale
increases it is sometimes possible to adopt some model simplifications to manage data
requirements and the general complexity of the modelling task. For example, when building
a ‘precinct’ scale model it may be possible to omit or simplify ‘lot’ scale processes. However, models should not be simplified unless the consequences of spatial averaging, deterministic assumptions and judgements are well understood. Where simplification is undertaken, efforts should be made to fully understand the impacts of the simplification and the limits on the validity of the model outputs, for example, by comparing results against a more detailed sub-model or against results generated by an alternative model.
Experience and careful judgement are required when choosing to omit or simplify those
processes that are suggested as being less important. In general, the omission or
simplification of such a process should only occur when the investigation does not demand
highly reliable estimates, for example, for preliminary sizing of structures or where flood risks
are low. Table 9.6.4 indicates those flood generation processes that may be less important
and therefore could be considered as an opportunity for simplification at each different
spatial scale.
Firstly, there is the spatial resolution of the model. For a hydrologic model this will relate to the minimum size of sub-catchments. For a hydraulic model this will relate to the density of sampling of the ground surface.
The adopted spatial resolution of a model will govern the density of reporting locations, i.e. where model results are output by the model software. It may also influence model accuracy.
Through experience a designer will develop an understanding of the optimum model spatial
resolution for each type of model and to what degree spatial simplification can be tolerated.
Then there is the temporal resolution of the model and the ability to extract output time
series that are fit for purpose. For example, the temporal resolution necessary for regional
water supply planning may be lower than required for calculation of stormwater harvest yield
from a small catchment. In this case a degree of temporal simplification to daily or monthly
data may be acceptable for a regional water supply planning task.
Again, through experience a designer will develop an understanding of the optimum model
temporal resolution and to what degree temporal simplification can be tolerated.
Drainage networks (also discussed in Book 9, Chapter 5) are now considered to be part of
more comprehensive stormwater management approaches (refer Book 9, Chapter 3) that
respond to multiple water cycle objectives including protecting waterways, mitigating flood
risks, provision of water resources, managing the quality of stormwater runoff and enhancing
the amenity of urban areas. These approaches respond to a need to manage urban water
balances (discussed in Book 9, Chapter 2) and to also incorporate a range of storage
measures (refer Book 9, Chapter 4) that aim to manage flooding, stormwater quality and
provide additional water resources.
• Hydrological models that translate rainfall into stormwater runoff and evaluate behaviour of
storages;
• hydraulic models that evaluate or design the transfer of stormwater flows through
networks of infrastructure and across land surfaces;
• linked hydrology and hydraulic models that include detailed two-dimensional surface flows
with hydrodynamic conveyance networks;
• rainfall-on-grid models;
We should be mindful that all models are an approximation of reality that can be used to
enhance our understanding about the likely stormwater behaviours for particular urban
scenarios. The different hydrological and hydraulic models can be classified by their outputs
of peak flowrates, hydrographs, flood depths or continuous sequences of stormwater runoff.
These models can also be distinguished by the methods used to route rainfall runoff towards
inlet structures in urban conveyance networks or stormwater volume management
measures. Models can also be described by different spatial detail such as lumped, semi-
distributed or distributed inputs (Figure 4.2.5, Book 4, Chapter 2, Section 6). Lumped
catchment models approximate the behaviour of the catchment using single average inputs
and assumptions. Semi-distributed models employ a range of sub-catchments with different
attributes and assumptions. In contrast, spatially explicit details are included in distributed
models – this detail may include the range of different land uses and properties in an urban
model or a grid of equal size and shape used throughout the model. An emerging type of
distributed hydrology and hydraulic model is the direct rainfall or rainfall-on-grid methods
(refer Book 6, Chapter 4, Section 7).
Empirical relationships can be utilised to determine peak flows from small catchments and
are applied to the design of roof gutters, downpipes, and infrastructure to manage
stormwater runoff from properties in accordance with standards such as AS/NZS 3500.3.
These approximate methods include nominal “deemed to comply” infrastructure
specifications or generally require information about catchment area and slope, and utilise
assumed runoff coefficients, time of concentration and design rainfall intensity in a lumped
catchment design process.
The probabilistic or urban Rational Method is a more detailed approximate method that is
utilised to generate peak flowrates for use in the design of pipe networks within small
properties and for small sub-catchments. This framework of analysis differs from simple
empirical relationships by including equivalent or effective impervious areas, accumulation of
flow rates and the areas of different land uses. The method uses rainfall intensity derived
from Intensity Frequency Duration (IFD) data, assumed runoff coefficients and time of
concentration to derive stormwater peak flows.
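For illustration only, the sketch below applies the Rational Method relationship (Q = C.I.A/360, with A in hectares and I in mm/h, giving Q in m3/s) to a small sub-catchment split into effective impervious and pervious components; the coefficients, areas and intensity are placeholders rather than design values.

    # Sketch of an urban Rational Method peak flow estimate. All inputs are placeholders.
    def rational_peak_flow(runoff_coefficient, intensity_mm_per_h, area_ha):
        """Peak flow (m3/s) from Q = C.I.A / 360 with A in hectares and I in mm/h."""
        return runoff_coefficient * intensity_mm_per_h * area_ha / 360.0

    # Effective impervious and pervious components of a hypothetical sub-catchment.
    area_impervious_ha, c_impervious = 0.24, 0.9
    area_pervious_ha, c_pervious = 0.18, 0.3
    intensity_mm_h = 95.0   # design intensity for the adopted AEP and time of concentration

    q_peak = (rational_peak_flow(c_impervious, intensity_mm_h, area_impervious_ha)
              + rational_peak_flow(c_pervious, intensity_mm_h, area_pervious_ha))
    print(f"Peak flow estimate: {q_peak:.3f} m3/s")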
The design approach associated with the urban Rational Method is often based on lumped sub-catchment inputs to inlet structures, which requires the resolution of partial area effects on the timing of cumulative peak discharges throughout a conveyance network. A lumped sub-catchment process combines all land uses, including the area of pervious and impervious surfaces (full area), with an estimated time of concentration to derive peak flows at the outlet of a sub-catchment, which is the inlet to a conveyance network. A partial area effect occurs, for example, where runoff from impervious surfaces (the partial area) arrives at the outlet in less than the full-area travel time, before runoff from pervious surfaces reaches the outlet. These methods may be used to analyse the capacity of individual pipes or peak flows from small catchments but cannot simulate actual flow behaviour throughout conveyance networks and urban stormwater management systems (Pilgrim, 1987).
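The partial area effect described above can be illustrated by comparing a full-area estimate against a partial-area (impervious surfaces only) estimate and adopting the larger, as in the sketch below; the stand-in IFD relationship and all input values are hypothetical.

    # Sketch of a partial area check: compare the full-area peak (all surfaces, full travel
    # time) against the partial-area peak (impervious surfaces only, shorter travel time and
    # therefore higher intensity). The IFD function and all values are illustrative only.
    def rational_q(c, intensity_mm_h, area_ha):
        return c * intensity_mm_h * area_ha / 360.0

    def design_intensity(duration_min):
        # Stand-in intensity-duration relationship for illustration, not real IFD data.
        return 160.0 * (duration_min / 10.0) ** -0.55

    area_imp_ha, c_imp, tc_imp_min = 0.30, 0.9, 8.0        # impervious (partial) area
    area_total_ha, c_avg, tc_full_min = 0.80, 0.55, 20.0   # full sub-catchment

    q_full = rational_q(c_avg, design_intensity(tc_full_min), area_total_ha)
    q_partial = rational_q(c_imp, design_intensity(tc_imp_min), area_imp_ha)
    print("Full area peak:   ", round(q_full, 3), "m3/s")
    print("Partial area peak:", round(q_partial, 3), "m3/s")
    print("Adopted peak:     ", round(max(q_full, q_partial), 3), "m3/s")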
The simple nature of the urban Rational Method cannot account for the complexity of contemporary urban catchments and modern stormwater management approaches, the temporal and spatial variability of storm events, and variations in antecedent or between-storm-event processes. Approximate methods, such as the Rational Method, should only be applied within a catchment where more detailed analysis of rainfall runoff observations has defined the parameters (for example, runoff coefficient and time of concentration) for use in the method (Phillips et al., 2014; Coombes et al., 2015a). However, Goyen (2000) established that derivation of runoff parameters at the regional scale or at the bottom of a catchment may not necessarily describe local processes in sub-catchments. Local information is also needed to determine urban runoff parameters.
Runoff or hydrograph routing methods are commonly associated with computer models that
include internal processes that incorporate different land uses with separate pervious and
impervious surfaces. The process includes depression storages and losses with lag times to
generate separate hydrographs of runoff for each land surface. These runoff routing
methods typically employ event based rainfall inputs (Book 4, Chapter 3, Section 2) of
selected Annual Exceedance Probability (AEP) and duration of peak burst rainfall (refer to
Book 9, Chapter 6, Section 4). An objective of this process is to achieve probability neutrality
between rainfall inputs and generated runoff for urban catchments.
These runoff routing methods may utilise single or multiple design storms and associated
temporal patterns to determine regimes of excess rainfall that are then routed through
hydraulic models that range from simple pipe hydraulics to full two-dimensional
hydrodynamic processes. A key limitation of event based modelling approaches is the need
for assumptions about joint probability of antecedent conditions (such as soil moisture and
available storage in volume management solutions) and the characteristics of storm events
(Kuczera et al., 2006). In addition, event based methods have traditionally only simulated
runoff from burst rainfall and have not considered that runoff is also generated by pre-burst
and post-burst rainfall (refer to Book 9, Chapter 6, Section 4). The magnitude of rainfall runoff in urban catchments may be under-estimated by event based processes unless pre-burst rainfall is also accounted for in the rainfall inputs to event based models.
The limitations of rainfall event based models, and dramatic increases in the capacity and utilisation of computers, have fostered the use of continuous simulation (Book 4, Chapter 3,
Section 3 and Book 9, Chapter 3) models that can account for continuous physical,
conceptual and statistical processes in urban catchments. These methods have traditionally
utilised real or synthetically generated rainfall sequences to understand the yield from water
supply catchments and the behaviour of water and wastewater distribution networks. These
methods are also used to estimate the behaviour of stormwater quality solutions in urban
catchments (Fletcher et al., 2001). However, continuous simulation can also be employed to
account for the interactions between climate processes, human interventions or behaviours
and stormwater runoff from urban catchments (Coombes and Barry, 2015). Pluviograph
rainfall records with intervals of less than an hour (often 6 minute intervals) are used in
continuous simulation of rainfall runoff from urban catchments.
The continuous simulation method involves simulation of a rainfall runoff model over a time
period of sufficient length to account for all of the important interactions between rainfall and
catchment processes to produce an urban flood frequency analysis. Sufficient lengths of
observed rainfall are usually not available to provide adequate information about rare runoff
events and synthetic rainfall sequences are often required for continuous simulation models
(Book 4, Chapter 3, Section 3; Book 2, Chapter 7). Use of continuous simulation with
synthetic rainfall inputs may require calibration of the rainfall model and the continuous
runoff routing model (Book 4, Chapter 3, Section 3). However, all models require calibration
and verification.
required, and topography information will need to be edited to include key infrastructure such as street gutters, hydraulic structures, conveyance networks and road crowns (Hall, 2015).
Rainfall-on-grid models should be calibrated to local historical spatial flood levels or flow
data. Use of regional rainfall runoff parameters is not suitable for direct rainfall methods that
are driven by local processes.
There may also be a need to vary roughness parameters (such as Manning's n) with flow depth (for example, Zahidi et al. (2017); Khrapov et al. (2015); Muglera et al. (2011)) and to carefully assign loss parameters to each grid cell (Babister and Barton, 2012). The results at local and sub-catchment scales may be unexpected as all flow paths are identified. The method is subject to a range of potential challenges including mathematical instabilities, unrealistic flows and large errors created by losses, variable roughness, long runtimes and shallow flow depths. These powerful direct rainfall methods are subject to ongoing research and model results should be interpreted with caution. It is imperative that designers check the catchment response against an alternative model and confirm that the volume of runoff is consistent with the loss model used (refer to Book 9, Chapter 6, Section 4). If a rainfall excess model is used, the excess represents the volume of runoff that appears at the catchment outlet, not the rainfall applied to the model, so depression storage needs to be factored into the losses.
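Where a platform supports depth-varying roughness, the variation is commonly supplied as a lookup of Manning's n against flow depth, as sketched below; the break points and roughness values are illustrative only and should be based on local calibration and published guidance.

    # Sketch of a depth-varying Manning's n lookup: shallow sheet flow is assigned a higher
    # roughness than deeper, more channelised flow. Values are illustrative only.
    depth_n_table = [
        (0.01, 0.10),   # very shallow sheet flow
        (0.03, 0.06),
        (0.10, 0.035),
        (0.30, 0.025),  # deeper flow approaching the conventional surface value
    ]

    def manning_n(depth_m):
        """Interpolate Manning's n for a given flow depth (m)."""
        if depth_m <= depth_n_table[0][0]:
            return depth_n_table[0][1]
        for (d0, n0), (d1, n1) in zip(depth_n_table, depth_n_table[1:]):
            if depth_m <= d1:
                return n0 + (n1 - n0) * (depth_m - d0) / (d1 - d0)
        return depth_n_table[-1][1]

    for depth in (0.005, 0.02, 0.05, 0.2, 0.5):
        print(depth, round(manning_n(depth), 3))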
The design of stormwater infrastructure and understanding of runoff for urban areas involves
decisions at multiple scales. This insight can be combined with ensembles of design rainfall
patterns to determine the appropriate rainfall inputs as shown (for example) by the Box and
Whisker plot of peak runoff (discharge) to the catchment outlet in Figure 9.6.3.
Figure 9.6.3. Example of a Box and Whisker Plot of Peak Stormwater Runoff Utilised to
Select the Critical Storm Burst Ensemble and Other Design Information
Figure 9.6.3 indicates that the highest average and median peak discharge at the catchment outlet is generated by the ensemble of 10 storm bursts of 25 minute duration. A small number
of higher values of peak runoff also occur in the 45 minute (maximum value) and 120 minute
(far outlier value) durations which could be used to test the potential maximum hazard of
surface flows. Conveyance infrastructure within the catchment should be designed using
ensembles of storms with durations up to and including 25 minutes to account for impacts of
smaller duration storms upstream of the outlet. Different design ensembles may apply in
situations that incorporate within catchment storage solutions and at different locations in the
urban catchment.
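The selection illustrated in Figure 9.6.3 can be reproduced from model output by computing statistics of the ensemble peak flows for each burst duration and identifying the duration with the highest mean (or median) peak, as in the sketch below; the peak flow values are invented placeholders arranged to mirror the figure.

    # Sketch of critical burst duration selection from ensembles of ten temporal patterns.
    # Peak flows (m3/s) below are invented placeholders, not model results.
    from statistics import mean, median

    ensemble_peaks = {   # burst duration (min) -> peak flow for each of ten patterns
        15:  [1.9, 2.1, 2.0, 2.3, 1.8, 2.2, 2.0, 2.1, 1.9, 2.2],
        25:  [2.4, 2.6, 2.5, 2.8, 2.3, 2.7, 2.5, 2.6, 2.4, 2.7],
        45:  [2.1, 2.3, 2.2, 3.1, 2.0, 2.4, 2.2, 2.3, 2.1, 2.4],
        120: [1.5, 1.7, 1.6, 1.8, 1.4, 2.9, 1.6, 1.7, 1.5, 1.8],
    }

    stats = {d: (mean(q), median(q), max(q)) for d, q in ensemble_peaks.items()}
    critical_duration = max(stats, key=lambda d: stats[d][0])
    for d, (avg, med, mx) in sorted(stats.items()):
        print(f"{d:4d} min  mean={avg:.2f}  median={med:.2f}  max={mx:.2f} m3/s")
    print("Critical burst duration (highest mean peak):", critical_duration, "min")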
This improved approach to design rainfall inputs to models is particularly important for urban catchments, which differ significantly from rural catchments because they generate runoff from the majority of rainfall, as shown in Figure 9.6.4.
Figure 9.6.4 demonstrates that urban runoff can be generated by pre-burst, burst and post-
burst proportions of complete storms (entire storm event). There are many different
configurations of pre-burst, burst and post-burst rainfall in real rainfall events that should be
considered in analysis of urban hydrology. Urban designs based on a single burst pattern of
rainfall or peak rainfall assumptions can overlook substantial runoff rates and volumes which
may adversely impact on the performance of inlet structures in conveyance networks,
volume management measures, roads and overland flow paths (Coombes et al., 2015b).
A range of updated rainfall products are available from the ARR Data Hub (Babister et al., 2016), including new spatially distributed IFD data, Areal Reduction Factors (ARF), design temporal patterns for burst rainfall, hydrological losses, and pre-burst rainfall, as summarised in Table 9.6.5.
Recoverable entries from Table 9.6.5, showing the progression from earlier practice to the current guidance:
• IFD data – Book 2, Chapter 3.
• ARF data – from Figure 2.7 (based on US data), through FORGE work (except NSW), to new equations derived using Australian data – Book 2, Chapter 4.
• Temporal patterns – from a single temporal pattern of design burst rainfall based on the Average Variability Method (AVM), through AVM filtered for embedded bursts, to an ensemble of real storms – Book 2, Chapter 5.
• Spatial pattern – from the catchment centroid to spatially distributed IFD.
The different rainfall inputs to hydrology and hydraulic models are discussed in Book 2. The
updated IFD design rainfall data is available from the BoM website. Derivation of the IFD
data using the additional rainfall records is outlined in Book 2, Chapter 3, Section 4 and the
application of the updated IFD design rainfalls is presented in Book 2, Chapter 3, Section 9.
ARFs are available from the ARR Data Hub and are discussed in Book 2, Chapter 4. Design rainfalls (IFD) only apply at a point in a catchment. When estimates of rainfall runoff are required for catchments with areas greater than 10 km2, the design rainfall intensities at a point are not representative of the areal average rainfall intensity for the entire catchment. The ARF is the ratio between the design values of areal average rainfall and point rainfall, for the same duration and Annual Exceedance Probability (AEP). Application of ARFs is outlined in Book 2, Chapter 4, Section 3.
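Applying an ARF is then a simple scaling of the point design rainfall for the same duration and AEP, as in the short sketch below; the point depth and ARF value are placeholders, with actual values taken from the BoM IFD service and the ARR Data Hub equations respectively.

    # Sketch only: areal average design rainfall = ARF x point design rainfall,
    # for the same duration and AEP. Values below are placeholders.
    point_rainfall_mm = 78.0   # e.g. a 1% AEP, 2 hour point design rainfall depth
    arf = 0.93                 # areal reduction factor for the catchment area, duration and AEP

    areal_rainfall_mm = arf * point_rainfall_mm
    print(f"Areal average design rainfall: {areal_rainfall_mm:.1f} mm")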
Most runoff-routing methods utilise design temporal patterns to determine the timing of rainfall falling on the catchment and to generate hydrographs of runoff. The traditional use of a single average temporal pattern has been found to be inadequate for hydrological analysis due to the variability of natural rainfall patterns (Book 2, Chapter 5) and of the characteristics of urban catchments (Book 9, Chapter 3). The application of design temporal patterns is outlined in Book 2, Chapter 5, Section 9. Ensembles of design temporal patterns that are
more likely to capture these natural and human variabilities are available from the ARR Data
Hub. It is noted that two different ensemble patterns are provided, point rainfall patterns for
catchments with areas up to 75 km2 and areal rainfall patterns for catchments with areas
greater than 75 km2.
Climate change has the potential to alter the frequency and severity of rainfall events, storm
surge and floods by altering rainfall IFD relationships, rainfall temporal patterns, continuous
rainfall sequences, antecedent conditions and baseflow regimes (Book 1, Chapter 6; Book 2,
Chapter 2, Section 4). Interim climate change factors are presented as changes in average
temperature and associated percentage increases in rainfall intensity for selected global
climate models (GCM) in the ARR Data Hub. These values should be applied in the context
of the risk decision tree processes provided in Book 1, Chapter 6, Section 3. These interim
values are subject to continuing research and evolving science.
The ARR Data Hub provides regional rural losses for complete storms and pre-burst rainfall. In urban areas, the median values of local losses should be utilised wherever possible. The average initial loss from urban impervious surfaces is less than 1 mm (Book 4, Chapter 2, Section 7) and ranges from 1 mm to 4 mm for urban effective impervious areas (Book 5, Chapter 3, Section 4). In most cases, the storm burst initial loss is equal to the median storm initial loss less the pre-burst rainfall.
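The burst loss adjustment described above can be expressed as a simple calculation, sketched below with placeholder values; actual storm initial losses and pre-burst depths should come from local data or the ARR Data Hub.

    # Sketch of the burst initial loss adjustment: burst IL = storm IL - pre-burst rainfall,
    # floored at zero. All values are placeholders for illustration.
    def burst_initial_loss(storm_initial_loss_mm, pre_burst_rainfall_mm):
        return max(storm_initial_loss_mm - pre_burst_rainfall_mm, 0.0)

    pre_burst_mm = 12.0            # median pre-burst rainfall for the location, duration and AEP
    pervious_storm_il_mm = 25.0    # complete-storm initial loss for pervious areas
    impervious_storm_il_mm = 1.0   # initial loss for directly connected impervious areas

    print("Pervious burst IL:  ", burst_initial_loss(pervious_storm_il_mm, pre_burst_mm), "mm")
    print("Impervious burst IL:", burst_initial_loss(impervious_storm_il_mm, pre_burst_mm), "mm")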
Rural and regional loss assumptions should not be a default assumption for urban areas and
a hierarchy for selecting urban losses is highlighted as follows:
• Use local losses based on GIS investigations, local knowledge and observations. Losses derived at a regional scale are not local losses; use local losses in small scale models. Note that a well-constructed model with adequate spatial scale should account for effective impervious area and connectivity effects.
• Regional losses (Book 5, Chapter 3, Section 4 and Book 5, Chapter 3, Section 5):
Impervious area losses: IL: <1 mm, CL: 0 mm/hr; Effective Impervious Area: IL: 1-2 mm,
CL: 0 mm/hr; Pervious area ≈ rural losses
Radar rainfall (refer Cecinati et al. (2017)) can be used to interpolate between point rainfall observations for use in hydrology and 2D hydraulic models. Many studies have developed methods to correct errors in radar rainfall, but some residual errors are intrinsic to radar rainfall and should be resolved by spatial and temporal comparison with point rainfall observations.
Figure 9.6.5. Stormwater Runoff from Roofs and Properties – Lot Scale Effects
Figure 9.6.5 demonstrates the pathways of stormwater runoff from different surfaces within a
property. These runoff processes are dominated by directly connected impervious surfaces,
indirectly connected surfaces and pervious surfaces. Rain falling on impervious roof surfaces flows into roof gutter storages, which discharge via downpipes into pipes connected to the street gutter or pipe network. Runoff from impervious driveway surfaces and adjacent road surfaces discharges to street gutters. These impervious surfaces facilitate highly efficient
translation of rainfall into runoff, are subject to small depression storage losses, and are
mostly directly connected to street gutters. Rain falling on pervious yard areas is partially
retained in depression storages and infiltrates into soil profiles prior to generation of runoff
from residual rainfall. These types of pervious surfaces are relatively inefficient at generating
runoff and are often indirectly connected to street gutters or pipe networks. Urban properties
can also include impervious areas that discharge stormwater to pervious surfaces or
storages (for example, rainwater tanks, onsite detention and raingardens) that partially
disconnect these surfaces from street gutters.
Runoff from impervious surfaces may also arrive at street gutters more rapidly than runoff
from pervious surfaces. In many situations, pervious surfaces may not generate runoff for
frequent rainfall events. These runoff behaviours are influenced by the configuration of
property assets (including building form), topography and stormwater management
measures. In situations where allotments slope away from roads, runoff from roofs and
impervious surfaces may be directed to an inter-allotment conveyance (easement drainage)
network. Local authorities will often specify locations of stormwater discharges from
properties – this is known as a legal discharge point. Subsoil drains are sometimes used on
properties to lower water tables around buildings or in waterlogged areas and discharge
stormwater from properties.
Property scale influences are fundamental to urban stormwater runoff. However, there has
been limited testing at this scale (Stephens and Kuczera, 1999), and designs of roof and
property drainage are not clearly defined (Jones et al., 1999). A major challenge for
simulation of urban stormwater runoff is the behaviour of individual properties and
accumulation of these property behaviours throughout urban catchments (Goyen, 2000; Coombes, 2015; Book 9, Chapter 3). The cumulative impacts of properties on the behaviour
of catchments are defined by the timing, volume and rate of stormwater runoff from each
property. The runoff behaviour of properties can also be altered by a range of onsite
stormwater management approaches including disconnection of roof downpipes from street
gutters, raingardens, landscaping, rainwater tanks, infiltration measures, onsite detention
and green spaces (refer Book 9, Chapter 3 and Book 9, Chapter 4). Local authorities can
apply restrictions on the flow rate, quantity and quality of stormwater that discharges from a
property to encourage onsite management of stormwater to avoid or reduce downstream
impacts (Chocat et al., 2001; Patouillard and Forest, 2011; Walsh et al., 2012; Everard and
McInnes, 2013).
Simple methods for design of roof gutters, downpipes and property drainage are provided in
Australian Standards (for example, AS/NZS 3500.3), by suppliers of roofing materials,
government authorities and the Plumbing Code of Australia. These approaches include
nominal and general methods. Nominal methods apply to single dwellings on properties with
land areas up to 1,000 m2 by providing “deemed to comply” specifications of infrastructure
(configuration, minimum pipe sizes, depth of cover over pipes and slopes).
Design calculations are provided for more complex land uses and larger properties. These
guidelines highlight the need to avoid ponding against buildings, flows into buildings and
management of overland flows from adjoining properties. Large residential, commercial and
industrial properties and car parks include more complex and dendritic stormwater
management systems (for example Figure 9.6.6).
Figure 9.6.6. Stormwater Management System for Larger Properties with Complex Land
Uses
Figure 9.6.6 shows that stormwater management schemes within properties may combine
multiple pathways of stormwater runoff from different surfaces that have variable levels of
connection to the street gutter or inlet pit in the street conveyance network. The performance
of these networks may be affected by in-pipe attenuation effects, volume management
measures, and substantial variations in the timing and magnitude of runoff to sub-catchment
outlets. These outflows from properties are surface flows, or direct inflows to pipe networks
in streets or inter-allotment conveyance networks.
Approximate or general design methods are based on rules derived from simple Rational
Method assumptions and utilise catchment areas (roofs, paved surfaces and gardens),
proportions of imperviousness, slopes, assumed times of concentration with associated
average rainfall intensities and runoff coefficients to generate maximum or peak flow rates. A
five minute time of concentration and associated rainfall intensity was commonly assumed in
design processes for roof and property drainage. Performance standards for roofs have
been defined by choice of rainfall intensity of a 5% AEP for roof gutters and of a 1% AEP for
box gutters. Design of conveyance networks within properties aims to avoid surcharges and
overland flows for 1 EY in low density areas and up to 5% AEP for important land uses (such
as hospitals and aged care facilities) that may be vulnerable to greater risk or inconvenience.
The volume, pattern and timing of stormwater runoff are not considered in these approaches, which may lead to under-performance of stormwater management measures, including unexpected surface flows on properties.
Field measurements suggest that travel time to street gutters from residential properties is
two minutes or less (Stephens and Kuczera, 1999; Coombes, 2002). The assumption of five
minute time of concentration in ARR 1987 (Pilgrim, 1987) was based on the lowest available
time interval of IFD rainfall at the time. Revised IFDs available from the BoM provide values for rainfall intensity that commence at a one minute duration, which permits finer detail in design and accounts for shorter flow times to outlets. Observations by Stephens and
Kuczera (1999), Goyen (2000) and Coombes (2002) indicate that initial losses from roof
gutter systems range from 0 mm to 1 mm and continuing losses range from 0% for metal
roofs to 20% for dry tile roofs. Average depression storage losses of impervious surfaces
can range from 1 mm to 10 mm and average losses from pervious surfaces range from 2
mm to 20 mm.
Goyen and O'Loughlin (1999b) highlighted that spatial and temporal patterns of rainfall
losses and their magnitude have significant impacts on peak stormwater runoff. Larger scale
and more general estimates of losses are provided in Book 3, Chapter 3. Wherever possible,
local information on losses should be incorporated in analysis of stormwater runoff and
associated designs of infrastructure.
More detailed hydrograph routing methods may be required for larger properties with
complex land uses to design infrastructure for given performance standards, and to
understand the behaviour of the stormwater management system. The need to manage
inflows of groundwater and surface runoff to basements on some properties will also require
volume based analysis to understand the extent of flooding and to design pump out
infrastructure. Argue (2004) provides a range of simple methods for including volumes in the
small scale design processes that are known as “regime in balance” and accounting for
“emptying times” of storages.
This modelling process includes details of different surfaces within sub-catchments that
influence stormwater runoff to inlet structures within the property stormwater management
network. The analysis should include the characteristics of pervious and impervious surfaces
– such as initial and continuing losses, sub-catchment areas, slopes and details of overland
flow paths. This approach is similar to the design and analysis process for public stormwater
conveyance (street drainage) networks.
Use of ensembles of storm burst rainfall will ensure that the stormwater management system
for a property is tested by a range of equally likely storm patterns and volumes of rainfall.
This will permit a more complete understanding of potential surface flow paths within the
property and in the adjacent street gutter, and the impacts on downstream infrastructure.
In addition, use of complete storms or inclusion of pre-burst rainfall with the burst rainfall patterns will assist with defining the likely magnitude of overland flow behaviours at the property. Initial losses in the analysis may need to be set to zero if the magnitude of pre-
burst rainfall is greater than the capacity of depression storages on the property. At some
locations, the residual pre-burst rainfall may also contribute to additional runoff and overland
flows within the property. These approaches can be combined in a range of computer
modelling packages.
The ability of peak flow or event based models to describe runoff behaviours is limited in situations where the joint probability of antecedent conditions and storm events is not well defined (refer to Book 4, Chapter 3, Section 3) and where there are continuous responses to
complete storm events. These limitations apply to stormwater strategies that include volume
storage measures, rainwater or stormwater harvesting, and water quality solutions.
In these situations, continuous simulation using real local rainfall or synthetic rainfall
sequences can be utilised to test the continuous interactions between key components of the stormwater management system. The results from continuous simulation can be directly
interrogated to understand key performance criteria such as annual average reduction in
water demand, stormwater runoff and nitrogen loads created by rainwater harvesting and
raingardens. Alternatively, continuous simulation can provide distributions of available
storage in volume management measures (such as rainwater tanks, infiltration measures
and bioretention devices) or soil profiles prior to storm events versus frequency of storm
events that can be used in event based analysis (Coombes and Barry, 2008b; Hardy et al.,
2004) as shown, for example, in Figure 9.6.7.
Figure 9.6.7. Example Distribution of Available Storage Prior to Storm Events versus Annual
Exceedance Probability (AEP) of Storm Events
Figure 9.6.7, for example, demonstrates the average retention storage available in a rainwater tank (capacity of 5 m3, collecting runoff from a 100 m2 roof area and supplying household indoor and outdoor uses) prior to storm events of a given AEP, derived using continuous simulation. This type of information can be used in event based models to
determine stormwater peak flows and runoff volumes. These results will vary significantly
with different land uses, building form and throughout Australia.
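A highly simplified sketch of the kind of continuous water balance that underpins Figure 9.6.7 is given below; it uses a daily time step, a constant demand and an invented rainfall sequence purely for illustration, whereas real analyses use long sub-hourly rainfall records and observed demand behaviour.

    # Simplified sketch of a continuous rainwater tank water balance used to derive the
    # storage available prior to storm events. Parameters and rainfall are illustrative only.
    def simulate_tank(daily_rain_mm, roof_area_m2=100.0, tank_capacity_m3=5.0,
                      daily_demand_m3=0.3, roof_loss_mm=0.5):
        """Return the available (empty) storage in m3 at the start of each day."""
        volume = 0.0
        available = []
        for rain in daily_rain_mm:
            available.append(tank_capacity_m3 - volume)
            inflow = max(rain - roof_loss_mm, 0.0) / 1000.0 * roof_area_m2   # m3 of roof runoff
            volume = min(volume + inflow, tank_capacity_m3)                  # excess spills to drainage
            volume = max(volume - daily_demand_m3, 0.0)                      # household use
        return available

    rain_sequence = [0, 0, 12, 0, 0, 0, 35, 2, 0, 0, 0, 0, 60, 5, 0, 0, 0, 8, 0, 0]  # mm/day
    print([round(v, 2) for v in simulate_tank(rain_sequence)])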
Figure 9.6.8 highlights that an urban sub-catchment may contain a range of different land
uses, including (for example) unit and detached residential dwellings on properties, a park
and part of a road. These land uses incorporate different surfaces, including roofs, paved
areas (impervious), garden and grassed areas (pervious) that produce different regimes of
stormwater runoff.
The behaviour of the sub-catchment surfaces can be estimated using lumped catchment
approximations which are based on sub-catchment area, the total impervious area (TIA) and
a travel time (or time of concentration) for a critical rainfall duration to the inlet structure
(refer Book 5, Chapter 2, Section 2 and Book 9, Chapter 5, Section 5). For example, a sub-catchment might have an area of 4,250 m2, an impervious proportion of 56% and a time of concentration that depends on rainfall intensity, slope and distance to the inlet structure. In the absence of
other data, these types of approximations could be used in simple calculations or in
computer models. However, it is preferable to construct analysis of urban sub-catchments
using local details which can be sourced from site inspection, survey plans and inquiry using
GIS.
connectivity of different types of surfaces with storages to the inlet can dramatically change
travel times and peak flows. Roofs may discharge via pipes to street gutters that facilitate
rapid transfer of runoff volume to inlets. Runoff from road surfaces to the gutter may arrive at
a similar or earlier time (refer Figure 9.6.8).
Other impervious surfaces may discharge via driveways to the street gutter which produces
a different time for stormwater runoff to reach the inlet structure. Pervious surfaces also
discharge to the street gutter, partially via impervious surfaces, to the inlet. Thus the timing
of the arrival of runoff volumes to the inlet is dependent on these many different
configurations and characteristics within the sub-catchment. As a result, the performance of the inlet
structure and the magnitude of surface bypass flows are dramatically affected by these
considerations. These complex processes can be better described by semi-distributed (link-
node) and distributed (grid) computer models (Book 5, Chapter 2, Section 4; Book 6,
Chapter 4, Section 7) that explicitly combine these details with pre-burst rainfall and
ensembles of burst rainfall patterns.
Regional analysis of a small number of urban catchments provides estimated initial losses of
1 – 3 mm for EIA and 20 – 30 mm for indirectly connected areas in sub-catchments (Book 5,
Chapter 3, Section 5). Estimated median continuing losses were 2.5 mm/hour in South East
Australia and 1 – 4 mm/hour elsewhere. These event based regional values should only be
used in the absence of local data. It is essential that assumptions about losses in stormwater
models are based on assessment of local conditions. The magnitude of losses is also
impacted by Antecedent Moisture Conditions (AMC), which are altered by garden watering in urban areas and by available
storage in volume management measures throughout the sub-catchment. It is unlikely that
event based models can fully account for these effects. Sensitivity checks, Monte Carlo
processes and continuous simulation can be utilised to include the variation in AMC and
available storage within urban sub-catchments.
Urban drainage was historically designed using peak flows derived from peak rainfall intensity or peak rainfall bursts, in accordance with the assumption that only peak flowrates affect conveyance infrastructure. Many urban drainage networks are operating below
anticipated service levels due to a range of impacts including increased density of urban
areas. Analysis by Coombes et al. (2015a) indicates that the absence of stormwater runoff
volumes in design processes based on peak runoff assumptions may partially explain under-
performance of some urban drainage networks. The performance of inlet structures and
therefore drainage networks can also be affected by the volume of stormwater arriving at the
structure, variations in rainfall temporal patterns and by pre-burst rainfall that was not
included in the design process. The uncounted volumes of stormwater runoff in peak flow
and storm burst assumptions can become additional and unexpected overland or bypass
flows in urban systems.
Analysis and design of stormwater management and flooding in urban areas was historically based on separate hydrology and hydraulic processes, and was focused at the network scale.
A key objective of these processes was determination of flows in conveyance infrastructure
such as pipes and open channels to avoid surcharges and bypass flows at inlets (refer to
Book 9, Chapter 5, Section 5) to avoid nuisance, property damage and risk to life (refer to
Book 9, Chapter 5, Section 2 and Book 6, Chapter 7). These urban conveyance networks
include significant surface flows, usually along roads and through open spaces, from sub-
catchments into and throughout conveyance networks. These flows from sub-catchments to
inlets and within conveyance networks were determined as a hydrological process as an
input to hydraulic models of conveyance networks (see Book 9, Chapter 5, Section 6and
Book 5, Chapter 2).
Overland or surface flows are a key consideration in analysis and design of urban
stormwater management infrastructure. The dominant urban hydraulic response to rare
rainfall events (such as 1% AEP) is often overland flows on roads and across open space.
Emerging methods of analysis and design of urban stormwater involve combined hydrology
and hydraulic models to better understand surface flows throughout urban catchments.
These methods include coupled one and two-dimensional models, and direct rainfall
(rainfall-on-grid) models. Book 6, Chapter 4, Section 7, and Babister and Barton (2012)
provide detailed discussion about these approaches.
The flowrates, depth and area of surface flows in urban catchments are highly sensitive to
different temporal patterns and volumes of rainfall (Babister and Barton, 2012). Similarly,
Goyen (1981), Goyen (2000) and Coombes et al. (2015a) found that the performance of
conveyance infrastructure also varies with temporal patterns and volumes of rainfall. It is
recommended that ensembles of ten temporal patterns of design rainfall are used for
investigation of the hydrology and hydraulic processes in urban areas. The separation of
hydrologic and hydraulic routing is often blurred in analysis of urban areas which fosters
complicated decisions around the use of hydrologic inputs and their interaction with hydraulic
models. An overview of the different approaches to rainfall inputs provided by this guideline, compared to the ARR 1987 approach, is presented in Figure 9.6.9.
Figure 9.6.9 highlights that this guideline provides ensembles of 10 temporal patterns for each region, which is a departure from the single event process supported by ARR 1987.
These rainfall inputs can be used in hydrology and hydraulic modelling as required for
different design and assessment tasks (refer to Book 2, Chapter 4 for further detail). The
rapid assessment approach is not recommended for design of urban conveyance networks
and the Monte Carlo processes can be used in special cases. It is expected that rainfall
ensembles in hydrologic simulations, and in hydrologic and hydraulic simulations would be
commonly utilised in urban conveyance networks. The process of using rainfall ensembles in
hydrology is outlined in Figure 9.6.10.
Figure 9.6.10 shows that the inputs to analysis of the conveyance network include IFD
information from the BOM, ensembles of rainfall temporal patterns, regional losses, pre-burst
rainfall and Areal reduction factors from the ARR Data Hub. Wherever possible, local losses
derived in accordance with Book 9, Chapter 6, Section 4 and Book 9, Chapter 6, Section 4
should be used in preference to regional losses for urban areas. These inputs are used in a
hydrology model to generate ensembles of peak flows throughout the urban catchment for
various storm durations and the required quantiles or AEPs of storm events. Mean peak
flows are derived for key locations in the catchment and the rainfall temporal pattern that
produces peak flows closest to the mean peak flows are utilised in the hydraulic model. This
approach may be better suited to models with longer run times as considerable time can be
expended determining critical durations in both hydrology and hydraulic models.
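The pattern selection step in Figure 9.6.10 can be sketched as follows: for each duration the ensemble mean peak flow is computed from the hydrology model results, the critical duration is identified, and the pattern whose peak is closest to that mean is carried into the hydraulic model; the peak flow values below are placeholders.

    # Sketch of selecting the temporal pattern closest to the ensemble mean peak flow.
    # Peak flows (m3/s) are placeholders, not model results.
    from statistics import mean

    peaks = {   # duration (min) -> {pattern id: hydrologic peak flow}
        30: {1: 3.1, 2: 3.4, 3: 2.9, 4: 3.6, 5: 3.2, 6: 3.0, 7: 3.5, 8: 3.3, 9: 2.8, 10: 3.4},
        60: {1: 3.6, 2: 3.9, 3: 3.4, 4: 4.1, 5: 3.7, 6: 3.5, 7: 4.0, 8: 3.8, 9: 3.3, 10: 3.9},
        90: {1: 3.2, 2: 3.5, 3: 3.1, 4: 3.7, 5: 3.3, 6: 3.2, 7: 3.6, 8: 3.4, 9: 3.0, 10: 3.5},
    }

    mean_peaks = {d: mean(p.values()) for d, p in peaks.items()}
    critical_duration = max(mean_peaks, key=mean_peaks.get)
    pattern_peaks = peaks[critical_duration]
    representative = min(pattern_peaks,
                         key=lambda k: abs(pattern_peaks[k] - mean_peaks[critical_duration]))

    print("Critical duration:", critical_duration, "min")
    print("Mean peak flow:", round(mean_peaks[critical_duration], 2), "m3/s")
    print("Pattern closest to the mean peak:", representative)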
Figure 9.6.10. Design Process that Utilises Rainfall Ensembles in Hydrology to Select the
Rainfall Pattern Closest to Mean Peak Flows for use in Hydraulic Analysis
The processes outlined in Figure 9.6.10 produce a single estimate of flood depth for each
selected quantile or AEP. It is important to highlight that the critical rainfall duration and
temporal pattern estimated using the hydrology model is likely to be different to the critical
rainfall and temporal pattern relevant to the hydraulic simulations. These differences
between critical hydrology and hydraulic inputs can have substantial impacts on the design
of infrastructure and understanding of surface flows.
In situations where the hydraulic impacts of the design processes are significant, rainfall
ensembles can be used in the hydrologic and hydraulic simulations as outlined in
Figure 9.6.11.
Figure 9.6.11. Design Process that Utilises Rainfall Ensembles in Hydrology and Hydraulic
Simulations to Select the Mean Pattern for Analysis of Flooding
Figure 9.6.11 outlines the process for utilising ensembles of rainfall patterns in hydrology
and hydraulic models. This process is better suited to situations where there are shorter
model run times, critical flooding considerations and for coupled hydrology and hydraulic
models. The processes outlined in Figure 9.6.10 and Figure 9.6.11 may also need to be
applied to understand critical rainfall durations and patterns at key internal locations within
catchments.
This guideline supports a number of modelling techniques and Table 9.6.1 and Table 9.6.2
provide guidance on selection of modelling approaches. Use of coupled 1D/2D and direct
rainfall models was necessary to understand within-catchment surface flows and flooding.
The potentially short model run times and the need to understand local flooding support the use of
rainfall ensembles in both hydrology and hydraulics models. This study combined a well-
known hydrology model with a popular 2D and 1D hydraulic model that relies on second
order finite-difference schemes to simulate the hydrodynamics of floodplains and waterways.
This case study discusses three modelling options that were designed to account for within
catchment overland flows and flooding:
• Use of a hydrology model to generate overland flows from small sub-catchments for use in
a coupled 1D/2D hydraulic model. Individual properties, roofs and small area land
surfaces were assigned as sub-catchments (refer to Book 9, Chapter 6, Section 4) in the
hydrologic model to capture the rainfall concentration phase of stormwater runoff into the
hydraulic model. This approach is necessary to understand within catchment flooding.
• A concentrated direct rainfall model where rainfall is applied to polygons of different land
surfaces separated by perviousness and connectivity to the hydraulic 1D/2D model. These
concentrated land surfaces also account for rainfall losses.
• Direct rainfall-on-grid where rainfall, after accounting for initial and continuing losses, was
applied to all active grid cells. A fixed grid of 2 m by 2 m cells was employed in the hydraulic model.
Direct rainfall methods are known to trap volumes of rainfall in depressions and in areas with
high roughness throughout 2D hydraulic models. The value of a carefully constructed direct
rainfall model is the ability to identify sub-catchment flow paths, contributing areas and
storage. However, the designer must ensure that catchment storages or initial losses are not
double counted in simulations by the addition of regional loss assumptions. Given that
there is a paucity of research into the accuracy of direct rainfall models, it is recommended
that results of direct rainfall methods are compared with traditional methods by examination
of the characteristics of the hydrographs produced by both methods (Babister and Barton, 2012).
A suitable method of representing buildings and good quality topography data are also
required to produce accurate urban stormwater runoff behaviour. A mass balance or
volume error check is also recommended.
The historical process of determining rainfall loss parameters using ARR 1987 assumptions,
including soil type and antecedent moisture content parameters (AMC) from the ILSAX
model, is provided in Table 9.6.6 for comparison.
This guideline provides a range of up-to-date parameters for use in analysis. The catchment
is located within the East Coast South temporal pattern region. Temporal patterns for the
East Coast South region and Intensity Frequency Duration (IFD) rainfall depths were
downloaded from the ARR Data Hub website. This information is combined to construct
ensembles of 10 rainfall patterns for the required flood quantiles (AEP). This case study
focuses on the 1% AEP storm. The initial and continuing storm losses of 28 mm and 1.6
mm/hour for rural areas, and median pre-burst rainfall of 1.1 mm associated with a one hour
1% AEP storm event can also be downloaded from the ARR Data Hub.
It is recommended that varied rainfall losses are applied to different types of surfaces in the
catchment. These surfaces include urban pervious areas such as parks, and impervious
areas such as roads, median strips and building roofs. The identified impervious areas were
split up into Effective Impervious Area (EIA) and Indirectly Connected Impervious Area
(ICIA).
Effective Impervious Area represents the portion of a catchment area that has an impervious
response. Due to the highly urbanised nature of the catchment this portion was identified as
75% of the total impervious area. The remaining area that is not classified as Effective
Impervious Area is Indirectly Connected Impervious Area (25%). Building roofs were
identified separately as Indirectly Connected Impervious Area as the down pipes were not
assumed to directly discharge into the storm water pipes. The information from the ARR
Data Hub is modified by loss values for urban catchments that are provided in Book 5,
Chapter 3 and in Book 9, Chapter 6, Section 4 as summarised in Table 9.6.7.
Table 9.6.7. ARR 2016 Rainfall Loss Parameters for Urban Areas
In event based modelling approaches, it is important to subtract pre-burst rainfall from local
losses associated with impervious and pervious surfaces as follows:
Burst initial loss = Storm initial loss − Pre-burst rainfall (for Burst initial loss ≥ 0)
For example, the burst initial loss for effective impervious area is 1.5 – 1.1 = 0.4 mm. The
adopted burst losses for the urban surfaces are presented in Table 9.6.8. Note that in a
situation where pre-burst rainfall is greater than the storm initial losses, the residual pre-burst
rainfall should be included in the analysis.
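This subtraction can be implemented directly. The sketch below reproduces the worked example for the effective impervious area and also returns any residual pre-burst rainfall where the pre-burst depth exceeds the storm initial loss; the function name is illustrative only.

```python
def burst_initial_loss(storm_initial_loss_mm, pre_burst_mm):
    """Burst initial loss = storm initial loss - pre-burst rainfall, floored at
    zero. Any residual pre-burst rainfall (pre-burst exceeding the storm
    initial loss) is returned so it can be added back into the analysis."""
    burst_il = max(storm_initial_loss_mm - pre_burst_mm, 0.0)
    residual_pre_burst = max(pre_burst_mm - storm_initial_loss_mm, 0.0)
    return burst_il, residual_pre_burst

# Worked example from the text: 1.5 mm storm initial loss on effective
# impervious area and 1.1 mm median pre-burst rainfall
il, residual = burst_initial_loss(1.5, 1.1)
print(round(il, 2), round(residual, 2))  # 0.4 mm burst initial loss, no residual
```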
Hydraulic behaviour and associated flooding are influenced by the hydraulic resistance due to
topography and urban form. The selection of appropriate roughness coefficients is critical to
the success of this approach (see Book 6, Chapter 4). Depth varying Manning’s “n”
roughness parameters were selected for each land use to account for shallow overland flow
depths across urban surfaces. Some hydraulic modelling packages provide this capability in
accordance with emerging research into depth varying roughness (for example, Zahidi et al.
(2017), Khrapov et al. (2015), Mugler et al. (2011)).
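Hydraulic packages implement depth-varying roughness in different ways. The sketch below shows one simple form, a linear variation of Manning's n between a high shallow-flow value and a lower deep-water value; the threshold depths and roughness values are illustrative assumptions only, not recommended parameters.

```python
import numpy as np

def depth_varying_n(depth_m, n_shallow=0.10, n_deep=0.03,
                    d_shallow=0.03, d_deep=0.10):
    """Linearly interpolate Manning's n between a high value for very shallow
    overland flow and a lower value for deeper flow (illustrative parameters
    only; np.interp holds the end values outside the depth range)."""
    return np.interp(depth_m, [d_shallow, d_deep], [n_shallow, n_deep])

print(depth_varying_n(np.array([0.01, 0.05, 0.20])))  # shallower flow -> higher n
```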
Table 9.6.9. Assumed Capacity of Inlet Pits for 1% AEP Rain Events
The critical rainfall duration for the catchment was derived using the ensembles of rainfall
temporal patterns in the combined hydrology and 2D hydraulic model to reveal the highest
mean and median flood elevations at key locations, as shown in Figure 9.6.13.
Figure 9.6.13. Use of Ensembles of Storm Bursts (1% AEP) in the Hydraulic Model to Select
Critical Duration
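Post-processing of the ensemble results to identify the governing duration can be reduced to comparing ensemble-mean peak flood elevations across candidate durations at a key location. The sketch below illustrates this comparison with hypothetical levels; in this case study the same approach indicated a 60 minute critical duration.

```python
import numpy as np

def critical_duration(levels_by_duration):
    """Return the storm duration whose ensemble-mean peak flood elevation is
    highest at a key location (illustrative post-processing only).

    levels_by_duration : dict mapping duration (minutes) -> list of peak flood
        elevations (m AHD), one value per temporal pattern.
    """
    mean_levels = {d: float(np.mean(z)) for d, z in levels_by_duration.items()}
    governing = max(mean_levels, key=mean_levels.get)
    return governing, mean_levels

# Hypothetical peak flood levels for three candidate durations
levels = {
    30: [21.42, 21.45, 21.40, 21.48, 21.44],
    60: [21.55, 21.52, 21.58, 21.50, 21.56],
    90: [21.47, 21.49, 21.46, 21.51, 21.48],
}
duration, means = critical_duration(levels)
print(duration, round(means[duration], 2))  # the 60 minute duration governs here
```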
Figure 9.6.13 reveals that the use of ensembles of rainfall in the hydraulic model indicates
that different critical rainfall durations apply throughout the catchment. The results from
Figure 9.6.13 were used with consideration of the characteristics of the catchment to select
the critical storm duration of 60 minutes. The impact on stormwater runoff from using the
single storm burst pattern from ARR 1987 is compared to use of an ensemble of ten storm
burst patterns (1% AEP) from this guideline for part of the Woolloomooloo catchment in
Figure 9.6.14. This graph presents ten hydrographs of stormwater runoff in the trunk
drainage system at Bourke Street confluence.
Figure 9.6.14. Example of Runoff from ARR 1987 Single Storm Burst and Ensembles of
Storm Bursts from this guideline (1% AEP)
Figure 9.6.14 demonstrates that a single pattern of burst rainfall from ARR 1987 produces a
different hydrograph shape, volume and peak runoff at the catchment outlet to the ensemble
of storm burst patterns from this guideline. This difference is driven by the 30 years of
additional data and science available to this guideline, which has allowed the derivation of
more spatially relevant rainfall and temporal patterns. The variability of the equally likely
storm burst patterns from the ARR ensembles facilitates testing of catchment characteristics
for generation of maximum runoff.
The direct rainfall method applies rainfall directly to all grid cells and the scale of routing is at
every 2 m by 2 m grid cell. In this approach the depth of flow is shallow and rainfall can become
trapped on the model grid. To maintain the area of rainfall applied to the grid, the buildings
were nulled (removed) from the actual grid and rainfall was scaled up to account for the lost
building areas.
The concentrated direct rainfall method applied rainfall to polygons of different local surfaces
such as buildings and parks. This process permits the specification of the area, initial and
continuing losses that are applied to each land use polygon. Separate attributes are applied
to roofs to account for the different connectivity to concentrated stormwater flows.
A manual volume check should be undertaken on all direct rainfall model configurations. The
volume of water leaving the model through the downstream boundary should be equal to the
amount of water that was applied (via direct rainfall and inflows across external boundaries),
less losses and storages within the model. The upper portion of the catchment (area of 52.8 ha)
was assessed to maximise the volume of water that drains from the catchment at the
last time step. The characteristics of the upper catchment are shown in Table 9.6.10.
Table 9.6.10. Characteristics of the Upper Catchment used in the Volume Check
The model run was extended to allow all stormwater to drain from the catchment by
extrapolating the outflow curve towards zero. Inflow volume was calculated as the
cumulative depth of rainfall less initial and continuing losses multiplied by the area of the
catchment. Flows extracted from the hydraulic model 1D results can also be converted into
volumes. A flow line along the upstream catchment divide together with outflow boundaries
were used in the 2D hydraulic model to also account for the volume of overland flows leaving
the catchment. These results can be presented as a cumulative depth graph or as a pie
chart (refer to Figure 9.6.15).
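A simple volume balance of this kind can be scripted once the rainfall, losses and boundary outflow volumes are known. The sketch below illustrates the arithmetic with hypothetical numbers of a similar order to this case study; it is not a substitute for interrogating the model's own mass balance reporting.

```python
def volume_balance(rainfall_mm, initial_loss_mm, continuing_loss_mm_hr,
                   duration_hr, area_ha, outflow_volume_m3):
    """Compare the rainfall-excess volume applied to a direct rainfall model
    with the volume leaving through pipes and boundary outflows
    (a simplified manual check; illustrative only)."""
    excess_mm = max(rainfall_mm - initial_loss_mm
                    - continuing_loss_mm_hr * duration_hr, 0.0)
    inflow_m3 = excess_mm * area_ha * 10.0          # 1 mm over 1 ha = 10 m3
    retained_m3 = inflow_m3 - outflow_volume_m3
    retained_pct = 100.0 * retained_m3 / inflow_m3 if inflow_m3 > 0 else 0.0
    return inflow_m3, retained_m3, retained_pct

# Hypothetical values of a similar order to the upper catchment check
inflow, retained, pct = volume_balance(rainfall_mm=79.0, initial_loss_mm=1.5,
                                       continuing_loss_mm_hr=0.0, duration_hr=1.0,
                                       area_ha=52.8, outflow_volume_m3=35200.0)
print(f"Applied {inflow:.0f} m3, retained {retained:.0f} m3 ({pct:.1f}%)")
```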
Figure 9.6.15. Upper Catchment Volume Check for Direct Rainfall Model (Prior to
Corrections)
Figure 9.6.15 shows that 5,750 m3 (14%) of rainfall was retained in the model (11 mm), which is
described as the volume balance. An acceptable error or additional retention of stormwater
is less than 5%, which indicates a need to reduce the initial losses used in the direct rainfall model.
Accounting for volumes of depression storage in the catchment topography by decreasing
initial rainfall losses will increase overall pipe and overland outflows. These results
indicate that the catchment topography includes depression storages that capture 23.1 mm
of rainfall. The results from coupled hydrology and 1D/2D hydraulic model with traditional
loss assumptions revealed rainfall losses of 24.3 mm. The concentrated direct rainfall and
direct rainfall methods can also be evaluated using sensitivity testing of initial conditions as
follows:
• Accounting for depression storage loss by reducing the initial loss. Apply direct rainfall with
initial loss, less the average depth on grid;
• Accounting for depression storage using a restart file, which reapplied the conditions from
the last time step to the model. Direct rainfall applied with the initial conditions adopted
from the final time step of the initial simulation.
The outflow depths in the standard direct rainfall simulations changed from 52 mm to 61 mm
when using a restart file, and from 52 mm to 56 mm when reducing the assumed initial losses in
the models.
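The first of these adjustments, reducing the initial loss by the average depth trapped on the grid, follows directly from the volume balance. The sketch below illustrates the conversion of a retained volume to an average depth over the active area; the initial loss value used in the example is hypothetical.

```python
def reduce_initial_loss(initial_loss_mm, retained_volume_m3, active_area_ha):
    """Convert the volume of rainfall trapped in grid depressions to an
    average depth over the active model area and deduct it from the applied
    initial loss, so depression storage is not double counted
    (illustrative only)."""
    trapped_depth_mm = retained_volume_m3 / (active_area_ha * 10.0)  # 1 mm over 1 ha = 10 m3
    adjusted_il_mm = max(initial_loss_mm - trapped_depth_mm, 0.0)
    return adjusted_il_mm, trapped_depth_mm

# 5,750 m3 retained over the 52.8 ha upper catchment (about 11 mm), with a
# hypothetical 28 mm initial loss applied to pervious surfaces
print(reduce_initial_loss(28.0, 5750.0, 52.8))
```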
The hydrograph outputs of overland flows at selected locations (refer to Figure 9.6.12) at
Riley Street near the park (top pane) and at Crown Street North (bottom pane) are shown in
Figure 9.6.16. It is clear that overland flow is under-represented in the uncorrected direct
rainfall models as compared to traditional coupled 1D/2D models.
Figure 9.6.16 demonstrates that the uncorrected direct rainfall models under-estimate surface
flows, as compared to a traditional coupled 1D/2D model, by amounts that depend on the
location and attributes of sub-catchments. Techniques that account for
depression storage or pre-wetting of the catchment surfaces using restart files can improve
the comparative performance of direct rainfall models. However, the residual differences in
surface flows highlight that 2D models and in particular direct rainfall models should also be
verified using historical records of local flood depths. Surface flows are a significant
proportion of the responses from urban catchments, as shown in Figure 9.6.17.
Figure 9.6.17. Outflow Hydrographs of the Catchment Showing the Significance of Surface Flows
This case study demonstrates practical application of the ARR ensemble temporal patterns
on an urban catchment that is dominated by overland flow. The pattern that best represents
the mean response has been selected based on flood elevation rather than flow. It is clear
from the results of this analysis that a volume check of direct rainfall approaches should be
undertaken in accordance with recommendations of Babister and Barton (2012) and the
results should be verified using historical records of spatial flooding. A significant amount of the
rainfall excess does not generate runoff because rainfall is trapped on the terrain grid. This
trapped rainfall excess represents an effective overestimation of the catchment losses, with
associated underestimation of surface flows, and should be factored into the losses so that the
correct amount of rainfall excess is generated. This can be carried out by either pre-wetting
parts of the catchment or adjusting the assumed initial losses or a combination of both.
6.4.6. Downstream
Outflows from urban sub-catchments and conveyance networks interact with regional
storage controls and water quality measures (refer to Book 9, Chapter 4 and Book 9,
Chapter 5), discharge to urban waterways (see Book 9, Chapter 2 and Book 9, Chapter 3)
and to receiving waters such as estuaries, rivers, bays and oceans. The methods outlined in
Book 6, Chapter 5 may need to be applied to interactions of rainfall and storm surge
processes in estuaries, bays and oceans to account for combined impacts on urban flooding.
The complexity of urban areas also fosters the need to consider the joint probability of the
different factors, such as the intersection of urban runoff with regional flows in rivers or water levels
in regional storages and water quality measures, which may be correlated or independent of
each other. Methods to account for joint probability are provided in Book 4, Chapter 4. The
urban designer should also consider climate change impacts on urban flooding as outlined in
Book 8, Chapter 7, Section 7 and Book 1, Chapter 6.
The connectivity between design of urban conveyance and a volume management facility,
setting the rural base case for design targets, application of climate change and assessment
of downstream impacts on a sensitive waterway are combined in a greenfield example
(Coombes and Barry, 2018). This conceptual design example is located near Ballarat in
Victoria and includes an objective of no increase in peak flows in the downstream natural
waterway to mitigate impacts of the urban development on erosion of the stream. The pre-
development catchment is shown in Figure 9.6.18 and the proposed development is
presented in Figure 9.6.19.
An estimate of pre-development peak flows was required to set the design peak flow targets
for the proposed urban development. The Regional Flood Frequency Estimation Model
(RFFE) available from http://rffe.arr-software.org/ was utilised to estimate rural peak flows
with uncertainty as shown in Figure 9.6.20 which is based on gauged flows from multiple
regional gauges (Figure 9.6.21). The use and limitations of the RFFE are described in Book 3,
Chapter 3. Whilst the example catchment size is less than the currently recommended
minimum and the RFFE is subject to improvement, this process provides a good starting
point for defining the rural flow target. The rural flows from the RFFE might also be combined
with statistical analysis of observed flows in a nearby catchment using FLIKE (refer to Book
3, Chapter 2, Section 8) to improve regional flow estimates. These improved regional peak
flow results from the nearby catchment can be used to calibrate a hydrology model and the
parameters transferred to the design catchment as explained by Coombes et al. (2016). Patil
and Stieglitz (2012), for example, outline methods of transferring parameters from gauged
catchments to ungauged catchments.
Figure 9.6.21. Regional Flow Gauges used in the RFFE Estimate of Rural Peak Flows
The next step in the design process involved selecting the project location in the ARR Data
Hub and downloading hydrology and rainfall information, including local design rainfall IFD
and ensembles of temporal patterns. Most proprietary models will download this information
and set up the ensembles of rainfall inputs. Estimated regional rural losses for initial losses
(IL) of 25 mm and continuing losses (CL) of 4.3 mm/hr were also downloaded from the ARR
Data Hub.
A model with combined hydrology and hydraulic capacity was used with initial estimates of IL
= 25 mm and CL = 4.3 mm/hr, design burst rainfall ensembles and pre-burst rainfall to
estimate local rural losses that were calibrated to rural flows sourced from the RFFE as
shown in Figure 9.6.22. The critical duration was found to be 1.5 hours as defined by highest
mean peak flows for 50%, 10% and 1% AEP events as shown in Figure 9.6.23,
Figure 9.6.24 and Figure 9.6.25. Median pre-burst rainfall for 90 minute storm durations
was also selected from the ARR Data Hub for 50% AEP: 4.1 mm; 10% AEP: 3.3 mm and
1% AEP: 1.1 mm. The pre-burst rainfall was included in the hydrology model and spread
over the hour prior to burst rainfall and the calibration processes aimed to find values of IL
and CL that produced simulated rural peak flows that were similar to RFFE peak flows for
the 10% AEP events. This process enabled an estimate of local rural initial losses of 16 mm
and continuing loss of 5 mm/hr for an assumed Manning's roughness coefficient (n = 0.075).
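Conceptually, the calibration is a search over IL and CL values for the pair whose simulated mean peak flow best matches the RFFE target (0.14 m3/s for the 10% AEP event in this example). The sketch below expresses that search as a simple grid search around a placeholder model response; in practice the function being evaluated would be a run of the hydrology/hydraulics model for the full rainfall ensemble.

```python
import itertools

def calibrate_losses(simulate_mean_peak, target_peak_m3s, il_values_mm, cl_values_mm_hr):
    """Grid search for the initial loss (IL) and continuing loss (CL) pair
    whose simulated ensemble-mean peak flow best matches the RFFE target
    (illustrative only)."""
    best = None
    for il, cl in itertools.product(il_values_mm, cl_values_mm_hr):
        q = simulate_mean_peak(il, cl)          # ensemble-mean peak flow (m3/s)
        err = abs(q - target_peak_m3s)
        if best is None or err < best[2]:
            best = (il, cl, err)
    return best

# Placeholder response used only to make the sketch runnable; a real
# application would run the catchment model for each (IL, CL) pair.
def placeholder_model(il, cl):
    return 0.35 - 0.008 * il - 0.015 * cl

print(calibrate_losses(placeholder_model, target_peak_m3s=0.14,
                       il_values_mm=range(10, 31, 2),
                       cl_values_mm_hr=[3.0, 4.0, 5.0, 6.0]))
```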
The mean maximum pre-development peak flows for the 50%, 10% and 1% AEP were found
to be 0.011 m3/s, 0.14 m3/s and 0.45 m3/s respectively. These values were used as the peak
flow targets for the urban development. The altered land surfaces (impervious and pervious
areas of roads and properties) associated with the urban development were included in the
hydrology model. The loss values for the urban catchment from Book 5, Chapter 3 and Book
9, Chapter 6, Section 4 were assigned as follows:
Indirectly connected impervious area assumptions were not required because the spatial
detail of land uses with associated connectivity was included in the hydrology/hydraulics
model. The hydrology of the urban catchment was simulated for all design rainfall ensembles
to determine a critical duration of 10 minutes for the 10% AEP flows relevant to the design of
pit and pipe conveyance infrastructure (refer to Book 9, Chapter 5). These simulations were
completed prior to design of infrastructure to determine the relevant critical duration and
design storm for use in the design process. Pre-burst rainfall for a one hour duration was
selected from the ARR Data Hub (50% AEP: 2.2 mm, 10% AEP: 2.2 mm, 1% AEP: 0.8 mm)
for use with the 10 minute duration design rainfall ensembles relevant to the design of the pit
and pipe conveyance infrastructure. The pre-burst rainfall was distributed across an hour
prior to the burst rainfall.
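Distributing the pre-burst rainfall uniformly over the hour preceding the design burst can be done when assembling the rainfall series for each ensemble member. The sketch below illustrates this construction; the burst increments and time step are hypothetical, while the 2.2 mm pre-burst depth matches the Data Hub value quoted above.

```python
import numpy as np

def add_pre_burst(burst_depths_mm, pre_burst_mm, timestep_min=5,
                  pre_burst_duration_min=60):
    """Prepend pre-burst rainfall, spread uniformly over the hour before the
    design burst, to form the full storm series (illustrative only)."""
    n_pre = pre_burst_duration_min // timestep_min
    pre = np.full(n_pre, pre_burst_mm / n_pre)
    return np.concatenate([pre, np.asarray(burst_depths_mm, dtype=float)])

# Hypothetical 10 minute burst (two 5 minute increments) with the 2.2 mm
# pre-burst depth taken from the Data Hub for this example
storm = add_pre_burst([9.0, 6.5], pre_burst_mm=2.2)
print(storm.round(3), round(float(storm.sum()), 1))
```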
The hydrographs from the simulation using ensembles of 10% AEP design burst rainfall with
pre-burst rainfall were examined to select the design storm closest to the mean peak flow for
design of conveyance infrastructure as shown in Figure 9.6.26. Urban peak flows from all
design rainfall durations for 50%, 10% and 1% AEP events are presented in Figure 9.6.27,
Figure 9.6.28 and Figure 9.6.29 respectively.
Figure 9.6.26. Selection of the 10% AEP Design Storm for Preliminary Infrastructure Design
A hydrology/hydraulics model was used to size pipes in the conveyance network with
objectives of maintaining 150 mm of freeboard to grates of inlet pits and less than a two metre
flow width on roads. The design of the conveyance network was then checked using
ensembles for 10% AEP design storm events with pre-burst rainfall for 10, 15, 20 and 30
minute durations.
The safety of surface flows was also checked by simulating the performance of the
conveyance network using design rainfall ensembles for 1% AEP burst events with pre-burst
rainfall for 10, 15, 20 and 30 minute durations. In accordance with Book 9, Chapter 3,
Section 4 and Book 9, Chapter 5, Section 6 (also see Book 7, Chapter 6), the design aimed
to limit surface water depths to less than 200 mm and less than 50 mm at road crowns.
These objectives also included limiting the depth-velocity product to less than 0.4 m2/s and aimed for
freeboard to floor levels of greater than 300 mm.
Figure 9.6.30. Overview of the Planned Conveyance Network in the Urban Development
A storage basin was then designed to manage flooding and impacts on the downstream
waterway (refer to Book 9, Chapter 4) by mitigating the 50%, 10% and 1% AEP peak flows to
meet the rural target defined above. Storage volume and outflow arrangements were utilised
to achieve this (refer to Figure 9.6.31). The design of the basin included a freeboard of 300
mm above the 1% AEP maximum depth and an emergency spillway designed for 1% AEP
rainfall events with full blockage of the outlet (refer to Book 6, Chapter 6 for blockage discussions).
Figure 9.6.31. Overview of the Planned Storage Basin Below the Urban Development
A trial basin design was undertaken using the hydrology/hydraulics models and ensembles
of 1.5 hour duration design rainfall with pre-burst rainfall. The design of the basin was then
tested and modified using ensembles of design rainfall with pre-burst rainfall for all durations
to ensure the rural peak flow targets were met and the maximum basin depth was not
exceeded. The final results for peak flows discharging from the development via the basin
are shown in Figure 9.6.32, Figure 9.6.33 and Figure 9.6.34 for 50%, 10% and 1% AEP
rainfall events. Water levels in the basin for all 1% AEP rainfall durations are provided in
Figure 9.6.35.
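Checking the basin against the rural targets amounts to confirming, for each AEP, that the ensemble-mean peak outflow for every tested duration remains at or below the pre-development peak flow. The sketch below illustrates that post-processing step; the outflow values are hypothetical while the targets are those derived earlier in the example.

```python
import numpy as np

# Pre-development rural peak flow targets from the RFFE analysis (m3/s)
TARGETS = {"50%": 0.011, "10%": 0.14, "1%": 0.45}

def check_basin_outflows(peak_outflows, targets):
    """For each AEP, report the worst (highest) ensemble-mean peak outflow
    across all tested storm durations and whether it meets the rural target
    (illustrative post-processing only)."""
    report = {}
    for aep, by_duration in peak_outflows.items():
        worst = max(float(np.mean(flows)) for flows in by_duration.values())
        report[aep] = (round(worst, 3), worst <= targets[aep])
    return report

# Hypothetical peak outflows (m3/s) per duration (minutes) and temporal pattern
results = {
    "10%": {60: [0.10, 0.12, 0.11], 90: [0.13, 0.12, 0.14], 180: [0.11, 0.10, 0.12]},
    "1%":  {60: [0.40, 0.42, 0.39], 90: [0.44, 0.43, 0.45], 180: [0.41, 0.42, 0.40]},
}
print(check_basin_outflows(results, TARGETS))
```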
Figure 9.6.32. Peak flows from the Basin for 50% AEP Events
Figure 9.6.33. Peak Flows from the Basin for 10% AEP Events
Figure 9.6.34. Peak Flows from the Basin for 1% AEP Events
Figure 9.6.35. Peak Water Levels in the Basin for 1% AEP Events
Figure 9.6.32 to Figure 9.6.34 show that the mean peak flows from the basin were less than
the rural flows with critical durations ranging from one to three hours. A one hour critical
duration of peak water levels in the basin was also observed from the analysis (refer to
Figure 9.6.35). These results highlight that critical durations of stormwater runoff can vary
throughout catchments and across different types of infrastructure.
The design of the conveyance and storage infrastructure was evaluated for climate change
impacts using the methods outlined in Book 1, Chapter 6 and Book 8, Chapter 7, Section 7.
A design life for the infrastructure and consequence level for climate change impacts was
selected. A design life of 100 years was assumed for the basin with medium consequences
of failure due to impacts on the waterway and surrounding rural properties.
This assessment was utilised to extract data from the ARR Data Hub for the RCP 8.5 value for
2090, which indicated an expected 16.1% increase in peak rainfall1. This expected increase in
rainfall was used to scale the relevant design rainfall ensembles, and the hydrology/hydraulic
model was rerun to test the impact of climate change on peak water levels in the basin and on roads.
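Applying the climate change uplift is a simple scaling of the design rainfall ensemble before the model is rerun. The sketch below illustrates the scaling using the 16.1% value quoted above; the burst increments are hypothetical.

```python
def apply_climate_uplift(burst_depths_mm, uplift_percent):
    """Scale a design rainfall pattern by the projected percentage increase in
    rainfall intensity (e.g. the RCP 8.5, 2090 value from the ARR Data Hub)."""
    factor = 1.0 + uplift_percent / 100.0
    return [round(depth * factor, 2) for depth in burst_depths_mm]

# 16.1% uplift applied to a hypothetical burst pattern (mm per increment)
print(apply_climate_uplift([9.0, 6.5, 3.2], 16.1))
```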
Designers should also utilise emerging research to incorporate the most up-to-date climate
change assessments. For example, Wasko and Sharma (2015) outline greater potential for
increased rainfall intensities in urban areas.
The impact of applying the expected 2090 climate change effects to design rainfall on peak
water levels in the basin and at a critical location on the road is shown in Figure 9.6.36.
Increases in peak water depths are experienced in the basin and on the road. The increased
runoff into the basin is managed by the emergency spillway and peak water levels are
acceptable. However, peak water levels on the road exceed the design objectives and the
designer should highlight this situation to the consent authority for further consideration.
1Please note the Data Hub value for Ballarat has changed as of May 2019 to 16.3%. This reflects changes to the
predicted temperatures from Climate Change in Australia.
Figure 9.6.36. Peak Water Levels in the Basin and on Roads for 1% AEP Events Subject to
Climate Change
6.5. References
Argue J.R., (2004), Water Sensitive Urban Design: basic procedures for 'source control' of
stormwater - a handbook for Australian practice. Urban Water Resources Centre, University
of South Australia, Adelaide, South Australia, in collaboration with Stormwater Industry
Association and Australian Water Association. ISBN 1 920927 18 2, Adelaide.
Atkins (2015), Flood loss avoidance benefits of green infrastructure for stormwater
management. USEPA.
Babister, M. and Barton, C. (eds) (2012), Two dimensional modelling in urban and rural
floodplains. Australian Rainfall and Runoff Revision Project 15. Engineers Australia.
Babister, M., Trim, A., Testoni, I. and Retallick, M. (2016), The Australian Rainfall and Runoff
Datahub, 37th Hydrology and Water Resources Symposium, Engineers Australia, Queenstown, New Zealand.
Cecinati F., Rico-Ramirez M.A., Heuvelink G.B.M. and Han D. (2017), Representing radar
rainfall uncertainty with ensembles based on time variant geostatistical error modelling
approach, Journal of Hydrology, 548, 391-405.
Chocat, B., Krebs, P., Marsalek, J., Rauch, W. and Schilling, W. (2001), Urban drainage
redefined: from stormwater removal to integrated management. Water Science Technology,
43(5), 61-8, PubMed PMID: 11379157.
Coombes P.J. (2015), Transitioning drainage into urban water cycle management,
Proceedings of the WSUD 2015 Conference, Engineers Australia, Sydney.
Coombes P.J., (2002), Rainwater tanks revisited – new opportunities of integrated water
cycle management. PhD Thesis. The University of Newcastle. NSW. Australia. 2002.
Coombes, P.J. and Barry, M.E. (2015), A systems framework of big data for analysis of
policy and strategy, Proceedings of the 2015 WSUD and IECA Conference, Sydney.
Coombes, P.J. and Barry, M.E. (2018), Using surfaces of big data to underpin continuous
simulation in systems analysis, WSUD2018 Conference, Engineers Australia, Perth.
Coombes, P.J. and Barry, M.E. (2008a), Determination of available storage in rainwater
tanks prior to storm events. Hydrology and Water Resources Symposium. Engineers
Australia. Adelaide.
Coombes, P.J. and Barry, M.E. (2008b), Determination of available storage in rainwater
tanks prior to storm events, Proceedings of the 31st Hydrology and Water Resources
Symposium, 15-17 April, Adelaide.
Coombes P.J., Colegate M., Barber L. and Babister M. (2016), A modern perspective on the
hydrology processes of two catchments in regional Victoria, 37th Hydrology and Water
Resources Symposium, Engineers Australia, Queenstown, New Zealand.
Coombes, P., Babister, M. and McAlister, T. (2015a), Is the Science and Data underpinning the
Rational Method Robust for use in Evolving Urban Catchments, Proceedings of the 36th
Hydrology and Water Resources Symposium, Hobart, 2015.
Coombes, P.J., Babister, M. and McAlister, T. (2015b), Is the Science and Data underpinning the
Rational Method Robust for use in Evolving Urban Catchments, Proceedings of the 36th
Hydrology and Water Resources Symposium, Engineers Australia, Hobart.
Everard M., and McInnes R. (2013), Systemic solutions for multi-benefit water and
environmental management. Science of the Total Environment. 461: 170-179.
Fletcher T.D., Wong T.H.F., Duncan H.P., Coleman J.R. and Jenkins G.A. (2001), Managing
the impacts of urbanisation on receiving waters: a decision-making framework. Third
Australian Stream Management Conference: The Value of Healthy Streams, Brisbane, pp:
217-224.
Goyen A.G. (1981), Determination of rainfall runoff model parameters. Masters Thesis, NSW
Institute of Technology, Sydney.
Goyen, A.G. (2000), Spatial and Temporal Effects on Urban Rainfall / Runoff Modelling. PhD
Thesis. University of Technology, Sydney. Faculty of Engineering.
Goyen, A.G., O'Loughlin, G.G. (1999a), Examining the basic building blocks of urban runoff.
Urban Storm Drainage - 8th International Proceedings. Sydney, 30 Aug.-3 Sept. 1999, 3:
1382-1390.
Goyen, A.G., O'Loughlin, G.G. (1999b), The Effects of Infiltration Spatial and Temporal
Patterns on Urban Runoff. Water 99: Joint Congress; 25th Hydrology & Water Resources
Symposium, 2nd International Conference on Water Resources & Environment Research;
Handbook and Proceedings, pp: 819-822.
Hall J. (2015), Direct Rainfall Flood Modelling: The Good, the Bad and the Ugly. Australasian
Journal of Water Resources, 19(1), 74-85.
Hardy, M.J., Coombes, P.J. and Kuczera, G.A. (2004), An investigation of estate level
impacts of spatially distributed rainwater tanks, 2004 International Conference on Water
Sensitive Urban Design, Engineers Australia, Adelaide, Australia.
Jones R.F., O'Loughlin G.G. and Egan P.E. (1999), Roof and property drainage design
methods for Australia and New Zealand, Institution of Engineers Australia, Sydney, pp:
793-800.
Kemp D. and Myers B. (2015), A verification of the hydrological impact of 20 years of infill
development in an urban catchment, 36th Hydrology and water resources symposium: the
art and science of water, Engineers Australia, pp: 379-386.
Khrapov S.S., Pisarev A.V., Kobelev I.A., Zhumaliev A.G., Agafonnikova E.O., Losev A.G.
and Khoperskov A.V. (2015), The Numerical Simulation of Shallow Water: Estimation of the
Roughness Coefficient on the Flood Stage, Advances in Mechanical Engineering, Article ID
787016.
Kuczera G.A., Lambert, M., Heneker, T., Jennings, S., Frost, A. and Coombes, P.J. (2006),
Joint probability and design storms at the crossroads. Australian Journal of Water
Resources, 10: 63-79.
Mugler C., Planchon O., Patin J., Weill S., Silvera N., Richard P. and Mouche E. (2011),
Comparison of roughness models to simulate overland flow and tracer transport experiments
under simulated rainfall at plot scale, Journal of Hydrology, 402(1-2), 25-40.
Patil, S. and Stieglitz, M. (2012), Controls on hydrologic similarity: role of nearby gauged
catchments for prediction at an ungauged catchment, Hydrology and Earth Systems
Science, 16: 551-562.
Patouillard C., and Forest J., (2011), The spread of sustainable urban drainage systems for
managing urban stormwater: a multi-level perspective analysis. Journal of Innovation
Economics and Management. (8), 141-161.
Phillips, B., Goyen, A., Thomson, R., Pathiraja, S. and Pomeroy, L. (2014), Project 6 - Loss
models for catchment simulation - urban catchments, Stage 2 Report. Report prepared for
Australian Rainfall and Runoff Revision, Engineers Australia.
Pilgrim, D.H. (ed) (1987), Australian Rainfall and Runoff - A Guide to Flood Estimation,
Institution of Engineers Australia, Barton, ACT.
Roso S., Boyd M.J. and Chisholm L.A. (2006), Assessment of Spatial Analysis Techniques
for Estimating Impervious Cover. 30th Hydrology and Water Resources Symposium,
Launceston, TAS
Stephens M.L. and Kuczera G.A. (1999), Testing the time area urban runoff model at the
allotment scale, Institution of Engineers Australia, Sydney, pp: 1391-1398.
Walsh, C.J., Fletcher, T.D. and Burns, M.J. (2012), Urban stormwater runoff: a new class of
environmental flow problem. PLOSone DOI: 10.1371/journal.pone.0045814.
Ward, M., Babister M., Retallick M., and Testoni I, (2018), Ensemble assessment and
gridded surface response, 38th Hydrology and Water Resources Symposium, Engineers
Australia, Melbourne.
Wasko, C. and Sharma, A. (2015), Steeper temporal distribution of rain intensity at higher
temperatures within Australian storms, Nature Geoscience, 8(7), 527-529.
Weeks B., Witheridge G., Rigby E., Bathelmess A. and O'Loughlin G. (2013), Blockage of
hydraulic structures, Australian Rainfall and Runoff revision Project 11, Stage 2 Report,
Engineers Australia.
Zahidi I., Yusuf B., Cope M., Mohamed T.A. and Shafri H.Z.M. (2017), Effects of depth-
varying vegetation roughness in two-dimensional hydrodynamic modelling. International
Journal of River Basin Management, DOI: 10.1080/15715124.2017.1394313.