
Condition monitoring (or, colloquially, CM) is the process of monitoring a parameter of
condition in machinery (vibration, temperature, etc.) in order to identify a significant change
which is indicative of a developing fault. It is a major component of predictive maintenance. The
use of condition monitoring allows maintenance to be scheduled, or other actions to be taken to
prevent failure and avoid its consequences. Condition monitoring has a unique benefit in that
conditions that would shorten normal lifespan can be addressed before they develop into a major
failure. Condition monitoring techniques are normally used on rotating equipment and other
machinery (pumps, electric motors, internal combustion engines, presses), while
periodic inspection using non-destructive testing techniques and fit for service
(FFS)[1] evaluation are used for stationary plant equipment such as steam boilers, piping and heat
exchangers.

Condition monitoring technology[edit]


The following list includes the main condition monitoring techniques applied in the industrial
and transportation sectors:

 Vibration analysis and diagnostics [2]
 Lubricant analysis [3]
 Acoustic emission (Airborne Ultrasound)
 Infrared thermography [4]
 Ultrasound testing (Material Thickness/Flaw Testing)
 Motor Condition Monitoring and Motor current signature analysis (MCSA)
Most CM technologies are gradually being standardized by ASTM and ISO.[5]

Rotating equipment[edit]
The most commonly used method for rotating machines is vibration analysis.[6][7][8]
Measurements can be taken on machine bearing casings with accelerometers (seismic or
piezo-electric transducers) to measure the casing vibrations, and on the vast majority of critical
machines, with eddy-current transducers that directly observe the rotating shafts to measure the
radial (and axial) displacement of the shaft. The level of vibration can be compared with
historical baseline values, such as previous start-ups, shutdowns and load changes, and in some
cases against established standards, to assess the severity.
Interpreting the vibration signal obtained is an elaborate procedure that requires specialized
training and experience. It is simplified by the use of state-of-the-art technologies that provide
the vast majority of data analysis automatically and provide information instead of raw data. One
commonly employed technique is to examine the individual frequencies present in the signal.
These frequencies correspond to certain mechanical components (for example, the various pieces
that make up a rolling-element bearing) or certain malfunctions (such as shaft unbalance or
misalignment). By examining these frequencies and their harmonics, the CM specialist can often
identify the location and type of problem, and sometimes the root cause as well. For example,
high vibration at the frequency corresponding to the speed of rotation is most often due to
residual imbalance and is corrected by balancing the machine. As another example, a
degrading rolling-element bearing will usually exhibit increasing vibration signals at specific
frequencies as it wears. Special analysis instruments can detect this wear weeks or even months
before failure, giving ample warning to schedule replacement before a failure which could cause
a much longer down-time. Besides all the sensors and data analysis, it is important to keep in
mind that more than 80% of complex mechanical equipment failures occur randomly, with no
relation to the equipment's life-cycle period.[citation needed]
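For example, the characteristic defect frequencies of a rolling-element bearing can be computed from its geometry using the standard textbook formulas. A minimal sketch in Python (the example bearing dimensions and shaft speed are illustrative, not taken from any particular machine):

```python
import math

def bearing_defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Classical defect frequencies for a rolling-element bearing with a
    stationary outer race (standard textbook formulas)."""
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    bpfo = (n_balls / 2.0) * shaft_hz * (1.0 - ratio)  # ball-pass frequency, outer race
    bpfi = (n_balls / 2.0) * shaft_hz * (1.0 + ratio)  # ball-pass frequency, inner race
    return bpfo, bpfi

# Illustrative bearing: 9 balls, 8 mm ball diameter, 40 mm pitch diameter, 30 Hz shaft
bpfo, bpfi = bearing_defect_frequencies(30.0, 9, 8.0, 40.0)
print(round(bpfo, 1), round(bpfi, 1))  # 108.0 162.0
```

A rise of spectral energy at one of these frequencies (and its harmonics) points the analyst at the corresponding bearing element.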
Most vibration analysis instruments today utilize a Fast Fourier Transform (FFT)[9] which is a
special case of the generalized Discrete Fourier Transform and converts the vibration signal from
its time domain representation to its equivalent frequency domain representation. However,
frequency analysis (sometimes called Spectral Analysis or Vibration Signature Analysis) is only
one aspect of interpreting the information contained in a vibration signal. Frequency analysis
tends to be most useful on machines that employ rolling element bearings and whose main
failure modes tend to be the degradation of those bearings, which typically exhibit an increase in
characteristic frequencies associated with the bearing geometries and constructions. Depending
on the type of machine, its typical malfunctions, the bearing types employed, rotational speeds,
and other factors, the CM specialist may use additional diagnostic tools, such as examination of
the time domain signal, the phase relationship between vibration components and a timing mark
on the machine shaft (often known as a keyphasor), historical trends of vibration levels, the
shape of vibration, and numerous other aspects of the signal along with other information from
the process such as load, bearing temperatures, flow rates, valve positions and pressures to
provide an accurate diagnosis. This is particularly true of machines that use fluid bearings rather
than rolling-element bearings. To enable them to look at this data in a more simplified form
vibration analysts or machinery diagnostic engineers have adopted a number of mathematical
plots to show machine problems and running characteristics, these plots include the bode plot,
the waterfall plot, the polar plot and the orbit time base plot amongst others.
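The frequency-domain step described above can be illustrated with a plain discrete Fourier transform; a production system would use an optimized FFT library, and the synthetic signal below merely stands in for a measured casing vibration:

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the largest-magnitude bin via a plain
    DFT. Skips DC and uses only the positive-frequency half."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(s) > best_mag:
            best_bin, best_mag = k, abs(s)
    return best_bin * sample_rate / n

# Synthetic casing vibration: strong 25 Hz imbalance component plus a weaker harmonic
rate, n = 200, 200
sig = [math.sin(2 * math.pi * 25 * t / rate) + 0.3 * math.sin(2 * math.pi * 50 * t / rate)
       for t in range(n)]
print(dominant_frequency(sig, rate))  # 25.0
```

Here the dominant peak at 1x shaft speed would suggest residual imbalance, matching the diagnostic reasoning described in the text.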
Handheld data collectors and analyzers are now commonplace on non-critical or balance of
plant machines on which permanent on-line vibration instrumentation cannot be economically
justified. The technician can collect data samples from a number of machines, then download the
data into a computer where the analyst (and sometimes artificial intelligence) can examine the
data for changes indicative of malfunctions and impending failures. For larger, more critical
machines where safety implications, production interruptions (so-called "downtime"),
replacement parts, and other costs of failure can be appreciable (determined by the criticality
index), a permanent monitoring system is typically employed rather than relying on periodic
handheld data collection. However, the diagnostic methods and tools available from either
approach are generally the same.
More recently, on-line condition monitoring systems have also been applied to heavy process
industries such as pulp, paper, mining, petrochemicals and power generation. Examples of
industry-specific condition monitoring solutions include:

 Condition monitoring for pulp processes
 Condition monitoring for paper and board processes
 Condition monitoring for power plants
 Condition monitoring for mining and construction processes
 Condition monitoring for marine applications
These can be dedicated systems such as Sensodec 6S, or nowadays this functionality can
be integrated into a distributed control system (DCS). See also the seminar paper on condition
monitoring embedded into DCS.[10]
Performance monitoring is a less well-known condition monitoring technique. It can be applied
to rotating machinery such as pumps and turbines, as well as stationary items such as boilers and
heat exchangers. Measurements of physical quantities (temperature, pressure, flow, speed,
displacement) are required, according to the plant item. Absolute accuracy is rarely necessary,
but repeatable data are needed. Calibrated test instruments are usually required, but some success
has been achieved in plants with distributed control systems (DCS). Performance analysis is often
closely related to energy efficiency, and therefore has long been applied in steam power
generation plants. In some cases, it is possible to calculate the optimum time for overhaul to
restore degraded performance.

Other techniques[edit]

 Often visual inspections are considered to form an underlying component of condition
monitoring; however, this is only true if the inspection results can be measured or critiqued
against a documented set of guidelines. For these inspections to be considered condition
monitoring, the results and the conditions at the time of observation must be collated to allow
for comparative analysis against previous and future measurements. The act of simply
visually inspecting a section of pipework for the presence of cracks or leaks cannot be
considered condition monitoring unless quantifiable parameters exist to support the
inspection and a relative comparison is made against previous inspections. An act performed
in isolation from previous inspections is considered a condition assessment; condition
monitoring activities require that the analysis be made comparative to previous data and
report the trending of that comparison.
 Slight temperature variations across a surface can be discovered with visual inspection
and non-destructive testing with thermography. Heat is indicative of failing components,
especially degrading electrical contacts and terminations. Thermography can also be
successfully applied to high-speed bearings, fluid couplings, conveyor rollers, and storage
tank internal build-up.[11]
 Examining, under a scanning electron microscope, a carefully taken sample of debris
suspended in lubricating oil (taken from filters or magnetic chip detectors). Instruments then
reveal the elements contained, their proportions, size and morphology. Using this method, the
site, the mechanical failure mechanism and the time to eventual failure may be determined.
This is called wear debris analysis (WDA).
 Spectrographic oil analysis that tests the chemical composition of the oil can be used to
predict failure modes. For example, a high silicon content indicates contamination by grit,
and high iron levels indicate wearing components. Individually, elements give fair
indications, but when used together they can very accurately determine failure modes; e.g.,
for internal combustion engines, the presence of iron/alloy and carbon would indicate worn
piston rings.[3]
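A rule-based reading of such spectrographic results can be sketched as follows; the ppm thresholds are invented placeholders for illustration, not laboratory limits:

```python
def interpret_oil_sample(ppm):
    """Toy rule-of-thumb interpretation of spectrographic oil analysis.
    Keys are element symbols, values are concentrations in ppm."""
    findings = []
    if ppm.get("Si", 0) > 20:
        findings.append("dirt/grit contamination")
    if ppm.get("Fe", 0) > 100:
        findings.append("component wear")
    # Combined signatures narrow the failure mode further
    if ppm.get("Fe", 0) > 100 and ppm.get("C", 0) > 50:
        findings.append("possible worn piston rings (IC engine)")
    return findings or ["no abnormal indication"]

print(interpret_oil_sample({"Si": 5, "Fe": 150, "C": 80}))
```

Real laboratories trend each element against the machine's own history rather than fixed limits, but the combining-of-evidence idea is the same.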
 Ultrasound can be used for high-speed and slow-speed mechanical applications and for high-
pressure fluid situations. Digital ultrasonic meters measure high-frequency signals from
bearings and display the result as a dBµV (decibels relative to one microvolt) value. This value is
trended over time and used to predict increases in friction, rubbing, impacting, and other
bearing defects. The dBµV value is also used to predict proper intervals for re-lubrication.
Ultrasound monitoring, done properly, is a good companion technology for
vibration analysis.
Headphones allow humans to listen to ultrasound as well. A high-pitched 'buzzing sound' in
bearings indicates flaws in the contact surfaces, and when partial blockages occur in high-
pressure fluids the orifice will cause a large amount of ultrasonic noise. Ultrasound is used in
the Shock Pulse Method[12] of condition monitoring.
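A trending check of the kind described above can be sketched as below; the 8 dB alarm margin is a commonly quoted rule of thumb, used here only as an assumed example value:

```python
def needs_relubrication(readings_dbuv, baseline_dbuv, alarm_delta=8.0):
    """Flag a bearing for re-lubrication when its trended ultrasound level
    rises a fixed margin above its own healthy baseline.  The 8 dB margin
    is illustrative, not a universal limit."""
    latest = readings_dbuv[-1]
    return latest - baseline_dbuv >= alarm_delta

trend = [30.1, 30.5, 31.2, 33.8, 38.6]   # monthly dBµV readings for one bearing
print(needs_relubrication(trend, baseline_dbuv=30.0))  # True
```

The key point is that each bearing is compared against its own baseline, since absolute ultrasound levels vary widely between machines.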

 Performance analysis, where the physical efficiency, performance, or condition is found by
comparing actual parameters against an ideal model. Deterioration is typically the cause of
difference in the readings. After motors, centrifugal pumps are arguably the most common
machines. Condition monitoring by a simple head-flow test near duty point using repeatable
measurements has long been used but could be more widely adopted. An extension of this
method can be used to calculate the best time to overhaul a pump based on balancing the cost
of overhaul against the increasing energy consumption that occurs as a pump wears. Aviation
gas turbines are also commonly monitored using performance analysis techniques with the
original equipment manufacturers such as Rolls-Royce plc routinely monitoring whole fleets
of aircraft engines under Long Term Service Agreements (LTSAs) or Total Care packages.
 Wear Debris Detection Sensors are capable of detecting ferrous and non-ferrous wear
particles within the lubrication oil giving considerable information about the condition of the
measured machinery. By creating and monitoring a trend of what debris is being generated, it
is possible to detect faults prior to catastrophic failure of rotating equipment such as
gearboxes, turbines, etc.
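The pump-overhaul timing mentioned under performance analysis above can be sketched with a deliberately simple cost model, assuming the energy cost penalty grows linearly as the pump wears:

```python
import math

def optimal_overhaul_interval(overhaul_cost, energy_penalty_rate):
    """If wear makes the extra energy cost grow linearly at
    `energy_penalty_rate` ($/year per year), the cumulative penalty by
    time T is rate*T^2/2.  Minimising (overhaul_cost + penalty)/T over T
    gives T* = sqrt(2*C/rate).  Linear degradation is an assumption."""
    return math.sqrt(2.0 * overhaul_cost / energy_penalty_rate)

# Illustrative figures: $20,000 overhaul; penalty growing by $1,600/yr each year
print(round(optimal_overhaul_interval(20000, 1600), 1))  # 5.0 (years)
```

Real analyses use measured head-flow degradation curves rather than an assumed linear penalty, but the cost-balancing logic is the same.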

The Criticality Index[edit]


The Criticality Index is often used to determine the degree of condition monitoring applied to a
given machine, taking into account the machine's purpose, redundancy (i.e. if the machine fails,
is there a standby machine which can take over), cost of repair, downtime impacts, health, safety
and environment issues, and a number of other key factors. The criticality index puts all machines
into one of three categories:

1. Critical machinery - Machines that are vital to the plant or process and without which the
plant or process cannot function. Machines in this category include the steam or gas
turbines in a power plant, crude oil export pumps on an oil rig or the cracker in an oil
refinery. Because critical machinery is at the heart of the process, it is seen to require full
on-line condition monitoring to continually record as much data from the machine as
possible regardless of cost; this is often specified by the plant's insurance. Measurements
such as loads, pressures, temperatures, casing vibration and displacement, shaft axial and
radial displacement, speed and differential expansion are taken where possible. These
values are often fed back into a machinery management software package which is
capable of trending the historical data, providing the operators with information such as
performance data, and can even predict faults and provide diagnoses of failures before
they happen.
2. Essential Machinery - Units that are a key part of the process, but if there is a failure, the
process still continues. Redundant units (if available) fall into this realm. Testing and
control of these units is also essential to maintain alternative plans should Critical
Machinery fail.
3. General purpose or balance of plant machines - These are the machines that make up the
remainder of the plant and are normally monitored using a handheld data collector, as
mentioned previously, to periodically create a picture of the health of the machine.
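One possible way to express the three-way classification in code; the scoring factors, weights and thresholds below are invented for illustration, since real criticality schemes are site-specific:

```python
def criticality_score(redundancy, repair_cost, downtime_impact, hse_risk):
    """Weighted sum of example factors, each rated 0-10.  The weights are
    illustrative placeholders, not an industry standard."""
    return (3 * hse_risk + 3 * downtime_impact
            + 2 * repair_cost + 2 * (10 - redundancy))

def criticality_category(score):
    """Map a 0-100 criticality score onto the three monitoring categories
    described above (thresholds are illustrative)."""
    if score >= 70:
        return "Critical - permanent on-line monitoring"
    if score >= 40:
        return "Essential - regular testing and monitoring"
    return "Balance of plant - periodic handheld data collection"

# An un-spared export pump: costly to repair, big downtime impact, high HSE risk
s = criticality_score(redundancy=0, repair_cost=8, downtime_impact=9, hse_risk=9)
print(s, criticality_category(s))  # 90 Critical - permanent on-line monitoring
```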
Reliability-centered maintenance (RCM) is a process to ensure that systems continue to do
what their users require in their present operating context.[1] It is generally used to achieve
improvements in fields such as the establishment of safe minimum levels of maintenance.
Successful implementation of RCM will lead to increases in cost effectiveness, reliability,
machine uptime, and a greater understanding of the level of risk that the organization is
managing. It is defined by the technical standard SAE JA1011, Evaluation Criteria for RCM
Processes.

Context[edit]
It is generally used to achieve improvements in fields such as the establishment of safe minimum
levels of maintenance, changes to operating procedures and strategies and the establishment of
capital maintenance regimes and plans. Successful implementation of RCM will lead to increases
in cost effectiveness, machine uptime, and a greater understanding of the level of risk that the
organization is managing.
The late John Moubray, in his book RCM2, characterized reliability-centered maintenance as a
process to establish the safe minimum levels of maintenance. This description echoed statements
in the Nowlan and Heap report from United Airlines.
It is defined by the technical standard SAE JA1011, Evaluation Criteria for RCM Processes,
which sets out the minimum criteria that any process should meet before it can be called RCM.
This starts with the seven questions below, worked through in the order that they are listed:
1. What is the item supposed to do, and what are its associated performance standards?
2. In what ways can it fail to provide the required functions?
3. What are the events that cause each failure?
4. What happens when each failure occurs?
5. In what way does each failure matter?
6. What systematic task can be performed proactively to prevent, or to diminish to a
satisfactory degree, the consequences of the failure?
7. What must be done if a suitable preventive task cannot be found?
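The seven questions map naturally onto one worksheet row per failure mode; the field names and example values below are illustrative, not prescribed by SAE JA1011:

```python
from dataclasses import dataclass

@dataclass
class RcmFailureMode:
    """One illustrative row of an RCM worksheet, mirroring the seven
    questions in order (fields 6 and 7 may be empty while analysis is
    in progress)."""
    function: str            # 1. what the item does + performance standard
    functional_failure: str  # 2. how it fails to provide that function
    failure_mode: str        # 3. event that causes the failure
    failure_effect: str      # 4. what happens when it occurs
    consequence: str         # 5. why it matters (safety/operational/...)
    proactive_task: str = "" # 6. task to prevent or mitigate
    default_action: str = "" # 7. action if no suitable task exists

row = RcmFailureMode(
    function="Pump 300 m3/h of cooling water",
    functional_failure="Delivers less than 250 m3/h",
    failure_mode="Impeller wear due to erosion",
    failure_effect="Cooling capacity drops; process trips on high temperature",
    consequence="Operational",
    proactive_task="Quarterly head-flow performance test")
print(row.consequence)  # Operational
```

Collecting such rows is essentially the FMECA that the RCM process described below begins with.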
Reliability centered maintenance is an engineering framework that enables the definition of a
complete maintenance regimen. It regards maintenance as the means to maintain the functions a
user may require of machinery in a defined operating context. As a discipline it enables
machinery stakeholders to monitor, assess, predict and generally understand the working of their
physical assets. This is embodied in the initial part of the RCM process which is to identify the
operating context of the machinery, and write a Failure Mode Effects and Criticality Analysis
(FMECA). The second part of the analysis is to apply the "RCM logic", which helps determine
the appropriate maintenance tasks for the identified failure modes in the FMECA. Once the logic
is complete for all elements in the FMECA, the resulting list of maintenance is "packaged", so
that the periodicities of the tasks are rationalised to be called up in work packages; it is important
not to destroy the applicability of maintenance in this phase. Lastly, RCM is kept live throughout
the "in-service" life of machinery, where the effectiveness of the maintenance is kept under
constant review and adjusted in light of the experience gained.
RCM can be used to create a cost-effective maintenance strategy to address dominant causes of
equipment failure. It is a systematic approach to defining a routine maintenance program
composed of cost-effective tasks that preserve important functions.
The important functions (of a piece of equipment) to preserve with routine maintenance are
identified, their dominant failure modes and causes determined and the consequences of failure
ascertained. Levels of criticality are assigned to the consequences of failure. Some functions are
not critical and are left to "run to failure" while other functions must be preserved at all cost.
Maintenance tasks are selected that address the dominant failure causes. This process directly
addresses maintenance preventable failures. Failures caused by unlikely events, non-predictable
acts of nature, etc. will usually receive no action provided their risk (combination of severity and
frequency) is trivial (or at least tolerable). When the risk of such failures is very high, RCM
encourages (and sometimes mandates) the user to consider changing something which will
reduce the risk to a tolerable level.
The result is a maintenance program that focuses scarce economic resources on those items that
would cause the most disruption if they were to fail.
RCM emphasizes the use of Predictive Maintenance (PdM) techniques in addition to traditional
preventive measures.

Background[edit]
The term "reliability-centered maintenance" was first used in public papers[citation needed] authored
by Tom Matteson, Stanley Nowlan, Howard Heap, and other senior executives and engineers
at United Airlines (UAL) to describe a process used to determine the optimum maintenance
requirements for aircraft. Having left United Airlines to pursue a consulting career a few months
before the publication of the final Nowlan-Heap report, Matteson received no authorial credit for
the work. However, his contributions were substantial and perhaps indispensable to the
document as a whole. The US Department of Defense (DOD) sponsored the authoring of both a
textbook (by UAL) and an evaluation report (by Rand Corporation) on Reliability-Centered
Maintenance, both published in 1978. They brought RCM concepts to the attention of a wider
audience. The textbook described efforts by commercial airlines and the US Navy in the 1960s
and 1970s to improve the reliability of their new jets, such as the Boeing 747.[which?]
The first generation of jet aircraft had a crash rate that would be considered highly alarming
today, and both the Federal Aviation Administration (FAA) and the airlines' senior management
felt strong pressure to improve matters. In the early 1960s, with FAA approval the airlines began
to conduct a series of intensive engineering studies on in-service aircraft. The studies proved that
the fundamental assumption of design engineers and maintenance planners—that every airplane
and every major component in the airplane (such as its engines) had a specific "lifetime" of
reliable service, after which it had to be replaced (or overhauled) in order to prevent failures—
was wrong in nearly every specific example in a complex modern jet airliner.
This was one of many astounding discoveries that have revolutionized the managerial discipline
of physical asset management and have been at the base of many developments since this
seminal work was published. Among some of the paradigm shifts inspired by RCM were:

 an understanding that the vast majority of failures are not necessarily linked to the age
of the asset (this is often modeled by the "memoryless" exponential probability
distribution)
 changing from efforts to predict life expectancies to trying to manage the process of
failure
 an understanding of the difference between the requirements of assets from a user
perspective, and the design reliability of the asset
 an understanding of the importance of managing assets on condition (often referred to
as condition monitoring, condition based maintenance and predictive maintenance)
 an understanding of four basic routine maintenance tasks
 linking levels of tolerable risk to maintenance strategy development
Today RCM is defined in the standard SAE JA1011, Evaluation Criteria for Reliability-Centered
Maintenance (RCM) Processes. This sets out the minimum criteria for what is, and for what is
not, able to be defined as RCM.
The standard is a watershed event in the ongoing evolution of the discipline of physical asset
management. Prior to the development of the standard many processes were labeled as RCM
even though they were not true to the intentions and the principles in the original report that
defined the term publicly.
Today companies can use this standard to ensure that the processes, services and software they
purchase and implement conforms with what is defined as RCM, ensuring the best possibility of
achieving the many benefits attributable to rigorous application of RCM.

Basic features[edit]
The RCM process described in the DOD/UAL report recognized three principal risks from
equipment failures: threats

 to safety,
 to operations, and
 to the maintenance budget.
Modern RCM gives threats to the environment a separate classification, though most forms
manage them in the same way as threats to safety.
RCM offers five principal options among the risk management
strategies:

 Predictive maintenance tasks,
 Preventive restoration or preventive replacement maintenance tasks,
 Detective maintenance tasks,
 Run-to-Failure, and
 One-time changes to the "system" (changes to hardware design,
to operations, or to other things).
RCM also offers specific criteria to use when selecting a risk management strategy for a system
that presents a specific risk when it fails. Some are technical in nature (can the proposed task
detect the condition it needs to detect? does the equipment actually wear out, with use?). Others
are goal-oriented (is it reasonably likely that the proposed task-and-task-frequency will reduce
the risk to a tolerable level?). The criteria are often presented in the form of a decision-logic
diagram, though this is not intrinsic to the nature of the process.
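A fragment of such decision logic might be sketched as follows; a JA1011-compliant decision diagram has many more branches, so this is a deliberate reduction to the criteria named in the text:

```python
def select_strategy(detectable_in_advance, wears_out_with_use,
                    task_reduces_risk_enough):
    """Illustrative sketch of one pass of RCM decision logic for a single
    failure mode, using only the three criteria mentioned in the text."""
    if detectable_in_advance and task_reduces_risk_enough:
        return "Predictive (on-condition) maintenance task"
    if wears_out_with_use and task_reduces_risk_enough:
        return "Preventive restoration or replacement task"
    return "Run-to-failure, detective task, or one-time change"

# A bearing whose degradation is detectable weeks ahead by vibration analysis:
print(select_strategy(True, False, True))  # Predictive (on-condition) maintenance task
```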
Identification of Safety Critical Elements (SCE) and maintaining their associated pre-defined
performance standards is the foundation of asset integrity management.

In use[edit]
After being created by the commercial aviation industry, RCM was adopted by the U.S. military
(beginning in the mid-1970s) and by the U.S. commercial nuclear power industry (in the 1980s).
Starting in the late 1980s, an independent initiative led by John Moubray corrected some early
flaws in the process, and adapted it for use in the wider industry (RCM2). Moubray was also
responsible for popularizing the method and for introducing it to much of the industrial
community outside of the aviation industry.
In the two decades since RCM2 was first released, industry has undergone massive change.
Increased economic pressures and competition, tied with advances in lean thinking and
efficiency methods meant that companies often struggled to find the people required to carry out
an RCM initiative.
At this point many methods sprang up that reduced the rigour of the RCM approach. The
result was the propagation of many methods that called themselves RCM,
yet had little in common with the original concepts. In some cases these were misleading and
inefficient, while in other cases they were even dangerous.
Since each initiative is sponsored by one or more consulting firms eager to help clients use it,
there is still considerable disagreement about their relative dangers (or merits). Also there is a
tendency for consulting firms to promote a software package as an alternative methodology in
place of the knowledge required to perform analyses.
The RCM standard (SAE JA1011, available from http://www.sae.org) provides the minimum
criteria that processes must comply with if they are to be called RCM.
Although a voluntary standard, it provides a reference for companies looking to implement RCM
to ensure they are getting a process, software package or service that is in line with the original
report.
Disney introduced RCM to its parks in 1997, led by Paul Pressler and consultants McKinsey &
Company, laying off a large number of maintenance workers and saving large amounts of
money. Some people blamed the new cost-conscious maintenance culture for some of
the Incidents at Disneyland Resort that occurred in the following years.[2]

The process of implementing a damage detection and characterization strategy for engineering
structures is referred to as Structural Health Monitoring (SHM). Here damage is defined as
changes to the material and/or geometric properties of a structural system, including changes to
the boundary conditions and system connectivity, which adversely affect the system’s
performance. The SHM process involves the observation of a system over time using
periodically sampled dynamic response measurements from an array of sensors, the extraction of
damage-sensitive features from these measurements, and the statistical analysis of these features
to determine the current state of system health. For long term SHM, the output of this process is
periodically updated information regarding the ability of the structure to perform its intended
function in light of the inevitable aging and degradation resulting from operational environments.
After extreme events, such as earthquakes or blast loading, SHM is used for rapid condition
screening and aims to provide, in near real time, reliable information regarding the integrity of
the structure.[1]

Introduction[edit]

Qualitative and non-continuous methods have long been used to evaluate structures for their
capacity to serve their intended purpose. Since the beginning of the 19th century, railroad wheel-
tappers have used the sound of a hammer striking the train wheel to evaluate if damage was
present.[2] In rotating machinery, vibration monitoring has been used for decades as a
performance evaluation technique.[1] Two techniques in the field of SHM are wave propagation
based techniques Raghavan and Cesnik[3] and vibration based techniques.[4][5][6] Broadly the
literature for vibration based SHM can be divided into two aspects, the first wherein models are
proposed for the damage to determine the dynamic characteristics, also known as the direct
problem, for example refer, Unified Framework[7] and the second, wherein the dynamic
characteristics are used to determine damage characteristics, also known as the inverse problem,
for example refer.[8] In the last ten to fifteen years, SHM technologies have emerged creating an
exciting new field within various branches of engineering. Academic conferences and scientific
journals have been established during this time that specifically focus on SHM.[2] These
technologies are currently becoming increasingly common.

Statistical Pattern Recognition Approach[edit]

The SHM problem can be addressed in the context of a statistical pattern recognition
paradigm.[9][10] This paradigm can be broken down into four parts: (1) Operational Evaluation,
(2) Data Acquisition and Cleansing, (3) Feature Extraction and Data Compression, and (4)
Statistical Model Development for Feature Discrimination. When one attempts to apply this
paradigm to data from real world structures, it quickly becomes apparent that the ability to
cleanse, compress, normalize and fuse data to account for operational and environmental
variability is a key implementation issue when addressing Parts 2-4 of this paradigm. These
processes can be implemented through hardware or software and, in general, some combination
of these two approaches will be used.
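A minimal end-to-end sketch of the four-part paradigm, using signal RMS as a deliberately simple damage-sensitive feature and invented baseline values:

```python
def shm_pipeline(raw_samples):
    """Toy statistical-pattern-recognition pass over one sensor record:
    cleanse -> extract feature -> discriminate against a baseline.
    RMS and the threshold values are illustrative choices only."""
    # (2) Data cleansing: drop missing readings
    clean = [x for x in raw_samples if x is not None]
    # (3) Feature extraction / compression: reduce the record to one scalar
    rms = (sum(x * x for x in clean) / len(clean)) ** 0.5
    # (4) Statistical discrimination: compare against a healthy baseline band
    baseline_rms, threshold = 1.0, 1.5   # illustrative baseline model
    state = "damaged?" if rms > threshold * baseline_rms else "healthy"
    return rms, state

print(shm_pipeline([0.9, -1.1, 1.0, -0.8]))
```

Part (1), operational evaluation, happens before any code is written: it decides what "damage" means for this structure and which sensors to deploy.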

Health Assessment of Engineered Structures of Bridges, Buildings and other related
infrastructures[edit]

Commonly known as Structural Health Assessment (SHA) or SHM, this concept is widely
applied to various forms of infrastructures, especially as countries all over the world enter into an
even greater period of construction of various infrastructures ranging from bridges to
skyscrapers. Where damage to structures is concerned, it is important to note that there are
stages of increasing difficulty, each requiring knowledge of the previous stages, namely:

1) Detecting the existence of the damage on the structure

2) Locating the damage

3) Identifying the types of damage

4) Quantifying the severity of the damage

It is necessary to employ signal processing and statistical classification to convert sensor data on
the infrastructural health status into damage information for assessment.

Operational Evaluation[edit]

Operational evaluation attempts to answer four questions regarding the implementation of a
damage identification capability:

i) What are the life-safety and/or economic justifications for performing the SHM?

ii) How is damage defined for the system being investigated and, for multiple damage
possibilities, which cases are of the most concern?

iii) What are the conditions, both operational and environmental, under which the system to be
monitored functions?

iv) What are the limitations on acquiring data in the operational environment?

Operational evaluation begins to set the limitations on what will be monitored and how the
monitoring will be accomplished. This evaluation starts to tailor the damage identification
process to features that are unique to the system being monitored and tries to take advantage of
unique features of the damage that is to be detected.

Data Acquisition, Normalization and Cleansing[edit]

The data acquisition portion of the SHM process involves selecting the excitation methods, the
sensor types, number and locations, and the data acquisition/storage/transmittal hardware. Again,
this process will be application specific. Economic considerations will play a major role in
making these decisions. The intervals at which data should be collected are another consideration
that must be addressed.

Because data can be measured under varying conditions, the ability to normalize the data
becomes very important to the damage identification process. As it applies to SHM, data
normalization is the process of separating changes in sensor reading caused by damage from
those caused by varying operational and environmental conditions. One of the most common
procedures is to normalize the measured responses by the measured inputs. When environmental
or operational variability is an issue, the need can arise to normalize the data in some temporal
fashion to facilitate the comparison of data measured at similar times of an environmental or
operational cycle. Sources of variability in the data acquisition process and with the system being
monitored need to be identified and minimized to the extent possible. In general, not all sources
of variability can be eliminated. Therefore, it is necessary to make the appropriate measurements
such that these sources can be statistically quantified. Variability can arise from changing
environmental and test conditions, changes in the data reduction process, and unit-to-unit
inconsistencies.
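
The input-normalization procedure mentioned above can be sketched as follows. The signal names and sampling rate are hypothetical; dividing the response spectrum by the input spectrum forms a frequency response function (FRF), so that changes caused by varying excitation levels are separated from those caused by damage.

```python
import numpy as np

def frf_normalize(response, excitation, fs):
    """Normalize a measured response by the measured input in the
    frequency domain, yielding a frequency response function (FRF).
    Hypothetical signals: `excitation` is a measured force input and
    `response` an accelerometer reading, both sampled at `fs` Hz."""
    n = len(response)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    R = np.fft.rfft(response)
    X = np.fft.rfft(excitation)
    # Dividing out the input spectrum removes variation caused by
    # changing excitation levels rather than by damage.
    frf = R / np.where(np.abs(X) > 1e-12, X, 1e-12)
    return freqs, frf

# A response that is exactly twice the input gives an FRF of
# magnitude 2 at the excited frequency.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)
freqs, frf = frf_normalize(2.0 * x, x, fs=256)
```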

Data cleansing is the process of selectively choosing data to pass on to or reject from the feature
selection process. The data cleansing process is usually based on knowledge gained by
individuals directly involved with the data acquisition. As an example, an inspection of the test
setup may reveal that a sensor was loosely mounted and, hence, based on the judgment of the
individuals performing the measurement, this set of data or the data from that particular sensor
may be selectively deleted from the feature selection process. Signal processing techniques such
as filtering and re-sampling can also be thought of as data cleansing procedures.
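
The filtering and re-sampling steps named above can be sketched as follows, assuming NumPy/SciPy; the cutoff frequency and decimation factor are hypothetical choices.

```python
import numpy as np
from scipy import signal

def cleanse(x, fs, cutoff_hz=50.0, downsample=4):
    """Illustrative data-cleansing step: low-pass filter to reject
    out-of-band noise, then re-sample to reduce the data volume.
    The cutoff and decimation factor are hypothetical choices."""
    # 4th-order Butterworth low-pass; Wn is normalized to Nyquist.
    b, a = signal.butter(4, cutoff_hz / (fs / 2.0), btype="low")
    filtered = signal.filtfilt(b, a, x)   # zero-phase filtering
    return filtered[::downsample]         # simple re-sampling

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# 10 Hz signal of interest plus 300 Hz contamination
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
y = cleanse(x, fs)
```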

Finally, the data acquisition, normalization, and cleansing portion of SHM process should not be
static. Insight gained from the feature selection process and the statistical model development
process will provide information regarding changes that can improve the data acquisition
process.

Feature Extraction and Data Compression[edit]

The area of the SHM process that receives the most attention in the technical literature is the
identification of data features that allow one to distinguish between the undamaged and
damaged structure. Inherent in this feature selection process is the condensation of the data. The
best features for damage identification are, again, application specific.

One of the most common feature extraction methods is based on correlating measured system
response quantities, such as vibration amplitude or frequency, with the first-hand observations of
the degrading system. Another method of developing features for damage identification is to
apply engineered flaws, similar to ones expected in actual operating conditions, to systems and
develop an initial understanding of the parameters that are sensitive to the expected damage. The
flawed system can also be used to validate that the diagnostic measurements are sensitive enough
to distinguish between features identified from the undamaged and damaged system. The use of
analytical tools such as experimentally-validated finite element models can be a great asset in
this process. In many cases the analytical tools are used to perform numerical experiments where
the flaws are introduced through computer simulation. Damage accumulation testing, during
which significant structural components of the system under study are degraded by subjecting
them to realistic loading conditions, can also be used to identify appropriate features. This
process may involve induced-damage testing, fatigue testing, corrosion growth, or temperature
cycling to accumulate certain types of damage in an accelerated fashion. Insight into the
appropriate features can be gained from several types of analytical and experimental studies as
described above and is usually the result of information obtained from some combination of
these studies.

The operational implementation and diagnostic measurement technologies needed to perform
SHM produce more data than traditional uses of structural dynamics information. A
condensation of the data is advantageous and necessary when comparisons of many feature sets
obtained over the lifetime of the structure are envisioned. Also, because data will be acquired
from a structure over an extended period of time and in an operational environment, robust data
reduction techniques must be developed to retain feature sensitivity to the structural changes of
interest in the presence of environmental and operational variability. To further aid in the
extraction and recording of quality data needed to perform SHM, the statistical significance of
the features should be characterized and used in the condensation process.
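
The condensation of raw measurements into a small feature vector can be illustrated as follows. The chosen features (RMS level, kurtosis, dominant frequency) are hypothetical examples, since, as noted above, the best features are application specific.

```python
import numpy as np

def extract_features(x, fs):
    """Condense a raw vibration record into a small feature vector.
    The specific features here are illustrative choices only."""
    rms = np.sqrt(np.mean(x ** 2))
    # Kurtosis rises with the impulsiveness typical of some faults.
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    spectrum = np.abs(np.fft.rfft(x))
    # Skip the DC bin when locating the dominant frequency.
    dominant_hz = np.fft.rfftfreq(len(x), d=1.0 / fs)[int(np.argmax(spectrum[1:]) + 1)]
    return np.array([rms, kurtosis, dominant_hz])

fs = 1024
t = np.arange(0, 1, 1 / fs)
features = extract_features(np.sin(2 * np.pi * 25 * t), fs)
```

For a pure sine the RMS is 1/√2, the kurtosis is 1.5, and the dominant frequency is the sine frequency, so a whole record is compressed to three numbers.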

Statistical Model Development[edit]

The portion of the SHM process that has received the least attention in the technical literature is
the development of statistical models for discrimination between features from the undamaged
and damaged structures. Statistical model development is concerned with the implementation of
the algorithms that operate on the extracted features to quantify the damage state of the structure.
The algorithms used in statistical model development usually fall into three categories. When
data are available from both the undamaged and damaged structure, the statistical pattern
recognition algorithms fall into the general classification referred to as supervised learning.
Group classification and regression analysis are categories of supervised learning algorithms.
Unsupervised learning refers to algorithms that are applied to data not containing examples from
the damaged structure. Outlier or novelty detection is the primary class of algorithms applied in
unsupervised learning applications. All of the algorithms analyze statistical distributions of the
measured or derived features to enhance the damage identification process.
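
The outlier/novelty-detection approach described above can be sketched with a Mahalanobis-distance detector trained only on features from the presumed-undamaged structure (an unsupervised setting). The Gaussian baseline model and the 3-sigma threshold are illustrative choices, not a prescribed method.

```python
import numpy as np

def mahalanobis_detector(baseline, threshold=3.0):
    """Fit a Gaussian to feature vectors from the undamaged state,
    then flag new feature vectors whose Mahalanobis distance from
    the baseline mean exceeds a (hypothetical) threshold."""
    mean = baseline.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

    def is_novel(x):
        d = x - mean
        distance = np.sqrt(d @ cov_inv @ d)
        return distance > threshold

    return is_novel

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 2))  # undamaged-state features
detect = mahalanobis_detector(baseline)
```

A feature vector far from the baseline cloud is flagged as novel (possible damage), while one near the baseline mean is not.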

The Fundamental Axioms of SHM[edit]

Based on the extensive literature that has developed on SHM over the last 20 years, it can be
argued that this field has matured to the point where several fundamental axioms, or general
principles, have emerged.[11] The axioms are listed as follows:

 Axiom I: All materials have inherent flaws or defects;

 Axiom II: The assessment of damage requires a comparison between two system states;
 Axiom III: Identifying the existence and location of damage can be done in an
unsupervised learning mode, but identifying the type of damage present and the damage
severity can generally only be done in a supervised learning mode;

 Axiom IVa: Sensors cannot measure damage. Feature extraction through signal
processing and statistical classification is necessary to convert sensor data into damage
information;

 Axiom IVb: Without intelligent feature extraction, the more sensitive a measurement is to
damage, the more sensitive it is to changing operational and environmental conditions;

 Axiom V: The length- and time-scales associated with damage initiation and evolution
dictate the required properties of the SHM sensing system;

 Axiom VI: There is a trade-off between the sensitivity to damage of an algorithm and its
noise rejection capability;

 Axiom VII: The size of damage that can be detected from changes in system dynamics is
inversely proportional to the frequency range of excitation.

SHM Components[edit]

The elements of an SHM system include:

 Structure

 Sensors

 Data acquisition systems

 Data transfer and storage mechanism

 Data management

 Data interpretation and diagnosis:

1) System Identification

2) Structural model update

3) Structural condition assessment

4) Prediction of remaining service life

An example of this technology is embedding sensors in structures like bridges and aircraft. These
sensors provide real-time monitoring of various structural changes, such as stress and strain. In
the case of civil engineering structures, the data provided by the sensors are usually transmitted
to remote data acquisition centres. With the aid of modern technology, real-time control of
structures (active structural control) based on sensor information is possible.

Examples[edit]

Wind and Structural Health Monitoring System for Bridges in Hong Kong[edit]

The Wind and Structural Health Monitoring System (WASHMS) is a sophisticated bridge
monitoring system, costing US$1.3 million, used by the Hong Kong Highways Department to
ensure road user comfort and safety of the Tsing Ma, Ting Kau, Kap Shui
Mun and Stonecutters bridges.[12]

In order to oversee the integrity, durability and reliability of the bridges, WASHMS has four
different levels of operation: sensory systems, data acquisition systems, local centralised
computer systems and global central computer system.

The sensory system consists of approximately 900 sensors and their relevant interfacing units.
With more than 350 sensors on the Tsing Ma bridge, 350 on Ting Kau and 200 on Kap Shui
Mun, the structural behaviour of the bridges is measured 24 hours a day, seven days a week.

The sensors include accelerometers, strain gauges, displacement transducers, level sensing
stations, anemometers, temperature sensors and dynamic weigh-in-motion sensors. They
measure everything from tarmac temperature and strains in structural members to wind speed and
the deflection and rotation of the kilometres of cables and any movement of the bridge decks and
towers.

These sensors are the early warning system for the bridges, providing the essential information
that helps the Highways Department to accurately monitor the general health conditions of the
bridges.

The structures have been built to withstand up to a one-minute mean wind speed of 95 metres per
second. In 1997, when Hong Kong had a direct hit from Typhoon Victor, wind speeds of 110 to
120 kilometres per hour were recorded. However, the highest wind speed on record occurred
during Typhoon Wanda in 1962, when a 3-second gust was recorded at 78.8 metres per second
(284 kilometres per hour).

The information from these hundreds of different sensors is transmitted to the data
acquisition outstation units. There are three data acquisition outstation units on Tsing Ma bridge,
three on Ting Kau and two on the Kap Shui Mun.

The computing powerhouse for these systems is in the administrative building used by the
Highways Department in Tsing Yi. The local central computer system provides data collection
control, post-processing, transmission and storage. The global system is used for data acquisition
and analysis, assessing the physical conditions and structural functions of the bridges and for
integration and manipulation of the data acquisition, analysis and assessment processes.

 Monitoring Hong Kong's Bridges Real-Time Kinematic Spans The Gap

Other large examples[edit]

The following projects are currently known as some of the biggest ongoing bridge monitoring programmes:

 The Rio–Antirrio bridge, Greece: has more than 100 sensors monitoring the structure and
the traffic in real time.

 Millau Viaduct, France: has one of the largest fiber optic monitoring systems in the
world, which is considered[by whom?] state of the art.

 The Huey P. Long Bridge, USA: has over 800 static and dynamic strain gauges designed
to measure axial and bending load effects.

 The Fatih Sultan Mehmet Bridge, Turkey: also known as the Second Bosphorus Bridge.
It has been monitored using an innovative wireless sensor network under normal traffic
conditions.

 The Masjid al-Haram expansion project, Mecca, Saudi Arabia: has more than 600
sensors (concrete pressure cells, embedment-type strain gauges, sister bar strain gauges,
etc.) installed at the foundation and concrete columns. This project is under construction.

 The Sydney Harbour Bridge in Australia is currently implementing a monitoring system
involving over 2,400 sensors. Asset managers and bridge inspectors have mobile and web
browser decision support tools based on analysis of sensor data.

 The Queensferry Crossing, currently under construction across the Firth of Forth, will
have a monitoring system including more than 2,000 sensors upon its completion. Asset
managers will have access to data for all sensors from a web-based data management
interface, including automated data analysis.

Structural Health Monitoring for bridges[edit]

Health monitoring of large bridges can be performed by simultaneous measurement of loads on
the bridge and the effects of these loads. It typically includes monitoring of:

 Wind and weather

 Traffic
 Prestressing and stay cables

 Deck

 Pylons

 Ground

Provided with this knowledge, the engineer can:

 Estimate the loads and their effects

 Estimate the state of fatigue or other limit state

 Forecast the probable evolution of the bridge's health

The Bridge Engineering Department of the Oregon Department of Transportation, in the United
States, has developed and implemented a Structural Health Monitoring (SHM) program, as
described in a technical paper by Steven Lovejoy, Senior Engineer.[13]

References are available that provide an introduction to the application of fiber optic sensors to
Structural Health Monitoring on bridges.[14]

Vibration testing[edit]
Vibration testing is performed to examine the response of a device under test (DUT) to a defined
vibration environment. It is accomplished by introducing a forcing function into a structure,
usually with some type of shaker; alternatively, the DUT is attached to the "table" of a shaker.
The measured response may be fatigue life, resonant frequencies or squeak and rattle sound
output (NVH). Squeak and rattle testing is performed with a special type of quiet shaker that
produces very low sound levels while under operation.
For relatively low frequency forcing, servohydraulic (electrohydraulic) shakers are used. For
higher frequencies, electrodynamic shakers are used. Generally, one or more "input" or "control"
points located on the DUT side of a fixture are kept at a specified acceleration.[1] Other "response"
points experience maximum vibration level (resonance) or minimum vibration level (anti-
resonance). It is often desirable to achieve anti-resonance to keep a system from becoming too
noisy, or to reduce strain on certain parts due to vibration modes caused by specific vibration
frequencies.[2]
The most common types of vibration testing services conducted by vibration test labs are
Sinusoidal and Random. Sine (one-frequency-at-a-time) tests are performed to survey the
structural response of the device under test (DUT). A random (all frequencies at once) test is
generally considered to more closely replicate a real world environment, such as road inputs to a
moving automobile.
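
The distinction between sine (one frequency at a time) and random (all frequencies at once) excitation can be illustrated by generating the two drive signals; the sweep range, duration and sample rate below are hypothetical.

```python
import numpy as np

fs = 4096            # sample rate, Hz (hypothetical)
duration = 5.0       # test duration, s (hypothetical)
t = np.arange(0, duration, 1 / fs)

# Sine sweep: a logarithmic chirp from f0 to f1, exciting one
# frequency at a time as the sweep progresses.
f0, f1 = 10.0, 2000.0
k = np.log(f1 / f0)
phase = 2 * np.pi * f0 * duration / k * (np.exp(t / duration * k) - 1)
sweep = np.sin(phase)

# Random excitation: broadband Gaussian noise drives all
# frequencies simultaneously, closer to a real-world environment.
random_drive = np.random.default_rng(0).normal(size=len(t))
```

The sweep's instantaneous frequency rises from f0 at the start to f1 at the end, while the random drive has energy spread across the whole band at every instant.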
Most vibration testing is conducted in a 'single DUT axis' at a time, even though most real-world
vibration occurs in various axes simultaneously. MIL-STD-810G, released in late 2008, Test
Method 527, calls for multiple exciter testing. The vibration test fixture used to attach the DUT
to the shaker table must be designed for the frequency range of the vibration test spectrum.
Generally for smaller fixtures and lower frequency ranges, the designer targets a fixture design
that is free of resonances in the test frequency range. This becomes more difficult as the DUT
gets larger and as the test frequency increases. In these cases multi-point control strategies can
mitigate some of the resonances present during testing. Devices specifically designed
to trace or record vibrations are called vibroscopes.

Vibration analysis[edit]
Vibration analysis (VA), applied in an industrial or maintenance environment, aims to reduce
maintenance costs and equipment downtime by detecting equipment faults.[3][4] VA is a key
component of a condition monitoring (CM) program, and is often referred to as predictive
maintenance (PdM).[5] Most commonly, VA is used to detect faults in rotating equipment (fans,
motors, pumps, gearboxes, etc.) such as unbalance, misalignment, rolling element bearing
faults and resonance conditions.
VA can use the units of displacement, velocity and acceleration displayed as a time waveform
(TWF), but most commonly the spectrum is used, derived from a fast Fourier transform of the
TWF. The vibration spectrum provides important frequency information that can pinpoint the
faulty component.
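
A minimal sketch of deriving the spectrum from the TWF follows, assuming a simulated casing vibration dominated by a 1× (running speed) component such as unbalance would produce; all signal parameters are hypothetical.

```python
import numpy as np

fs = 2048                   # sample rate, Hz (hypothetical)
running_speed_hz = 29.5     # shaft speed ~1770 rpm (hypothetical)
t = np.arange(0, 2, 1 / fs)

# Simulated casing vibration: a dominant 1x component, as produced
# by unbalance, plus broadband measurement noise.
twf = 1.0 * np.sin(2 * np.pi * running_speed_hz * t) \
    + 0.1 * np.random.default_rng(1).normal(size=len(t))

# Spectrum via FFT of the time waveform (TWF); the frequency of the
# largest non-DC peak points toward the faulty component.
spectrum = np.abs(np.fft.rfft(twf))
freqs = np.fft.rfftfreq(len(twf), d=1.0 / fs)
peak_hz = freqs[int(np.argmax(spectrum[1:]) + 1)]
```

A spectral peak at exactly 1× running speed is the classic signature of unbalance; harmonics at 2× or bearing-specific frequencies would point to other fault types.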
The fundamentals of vibration analysis can be understood by studying the simple mass–spring–
damper model. Indeed, even a complex structure such as an automobile body can be modeled as
a "summation" of simple mass–spring–damper models. The mass–spring–damper model is an
example of a simple harmonic oscillator. The mathematics used to describe its behavior is
identical to other simple harmonic oscillators such as the RLC circuit.
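
For the mass-spring-damper model, the quantities of interest follow directly from the parameters; a short sketch with hypothetical values:

```python
import numpy as np

# Mass-spring-damper parameters (hypothetical values)
m = 2.0      # mass, kg
k = 8000.0   # spring stiffness, N/m
c = 16.0     # damping coefficient, N*s/m

# Standard relations for the simple harmonic oscillator
omega_n = np.sqrt(k / m)                   # undamped natural frequency, rad/s
f_n = omega_n / (2 * np.pi)                # the same, in Hz
zeta = c / (2 * np.sqrt(k * m))            # damping ratio (dimensionless)
omega_d = omega_n * np.sqrt(1 - zeta**2)   # damped natural frequency, rad/s
```

With these values the system is underdamped (zeta < 1), so it oscillates at the damped natural frequency, slightly below omega_n; the identical relations apply to the RLC circuit mentioned above with the substitutions m→L, c→R, k→1/C.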
