
Comparison of earthquake catalogs declustered from three different methods in the Korean Peninsula

Sung Kyun Kim (kimsk3454@gmail.com)

Research Article

Keywords: Earthquake catalog, Seismicity declustering method, Poisson process test

Posted Date: December 5th, 2022

DOI: https://doi.org/10.21203/rs.3.rs-2324827/v1

License: This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract
The earthquake catalog includes both dependent earthquakes, which are spatio-temporally related to
each other, and independent or background earthquakes. In order to predict the long-term seismicity or
perform seismic hazard research, the dependent earthquakes must be removed to generate a declustered
earthquake catalog. However, several declustering methods have been proposed, and the evaluation of
seismic hazard may vary depending on the selected declustering method. In the present study, the
catalog of earthquakes that were observed between 2016 and 2021 in and around the Korean peninsula
is declustered using the methods of Gardner and Knopoff (1974), Reasenberg (1985), and Zhuang et al.
(2002), and the resultant catalogs are compared. The values of the seismicity parameters (a and b) in the
Gutenberg-Richter relationship are estimated from the declustered catalogs, and are seen to vary
depending on the declustering method, thereby affecting the results of long-term earthquake prediction or
seismic hazard analysis.

In addition, three approaches are used to test whether the original (raw) and declustered catalogs follow
the Poisson process or not. The minimum magnitude (Mp) above which the null hypothesis of the
Poisson process cannot be rejected in the earthquake catalog is shown to range from 1.6 to 2.2
depending on the declustered catalog and the test method used. Further, the Mp obtained herein is larger
than the completeness magnitude estimated in the present study. A comparison of
the curves representing the cumulative number of background earthquakes versus the elapsed time for
the various declustered catalogs shows that the method of Zhuang et al. (2002) gives the closest
agreement with the real background seismicity curve.

1. Introduction
To study the seismicity or earthquake hazards of a given area, an earthquake catalog for that area is
required. This earthquake catalog includes both dependent earthquakes, which depend temporally and
spatially on each other, and independent or background earthquakes (Van Stiphout et al. 2012).
Independent or background earthquakes are mainly caused by the permanent tectonic stresses associated
with plate movements; such earthquakes are also referred to as mainshocks or parent earthquakes. Meanwhile,
dependent earthquakes are foreshocks, aftershocks, or triggered earthquakes, which are caused by temporary
stress changes before, or slip after, the fault motion associated with the mainshock. The operation of
removing the dependent earthquakes from the earthquake catalog is termed seismicity declustering, and
is an important issue in the study of seismicity (Zhuang et al. 2005; Van Stiphout et al. 2012). The
ultimate goal of seismicity declustering is to obtain a pure background seismicity rate that completely
eliminates the dependent earthquakes from the observed earthquake catalog, leaving only the
background earthquakes that are temporally and spatially independent. For example, a commonly-used
method in the prediction of long-term seismicity and in seismic hazard research is that of Cornell (1968),
which considers that the long-term seismicity rate within a certain area is constant with respect to time,
i.e., a stationary Poisson process is assumed (Mulargia et al. 2017; Taroni and Akinci 2021). Therefore, a pure
background seismicity rate that completely removes the dependent earthquakes from the observed
earthquake catalog is required. The characteristics of dependent earthquakes vary from region to region,
and for this reason different declustering methods have been proposed for different regions
(Van Stiphout et al. 2012).

The Korean Peninsula, which is located inside the Eurasian plate close to the Japan Trench, has a
relatively low seismicity compared to neighboring southwest Japan and northeast China.
Seismological research in Korea was started in response to the need to evaluate earthquake safety
following the construction of industrial complexes and nuclear power plants. However, while instrumental
earthquake observations in the Korean Peninsula started in 1905, digital observations did not begin until
the end of the 2000s. Consequently, there have been insufficient studies on earthquake hazards. On
September 12, 2016, an earthquake with a magnitude of 5.8 occurred in Gyeongju in the southeastern
part of the Korean Peninsula (Kim et al. 2016; Son et al. 2017). Then, about a year later, a magnitude 5.4
earthquake occurred in Pohang, close to Gyeongju, and this earthquake is known to have been triggered
by water injection into a borehole for geothermal power generation (Kim et al. 2018; Lim et al. 2020).
These two earthquakes were accompanied by a number of foreshocks and aftershocks (Kim and Lee
2019) and caused considerable damage to the surrounding area (Eem et al. 2018; Gihm et al. 2018; Jin et
al. 2019; Lee et al. 2018). As a result, national and social interest in earthquakes has increased, and
research on seismic hazards has become active.

In the present study, the earthquake catalog for the Korean Peninsula between 2016 and 2021 that was
compiled by the Korea Meteorological Administration (KMA 2022) is selected because even micro-
earthquakes with a magnitude of 2.0 or less are included in the catalog from 2016, along with many
dependent earthquakes (aftershocks). The epicenter distribution for the catalog is shown in Fig. 1. Here, it
can be seen that the epicenters are concentrated offshore along the south-east coast of the Korean
Peninsula, and from the central to the southern part of the west coast, and onshore from the central
western part to the southeastern part of the peninsula. The catalog is declustered using three different
methods, and the resultant catalogs are compared in terms of the completeness magnitude and the
seismicity parameters (a- and b-values). In addition, an examination of whether the declustered catalogs
follow the Poisson process or not is performed by using several statistical test methods.

2. Seismicity Declustering Methods


In this study, the seismicity declustering methods of Gardner and Knopoff (1974), Reasenberg (1985),
and Zhuang et al. (2002) are compared. The first of these is also known as the window method, and is
the simplest one for identifying mainshocks and aftershocks. After an earthquake occurs, earthquakes
within specific time and space windows relative to that earthquake are recognized as aftershocks. The
sizes of the temporal and spatial windows are usually expressed as functions of the magnitude of the
mainshock (i.e., the largest shock in the cluster). Smaller earthquakes that occurred before the mainshock
are regarded as foreshocks. The secondary and higher-order aftershocks (i.e., aftershocks that are
triggered by or related to previous aftershocks) are not considered in the Gardner and Knopoff (1974)
method. Since this method assumes a circular spatial window, the elongated distribution of aftershocks
that reflects the extent of the fault is not taken into account. While Gardner and Knopoff (1974) presented the
lengths and durations of the windows in the form of tables, Van Stiphout et al. (2012) approximated the
windows as Eq. (1):

0.032M +2.7389
10 , ifM ≥ 6.5
r = 10
0.1238M +0.983
[km] , t = { [days], (1)
0.5409M −0.547,else
10

where r is the spatial radius, t is the time, and M the magnitude of the earthquake. These equations are
used for declustering in the present work.
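As a minimal sketch of Eq. (1) (not the code used in this study; the function name and the example magnitude are illustrative only), the window sizes can be evaluated as follows:

```python
def gk_window(magnitude):
    """Space-time window of Gardner and Knopoff (1974) in the approximated
    form of Van Stiphout et al. (2012), Eq. (1)."""
    r_km = 10.0 ** (0.1238 * magnitude + 0.983)        # spatial radius [km]
    if magnitude >= 6.5:
        t_days = 10.0 ** (0.032 * magnitude + 2.7389)  # temporal window [days]
    else:
        t_days = 10.0 ** (0.5409 * magnitude - 0.547)
    return r_km, t_days

# Example (illustrative): windows for the 2016 ML 5.8 Gyeongju mainshock
print(gk_window(5.8))   # roughly (50 km, 390 days)
```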

The method of Reasenberg (1985) links earthquakes within the space-time interaction zone, and groups
them as an aftershock cluster. That is, when earthquake A is the aftershock of B, and B is the aftershock
of C, earthquakes A, B, and C are grouped into a common aftershock cluster. This means that the
secondary and higher-order aftershocks are considered in the treatment of the aftershock. The largest
earthquake in the cluster is regarded as the mainshock of that cluster. The size of the temporal
interaction zone is given by Omori’s law for aftershock statistics. The spatial interaction zone is defined
by the size of the seismic source. In the present work, declustering by the Reasenberg (1985) method is
performed using the CLUSTER2000 computer program (USGS, 2021), and the standard input parameters
of the original Reasenberg (1985) algorithm are adopted, except for the size of the spatial interaction zone.

Tibi et al. (2011) used an alternative input for the size of the spatial interaction zone based on the work of
Kanamori and Anderson (1975), who represented the radius a of a circular rupture area (in km) as a
function of the moment magnitude (Mw) and the static stress drop (Δσ). Taking the stress drop to be 30
bars, which is appropriate for most shallow earthquakes, gives the simple relationship in Eq. (2):

$$a = 10^{0.5M_w - 2.25} \qquad (2)$$

Then, the spatial interaction zone R consists of the rupture area radius of the aftershock multiplied by the
factor F, plus the rupture radius of the largest earthquake in the cluster, as given by Eq. (3) (Tibi et al.,
2011):
$$R = a \times F + 10^{0.5M_w^{0} - 2.25}, \qquad (3)$$

where M_w^0 is the moment magnitude of the mainshock. The above alternative input of Tibi et al. (2011) is
adopted in the present study. The factor F is assumed to be 10, which is a commonly used value.
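Eqs. (2) and (3) can be sketched as follows; the function names and the example magnitudes are illustrative assumptions, not the exact implementation used with CLUSTER2000, while the default F = 10 follows the value stated above.

```python
def rupture_radius_km(mw):
    """Circular rupture radius of Kanamori and Anderson (1975) for a 30-bar
    stress drop, Eq. (2): a = 10**(0.5*Mw - 2.25) [km]."""
    return 10.0 ** (0.5 * mw - 2.25)

def interaction_zone_km(mw_event, mw_largest_in_cluster, factor=10.0):
    """Spatial interaction zone of Tibi et al. (2011), Eq. (3):
    R = a(event) * F + rupture radius of the largest event in the cluster."""
    return rupture_radius_km(mw_event) * factor + rupture_radius_km(mw_largest_in_cluster)

# Example (illustrative): an Mw 3.0 event linked to an Mw 5.4 cluster mainshock
print(interaction_zone_km(3.0, 5.4))   # about 4.6 km
```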

In the methods described above, constants representing the size of the space-time window or the size of
the space-time interaction zone are given subjectively. If the constant changes, the distinction between
the background and the dependent earthquakes changes. In response to this subjectivity, Zhuang et al.
(2002, 2004) proposed a method for probabilistically classifying earthquakes as independent (background)
or dependent events. They used the epidemic-type aftershock sequence (ETAS) model to treat the
classification of background and dependent earthquakes probabilistically. The ETAS model
can be expressed by the conditional intensity function in Eq. (4) (Zhuang et al. 2002):

$$\lambda(t, x, y) = \mu(x, y) + \sum_{k:\, t_k < t} \chi(m_k)\, g(t - t_k)\, f(x - x_k,\, y - y_k \mid m_k), \qquad (4)$$

where μ(x, y) is the time-independent intensity function for the background earthquakes, g(t) and
f(x, y | m_k) are the normalized response functions with respect to the occurrence time and the epicenter,
respectively, and χ(m_k) represents the number of dependent earthquakes that are expected to be triggered
by an earthquake of magnitude m_k. Eq. (4) defines the probability that the j-th earthquake was
triggered by the i-th earthquake as the relative contribution of the i-th earthquake towards the occurrence
of the j-th earthquake (ρ_{i,j}), which can be expressed as Eq. (5):

$$\rho_{i,j} = \frac{\chi(m_i)\, g(t_j - t_i)\, f(x_j - x_i,\, y_j - y_i \mid m_i)}{\lambda(t_j, x_j, y_j)} \qquad (5)$$

In the same way, the probability that an earthquake is a background earthquake (φj ) or a triggered
earthquake (ρj ) can be expressed by Eqs. (6) and (7), respectively.

$$\varphi_j = \frac{\mu(x_j, y_j)}{\lambda(t_j, x_j, y_j)} \qquad (6)$$

$$\rho_j = 1 - \varphi_j = \sum_{i=1}^{j-1} \rho_{i,j} \qquad (7)$$

The present work uses the following algorithm that was proposed by Zhuang et al. (2002) to identify a
background earthquake and a dependent earthquake from the earthquake catalog:

1. For each pair of earthquakes i, j = 1, 2, …, N (i < j), calculate ρ_{i,j} in Eq. (5) and φ_j in Eq. (6).
2. Set j = 1.
3. Generate a uniform random number U_j in [0, 1].
4. If U_j < φ_j, then the j-th event is considered to be a background event.
5. Otherwise, select the smallest I such that U_j < φ_j + Σ_{i=1}^{I} ρ_{i,j}; the j-th event is then considered to be a descendant of the I-th event.
6. If j = N, then terminate the algorithm; otherwise, set j = j + 1 and go to step 3.
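Steps 2 to 6 of this assignment can be sketched as below, assuming the probabilities φ_j (Eq. 6) and ρ_{i,j} (Eq. 5) have already been estimated from a fitted ETAS model; the function name and array layout are illustrative only.

```python
import numpy as np

def stochastic_declustering(phi, rho, seed=None):
    """One stochastic realization of the Zhuang et al. (2002) assignment.

    phi : length-N array, phi[j] = background probability of event j (Eq. 6)
    rho : (N, N) array, rho[i, j] = probability that event j was triggered
          by event i (Eq. 5); only entries with i < j are used.
    Returns (background, parent): background[j] is True for background
    events, and parent[j] is the index of the triggering event (or -1).
    """
    rng = np.random.default_rng(seed)
    n = len(phi)
    background = np.zeros(n, dtype=bool)
    parent = np.full(n, -1, dtype=int)
    for j in range(n):
        u = rng.uniform(0.0, 1.0)
        if u < phi[j]:
            background[j] = True                  # step 4: background event
        else:
            # step 5: smallest I such that u < phi[j] + sum_{i<=I} rho[i, j]
            cumulative = phi[j] + np.cumsum(rho[:j, j])
            parent[j] = int(np.searchsorted(cumulative, u, side='right'))
    return background, parent
```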

3. Results Of Declustering
The results of declustering according to each of the three methods are presented in Table 1, where the
total number of events in the raw earthquake catalog is 7952. The method of Zhuang et al. (2002)
removed the largest number of dependent earthquakes, followed by the method of Gardner and
Knopoff (1974). The method of Reasenberg (1985), which considers the linking of dependent
earthquakes, detected relatively few dependent earthquakes. In the case of the Reasenberg (1985)
method, it is notable that the number of clusters of dependent earthquakes is relatively small
compared to the number of earthquakes removed.

Table 1
The numbers of events, clusters, removed events, and remaining events obtained using the declustering methods
of Gardner and Knopoff (1974), Reasenberg (1985), and Zhuang et al. (2002)

Method                       No. of events   No. of clusters   Removed events (%)   Remaining events
Gardner and Knopoff (1974)   7952            463               4557 (57.2)          3395
Reasenberg (1985)            7952            187               4144 (52.1)          3808
Zhuang et al. (2002)         7952            454               5606 (70.5)          2346

In addition, the seismicity obtained from the raw earthquake catalog is plotted in Fig. 2a, while the
mainshocks and their dependent earthquakes detected by the methods of Gardner and Knopoff (1974),
Reasenberg (1985), and Zhuang et al. (2002) are plotted in Fig. 2b, c, and d, respectively. The horizontal
and vertical axes represent time (year) and latitude, respectively. Here, the dependent earthquakes
following the Gyeongju earthquake in September 2016 and the Pohang earthquake in November 2017
(which are denoted by the characters ‘P’ and ‘Q’, respectively) are well-separated on the time axis, and
resemble a dotted line. It can be seen that the largest number of dependent earthquakes was detected
and removed by the method of Zhuang et al. (2002) (Fig. 2d, Table 1), followed by that of Gardner and
Knopoff (1974) (Fig. 2b, Table 1). Note that the results in Fig. 2b and d exhibit almost the same spatial
range, whereas the temporal extent of Fig. 2d is wider than that of Fig. 2b. Notably, the method of
Reasenberg (1985) detected the smallest number of dependent earthquakes in both the spatial and
temporal ranges (Fig. 2c, Table 1).

To predict the future seismicity, or to evaluate the seismic hazard, it is desirable to use a well-declustered
earthquake catalog from which the seismicity parameters a and b, and the occurrence rate above a
certain magnitude, can be estimated from the Gutenberg-Richter model for the frequency distribution of
earthquake magnitudes. The quantity and quality of the earthquake data collected within a certain period
and region usually depend on the capability of the seismic observation network. In other words, as the
observation network becomes denser, and the performance of its instruments improves, even small-scale
events can be detected and, hence, the number of detected events increases. In an earthquake catalog,
the magnitude of completeness (Mc) is usually defined as the minimum magnitude above which all
earthquakes within a certain region and period are reliably recorded (Rydelek and Sacks 1989; Wiemer
and Wyss 2000). A correct estimate of Mc is crucial because too high a value leads to under-sampling by
discarding usable data, while too low a value leads to erroneous seismicity parameter values and, thus, a
biased analysis, by using incomplete data (Mignan and Woessner 2012). Consequently, the evaluation of
Mc is the starting point for estimating the seismicity parameters from an earthquake catalog. While a
number of approaches for estimating Mc have been proposed (Ogata and Katsura 1993; Wiemer and
Wyss 2000; Woessner and Wiemer 2005), the widely-used method of Wiemer and Wyss (2000) is adopted
in the present study.
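One commonly used variant associated with Wiemer and Wyss (2000) is the maximum-curvature estimate, sketched below; the bin width and the optional empirical correction are illustrative assumptions, and this is not necessarily the exact procedure applied in this study.

```python
import numpy as np

def mc_maximum_curvature(magnitudes, bin_width=0.1, correction=0.0):
    """Rough estimate of the completeness magnitude Mc by the
    maximum-curvature technique: Mc is taken as the centre of the magnitude
    bin with the largest non-cumulative event count (an empirical correction
    such as +0.2 is sometimes added)."""
    mags = np.asarray(magnitudes, dtype=float)
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    peak = int(np.argmax(counts))
    return edges[peak] + 0.5 * bin_width + correction
```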

Once the Mc has been determined, the events with magnitudes equal to or greater than Mc are selected
from the earthquake catalog in order to estimate the seismicity parameters. The seismicity parameter b
can be estimated by either the least-squares method or the maximum-likelihood method from the
Gutenberg-Richter curve. However, it is more reasonable to use the maximum-likelihood method, because
the cumulative event numbers are not independent, and the temporal distribution of earthquake occurrences
is better represented by a Poisson distribution than by a Gaussian distribution (Weichert, 1980).
Several methods (Aki, 1965; Page, 1968; Weichert, 1980) for estimating the b-value based on the
maximum likelihood method have been proposed. It is well known that the b-value is greatly affected by
the method used (Amorese et al. 2009; Bender 1983). For example, Lee (2013) used simulated catalogs
to show that the Aki-Utsu formula (Aki 1965; Utsu 1965) gives a more stable b-value than other methods.
Thus, the Aki-Utsu formula was used herein to estimate the b-value.
Moreover, instead of the a-value, the annual occurrence rate of earthquakes with magnitudes equal to or
greater than 5.0 was used herein, in accordance with most seismic hazard studies.
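A sketch of the Aki-Utsu estimator is given below, assuming magnitudes binned at 0.1 units and using the standard error formula of Aki (1965); this is not necessarily the exact implementation used here.

```python
import numpy as np

def aki_utsu_b(magnitudes, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965; Utsu 1965) for events with
    M >= mc, using the usual half-bin correction for magnitudes binned at
    dm, together with Aki's (1965) standard error b / sqrt(N)."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - 0.5 * dm))
    return b, b / np.sqrt(len(m))
```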

The estimated b-values, their standard deviations (SD), and annual occurrence rates of magnitude 5.0 or
greater for the raw catalog, and for the declustered catalogs from the methods of Gardner and Knopoff
(1974), Reasenberg (1985), and Zhuang et al. (2002), are presented in Table 2. The Mc value is
estimated to be 1.5 for the raw catalog, and this decreases to 1.4 for each of the three declustered
catalogs. Similarly, the b-value of 0.97 obtained from the raw catalog is decreased after removing the
dependent earthquakes. This is thought to be because a relatively large number of small-scale
earthquakes are removed from the raw catalog. However, unlike the Mc values, the b-values of the
declustered catalogs vary according to the method used, being 0.90, 0.91, and 0.93 for the methods of
Gardner and Knopoff (1974), Zhuang et al. (2002), and Reasenberg (1985), respectively. Similarly, the
annual occurrence rate of magnitude 5.0 or greater varies according to the declustering method, being
0.2708 for the raw catalog, and 0.2394, 0.2209, and 0.2122 for the methods of Gardner and Knopoff (1974),
Reasenberg (1985), and Zhuang et al. (2002), respectively.

Table 2
The completeness magnitude (Mc), b-value, and annual occurrence rate of M ≥ 5.0 according to the original
(raw) catalog and the declustered catalogs using the methods of Gardner and Knopoff (1974), Reasenberg (1985),
and Zhuang et al. (2002)

Catalog                      Mc    b     SD     M ≥ 5.0 / year
Original (raw)               1.5   0.97  0.003  0.2708
Gardner and Knopoff (1974)   1.4   0.90  0.004  0.2394
Reasenberg (1985)            1.4   0.93  0.004  0.2209
Zhuang et al. (2002)         1.4   0.91  0.006  0.2122

4. The Poisson Process Test


As noted above, in the long-term prediction of seismicity, or in seismic hazard research, it is first assumed
that the seismicity follows the Poisson process. Therefore, it is necessary to verify this assumption for
the declustered earthquake catalog. Since Gardner and Knopoff (1974) first applied the Poisson test to a
declustered catalog for the California area, many researchers have conducted such studies in various
areas (Luen and Stark 2012; Shearer and Stark 2012; Wyss and Toya 2000). In particular, Noh (2016)
performed the Poisson test on earthquakes observed in and around the Korean Peninsula between 1980
and 2014, to find that the earthquakes with magnitudes greater than or equal to 2.7 that occurred on land
followed the Poisson process.

In the present study, the following three Poisson tests are applied to the three declustered earthquake
catalogs: (i) the multinomial chi-square (MC) test, (ii) the conditional chi-square (CC) test, and (iii) the
Kolmogorov-Smirnov (KS) test. The MC and CC tests pay attention to how the earthquakes are distributed
within a certain time period, while the KS test evaluates the distribution of time intervals between
earthquake occurrence times.

For the MC test, the method of Shearer and Stark (2012) is adopted as follows. First, the entire
observation period is divided into several equal intervals (N_i), and the events occurring within each
interval are counted. Then, it is checked whether the number of intervals with a certain number of events
agrees well with the theoretical Poisson distribution. Assuming that the seismicity follows the Poisson
process, the average event rate (λ) per interval becomes N/N_i, where the total number of events is N.
To ensure that the expected number of intervals in each class is at least five, the smallest and largest
classes (K− and K+) are defined by Eqs. (8) and (9) (Shearer and Stark 2012):

$$K^{-} \equiv \min\Big\{ k : N_i\, e^{-\lambda} \sum_{j=0}^{k} \lambda^{j}/j! \geq 5 \Big\} \qquad (8)$$

$$K^{+} \equiv \max\Big\{ k : N_i \Big(1 - e^{-\lambda} \sum_{j=0}^{k-1} \lambda^{j}/j!\Big) \geq 5 \Big\} \qquad (9)$$
The test statistic χ_M² is obtained with (K+ − K− + 1) − 2 degrees of freedom by using Eq. (10):

$$\chi_M^2 = \sum_{k=K^-}^{K^+} \frac{(X_k - E_k)^2}{E_k}, \qquad (10)$$

where X_k denotes the number of intervals containing k events, and E_k is defined by Eq. (11):

$$E_k \equiv \begin{cases} N_i\, e^{-\lambda} \displaystyle\sum_{j=0}^{K^-} \lambda^{j}/j!, & k = K^-,\\[6pt] N_i\, e^{-\lambda} \lambda^{k}/k!, & k = K^- + 1, \ldots, K^+ - 1,\\[6pt] N_i \Big(1 - e^{-\lambda} \displaystyle\sum_{j=0}^{K^+ - 1} \lambda^{j}/j!\Big), & k = K^+. \end{cases} \qquad (11)$$
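The MC test of Eqs. (8)-(11) can be sketched as follows, assuming the per-interval event counts have already been computed from the catalog; the function name and the use of SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson, chi2

def multinomial_chi_square_test(counts):
    """Sketch of the multinomial chi-square (MC) test following Shearer and
    Stark (2012); counts[k] is the number of events in the k-th of the N_i
    equal intervals. Returns the statistic of Eq. (10) and its p-value."""
    counts = np.asarray(counts)
    n_int = len(counts)                              # N_i
    lam = counts.sum() / n_int                       # mean event rate per interval
    k_vals = np.arange(counts.max() + 1)

    # K- and K+ of Eqs. (8)-(9): pooled end classes with expectation >= 5
    lower = n_int * poisson.cdf(k_vals, lam)         # N_i * P(X <= k)
    upper = n_int * poisson.sf(k_vals - 1, lam)      # N_i * P(X >= k)
    k_minus = int(k_vals[lower >= 5].min())
    k_plus = int(k_vals[upper >= 5].max())

    # Expected (Eq. 11) and observed numbers of intervals in each class
    classes = np.arange(k_minus, k_plus + 1)
    expected = n_int * poisson.pmf(classes, lam)
    expected[0] = n_int * poisson.cdf(k_minus, lam)
    expected[-1] = n_int * poisson.sf(k_plus - 1, lam)
    observed = np.array([np.sum(counts == k) for k in classes], dtype=float)
    observed[0] = np.sum(counts <= k_minus)
    observed[-1] = np.sum(counts >= k_plus)

    stat = np.sum((observed - expected) ** 2 / expected)      # Eq. (10)
    dof = (k_plus - k_minus + 1) - 2
    return stat, chi2.sf(stat, dof)
```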
Because the MC test does not distinguish between the numbers of intervals with small and large numbers
of events, it is not sensitive to variance in the data. For this reason, the MC test is not as sensitive as
certain other tests towards apparent fluctuations in the rate of seismicity (Luen and Stark 2012). This
problem can be compensated for by the CC test, which is also known as the Poisson dispersion test. This
test divides the entire observation period into several equal intervals to find the variance between the
number of events within each interval, along with the average number of events. When the number of
events during the k-th observation period is N_k, the test statistic χ_C² can be expressed as Eq. (12):

$$\chi_C^2 = \sum_{k=1}^{M} \frac{(N_k - \lambda)^2}{\lambda}, \qquad (12)$$

where M is the number of intervals.
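A corresponding sketch of the CC (Poisson dispersion) test of Eq. (12) is given below; the chi-square reference distribution with (number of intervals − 1) degrees of freedom is the usual choice for this test, and the implementation details are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def conditional_chi_square_test(counts):
    """Sketch of the conditional chi-square (Poisson dispersion) test of
    Eq. (12); counts[k] is the number of events in the k-th interval."""
    counts = np.asarray(counts, dtype=float)
    lam = counts.mean()
    stat = np.sum((counts - lam) ** 2 / lam)
    # Under the Poisson null the statistic is approximately chi-square
    # distributed with (number of intervals - 1) degrees of freedom.
    return stat, chi2.sf(stat, len(counts) - 1)
```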
Meanwhile, the KS test can be applied to the distribution of intervals between earthquake occurrences.
For an earthquake catalog with (n + 1) events, the earthquake occurrence time intervals are placed in

ascending order as (x_1, x_2, ⋯, x_n), and their average is defined as x̄. The time interval distribution
function F(x_i) is then defined by Eq. (13):

$$F(x_i) = z_i = 1 - \exp(-x_i/\bar{x}), \qquad i = 1, 2, \ldots, n \qquad (13)$$

The statistic of the KS test is then expressed by Eqs. (14) (Stephens, 1974):

$$D = \max(D^{+}, D^{-}); \qquad D^{+} = \max_{1 \le i \le n}\big(i/n - z_i\big), \quad D^{-} = \max_{1 \le i \le n}\big(z_i - (i-1)/n\big) \qquad (14)$$

To execute a hypothesis test using the table provided by Stephens (1974), Eq. (14) must be corrected to
give Eq. (15):

$$\big(D - 0.2/n\big)\big(\sqrt{n} + 0.28 + 0.5/\sqrt{n}\big) \rightarrow D \qquad (15)$$
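The KS statistic of Eqs. (13)-(15) can be computed as in the following sketch, which assumes the event times are given in consistent units and returns the modified statistic of Eq. (15) for comparison with the critical values tabulated by Stephens (1974); the function name is illustrative.

```python
import numpy as np

def ks_exponential_statistic(event_times):
    """Sketch of the KS test of Eqs. (13)-(15) applied to inter-event times.
    Returns the modified statistic of Eq. (15)."""
    times = np.sort(np.asarray(event_times, dtype=float))
    x = np.sort(np.diff(times))                   # inter-event times, ascending
    n = len(x)
    z = 1.0 - np.exp(-x / x.mean())               # Eq. (13)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - z)
    d_minus = np.max(z - (i - 1) / n)
    d = max(d_plus, d_minus)                      # Eq. (14)
    return (d - 0.2 / n) * (np.sqrt(n) + 0.28 + 0.5 / np.sqrt(n))   # Eq. (15)
```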

The null hypothesis is then used to test whether the earthquake catalogs declustered by each of the three
methods follow the Poisson process. For each catalog, each Poisson test (MC, CC, and KS) is performed
with a significance level of 0.05 for increasing completeness magnitude (Mc) values from 1.0 to 3.0 in
steps of 0.1. In general, the null hypothesis is rejected at a small Mc, but is not rejected above a certain
limiting Mc. Noh (2016) defined the minimum Mc for which the null hypothesis is not rejected as Mp,
and the term Mp is used with the same meaning in this study. In the present work, considering the amount
of data, the MC and CC tests were performed for two cases in which the numbers of intervals were 72 and
216, obtained by dividing the observation period from 2016 to 2021 into intervals of 30 and 10 days,
respectively. The results in Table 3 indicate that the Mp values obtained for the three declustered
earthquake catalogs are in the range of 1.6 to 2.2, and for the MC and CC tests there is no significant
difference regardless of the number of intervals. Further, the Mp value obtained by the Poisson test is seen
to be larger than the completeness magnitude (Mc in Table 2). For each of the three Poisson tests, the largest Mp value is
obtained for the catalog that was declustered using the method of Reasenberg (1985), which removes the
smallest number of dependent earthquakes. In addition, it is noted that the CC test gives larger Mp values
than the other two Poisson tests for the catalogs obtained using the methods of Gardner and Knopoff
(1974) and Reasenberg (1985). Hence, given that careful consideration must be paid to the selection of
the cutoff magnitude during the process of preparing an earthquake catalog for seismic hazard research,
the Mp value will provide a good reference.

Table 3
The Mp values from the three types of Poisson hypothesis test (multinomial chi-square, conditional chi-square,
and Kolmogorov-Smirnov) for the catalogs declustered using the methods of Gardner and Knopoff (1974),
Reasenberg (1985), and Zhuang et al. (2002)

Catalog                      Multinomial chi-square (MC)   Conditional chi-square (CC)   Kolmogorov-Smirnov (KS)
Gardner and Knopoff (1974)   1.7                           1.9                           1.6
Reasenberg (1985)            1.8                           2.2                           1.9
Zhuang et al. (2002)         1.6                           1.6                           1.6

5. Discussion
In the present study, the seismicity parameters and Poisson tests are used to compare three different
methods for seismicity declustering. However, it is difficult to determine which method is the most
effective because there is no inherently unique method for the removal of dependent earthquakes, and
the evaluation criteria for the removal results are not absolute. In the ETAS model (Ogata 1988; Utsu et al.
1995), which generalizes Omori's law for aftershock occurrence, the seismicity rate observed in an area is
expressed as the sum of the constant background seismicity rate over time and the aftershock activity
rate triggered by other earthquakes (Ogata 1988). Therefore, if dependent earthquakes are completely
removed by the seismicity declustering method, then the cumulative background seismicity rate with
respect to time will be linear.

The cumulative numbers of earthquakes obtained from the raw catalog and the three declustered
earthquake catalogs are plotted against time in Fig. 3. Here, the cumulative number of earthquakes
includes only earthquakes with magnitudes greater than or equal to the completeness magnitude (Mc = 1.5).
This is because the inclusion of earthquakes smaller than Mc can distort the seismicity in space and time.
Notably, the cumulative number of earthquakes with ML ≥ 1.5 from the raw catalog (solid black line,
Fig. 3) increased significantly after the Gyeongju earthquake (G.E.; ML = 5.8) on September 12, 2016 and
the Pohang earthquake (P.E.; ML = 5.4) on November 15, 2017. However, as noted above, if the dependent
earthquakes are completely removed from the catalog, only the background earthquakes with a constant
rate of seismicity will remain; hence, the cumulative number of earthquakes will appear as a straight line.
Therefore, the optimum declustering method can be identified by observing which curve is the closest to
being a straight line. By finding the straight line that best fits each of the curves in Fig. 3, and computing
the slope of each best-fit line by the least-squares method, the values of 343.4, 314.7, and 236.5 are
obtained for the catalogs that were declustered using the Reasenberg (1985), Gardner and Knopoff
(1974), and Zhuang et al. methods, respectively (Table 4). These values represent the annual numbers of
earthquakes with magnitudes greater than or equal to 1.5. In the case of the Reasenberg (1985) method,
the best-fit curve has the largest slope (dot-dashed line, Fig. 3), which can be interpreted as the result of
fewer dependent earthquakes being removed compared to the other two declustered curves. As shown in
Table 1, the number of dependent earthquakes removed increases in the order of 4144 (Reasenberg
1985) < 4557 (Gardner and Knopoff 1974) < 5606 (Zhuang et al. 2002). Consequently, the curve in
Fig. 3 for the catalog that was declustered using the method of Zhuang et al. (2002) (blue dashed line) is
the closest to being a straight line. However, based on this evidence alone, it is difficult to conclude that
the method of Zhuang et al. (2002) is the optimum method. This is because the background seismicity
rate can be regarded as constant over a sufficiently long period of time, and it is difficult to discuss this
given the short study period of about 6 years.

Table 4
The annual mean occurrence rate of earthquakes with ML ≥ 1.5 obtained from the cumulative curves in Fig. 3
for the catalogs declustered using the methods of Gardner and Knopoff (1974), Reasenberg (1985), and
Zhuang et al. (2002)

Curve                        Mean occurrence rate (ML ≥ 1.5 / year)
Gardner and Knopoff (1974)   314.70
Reasenberg (1985)            343.40
Zhuang et al. (2002)         236.48
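The least-squares slope used for Table 4 can be reproduced schematically as below, assuming event times expressed in decimal years; the function name and interface are illustrative only.

```python
import numpy as np

def annual_background_rate(decimal_years, magnitudes, mc=1.5):
    """Least-squares slope (events per year) of the cumulative count of
    events with M >= mc, as used to obtain the rates in Table 4."""
    t = np.sort(np.asarray(decimal_years)[np.asarray(magnitudes) >= mc])
    cumulative = np.arange(1, len(t) + 1)
    slope, _ = np.polyfit(t, cumulative, 1)       # slope of the best-fit line
    return slope
```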

The method of Gardner and Knopoff (1974) for removing the dependent earthquakes within a fixed
space-time window is compared with the probabilistic method of Zhuang et al. (2002) in terms of the
spatial window in Fig. 4. Here, the black dashed line represents the upper limit of the spatial window for
the method of Gardner and Knopoff (1974), while the solid red line represents the upper limit of the
spatial window for the method of Uhrhammer (1986). In Fig. 4, the dependent earthquakes that were
removed probabilistically by the method of Zhuang et al. (2002) are indicated by the crosses. The spatial
windows for the methods of Gardner and Knopoff (1974) and Uhrhammer (1986) are each a function of
the magnitude, such that the spatial window becomes wider as the magnitude increases. Thus, the size
of the spatial window for the method of Zhuang et al. (2002) is seen to be approximately the same as
that for the method of Gardner and Knopoff (1974).

By contrast, the comparative time window ranges for the removal of the dependent earthquakes by the
methods of Gardner and Knopoff (1974) and Uhrhammer (1986) are represented by the black dashed and
solid red lines, respectively, in Fig. 5. Again, the dependent earthquakes that were removed
probabilistically by the method of Zhuang et al. (2002) are indicated by the crosses. As with the above-
mentioned spatial windows, the time windows for the methods of Gardner and Knopoff (1974) and
Uhrhammer (1986) are each a function of the magnitude, such that the time window becomes wider as
the magnitude increases. However, the method of Zhuang et al. (2002) exhibits an unlimited time window
regardless of the earthquake magnitude, thus indicating that this method can remove many more
dependent earthquakes. Nevertheless, as noted above, there is no inherently unique method for removing
dependent earthquakes, and it is difficult to determine which method is the most effective because the
evaluation criteria for the removal results are not absolute. It is thought that the range of the space-time
window for dependent earthquakes is associated with the extent of fault motion depending on the size of
the seismic source and the physical properties of the subsurface. Therefore, the range of the space-time
window may vary from region to region. In addition, the range can vary according to the specific definition
of foreshock and aftershock. For example, it is an open question whether earthquakes that are triggered
by the reactivation of an old fault should be regarded as aftershocks.

6. Conclusions
In this study, the seismicity declustering methods of Gardner and Knopoff (1974), Reasenberg (1985),
and Zhuang et al. (2002) were compared by application to the catalog of earthquakes that were observed
between 2016 and 2021 in and around the Korean Peninsula. The results demonstrated that the method
of Zhuang et al. (2002) detects the largest number of dependent earthquakes, which account for as much
as 70.5% of the total events, whereas the method of Reasenberg (1985) detects the smallest
number. Notably, while the methods of Zhuang et al.
(2002) and Gardner and Knopoff (1974) exhibit similar spatial windows, the method of Zhuang et al.
(2002) exhibits a time window that is unrestricted and independent of the earthquake magnitude. The
completeness magnitude of the raw earthquake catalog was found to be 1.5, and this was reduced to 1.4
after removing the dependent earthquakes. The b-value of the raw catalog was estimated to be 0.97, and
decreased to 0.90–0.93 in the declustered catalogs. In addition, the annual occurrence rate of earthquakes
with magnitudes of greater than or equal to 5.0 was found to vary according to the declustering method.
These observations demonstrate that the results of long-term earthquake prediction or seismic hazard
analysis can vary depending on the declustering methods used.

To determine whether the declustered catalogs follow the Poisson process, null hypothesis tests were
performed by using the multinomial chi-square (MC) test, the conditional chi-square (CC) test, and the
Kolmogorov-Smirnov (KS) test. The minimum magnitude (Mp) above which the null hypothesis of the
Poisson process could not be rejected was found to vary from 1.6 to 2.2 depending on the declustered
catalog and the test method used. For the MC test, the Mp ranged from 1.6 to 1.8, and there was no
significant difference according to the number of intervals. While the KS test, which examines the
distribution of time intervals between events, gave a similar Mp range to that of the MC test, the CC test
gave Mp values ranging from 1.5 to 2.5. In general, the Mp value obtained from the Poisson test is larger
than the completeness magnitude (Mc); hence, the Mp value is expected to provide a good
reference for the selection of the cutoff magnitude.

When the dependent earthquakes are completely removed, only the background earthquakes with a
constant seismicity rate remain. Consequently, the cumulative number of earthquakes over time will
appear as a straight line. Although this result was confirmed by the plot of cumulative number of
earthquakes against time for the method of Zhuang et al. (2002), it is difficult to conclude that Zhuang et
al. (2002) is the optimum method from this evidence alone. However, while the methods of Zhuang et al.
(2002) and Gardner and Knopoff (1974) provide similar spatial windows, the method of Zhuang et al.
(2002) provides an unlimited time window, regardless of the earthquake magnitude. This indicates that
the method of Zhuang et al. (2002) can remove many more dependent earthquakes than the other
methods. Nevertheless, there is no inherently unique method for the removal of dependent earthquakes,
and it is difficult to determine which method is the most effective because the evaluation criteria for the
removal results are not absolute.

References
1. Aki, K. (1965). Maximum likelihood estimate of b in the formula log10N=a-bm and its confidence
limits. Bulletin of Earthquake Research, Tokyo University, 43, 237-239.
2. Amorese, D., Grasso, J.R., and Rydelek, P.A (2009). On varying b-values with depth: Result from
computer-intensive tests for southern California. Geophysical Journal International, 180, 347-360.
3. Bender, B. (1983) Maximum likelihood estimation of b values for magnitude grouped data. Bulletin
of the Seismological Society of America, 73, 831-851.
4. Cornell, C.A. (1968). Engineering seismic risk analysis. Bulletin of the Seismological Society of
America, 58, 1583-1606.
5. Eem, S.-H., Yang, B., Jeon, H. (2018). Earthquake damage assessment of buildings using opendata in
the Pohang and the Gyeongju Earthquakes, Journal of Earthquake Engineering(Earthquake
Engineering Society of Korea), 22, 121-128.
6. Gardner, J. K., and L. Knopoff (1974). Is the sequence of earthquakes in Southern California, with
aftershocks removed, Poissonian?. Bull. Seis. Soc. Am., 64, 1363-1367.
7. Gihm, Y.S., Kim, S.W., Ko, K., Choi, J.-H., Bae, H., Hong, P.S., Lee, Y., Lee, H., Jin, K., Choi, S.-J., Kim, J.C.,
Choi, M.S., and Lee, S.R. (2018). Paleoseismological implications of liquefaction-induced structures
caused by the 2017 Pohang earthquake. Geosciences Journal, 22, 871-880.
8. Kanamori, H., and Anderson, D. L. (1975). Theoretical basis of some empirical relations in
seismology. Bulletin of the Seismological Society of America, 65, 1073–1095.
9. Kim, K.H., Kang, T.S., Rhie, J., Kim, Y.H., Park, Y., Kang, S.U., Han, M., Kim, J., Park, J., Kim, M., Kong,
C.H., Heo, D., Lee, H., Park, E., Park, H., Lee, S.j., Cho, S., Woo, J.U., Lee, S.H., and Kim, J. (2016). The
12 September Gyeongju earthquakes: 2. Temporary seismic network for monitoring aftershocks.
Geosciences Journal, 20, 753-757.
10. Kim, K.H., Rhie, J.-H., Kim, Y.H., Kim, S., Kang, S.U., and Seo, W. (2018). Asessing whether the 2017
Mw 5.4 Pohang earthquake in South Korea was an induced event. Science, 360, 1007-1009.
11. Kim, S.K. and Lee, J.M. (2019). Comparison of the aftershock activities of the 2016 M5.8 Gyeongju
and 2017 M5.4 Pohang earthquakes. Korea. Journal of the Geological Society of Korea, 55, 207-218
(in Korean with English Abstract).
12. KMA (2022). https://data.kma.go.kr/data/weatherReport/eqkList.do?pgmNo=654 (March 20, 2022).
13. Jin, K., Lee, J., Lee, K.-S., and Kyung, J.B. (2019). Earthquake damage and related factors associated
with the 2016 ML = 5.8 Gyeongju earthquake, southeast Korea. Geosciences Journal, 24,
DOI:10.1007/s12303-019-0024-9.
14. Lee, C.-H., Kim, S.-Y., Park, J.-H., Kim, G.-K., and Kim, T.-J. (2018). Comparative analysis of structural
damage potentials observed in the 9.12 Gyeongju and 11.15 Pohang Earthquakes, Journal of
Earthquake Engineering(Earthquake Engineering Society of Korea), 22, 175-184.
15. Lee, H.-K. (2013). Estimation of Gutenberg-Richter b-value and Mmax using instrumental earthquake
catalog from the southern Korean Peninsula. MS thesis, Chonnam National University, 71 p (in

Korean with English abstract).
16. Lim, H., Deng, K., Kim, Y.H. Ree, J.-H., Song, T.-R.A., and Kim, H. (2020). 2017 Mw 5.5 Pohang
Earthquake, South Korea, and poroelastic stress changes associated with fluid injection. Journal of
Geophysical Research Solid Earth, 125, https://doi.org/10.1029/2019JB019134.
17. Luen, B. and Stark, P.B. (2012). Poisson tests of declustered catalogues. Geophysical Journal
International, 189, 691-700.
18. Mignan, A., J. Woessner (2012). Estimating the magnitude of completeness for earthquake catalogs,
Community Online Resource for Statistical Seismicity Analysis, doi:10.5078/corssa-00180805.
19. Mulargia, F., Stark, P.B., Geller, R.J. (2017). Why is Probabilistic Seismic Hazard Analysis (PSHA) still
used ?. Physics of the Earth and Planetary Interiors, 264, 63–75.
20. Noh, M. (2016). On the Poisson process of the Korean earthquakes. Geosciences Journal, 20, 775-
779.
21. Ogata, Y. (1988). Statistical models for earthquake occurrences and residual analysis for point
processes. Journal of American Statistical Association, 83, 9-27.
22. Ogata, Y. and Katsura, K. (1993). Analysis of temporal and spatial heterogeneity of magnitude
frequency distribution inferred from earthquake catalogues. Geophysical Journal International, 113,
727–738.
23. Page, R. (1968). Aftershock and microaftershocks of the Great Alaska earthquake of 1964. Bulletin
of the Seismological Society of America, 58, 1131-1168.
24. Reasenberg, P. (1985). Second-order moment of central California seismicity, 1969-82. Journal of
Geophysical Research. 90, 5479-5495.
25. Rydelek, P. A. and Sacks, I. S. (1989). Testing the completeness of earthquake cataloguesand the
hypothesis of self-similarity. Nature, 337 (6204), 251–253.
26. Shearer, P.M. and Stark, P.B. (2012). Global risk of big earthquakes has not recently increased. Earth,
Atmospheric, and Planetary Sciences, 109, 717–721
27. Son, M., Cho, C.S., Shin, J.S., Rhee, H.M., and Sheen, D.H. (2017). Spatio-temporal distribution of
events during the first three months of the 2016 Gyeongju, Korea, earthquake sequence. Bulletin of
the Seismological Society of America, doi: 10.1785/0120170107.
28. Van Stiphout, T., Zhuang, J., and Marsan, D. (2012). Seismicity declustering. Community Online
Resource for Statistical Seismicity Analysis, doi:10.5078/Corssa-52382934.
29. Stephens, M.A. (1974). EDF statistics for goodness of fit and some comparisons. Journal of the
American Statistical Association, 69, 730-737.
30. Taroni, M. and Akinci, A. (2021). Good practices in PSHA: declustering, b-value estimation,
foreshocks and aftershocks inclusion; a case study in Italy. Geophysical Journal International, 224,
1174–1187.
31. Tibi, R., Blanco, J., and Fatehi, A. (2011). An alternative and efficient cluster-link approach for
declustering of earthquake catalogs. Seismological Research Letters, 82, 509-518.

32. USGS (2021). https://www.usgs.gov/software/cluster2000 (May 5, 2021).
33. Utsu, T. (1965). A method for determining the value of b in a formula log n=a-bM showing the
magnitude-frequency relation for earthquakes. Geophysical Bulletin of Hokkaido University, 13, 99-
103.
34. Utsu, T., Ogata, Y., and Matsu'ura, R.S. (1995). The Centenary of the Omori formula for a decay law of
aftershock activity. Journal of Physics of the Earth, 43, 1-33.
35. Uhrhammer R (1986). Characteristics of northern and central California seismicity. Earthquake Notes,
57, 21-21.
36. Weichert, D.H. (1980). Estimation of the earthquake recurrence parameters for unequal observation
periods for different magnitudes. Bulletin of the Seismological Society of America, 70, 1337-1346.
37. Wiemer, S. and Wyss, M. (2000). Minimum magnitude of complete reporting in earthquake catalogs:
Examples from Alaska, the western United States, and Japan. Bulletin of the Seismological Society
of America, 90, 859–869.
38. Woessner, J. and Wiemer, S. (2005). Assessing the quality of earthquake catalogues: Estimating the
magnitude of completeness and its uncertainty. Bulletin of the Seismological Society of America, 95,
doi:10.1785/012040007.
39. Wyss, M. and Toya, Y. (2000). Is background seismicity produced at a stationary Poisson rate ?.
Bulletin of the Seismological Society of America, 90,1174-1187.
40. Zhuang, J., Ogata, Y., Vere-Jones, D. (2002). Stochastic declustering of space–time earthquake
occurrences. Journal of American Statistical Association, 97, 369–380.
41. Zhuang, J., Ogata, Y., Vere-Jones, D. (2004). Analyzing earthquake clustering features by using
stochastic reconstruction. Journal of Geophysical Research, 109(B5), B05301.
doi:10.1029/2003JB002879.

Figures

Figure 1

The epicenter distribution observed by the Korea Meteorological Administration between 2016 and 2021

Figure 2

The latitude - time plots for the mainshocks (large white circles) and clustered earthquakes (small red
circles): (a) the original catalog, and (b–d) the declustered catalogs using the methods of (b) Gardner and
Knopoff (1974), (c) Reasenberg (1985), and (d) Zhuang et al. (2002). In each case the characters ‘P’ and
‘Q’ represent the Gyeongju earthquake in September 2016 and the Pohang earthquake in November 2017,
respectively
Figure 3

The cumulative numbers of earthquakes between 2016 and 2021 from (a) the original (raw) catalog, and
(b–d) the declustered catalogs using the methods of (b) Gardner and Knopoff (1974), (c) Reasenberg
(1985), and (d) Zhuang et al. (2002). The symbols P and Q denote the occurrence times of the 2016
Gyeongju earthquake (ML =5.8) and the Pohang earthquake (ML = 5.4), respectively

Figure 4

The ranges of the spatial windows for the removal of dependent earthquakes by the methods of Gardner
and Knopoff (1974) and Uhrhammer (1986). The cross marks represent the dependent earthquakes that
were removed probabilistically by the method of Zhuang et al. (2002)

Figure 5

The ranges of the time windows for the removal of the dependent earthquakes by the methods of
Gardner and Knopoff (1974) and Uhrhammer (1986). The cross marks represent the dependent
earthquakes that were removed probabilistically by the method of Zhuang et al. (2002)

