AN A PRIORI BOUND FOR AUTOMATED MULTI-LEVEL
SUBSTRUCTURING
KOLJA ELSSEL∗
AND HEINRICH VOSS∗
Key words. Eigenvalues, AMLS, substructuring, nonlinear eigenproblem, minmax characterization
AMS subject classification. 65F15, 65F50
Abstract. The Automated Multi-Level Substructuring (AMLS) method has been developed to
reduce the computational demands of frequency response analysis and has recently been proposed
as an alternative to iterative projection methods like Lanczos or Jacobi–Davidson for computing a
large number of eigenvalues for matrices of very large dimension. Based on Schur complements and
modal approximations of submatrices on several levels, AMLS constructs a projected eigenproblem
which yields good approximations of eigenvalues at the lower end of the spectrum. Rewriting the
original problem as a rational eigenproblem of the same dimension as the projected problem, and
taking advantage of a minmax characterization for the rational eigenproblem, we derive an a priori
bound for the AMLS approximation of eigenvalues.
1. Introduction. Over the last few years, a new method for frequency response
and eigenvalue analysis for complex structures has been developed by Bennighof
and co–authors [3], [4], [5], [6], [14] known as Automatic Multi-Level Substructuring
(AMLS). Here the large finite element model is recursively divided into very many
substructures on several levels based on the sparsity structure of the system matrices.
Assuming that the interior degrees of freedom of substructures depend quasistatically
on the interface degrees of freedom, and modeling the deviation from quasistatic dependence in terms of a small number of selected substructure eigenmodes, the size of
the finite element model is reduced substantially while retaining satisfactory accuracy
over a wide frequency range of interest. Recent studies ([16], [14], e.g.) in vibro-acoustic analysis of passenger car bodies, where very large FE models with more than
one million degrees of freedom appear and several hundred eigenfrequencies and
eigenmodes are needed, have shown that AMLS is considerably faster than Lanczos-type
approaches.
We stress the fact that substructuring does not mean a domain decomposition of a
real structure; it is understood in a purely algebraic sense, i.e. the dissection of the
matrices can be derived by applying a graph partitioner like CHACO [11] or METIS [15]
to the matrix under consideration. However, because of its pictographic nomenclature
we will use terms like substructure or eigenmode from frequency response problems
when introducing the AMLS method.
From a mathematical point of view AMLS is a projection method where the ansatz
space is constructed exploiting Schur complements of submatrices and truncation of
spectral representations of subproblems. In this paper we take advantage of the
fact that the original eigenproblem is equivalent to a rational eigenvalue problem of
the same dimension as the projected problem in AMLS, which can be interpreted as
an exact condensation of the original eigenproblem with respect to an appropriate basis.
Its eigenvalues at the lower end of the spectrum can be characterized as minmax
values of a Rayleigh functional of this rational eigenproblem. Hence, comparing the
Rayleigh quotient of the projected problem and the Rayleigh functional of the rational
problem we derive an a priori bound for the error of the AMLS method.

∗ Department of Mathematics, Hamburg University of Technology, D-21071 Hamburg, Germany ({elssel,voss}@tu-harburg.de)
For the one level version of AMLS, Bekas and Saad [2] identified the AMLS approximation as a linearization of the rational eigenproblem mentioned in the last paragraph,
which motivated them to suggest three modifications of AMLS: a second order approximation, expanding the projection space by Krylov subspaces, and a combination
of these two modifications.
In a recent paper Yang et al. [21] considered a one level version of AMLS. The
authors obtained a simple heuristic for choosing spectral components from each substructure, suggesting to drop all eigenpairs (ω, φ) of substructures in the reduction
process such that

$$\rho_1(\omega) := \frac{\lambda_1}{\omega - \lambda_1} \le \tau,$$
where λ1 is the smallest eigenvalue of the problem under consideration, and τ is a
given tolerance. By our new a priori bound this omission rule guarantees that the
relative error of the smallest eigenvalue of the projected problem is not greater than
the tolerance τ .
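In practice the omission rule can be applied as an explicit cut-off: for ω > λ₁ the condition λ₁/(ω − λ₁) ≤ τ holds exactly when ω ≥ λ₁(1 + 1/τ). A minimal sketch (the function name and the numerical values are our own illustrative assumptions, not taken from the paper):

```python
def omission_cutoff(lambda_1, tau):
    """Smallest substructure eigenvalue omega that the rule of Yang et al. allows to drop.

    Derived from lambda_1/(omega - lambda_1) <= tau  <=>  omega >= lambda_1*(1 + 1/tau).
    """
    return lambda_1 * (1.0 + 1.0 / tau)

# e.g. an assumed lambda_1 = 100 and a 1% tolerance allow dropping all omega >= 10100
cutoff = omission_cutoff(100.0, 0.01)
# every dropped omega >= cutoff then satisfies the omission rule
assert 100.0 / (cutoff - 100.0) <= 0.01
```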
Our presentation is organized as follows. Section 2 gives a brief overview of
automated multi-level substructuring and derives the exactly condensed rational
eigenproblem, which is equivalent to the original eigenproblem. Section 3 collects
the variational characterizations of nonlinear and nonoverdamped eigenvalue problems
which are exploited in Section 4 to deduce a priori bounds for the component mode
synthesis method, the one level version of AMLS, and the general automated multi-level
substructuring method. The paper closes with numerical examples in Section 5.
2. Substructuring of eigenproblems. We are concerned with the linear eigenvalue problem
$$Kx = \lambda M x \qquad (2.1)$$
where K ∈ Rn×n and M ∈ Rn×n are symmetric and positive definite matrices. We
recall that the terms structure, substructure, interface and domain are meant in the
algebraic sense to follow.
We first consider one level versions of substructuring methods. Assume that the
joint graph of the matrices K and M is partitioned into r substructures such that the
rows and columns of K can be reordered in the following way:
$$K = \begin{pmatrix}
K_{\ell\ell 1} & O & \cdots & O & K_{\ell i 1}\\
O & K_{\ell\ell 2} & \cdots & O & K_{\ell i 2}\\
\vdots & & \ddots & \vdots & \vdots\\
O & O & \cdots & K_{\ell\ell r} & K_{\ell i r}\\
K_{i\ell 1} & K_{i\ell 2} & \cdots & K_{i\ell r} & K_{ii}
\end{pmatrix},$$
and M after reordering has the same block form. Here Kℓℓj , j = 1, . . . , r is the local
stiffness matrix corresponding to the j-th substructure, i denotes the set of interface
vertices, and Kℓij describes the interaction of the interface degrees of freedom and
the j-th substructure.
Distinguishing only between local and interface degrees of freedom K and M have
the following form:
$$K = \begin{pmatrix} K_{\ell\ell} & K_{\ell i}\\ K_{i\ell} & K_{ii}\end{pmatrix}
\quad\text{and}\quad
M = \begin{pmatrix} M_{\ell\ell} & M_{\ell i}\\ M_{i\ell} & M_{ii}\end{pmatrix}. \qquad (2.2)$$
We transform the matrix K to block diagonal form using block Gaussian elimination, i.e. we apply the congruent transformation with
$$P = \begin{pmatrix} I & -K_{\ell\ell}^{-1} K_{\ell i}\\ 0 & I \end{pmatrix}$$
to the pencil (K, M ) obtaining the equivalent pencil
$$(P^T K P,\ P^T M P) = \left( \begin{pmatrix} K_{\ell\ell} & 0\\ 0 & \tilde K_{ii}\end{pmatrix},\
\begin{pmatrix} M_{\ell\ell} & \tilde M_{\ell i}\\ \tilde M_{i\ell} & \tilde M_{ii}\end{pmatrix} \right). \qquad (2.3)$$
Here Kℓℓ and Mℓℓ stay unchanged,

$$\tilde K_{ii} = K_{ii} - K_{i\ell} K_{\ell\ell}^{-1} K_{\ell i}$$

is the Schur complement of Kℓℓ, and

$$\tilde M_{\ell i} = M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i} = \tilde M_{i\ell}^T,$$
$$\tilde M_{ii} = M_{ii} - M_{i\ell} K_{\ell\ell}^{-1} K_{\ell i} - K_{i\ell} K_{\ell\ell}^{-1} M_{\ell i} + K_{i\ell} K_{\ell\ell}^{-1} M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}.$$
Neglecting in (2.3) all rows and columns corresponding to local degrees of freedom,
i.e. projecting problem (2.1) to the subspace spanned by the columns of
$\begin{pmatrix} -K_{\ell\ell}^{-1} K_{\ell i}\\ I\end{pmatrix}$, one obtains the method of static condensation

$$\tilde K_{ii}\, y = \lambda \tilde M_{ii}\, y \qquad (2.4)$$
introduced by Guyan [10] and Irons [13]. For vibrating structures this means that
the interior degrees of freedom are assumed to depend quasistatically on the interface
degrees of freedom, and the inertia forces of the substructures are neglected.
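To make the algebra above concrete, here is a minimal NumPy sketch of static condensation; the function name, the dense linear algebra, and the index-set interface are our own illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

def guyan_reduction(K, M, local_idx, iface_idx):
    """Form the statically condensed pencil (K~_ii, M~_ii) of (2.4).

    K, M      : symmetric positive definite stiffness and mass matrices
    local_idx : indices of the local (interior) degrees of freedom
    iface_idx : indices of the interface degrees of freedom
    """
    l, i = local_idx, iface_idx
    Kll, Kli, Kii = K[np.ix_(l, l)], K[np.ix_(l, i)], K[np.ix_(i, i)]
    Mll, Mli, Mii = M[np.ix_(l, l)], M[np.ix_(l, i)], M[np.ix_(i, i)]
    # quasistatic dependence of the local on the interface dofs: x_l = S x_i
    S = -np.linalg.solve(Kll, Kli)
    K_red = Kii + Kli.T @ S                                # Schur complement K~_ii
    M_red = Mii + Mli.T @ S + S.T @ Mli + S.T @ Mll @ S    # M~_ii
    return K_red, M_red
```

Since static condensation is a projection onto the columns of [S; I], the eigenvalues of the reduced pencil are upper bounds for the corresponding eigenvalues of (2.1).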
To model the deviation from quasistatic behavior thereby improving the approximation properties of static condensation we consider the eigenvalue problem
$$K_{\ell\ell} \Phi = M_{\ell\ell} \Phi \Omega, \qquad \Phi^T M_{\ell\ell} \Phi = I, \qquad (2.5)$$
where Ω is a diagonal matrix containing the eigenvalues.
Changing the basis for the local degrees of freedom to a modal one, i.e. applying
the further congruent transformation diag{Φ, I} to problem (2.3) one gets
$$\left( \begin{pmatrix} \Omega & 0\\ 0 & \tilde K_{ii}\end{pmatrix},\
\begin{pmatrix} I & \Phi^T \tilde M_{\ell i}\\ \tilde M_{i\ell} \Phi & \tilde M_{ii}\end{pmatrix} \right). \qquad (2.6)$$
In structural dynamics (2.6) is called the Craig–Bampton form of the eigenvalue
problem (2.1) corresponding to the partitioning (2.2). In terms of linear algebra it
results from block Gaussian elimination to reduce K to block diagonal form, and
diagonalization of the block Kℓℓ using a spectral basis.
Selecting some eigenmodes of problem (2.5) (usually the ones corresponding to eigenvalues which do not exceed a cut off threshold; in a recent paper Bai and Liao
[1] suggested a different choice based on a moment-matching analysis), and dropping
the rows and columns in (2.6) corresponding to the other modes, one arrives at the
component mode synthesis method (CMS) introduced by Hurty [12] and Craig and
Bampton [7]. Hence, if the diagonal matrix Ω1 contains in its diagonal the eigenvalues to drop and Φ1 the corresponding eigenvectors, and if Ω2 and Φ2 contain the
eigenvalues and eigenvectors to keep, respectively, then the eigenproblem (2.6) can be
rewritten as
$$\begin{pmatrix} \Omega_1 & 0 & 0\\ 0 & \Omega_2 & 0\\ 0 & 0 & \tilde K_{ii}\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3\end{pmatrix}
= \lambda
\begin{pmatrix} I & 0 & \tilde M_{\ell i 1}\\ 0 & I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 1} & \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3\end{pmatrix} \qquad (2.7)$$

with

$$\tilde M_{\ell i j} = \Phi_j^T (M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}) = \tilde M_{i\ell j}^T, \qquad j = 1, 2,$$
and the CMS approximations to the eigenpairs of (2.1) are obtained from the reduced
eigenvalue problem
$$\begin{pmatrix} \Omega_2 & 0\\ 0 & \tilde K_{ii}\end{pmatrix} y
= \lambda \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} y. \qquad (2.8)$$
In Section 4 we shall prove an a priori bound for the relative error of problem
(2.8) taking advantage of the fact that the eigenvalues of the original problem (2.1) are
eigenvalues of a rational eigenproblem of the same dimension as the reduced problem
(2.8), and that the eigenvalues of the rational problem at the lower end of the spectrum
are minmax values of a Rayleigh functional.
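The CMS projection just described can be sketched in a few lines: the ansatz space consists of the kept substructure eigenmodes Φ₂ together with the quasistatic extension of the interface degrees of freedom. This is an illustrative dense implementation under our own naming and with SciPy's generalized symmetric eigensolver; it is not the code used for the experiments in Section 5.

```python
import numpy as np
from scipy.linalg import eigh

def cms_reduction(K, M, local_idx, iface_idx, cutoff):
    """Reduced CMS pencil, cf. (2.8): kept local eigenmodes plus quasistatic modes."""
    l, i = local_idx, iface_idx
    Kll, Kli = K[np.ix_(l, l)], K[np.ix_(l, i)]
    Mll = M[np.ix_(l, l)]
    S = -np.linalg.solve(Kll, Kli)        # quasistatic (static condensation) modes
    omega, Phi = eigh(Kll, Mll)           # substructure eigenproblem (2.5)
    Phi2 = Phi[:, omega < cutoff]         # modes kept in the projection
    k, n_l, n_i = Phi2.shape[1], len(l), len(i)
    V = np.zeros((n_l + n_i, k + n_i))    # ansatz space [Phi2, S; 0, I]
    V[:n_l, :k] = Phi2
    V[:n_l, k:] = S
    V[n_l:, k:] = np.eye(n_i)
    idx = list(l) + list(i)
    Kp, Mp = K[np.ix_(idx, idx)], M[np.ix_(idx, idx)]
    return V.T @ Kp @ V, V.T @ Mp @ V
```

If no mode is dropped the projection is a congruence and the reduced spectrum coincides with that of (2.1); with truncation, the Ritz values are upper bounds for the corresponding eigenvalues.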
If λ is not a diagonal entry of Ω1 then the first equation of (2.7) yields
x1 = λ(Ω1 − λI)−1 M̃ℓi1 x3 ,
and λ is an eigenvalue of (2.1) if and only if it is an eigenvalue of the rational eigenproblem
$$\begin{pmatrix} \Omega_2 & 0\\ 0 & \tilde K_{ii}\end{pmatrix}
\begin{pmatrix} x_2\\ x_3\end{pmatrix}
= \lambda \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix}
\begin{pmatrix} x_2\\ x_3\end{pmatrix}
+ \lambda^2 \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix}
(\Omega_1 - \lambda I)^{-1}
\begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix}
\begin{pmatrix} x_2\\ x_3\end{pmatrix}. \qquad (2.9)$$
The number of interface degrees of freedom may still be very large, and therefore
the dimension of the reduced problem (2.8) may be very high. It can be reduced
further by modal reduction of the interface degrees of freedom in the following way:
Considering the eigenvalue problem
K̃ii Ψ = M̃ii ΨΓ, ΨT K̃ii Ψ = Γ, ΨT M̃ii Ψ = I,
(2.10)
and applying the congruent transformation to the pencil in (2.6) with P̃ = diag{I, Ψ},
we obtain the equivalent pencil
$$\left( \begin{pmatrix} \Omega & O\\ O & \Gamma\end{pmatrix},\
\begin{pmatrix} I & \hat M_{\ell i}\\ \hat M_{\ell i}^T & I\end{pmatrix} \right) \qquad (2.11)$$

with

$$\hat M_{\ell i} = \Phi^T (M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}) \Psi = \hat M_{i\ell}^T. \qquad (2.12)$$
Selecting eigenmodes of (2.5) and of (2.10) and neglecting rows and columns in
(2.11) which correspond to the other modes one gets a reduced problem which is the
one level version of the automated multilevel substructuring method, introduced by
Bennighof in [5].
Similarly as for the CMS method we partition the matrices Γ and Ψ into

$$\Gamma = \begin{pmatrix} \Gamma_1 & 0\\ 0 & \Gamma_2\end{pmatrix}
\quad\text{and}\quad
\Psi = (\Psi_1, \Psi_2),$$

and rearranging the rows and columns, beginning with the modes corresponding to
Φ1 and Ψ1 to be dropped, followed by the ones corresponding to Φ2 and Ψ2, problem
(2.11) obtains the form
$$\left( \begin{pmatrix} \Omega_1 & 0 & 0 & 0\\ 0 & \Gamma_1 & 0 & 0\\ 0 & 0 & \Omega_2 & 0\\ 0 & 0 & 0 & \Gamma_2\end{pmatrix},\
\begin{pmatrix} I & \hat M_{12} & 0 & \hat M_{14}\\ \hat M_{21} & I & \hat M_{23} & 0\\ 0 & \hat M_{32} & I & \hat M_{34}\\ \hat M_{41} & 0 & \hat M_{43} & I\end{pmatrix} \right) \qquad (2.13)$$
where

$$\hat M_{12} = \Phi_1^T (M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}) \Psi_1 = \hat M_{21}^T,$$
$$\hat M_{14} = \Phi_1^T (M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}) \Psi_2 = \hat M_{41}^T,$$
$$\hat M_{32} = \Phi_2^T (M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}) \Psi_1 = \hat M_{23}^T,$$
$$\hat M_{34} = \Phi_2^T (M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}) \Psi_2 = \hat M_{43}^T.$$
The one level approximations of AMLS to eigenpairs are obtained from

$$\begin{pmatrix} \Omega_2 & 0\\ 0 & \Gamma_2\end{pmatrix} y
= \lambda \begin{pmatrix} I & \hat M_{34}\\ \hat M_{43} & I\end{pmatrix} y. \qquad (2.14)$$
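A hedged sketch of the one-level AMLS reduction (2.14), combining the static condensation step with the spectral truncations of the local block (2.5) and of the interface block (2.10); the names and the dense SciPy realization are ours, for illustration only.

```python
import numpy as np
from scipy.linalg import eigh

def one_level_amls(K, M, local_idx, iface_idx, cutoff):
    """Reduced pencil (diag(Omega_2, Gamma_2), [I, M^_34; M^_43, I]) of (2.14)."""
    l, i = local_idx, iface_idx
    Kll, Kli, Kii = K[np.ix_(l, l)], K[np.ix_(l, i)], K[np.ix_(i, i)]
    Mll, Mli, Mii = M[np.ix_(l, l)], M[np.ix_(l, i)], M[np.ix_(i, i)]
    S = -np.linalg.solve(Kll, Kli)
    Kt_ii = Kii + Kli.T @ S                              # Schur complement K~_ii
    Mt_li = Mli + Mll @ S                                # M~_li
    Mt_ii = Mii + Mli.T @ S + S.T @ Mli + S.T @ Mll @ S  # M~_ii
    omega, Phi = eigh(Kll, Mll)                          # local modes, (2.5)
    gamma, Psi = eigh(Kt_ii, Mt_ii)                      # interface modes, (2.10)
    Phi2, Psi2 = Phi[:, omega < cutoff], Psi[:, gamma < cutoff]
    M34 = Phi2.T @ Mt_li @ Psi2                          # coupling block M^_34
    K_red = np.diag(np.concatenate([omega[omega < cutoff], gamma[gamma < cutoff]]))
    k = Phi2.shape[1]
    M_red = np.eye(K_red.shape[0])
    M_red[:k, k:] = M34
    M_red[k:, :k] = M34.T
    return K_red, M_red
```

With no truncation the combined transformation is a congruence, so the reduced pencil reproduces the spectrum of (2.1) exactly; truncation yields upper bounds, as for CMS.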
Similarly as in (2.9) for the CMS method the variables corresponding to the
leading two rows of (2.13) can be eliminated, yielding a rational eigenproblem

$$\begin{pmatrix} \Omega_2 & 0\\ 0 & \Gamma_2\end{pmatrix} y
= \lambda \begin{pmatrix} I & \hat M_{34}\\ \hat M_{43} & I\end{pmatrix} y
+ \lambda^2 \begin{pmatrix} 0 & \hat M_{32}\\ \hat M_{41} & 0\end{pmatrix}
\begin{pmatrix} \Omega_1 - \lambda I & -\lambda \hat M_{12}\\ -\lambda \hat M_{21} & \Gamma_1 - \lambda I\end{pmatrix}^{-1}
\begin{pmatrix} 0 & \hat M_{14}\\ \hat M_{23} & 0\end{pmatrix} y \qquad (2.15)$$

which is equivalent to (2.1).
For the special eigenvalue problem Ax = λx the one level approximation (2.14)
was identified by Bekas and Saad [2] as linearization of the rational problem (2.15)
(which is trivial using the representations (2.13), (2.14), and (2.15)). This was the
starting point for further improvements adding a second order approximation, expanding the projection space by Krylov subspaces, and a combination of these two
modifications. However, we doubt that these variants will be useful for the general
AMLS method.
The general version of AMLS introduced by Bennighof [5] starts with the transformed eigenproblem (2.3), and reduces the dimension taking advantage of the spectral
decomposition (2.10). Neglecting eigenvectors corresponding to eigenvalues exceeding
a given cut off bound one obtains a reduced pencil on the first level
$$\left( \begin{pmatrix} K_{\ell\ell} & 0\\ 0 & \Gamma_1\end{pmatrix},\
\begin{pmatrix} M_{\ell\ell} & \tilde M_{\ell i} \Psi_1\\ \Psi_1^T \tilde M_{i\ell} & I\end{pmatrix} \right). \qquad (2.16)$$
Here the dimension of Kℓℓ, i.e. of the substructures on the coarsest level, usually
will be very large. Therefore, one avoids reducing the problem via the spectral
decomposition (2.5); instead one dissects the substructures of the first level again, and
applies the same reduction step to the upper left part of the matrices in (2.16).
Assume that the initial substructures are dissected a second time, and rearrange
the rows and columns of (2.16) such that the local degrees of freedom of the substructuring on the second level appear first, followed by the newly generated interface
degrees of freedom. Then (2.16) obtains the following form
(1)
(1)
(1)
(1)
Mℓℓ
Mℓi
Mℓi1 Ψ1
Kℓℓ Kℓi
0
(1)
(1)
(1)
(1)
(2.17)
Kiℓ
Mii
Mℓi2 Ψ1 z.
Kii
0 z = λ Miℓ
T
T
0
0
Γ1
I
Ψ1 Miℓ1 Ψ1 Miℓ2
Block diagonalizing the rearranged representation K^(1) of Kℓℓ by the congruent
transformation with

$$\begin{pmatrix} I & -(K_{\ell\ell}^{(1)})^{-1} K_{\ell i}^{(1)} & O\\ O & I & O\\ O & O & I\end{pmatrix}$$

yields

$$\begin{pmatrix} K_{\ell\ell}^{(1)} & O & 0\\ O & \tilde K_{ii}^{(1)} & 0\\ 0 & 0 & \Gamma_1\end{pmatrix} z
= \lambda
\begin{pmatrix} M_{\ell\ell}^{(1)} & \tilde M_{\ell i}^{(1)} & M_{\ell i 1}\Psi_1\\ \tilde M_{i\ell}^{(1)} & \tilde M_{ii}^{(1)} & \tilde M_{\ell i 2}\Psi_1\\ \Psi_1^T M_{i\ell 1} & \Psi_1^T \tilde M_{i\ell 2} & I\end{pmatrix} z, \qquad (2.18)$$
which is reduced considering only the part of the spectral decomposition

$$\tilde K_{ii}^{(1)} \Psi = \tilde M_{ii}^{(1)} \Psi \Gamma, \qquad \Psi^T \tilde M_{ii}^{(1)} \Psi = I, \qquad (2.19)$$

which corresponds to eigenvalues less than the given cut off threshold. Hence we obtain
the reduced problem on the second level

$$\begin{pmatrix} K_{\ell\ell}^{(1)} & O & O\\ O & \Gamma_2 & O\\ O & O & \Gamma_1\end{pmatrix} z
= \lambda
\begin{pmatrix} M_{\ell\ell}^{(1)} & \tilde M_{\ell i}^{(1)}\Psi_2 & M_{\ell i 1}\Psi_1\\ \Psi_2^T \tilde M_{i\ell}^{(1)} & I & \Psi_2^T \tilde M_{\ell i 2}\Psi_1\\ \Psi_1^T M_{i\ell 1} & \Psi_1^T \tilde M_{i\ell 2}\Psi_2 & I\end{pmatrix} z. \qquad (2.20)$$
Continuing with substructuring on the current level, block Gauss elimination of
the off-diagonal blocks in the left upper block, and spectral truncation of the
current interface block, we finally arrive at a level p where the
eigenproblems corresponding to the individual substructures are small enough to be
solved by a standard eigensolver. In this situation we apply a final eigenfrequency
truncation to the pencil (Kℓℓ^(p), Mℓℓ^(p)) in the left upper corner and obtain the projected
eigenproblem of AMLS.
We presented here a top–down version of AMLS starting with a spectral truncation of the Schur complement of the local degrees of freedom corresponding to the
coarsest substructuring of the problem because this will be convenient in Section 4
when deriving an a priori bound. For an implementation of AMLS this top–down
version is inappropriate, because in the first step we would need an LU decomposition of the
diagonal blocks of Kℓℓ, the dimensions of which usually are very large.
The equivalent bottom–up version (cf. [3], [4], [5], [9], [14]) starts by generating the
substructuring on several levels, taking advantage of the joint graph of K and M only,
but not of the entries of K and M. Each substructure on the finest level is transformed
to its quasistatic/modal representation, i.e. to the restriction of the pencil (2.1) to the
local degrees of freedom of the substructure under consideration and those interface
variables connected to this substructure we apply the congruent transformation which
diagonalizes the K part of the restricted pencil, and we reduce the dimension by
spectral truncation, which is quite inexpensive because these substructures are very
small.

Once the lowest level substructures have been transformed, they are assembled to form
'parent substructures' on the next level. For each of these substructures we identify
the newly generated local degrees of freedom (which were interface ones on the lowest
level), and assemble the corresponding local matrices. Again these matrices are small,
and block diagonalization and spectral reduction are again inexpensive. Continuing
this way, assembling higher level substructures and transforming them to
quasistatic/modal representation, we finally arrive at a model of the entire structure
on the coarsest level, where we execute a final spectral reduction. It is obvious that
this form of AMLS has a very high parallelization potential.
3. Variational characterization of eigenvalues of nonlinear eigenproblems. We consider the nonlinear eigenvalue problem
$$T(\lambda)x = 0 \qquad (3.1)$$

where T(λ) ∈ Rn×n is a family of real symmetric matrices for every λ in an open real
interval J, which may be unbounded.
For a linear symmetric problem Kx = λM x all eigenvalues are real, and if they
are ordered by magnitude regarding their multiplicity, λ1 ≤ λ2 ≤ · · · ≤ λn, then it is
well known that they can be characterized by the minmax principle of Poincaré

$$\lambda_k = \min_{V \in S_k}\ \max_{x \in V,\, x \ne 0} \frac{x^T K x}{x^T M x}, \qquad k = 1, 2, \dots, n, \qquad (3.2)$$

where Sk denotes the set of all k-dimensional subspaces of Rn.
Similar results hold for certain nonlinear eigenvalue problems, too. We assume
that the function f (λ, x) := xT T (λ)x is continuously differentiable on J × Rn , and
that for every fixed x ∈ Rn \ {0} the real equation
f (λ, x) = 0
(3.3)
has at most one solution in J. Then equation (3.3) implicitly defines a functional
p on some subset D of Rn \ {0} which replaces the Rayleigh quotient in the variational characterization of eigenvalues of problem (3.1), and which we call the Rayleigh
functional.
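For a concrete picture, the Rayleigh functional can be evaluated numerically: p(x) is the root of the scalar function λ ↦ f(λ, x) in J, and x belongs to D exactly when such a root exists. The following toy sketch (our own illustration, with an assumed tolerance for approaching the open interval's endpoints) uses scalar bracketing root finding; for a linear pencil T(λ) = λM − K it reproduces the classical Rayleigh quotient.

```python
import numpy as np
from scipy.optimize import brentq

def rayleigh_functional(T, x, a, b, eps=1e-9):
    """Root p(x) of f(lambda, x) = x^T T(lambda) x on J = (a, b), or None if x not in D."""
    f = lambda lam: x @ T(lam) @ x
    lo, hi = a + eps, b - eps
    if f(lo) * f(hi) > 0:          # no sign change: x lies outside the domain D
        return None
    return brentq(f, lo, hi)

# linear pencil: p(x) is the Rayleigh quotient x^T K x / x^T M x
K, M = np.diag([1.0, 2.0]), np.eye(2)
T = lambda lam: lam * M - K
x = np.array([1.0, 1.0])
p = rayleigh_functional(T, x, 0.0, 10.0)   # the Rayleigh quotient 3/2
```

Shrinking J to an interval that excludes the root of f(·, x) makes the functional return None for that x, which is exactly the nonoverdamped situation D ≠ Rⁿ \ {0} discussed below.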
In the overdamped case, i.e. if p is defined on the entire space Rn \ {0}, the
variational characterizations of eigenvalues literally generalize to the nonlinear case.
Assume that

$$x^T T'(p(x))\, x > 0 \qquad \text{for every } x \ne 0, \qquad (3.4)$$

generalizing the requirement that for symmetric pencils (K, M) a linear combination
of the matrices K and M has to be positive definite. Then it holds (cf. [8], [17]) that
problem (3.1) has n eigenvalues λ1 ≤ λ2 ≤ · · · ≤ λn in J, and

$$\lambda_k = \min_{V \in S_k}\ \max_{x \in V,\, x \ne 0} p(x), \qquad k = 1, 2, \dots, n. \qquad (3.5)$$
In the nonoverdamped case D ≠ Rn \ {0} the natural enumeration, for which the
smallest eigenvalue is the first one, the second smallest the second one, etc., is
not appropriate. This can easily be seen if we make a linear eigenproblem Ax = λx
nonlinear by reducing the parameter domain J to an interval which does not contain
the smallest eigenvalue. Then in general the infimum of the Rayleigh quotient is not
an eigenvalue of the reduced problem.
The following enumeration, introduced in [20], where an eigenvalue λ of the nonlinear
problem (3.1) inherits its number from the location of the eigenvalue 0 in the spectrum
of the matrix T(λ), is the key to the minmax characterization in the nonoverdamped case.
If λ ∈ J is an eigenvalue of problem (3.1) then μ = 0 is an eigenvalue of the linear
problem T(λ)y = μy, and therefore there exists k ∈ N such that

$$0 = \max_{V \in S_k}\ \min_{v \in V^1} v^T T(\lambda) v,$$

where V¹ := {v ∈ V : ‖v‖ = 1} is the unit sphere in V. In this case we call λ a k-th
eigenvalue of (3.1).
With this enumeration the following minmax characterization of the eigenvalues
of the nonlinear eigenproblem (3.1) was proved in [20]. The dual maxmin characterization is contained in [19].
Theorem 3.1. Assume that for every x ∈ Rn, x ≠ 0, the real equation f(λ, x) =
xᵀT(λ)x = 0 has at most one solution p(x) in J. Let D denote the domain of
definition of the Rayleigh functional p, and assume that

$$x^T T'(p(x))\, x > 0 \qquad \text{for every } x \in D. \qquad (3.6)$$

Then the following assertions hold:
(i) For every k ∈ {1, 2, . . . , n} there is at most one k-th eigenvalue of problem
(3.1) in J, which can be characterized by

$$\lambda_k = \min_{\substack{V \in S_k\\ V \cap D \ne \emptyset}}\ \sup_{v \in V \cap D} p(v). \qquad (3.7)$$

The minimum is attained by the invariant subspace W of T(λk) corresponding
to the k largest eigenvalues of T(λk), and sup_{v∈W∩D} p(v) is attained by all
eigenvectors of (3.1) corresponding to λk.
(ii) If

$$\lambda_k = \inf_{\substack{V \in S_k\\ V \cap D \ne \emptyset}}\ \sup_{v \in V \cap D} p(v) \in J \qquad (3.8)$$

for some k ∈ {1, . . . , n} then λk is the k-th eigenvalue of (3.1) and the characterization (3.7) holds.
(iii) If there exist the k1-th and the k2-th eigenvalues λk1 and λk2 in J and k1 < k2,
then J contains the k-th eigenvalue λk for k1 < k < k2 as well, and

$$\inf J < \lambda_{k_1} \le \lambda_{k_1+1} \le \dots \le \lambda_{k_2} < \sup J.$$
(iv) If λ1 ∈ J and λk ∈ J for some k then every V ∈ Sj with V ∩ D ≠ ∅ and
λj = sup_{u∈V∩D} p(u) is contained in D, and the characterization (3.7) can be
replaced by

$$\lambda_j = \min_{\substack{V \in S_j\\ V^1 \subset D}}\ \max_{v \in V^1} p(v), \qquad j = 1, \dots, k. \qquad (3.9)$$
4. A priori error bounds. We first consider the component mode synthesis
method. With the notations of Section 2 let

$$\omega := \min \operatorname{diag} \Omega_1 \qquad (4.1)$$

be the smallest eigenvalue of problem (2.5) neglected in the CMS method (which can
be replaced by the cut off threshold). For λ ∈ J := (−∞, ω) consider the family of
symmetric matrices
$$T(\lambda) = -\begin{pmatrix} \Omega_2 & 0\\ 0 & \tilde K_{ii}\end{pmatrix}
+ \lambda \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix}
+ \lambda^2 \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix}
(\Omega_1 - \lambda I)^{-1}
\begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix}. \qquad (4.2)$$
Let λ1 ≤ λ2 ≤ · · · ≤ λn denote the eigenvalues of problem (2.1) ordered by
magnitude, and let m ∈ N such that λm < ω ≤ λm+1 . Then λ1 , . . . , λm ∈ J are the
eigenvalues of the nonlinear eigenproblem
T (λ)x = 0
(4.3)
in J. For

$$f(\lambda, x) := x^T T(\lambda)\, x \qquad (4.4)$$

it follows from the positive definiteness of
$\begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix}$ that

$$\frac{\partial}{\partial\lambda} f(\lambda, x)
= x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} x
+ \sum_{j=1}^{\nu} \frac{(2\lambda\omega_j - \lambda^2)\, a_j^2}{(\omega_j - \lambda)^2} > 0 \qquad (4.5)$$

for every x ∈ Rν \ {0}. Here ν denotes the dimension of the reduced problem (2.8),
and $a := \begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix} x$.
Hence, for every x ∈ Rν \ {0} the real equation f(λ, x) = 0 has at most one
solution p(x) ∈ J, and condition (3.6) holds. Since λ1 ∈ J it follows from Theorem
3.1 that

$$\lambda_j = \min_{\substack{V \in S_j\\ V^1 \subset D}}\ \max_{x \in V^1} p(x). \qquad (4.6)$$
The eigenvalues λ̃1 ≤ λ̃2 ≤ · · · ≤ λ̃ν of the reduced problem (2.8) are minmax
values of the Rayleigh quotient R(x) corresponding to (2.8), and comparing p and
R on appropriate subspaces of Rν we arrive at the following bound for the relative
errors of the CMS approximations λ̃j to λj .
Theorem 4.1. It holds

$$0 \le \frac{\tilde\lambda_j - \lambda_j}{\lambda_j}
\le \frac{\lambda_j}{\omega - \lambda_j}
\le \frac{\tilde\lambda_j}{\omega - \tilde\lambda_j}, \qquad j = 1, \dots, m. \qquad (4.7)$$
Proof. The left inequality, i.e. λj ≤ λ̃j, is trivial since CMS is a projection method,
and the right inequality follows from the monotonicity of the function λ ↦ λ/(ω − λ).
To prove the inequality in the middle denote by V ∈ Sj, V \ {0} ⊂ D, the j-dimensional
subspace of Rν such that

$$\lambda_j = \max_{x \in V,\, x \ne 0} p(x).$$
Then p(x) ≤ λj for every x ∈ V, x ≠ 0, and therefore it follows from (4.5)

$$-x^T \begin{pmatrix} \Omega_2 & 0\\ 0 & \tilde K_{ii}\end{pmatrix} x
+ \lambda_j\, x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} x
+ \lambda_j^2\, x^T \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix}
(\Omega_1 - \lambda_j I)^{-1}
\begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix} x \ge 0.$$
Hence, for every x ∈ V, x ≠ 0 one obtains

$$\lambda_j \ge
\frac{x^T \begin{pmatrix} \Omega_2 & 0\\ 0 & \tilde K_{ii}\end{pmatrix} x}
{x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} x}
- \lambda_j^2\,
\frac{x^T \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix} (\Omega_1 - \lambda_j I)^{-1} \begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix} x}
{x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} x}.$$
In particular for x̂ ∈ V such that R(x̂) = max_{x∈V, x≠0} R(x) we have

$$\lambda_j \ge \max_{x \in V,\, x \ne 0} R(x)
- \lambda_j^2\,
\frac{\hat x^T \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix} (\Omega_1 - \lambda_j I)^{-1} \begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix} \hat x}
{\hat x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} \hat x}$$
$$\ge \min_{\dim W = j}\ \max_{x \in W,\, x \ne 0} R(x)
- \lambda_j^2\,
\frac{\hat x^T \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix} (\Omega_1 - \lambda_j I)^{-1} \begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix} \hat x}
{\hat x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} \hat x}$$
$$\ge \tilde\lambda_j - \frac{\lambda_j^2}{\omega - \lambda_j}\
\max_{x \in \mathbb{R}^\nu,\, x \ne 0}
\frac{x^T \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix} \begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix} x}
{x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} x}.$$
From the positive definiteness of the transformed mass matrix

$$\begin{pmatrix} I & 0 & \tilde M_{\ell i 1}\\ 0 & I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 1} & \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix}$$

it follows that the Schur complement

$$\begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix}
- \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix} \begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix}$$

is positive definite as well. Thus,

$$\max_{x \in \mathbb{R}^\nu,\, x \ne 0}
\frac{x^T \begin{pmatrix} 0\\ \tilde M_{i\ell 1}\end{pmatrix} \begin{pmatrix} 0 & \tilde M_{\ell i 1}\end{pmatrix} x}
{x^T \begin{pmatrix} I & \tilde M_{\ell i 2}\\ \tilde M_{i\ell 2} & \tilde M_{ii}\end{pmatrix} x} \le 1, \qquad (4.8)$$
and (4.8) yields

$$\lambda_j \ge \tilde\lambda_j - \frac{\lambda_j^2}{\omega - \lambda_j}, \qquad (4.9)$$

which completes the proof.
Remark 4.1. The special case of Theorem 4.1 for static condensation (2.4) was
proved already in [18].

Remark 4.2. Based on accuracy considerations and an a priori error bound for
the smallest eigenvalue (which, however, usually cannot be evaluated since it depends
on unknown quantities like a bound for the components of M̃ℓi1 x̃3, where x̃3 is the
interface portion of an eigenvector of (2.6), or the minimal distance of neglected diagonal entries of Ω1 belonging to the same substructure) Yang et al. [21] suggested to
neglect all eigenmodes (ωj, φj) in (2.7) for which

$$\frac{\lambda_1}{\omega_j - \lambda_1} < \tau,$$

where τ ≪ 1 is a small quantity. Theorem 4.1 guarantees that with this choice the
relative error of the CMS approximation λ̃1 to the smallest eigenvalue λ1 is less than
τ.
For the one level AMLS method (2.14) one obtains with similar arguments as in
the proof of Theorem 4.1 the following bounds for the relative errors:

Theorem 4.2. Let 0 < λ̂1 ≤ λ̂2 ≤ . . . be the eigenvalues of the reduced problem
(2.14), and denote by μ̂ the smallest eigenvalue of

$$\begin{pmatrix} \Omega_1 & 0\\ 0 & \Gamma_1\end{pmatrix} z
= \mu \begin{pmatrix} I & \hat M_{12}\\ \hat M_{21} & I\end{pmatrix} z. \qquad (4.10)$$
Then it holds

$$0 \le \frac{\hat\lambda_j - \lambda_j}{\lambda_j} \le \frac{\lambda_j}{\hat\mu - \lambda_j} \qquad (4.11)$$

for every eigenvalue λj of (2.1) such that λj < μ̂.
Proof. In the same way as in the proof of Theorem 4.1 we obtain from (2.13),
(2.14), and (2.15)

$$\lambda_j \ge \hat\lambda_j - \lambda_j^2\
\max_{x \ne 0}
\frac{x^T \begin{pmatrix} 0 & \hat M_{32}\\ \hat M_{41} & 0\end{pmatrix}
\begin{pmatrix} \Omega_1 - \lambda_j I & -\lambda_j \hat M_{12}\\ -\lambda_j \hat M_{21} & \Gamma_1 - \lambda_j I\end{pmatrix}^{-1}
\begin{pmatrix} 0 & \hat M_{14}\\ \hat M_{23} & 0\end{pmatrix} x}
{x^T \begin{pmatrix} I & \hat M_{34}\\ \hat M_{43} & I\end{pmatrix} x}$$
$$\ge \hat\lambda_j - \frac{\lambda_j^2}{\hat\mu - \lambda_j}\
\max_{x \ne 0}
\frac{x^T \begin{pmatrix} 0 & \hat M_{32}\\ \hat M_{41} & 0\end{pmatrix}
\begin{pmatrix} I & \hat M_{12}\\ \hat M_{21} & I\end{pmatrix}^{-1}
\begin{pmatrix} 0 & \hat M_{14}\\ \hat M_{23} & 0\end{pmatrix} x}
{x^T \begin{pmatrix} I & \hat M_{34}\\ \hat M_{43} & I\end{pmatrix} x},$$

and the positive definiteness of the transformed mass matrix in (2.13) yields that the
fraction on the right is less than or equal to 1.
A severe disadvantage of Theorem 4.2 is the fact that the bound depends on the
smallest eigenvalue of problem (4.10), i.e. on the matrix

$$\hat M_{12} = \Phi_1^T (M_{\ell i} - M_{\ell\ell} K_{\ell\ell}^{-1} K_{\ell i}) \Psi_1,$$

which usually is not at hand because it is much too expensive to determine the
eigenmodes of problems (2.5) and (2.10) corresponding to eigenvalues greater than
the cut off bound. The following Theorem 4.3 does not suffer from this drawback.
Theorem 4.3. Let ω and γ be the smallest entry of Ω1 and Γ1, respectively.
Then it holds

$$0 \le \frac{\hat\lambda_j - \lambda_j}{\lambda_j}
\le \frac{\lambda_j}{\omega - \lambda_j}\left(1 + \frac{\hat\lambda_j}{\gamma - \hat\lambda_j}\right)
+ \frac{\hat\lambda_j}{\gamma - \hat\lambda_j} \qquad (4.12)$$
$$\le \frac{\hat\lambda_j}{\gamma - \hat\lambda_j}
+ \frac{\hat\lambda_j}{\omega - \hat\lambda_j}
+ \frac{\hat\lambda_j^2}{(\omega - \hat\lambda_j)(\gamma - \hat\lambda_j)} \qquad (4.13)$$

for every eigenvalue λj of (2.1) such that λj < min{ω, γ}.
Proof. To prove the inequality (4.12) we generate the reduced problem (2.14)
in two CMS reduction steps, first dropping eigenmodes of problem (2.5) to obtain
problem (2.8), and then applying CMS to this problem neglecting eigenmodes of
(2.10). Inequality (4.13) then follows immediately by a monotonicity argument.

Let λ̃j be the eigenvalues of problem (2.8). Then it follows from (4.9)

$$\lambda_j \ge \tilde\lambda_j - \frac{\lambda_j^2}{\omega - \lambda_j}, \qquad (4.14)$$

and applying (4.9) to the CMS reduction of (2.8) to (2.14) we obtain

$$\tilde\lambda_j \ge \hat\lambda_j - \frac{\tilde\lambda_j^2}{\gamma - \tilde\lambda_j}. \qquad (4.15)$$

Hence,

$$\hat\lambda_j \le \tilde\lambda_j \left(1 + \frac{\tilde\lambda_j}{\gamma - \tilde\lambda_j}\right)
\le \lambda_j \left(1 + \frac{\lambda_j}{\omega - \lambda_j}\right)
\left(1 + \frac{\tilde\lambda_j}{\gamma - \tilde\lambda_j}\right),$$

from which we immediately obtain (4.12).
The proof of Theorem 4.3 suggests how to obtain an a priori bound for the general
AMLS method. Every reduction step obtaining a quasistatic/modal representation
and reducing the dimension by spectral truncation is identical to a CMS step utilizing
the substructuring of the next level. Hence, if λj^(ν) denotes the eigenvalues of the
reduced eigenvalue problem corresponding to the ν-th level ordered by magnitude,
if λj^(0) := λj, and λj^(p+1) are the eigenvalues of the projected eigenproblem of AMLS
with p levels of substructuring, where on the ν-th level eigenvalues exceeding ων are
neglected, then it holds by (4.9)

$$\lambda_j^{(\nu)} \le \lambda_j^{(\nu-1)}
\left(1 + \frac{\lambda_j^{(\nu-1)}}{\omega_\nu - \lambda_j^{(\nu-1)}}\right),
\qquad \nu = 1, 2, \dots, p+1. \qquad (4.16)$$
Fig. 1. FE model of a container ship
Thus, it follows for all λj ≤ min_{ν=1,...,p} ων

$$\lambda_j^{(p+1)} \le \lambda_j \prod_{\nu=0}^{p}
\left(1 + \frac{\lambda_j^{(\nu)}}{\omega_{\nu+1} - \lambda_j^{(\nu)}}\right). \qquad (4.17)$$
Theorem 4.4. Let

$$\tilde\lambda_1 \le \tilde\lambda_2 \le \dots \le \tilde\lambda_m
< \min_{\nu=0,\dots,p} \omega_\nu \le \tilde\lambda_{m+1} \le \dots$$

be the eigenvalues of the projected eigenproblem by AMLS with p levels of substructuring, where on the ν-th level eigenvalues exceeding ων are neglected. Then it holds

$$\frac{\tilde\lambda_j - \lambda_j}{\lambda_j}
\le \prod_{\nu=0}^{p} \left(1 + \frac{\lambda_j^{(\nu)}}{\omega_\nu - \lambda_j^{(\nu)}}\right) - 1,
\qquad j = 1, \dots, m. \qquad (4.18)$$
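The product bound (4.18) can be accumulated level by level; the helper below is a small illustration with hypothetical input values (the per-level eigenvalue approximations λj^(ν) and cut-offs ων are assumed to be known, which in practice requires the intermediate reduced problems).

```python
def amls_relative_error_bound(lam_levels, omegas, j):
    """Right-hand side of (4.18): prod_nu (1 + lam_j^(nu)/(omega_nu - lam_j^(nu))) - 1.

    lam_levels[nu][j] is the j-th eigenvalue approximation on level nu and
    omegas[nu] the cut-off used there (hypothetical inputs, for illustration).
    """
    prod = 1.0
    for lam_nu, omega_nu in zip(lam_levels, omegas):
        prod *= 1.0 + lam_nu[j] / (omega_nu - lam_nu[j])
    return prod - 1.0

# one level, lambda_j = 100 and cut-off 10100: relative error at most 100/10000 = 1%
bound = amls_relative_error_bound([[100.0]], [10100.0], 0)
```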
5. Numerical experiments. To verify the quality of our a priori bounds we
considered the vibrational analysis of a container ship which is shown in Figure 1.
Usually in the dynamic analysis of a structure one is interested in the response of the
structure at particular points to harmonic excitations of typical forcing frequencies.
For instance, in the analysis of a ship these are locations in the deckhouse where the
perception of the crew is particularly strong.
The finite element model of the ship (a complicated 3 dimensional structure) is
not meshed with an automatic preprocessor like ANSYS or PATRAN since this would
result in a much too detailed model. Since bending displacements of the plates do
not influence the displacements in the deckhouse for global vibrations, it suffices to
discretize the surface by linear membrane shell elements with additional truss elements
for the stiffeners. Only the engine is modelled with more detail. For the ship under
consideration this yields a very coarse model with 19106 elements and 12273 nodes
resulting in a discretization with 35262 degrees of freedom.
We consider the structural deformation caused by a harmonic excitation at a
frequency of 4 Hz which is a typical forcing frequency stemming from the engine and
the propeller.

Fig. 2. Substructuring

Since the deformation is small the assumptions of the linear theory
apply, and the structural response can be determined by the mode superposition
method taking into account eigenfrequencies in the range between 0 and 7.5 Hz (which
corresponds to the 50 smallest eigenvalues for the ship under consideration).
To apply the CMS method and the level 1 version of AMLS we partitioned the
FEM model into 10 substructures as shown in Figure 2. This substructuring by hand
yielded a much smaller number of interface degrees of freedom than automatic graph
partitioners which try to construct a partition where the substructures have nearly
equal size. For instance, our model ends up with 1960 degrees of freedom on the
interfaces, whereas Chaco [11] produces a substructuring into 10 substructures
with 4985 interface degrees of freedom.
We solved the eigenproblem by the CMS method using a cut off bound of 20,000
(about 10 times the largest wanted eigenvalue λ50 ≈ 2183). 329 eigenvalues of the
substructure problems were less than our threshold, and the dimension of the resulting
projected problem was 2289. Figure 3 shows the relative errors for the smallest 50
eigenvalues (lower crosses) and the error bounds by Theorem 4.1 (upper crosses). We
reduced the interface degrees of freedom as well with the same cut off bound 20,000.
This reduced the dimension of the projected eigenproblem to 436. The relative errors
(lower circles) and bounds by Theorem 4.3 are shown in Figure 3, too.
We substructured the FE model by Metis with 4 levels of substructuring. Neglecting eigenvalues exceeding 20,000 and 40,000 on all levels, AMLS produced a projected
eigenvalue problem of dimension 451 and 911, respectively. The relative errors and
the bounds are shown in Figure 4, where the lower and upper crosses correspond to
the threshold 20,000, and the lower and upper circles to 40,000.
Acknowledgements. Thanks are due to Christian Cabos, Germanischer Lloyd,
who provided us with the finite element model of the container ship. The first author
gratefully acknowledges the financial support of this project by the German Research
Foundation (DFG) within the Graduiertenkolleg “Meerestechnische Konstruktionen”.
Fig. 3. Errors and bounds for CMS

Fig. 4. Errors and bounds for AMLS

REFERENCES
[1] Z. Bai and B.-S. Liao. Towards an optimal substructuring method for model reduction. Technical report, University of California at Davis, 2004. To appear in Proceedings of PARA’04, Lyngby, Denmark, 2004.
[2] C. Bekas and Y. Saad. Computation of smallest eigenvalues using spectral Schur complements. Technical Report umsi-2003-191, Minnesota Supercomputer Institute, University of Minnesota, Minneapolis, 2004.
[3] J.K. Bennighof and M.F. Kaplan. Frequency sweep analysis using multi-level substructuring, global modes and iteration. In Proceedings of the AIAA 39th SDM Conference, Long Beach, Ca., 1998.
[4] J.K. Bennighof, M.F. Kaplan, M.B. Muller, and M. Kim. Meeting the NVH computational challenge: automated multi-level substructuring. In Proceedings of the 18th International Modal Analysis Conference, San Antonio, Texas, 2000.
[5] J.K. Bennighof and C.K. Kim. An adaptive multi-level substructuring method for efficient modeling of complex structures. In Proceedings of the AIAA 33rd SDM Conference, pages 1631–1639, Dallas, Texas, 1992.
[6] J.K. Bennighof and R.B. Lehoucq. An automated multilevel substructuring method for the eigenspace computation in linear elastodynamics. SIAM J. Sci. Comput., 25:2084–2106, 2004.
[7] R.R. Craig Jr. and M.C.C. Bampton. Coupling of substructures for dynamic analysis. AIAA J., 6:1313–1319, 1968.
[8] R.J. Duffin. A minmax theory for overdamped networks. J. Rat. Mech. Anal., 4:221–233, 1955.
[9] K. Elssel and H. Voss. A modal approach for the gyroscopic quadratic eigenvalue problem.
In P. Neittaanmäki, T. Rossi, S. Korotov, E. Oñate, J. Périaux, and D. Knörzer, editors,
Proceedings of the European Congress on Computational Methods in Applied Sciences and
Engineering. ECCOMAS 2004, Jyväskylä, Finland, 2004. ISBN 951-39-1869-6.
[10] R. J. Guyan. Reduction of stiffness and mass matrices. AIAA J., 3:380, 1965.
[11] B. Hendrickson and R. Leland. The Chaco User’s Guide: Version 2.0. Technical Report
SAND94-2692, Sandia National Laboratories, Albuquerque, 1994.
[12] W.C. Hurty. Vibrations of structural systems by component-mode synthesis. J. Engrg. Mech. Div.,
ASCE, 86:51–69, 1960.
[13] B. Irons. Structural eigenvalue problems: Elimination of unwanted variables. AIAA J., 3:961–
962, 1965.
[14] M.F. Kaplan. Implementation of Automated Multilevel Substructuring for Frequency Response
Analysis of Structures. PhD thesis, Dept. of Aerospace Engineering & Engineering Mechanics, University of Texas at Austin, 2001.
[15] G. Karypis and V. Kumar. METIS: A software package for partitioning unstructured graphs,
partitioning meshes, and computing fill-reducing orderings of sparse matrices. Version 4.0.
Technical report, University of Minnesota, Minneapolis, 1998.
[16] A. Kropp and D. Heiserer. Efficient broadband vibro–acoustic analysis of passenger car bodies using an FE–based component mode synthesis approach. In H.A. Mang, F.G. Rammerstorfer, and J. Eberhardsteiner, editors, Proceedings of the Fifth World Congress on
Computational Mechanics (WCCM V), Vienna, Austria, 2002. available online from
http://wccm.tuwien.ac.at.
[17] E.H. Rogers. A minmax theory for overdamped systems. Arch.Rat.Mech.Anal., 16:89 – 96,
1964.
[18] H. Voss. An error bound for eigenvalue analysis by nodal condensation. In J. Albrecht, L. Collatz, and W. Velte, editors, Numerical Treatment of Eigenvalue Problems, Vol. 3, volume 69 of International Series on Numerical Mathematics, pages 205–214, Basel, 1984.
Birkhäuser.
[19] H. Voss. A maxmin principle for nonlinear eigenvalue problems with application to a rational
spectral problem in fluid–solid vibration. Applications of Mathematics, 48:607 – 622, 2003.
[20] H. Voss and B. Werner. A minimax principle for nonlinear eigenvalue problems with applications
to nonoverdamped systems. Math.Meth.Appl.Sci., 4:415–424, 1982.
[21] C. Yang, W. Gao, Z. Bai, X. Li, L. Lee, P. Husbands, and E. Ng. An algebraic sub-structuring
method for large-scale eigenvalue calculations. Technical Report LBNL–55050, Lawrence
Berkeley National Laboratory, Berkeley, Ca., 2004.