Distributed Simulation of Electric Power Systems
Abstract – Recent advancements in computer networking have enabled the interconnection of inexpensive desktop computers to form powerful computational clusters that can be effectively utilized when simulating electric power systems. In order to maximize the computational gain, it is necessary to identify the computational tasks that can be performed concurrently and to optimally distribute those tasks among the available processors. In this paper, the electrical power system is viewed as a collection of interconnected dynamical subsystems, each described by a set of differential/algebraic equations. The tasks that can be performed concurrently are identified, and a new approach for optimally distributing the corresponding calculations is set forth. The effectiveness of the proposed approach is demonstrated by distributing a detailed simulation of the Western System Coordinating Council (WSCC) three-machine nine-bus system on an SCI-based network composed of three personal computers. The simulation includes the effects of network and stator transients. Using the Runge-Kutta-Fehlberg integration algorithm, a 206% improvement in simulation speed was achieved.

Keywords – power systems, distributed simulation, parallel computing

1 INTRODUCTION

Computer simulations developed using ATP/EMTP [1], ACSL [2], Matlab/Simulink [3], EASY 5 [4], etc., are commonly used to predict the transient behavior of power systems and energy conversion devices. In each case, the underlying systems of differential and algebraic equations (DAEs) or ordinary differential equations (ODEs) are solved numerically in the time domain using available integration algorithms and/or solvers. For example, ATP/EMTP uses trapezoidal integration; other packages offer a library of solvers suitable for different problems. In general, the computational complexity is determined by the ODE solver used and increases as a polynomial function (typically cubic) of the size of the system to be simulated (the problem size). Thus, as the simulated system grows in size, so does the time required to compute the solution of the corresponding equations. Even though the speed of modern computers continues to increase at a rapid rate, the need to model ever more complex electrical systems with as many details taken into account as possible still makes the computing time a critical issue in many practical cases. Distributing the computational burden onto several computers appears to be a natural way of improving the simulation speed.

Since the modeling of power systems using a state-variable approach involves solving an initial value problem (IVP) for the corresponding DAEs or ODEs, numerical integration is the process that determines the overall simulation speed. Examples of recently developed techniques for performing numerical integration on parallel computers include parallel implementations of waveform relaxation [5], parallel one-step methods [6]-[7], parallel multi-step methods [8], and others [9].

Unfortunately, none of the simulation languages [1]-[4] support parallel ODE solvers. Implementing any of the previously mentioned methods in an existing simulation program may require a significant amount of effort and may not always be feasible. An alternative method of distributed simulation proposed in [10] requires fixed-time-step integration and communication between the component models at fixed intervals. Both fixed-step ODE solvers and fixed communication intervals may be inappropriate when modeling power systems with switching and/or inter-component stiffness.

On the other hand, the speed of the sequential ODE solvers available within a given simulation program may also be improved using parallel computers. To see how this improvement can be realized, note that most sequential solvers spend a significant portion of the overall CPU time evaluating the derivatives of the state variables. In this paper, an approach herein referred to as the distributed evaluation of state derivatives (DESD) technique is considered. Using this technique, the integration of the overall system is performed on one computer (the master) using a single integration algorithm, whereas the state derivatives are computed in parallel on other computers (the servers). DESD can be implemented as shown in Fig. 1. In addition to evaluating the derivatives, sequential ODE solvers perform other serial calculations (solving systems of equations, finding coefficients, etc.). Due to these solver-dependent serial calculations, the maximum theoretical speed-up of DESD on an idealized computer network is bounded by Amdahl's law [11]. Additionally, in an actual implementation, communication between the nodes introduces further overhead.
If T(1) > T(n), where T(1) is the execution time on a single computer and T(n) is the execution time on n computers, it is appropriate to use the speed-up factor

    S(n) = T(1) / T(n)                                        (2)

[Figure 1: Implementation of DESD — the master exchanges the state variables x_i and the derivatives dx_i with server node i, for server nodes 1 through m-1.]
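To make the master-server division of labor concrete, the following is a minimal sketch of the DESD pattern (our illustration, not the paper's implementation: the subsystem derivative functions f_snw and f_gen are hypothetical stand-ins, classical RK4 replaces the Runge-Kutta-Fehlberg algorithm used in the paper, and Python multiprocessing pipes stand in for the SCI network):

    import numpy as np
    from multiprocessing import Pipe, Process

    def serve(conn, f_sub):
        # Server loop: receive (t, x), reply with this subsystem's block
        # of state derivatives; a None message shuts the server down.
        while True:
            msg = conn.recv()
            if msg is None:
                break
            t, x = msg
            conn.send(f_sub(t, x))

    def f_snw(t, x):
        # Hypothetical stand-in for a subnetwork derivative function.
        return -0.5 * x[:2]

    def f_gen(t, x):
        # Hypothetical stand-in for a generator-exciter derivative function.
        return -2.0 * x[2:]

    def f_distributed(t, x, conns):
        # DESD: broadcast the state, then gather the derivative blocks.
        for c in conns:
            c.send((t, x))
        return np.concatenate([c.recv() for c in conns])

    def rk4_step(f, t, x, h):
        # Each integration stage triggers one distributed derivative call.
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    if __name__ == "__main__":
        conns, procs = [], []
        for f_sub in (f_snw, f_gen):
            parent, child = Pipe()
            p = Process(target=serve, args=(child, f_sub))
            p.start()
            conns.append(parent)
            procs.append(p)
        t, h, x = 0.0, 1e-3, np.ones(4)
        for _ in range(1000):
            x = rk4_step(lambda tt, xx: f_distributed(tt, xx, conns), t, x, h)
            t += h
        for c in conns:
            c.send(None)
        for p in procs:
            p.join()
        print("t =", t, "x =", x)

Every integration stage costs one broadcast/gather exchange, which is why the two-way communication latency figures prominently in the analysis that follows.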
[Figure: One-line diagram of the WSCC three-machine nine-bus system (230 kV) with Stations A, B, and C; line impedance Z = 0.017 + j0.092 pu, shunt admittances Y = j0.079 and Y = j0.153 pu; loads of 125 MW / 50 MVAR, 90 MW / 30 MVAR, and 35 MVAR; bus voltage 1.025 pu.]

... the two-way communication latency between two nodes ... (10)
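Latency figures such as the "time of two-way communication" column of Table 3 below are typically obtained with a ping-pong test. The following is a sketch of such a measurement under our own assumptions (TCP sockets, an arbitrary port, and a 160-byte payload; the paper's measurements were taken on an SCI interconnect):

    import socket
    import time

    def echo_server(port=50007):
        # Echo every message back to the sender until the peer disconnects.
        srv = socket.socket()
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)
        conn.close()

    def two_way_latency_us(host, port=50007, payload=160, trials=1000):
        # Average round-trip time for one exchange-variable message.
        s = socket.create_connection((host, port))
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        msg = b"x" * payload
        t0 = time.perf_counter()
        for _ in range(trials):
            s.sendall(msg)
            got = 0
            while got < payload:
                got += len(s.recv(payload - got))
        s.close()
        return (time.perf_counter() - t0) / trials * 1e6

With echo_server() running on one node, calling two_way_latency_us("server-host") from another returns the average round-trip time in microseconds.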
... improvement in computational speed. In such cases, when the model components have vastly different computing times, the component with the largest time may be partitioned into yet smaller subsystems. In the WSCC test system, it is possible to divide the transmission network into three subsystems as shown in Fig. 4. These subnetworks have an equal number of branches, similar topology, and therefore equal computational complexity. Additionally, since all three generator-exciter systems are represented by the same set of equations (with different parameters), the corresponding models also have the same computational complexity. Although the overall system can be divided in numerous ways, the component-based partitioning shown in Fig. 4 was found to be the most natural from an input-output perspective. As a check, it can be verified that the final six-component partitioning satisfies (8), indicating that there is a potential for improvement in speed.

Table 3: Computational characteristics of the subsystems.

| Component | Total number of state variables | Total number of exchange variables | Total number of variables | Time to compute the derivative function, µs | Time of two-way communication, µs |
| Entire transmission network | 42 | 6 | 48 | 317 | 42.49 |
| Subnetworks SNW-1, SNW-2, SNW-3 | 14 | 6 | 20 | 64.6 | 31.24 |
| Generator-exciter subsystems G-1, G-2, G-3 | 9 | 2 | 11 | 2.64 | 27.63 |
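Condition (8) is not reproduced in this excerpt, but its intuition can be checked directly against Table 3: offloading a component from the master pays only if the derivative-evaluation time it removes exceeds the two-way communication time it adds. A quick check of that simplified reading (our arithmetic):

    # (derivative-evaluation time, two-way communication time) in
    # microseconds, taken from Table 3.
    components = {
        "subnetwork SNW-1/2/3 (each)": (64.6, 31.24),
        "generator-exciter G-1/2/3 (each)": (2.64, 27.63),
    }
    for name, (t_deriv, t_comm) in components.items():
        verdict = "worth offloading" if t_deriv > t_comm else "keep on master"
        print(f"{name}: saves {t_deriv} us, costs {t_comm} us -> {verdict}")

This agrees with the placement in Table 4 below: the subnetworks justify their communication cost, whereas the inexpensive generator-exciter subsystems do not and remain with the master.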
[Figure 4: Partitioning of the WSCC system into subnetworks SNW-1, SNW-2, SNW-3 and generator-exciter subsystems G-1, G-2, G-3; buses B1-B9, branches b1-b21. SM - synchronous machine; E - exciter; SNW - subnetwork; G - generator-exciter subsystem; arrows denote loads.]

Table 4: Partitioning of the WSCC system.

| Computer node | Implemented subsystems | Total time to compute derivatives, µs | Communication variables | Total communication time per call, µs | Total compute-plus-communication time, µs |
| 1, Master | SNW-1, G-1, G-2, G-3 | 72.52 | 2 × 20 | 62.484 | 135.004 |
| 2, Server | SNW-2 | 64.6 | 20 | 31.242 | 95.842 |
| 3, Server | SNW-3 | 64.6 | 20 | 31.242 | 95.842 |
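The last column of Table 4 is the sum of the compute and communication columns, with the master paying one round trip per server. A quick consistency check (our arithmetic):

    # Node 1 (master): SNW-1 plus G-1, G-2, G-3, and one 20-variable
    # exchange with each of the two servers.
    master_compute = 64.6 + 3 * 2.64                 # 72.52 us
    master_comm = 2 * 31.242                         # 62.484 us
    print(f"{master_compute + master_comm:.3f} us")  # 135.004 us, as tabulated

    # Node 2 (server): SNW-2 plus a single exchange with the master.
    print(f"{64.6 + 31.242:.3f} us")                 # 95.842 us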
In order to analyze the simulation speed, the following transients in electromagnetic torque of each generator ... flow within the system. At t = 2.0 s, the mechanical ... same time generator 3 undergoes a slight transient. The ...

[Figure: Electromagnetic torque Tem (pu) versus time for Generators 1, 2, and 3.]

... and update coefficients. This explains the fact that the ...

[Table (header fragments only): Implementation | CPU time, s | time spent on other calculations, s]