
IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 15, NO. 2, MAY 2000

A Comparison of Distributed Optimal Power Flow Algorithms

Balho H. Kim, Member, IEEE, and Ross Baldick, Member, IEEE

Abstract—We present an approach to parallelizing optimal power flow (OPF) that is suitable for coarse-grained distributed implementation and is applicable to very large interconnected power systems. The proposed distributed scheme can be used to coordinate a heterogeneous collection of utilities. Three mathematical decomposition coordination methods are introduced to implement the proposed distributed scheme: the Auxiliary Problem Principle (APP), the Predictor–Corrector Proximal Multiplier Method (PCPM), and the Alternating Direction Method (ADM). We demonstrate the approach on several medium size systems, including IEEE Test Systems and parts of the ERCOT (Electric Reliability Council of Texas) system.

Manuscript received August 19, 1998; revised May 17, 1999. This work was funded in part by the National Science Foundation under grant ECS-9496193 and KOSEF-981-0901-004-2.
B. H. Kim is with Hong-Ik University, Seoul, Korea 121-791.
R. Baldick is with the University of Texas at Austin, Austin, TX 78712.
Publisher Item Identifier S 0885-8950(00)03793-7.
0885–8950/00$10.00 © 2000 IEEE

I. INTRODUCTION

OPTIMAL Power Flow (OPF) has been the subject of continuous intensive research and algorithmic improvement since its introduction in the early 1960's. Recently, forces such as increasing competition, advances in generation technology, interest in deregulation, and the advent of new planning strategies have put pressure on electric utilities to become more efficient and to improve efficiency through nongeneration technologies such as supervisory control and data acquisition (SCADA). As a consequence, the role of OPF is changing, and the importance of real-time computation, communication, and data control is greatly increasing.

Distributed multi-processor environments can potentially greatly increase the available computational capacity and decrease the communication burden, allowing for faster OPF solutions. Parallel processing using inexpensive multi-processors allocated to subsystems is useful for the high-speed calculation necessary for on-line control of a power system. Parallel processing of power systems problems has been identified as an important task in light of increasing performance requirements and the availability of parallel processing hardware [1].

We first give a brief review of parallel distributed OPF techniques. Since Dantzig and Wolfe [2] proposed their decomposition principle for linear programming in 1960, extensive work on large-scale mathematical programming has followed (see [3] and its references). Recently, motivated by this influential work, various approaches have been taken to parallelize power system problems, including the reactive power optimization problem and the constrained economic dispatch problem [4], [5]. Initial applications of parallel computing to power systems problems used array computers, which are equipped with specialized processors for performing vector computations efficiently. Sundarraj et al. [6] demonstrated a distributed decomposition of constrained economic dispatch on a hypercube multiprocessor using the Dantzig–Wolfe decomposition method. While there has been other work and progress in parallelizing power systems problems (see the discussion and references in [1]), major efforts have concentrated on parallelizing individual steps such as Jacobian factorization.

In [7], Kim and Baldick proposed an approach to parallelizing optimal power flow (OPF) that is suitable for distributed implementation and is applicable to very large interconnected power systems. This paper is an extension of that previous work [7]. In this paper, we demonstrate three mathematical decomposition coordination methods which are amenable to implementing distributed OPF: the Auxiliary Problem Principle (APP), the Predictor–Corrector Proximal Multiplier Method (PCPM), and the Alternating Direction Method (ADM).

The key features of the proposed methods and the formulation of distributed OPF are given first, in Sections II and III, followed by a case study in Section IV.

II. DECOMPOSITION COORDINATION METHODS

This section gives the general concept of the decomposition coordination methods adopted in our study. The first method is the so-called Auxiliary Problem Principle (APP), which was first introduced and has been extended by Cohen et al. [8], [9], [10]. The other is the Proximal Point Algorithm (PPA), which has long been recognized as one of the attractive methods for convex programming. In this paper, two promising PPA-based algorithms are introduced and applied in formulating the distributed optimal power flow problems: the Predictor–Corrector Proximal Multiplier Method (PCPM) [11] and the Alternating Direction Method (ADM) [12], [13].

A. Introduction

Consider a convex program with separable structure of the form:

    min f(x) + g(y)   subject to   Ax + By = b,    (1)

where f and g are assumed to be convex, proper, and lower semi-continuous functions. Then the augmented Lagrangian for problem (1) is defined as:

    L_c(x, y, λ) = f(x) + g(y) + λᵀ(Ax + By − b) + (c/2)‖Ax + By − b‖²,    (2)


where λᵀ denotes the transpose of the Lagrange multipliers and c is a constant. The principal disadvantage of the above augmented Lagrangian, relative to the standard Lagrangian, for decomposition methods is the presence of the term (c/2)‖Ax + By − b‖² in L_c, which destroys the separability between x and y, since they are linked by the cross product term cxᵀAᵀBy. This has long been recognized as one of the major drawbacks of the augmented Lagrangian approach, and a number of strategies have been proposed to remove this difficulty.

In [8], [9], and more recently in [10], Cohen et al. developed a unified framework via APP and demonstrated the power and versatility of APP in the analysis and development of new decomposition algorithms. This approach is described in Section II-B. In [11], Chen proposed another PPA-based splitting method which can be used for parallel decomposition. This approach is described in Section II-C. Gabay and Mercier [13] and, more recently, Tseng [15] and Eckstein et al. [12], [16] proposed the alternating direction method. The basic idea underlying this approach is to sequentially perform the minimization with respect to x while fixing y, then with respect to y, followed by an update of the multiplier λ. This approach is described in Section II-D.

B. Auxiliary Problem Principle

Consider an optimization problem of the following form:

    (3)

where the objective and constraint functions are assumed to be additive. Then solving problem (3) is equivalent to solving the following sequence of auxiliary problems:

Algorithm-APP [8]:

where the auxiliary function is differentiable and the step coefficients are constants. Taking an additive auxiliary functional then yields the following subproblems:

    (4)

For instance, an appropriate setting of the auxiliary functional identifies Eq. (1).

C. Predictor–Corrector Proximal Multiplier Method

The predictor–corrector proximal multiplier method is based on the properties of Rockafellar's proximal method of multipliers and its primal-dual application [17], which involves an augmented Lagrangian with an additional quadratic proximal term.

Algorithm-PCPM [11]: the primal variables x and y are updated by separable proximal minimizations that use a predicted multiplier, and the Lagrange multiplier is updated twice each iteration, for a constant ρ > 0, as follows:

    λ̂ᵏ = λᵏ + ρ(Axᵏ + Byᵏ − b)    (prediction)

    λᵏ⁺¹ = λᵏ + ρ(Axᵏ⁺¹ + Byᵏ⁺¹ − b)    (correction)

D. Alternating Direction Method

As mentioned earlier, the basic idea of the alternating direction method is a relaxation approach, whereby one first minimizes the augmented Lagrangian (2) with respect to x, then with respect to y, and finally updates the Lagrange multiplier λ. The iterative scheme can thus be given as:

Algorithm-ADM [16]:

    xᵏ⁺¹ = argmin_x L_c(x, yᵏ, λᵏ)

    yᵏ⁺¹ = argmin_y L_c(xᵏ⁺¹, y, λᵏ)

    λᵏ⁺¹ = λᵏ + c(Axᵏ⁺¹ + Byᵏ⁺¹ − b)

The fundamental difference between Algorithm-ADM and Algorithm-PCPM is in the minimization steps. In ADM, the minimization steps cannot be performed independently, and this restricts its potential advantage in parallel implementations. However, Tseng [15] proved that ADM converges at least at a linear rate, and under special conditions a superlinear rate can be achieved [12]. The details of the implementation of these algorithms in the distributed OPF problems are given in the following section.

III. DISTRIBUTED OPTIMAL POWER FLOW

In our distributed scheme, we use the regional decomposition technique of [7], where the regions buy and sell electricity from adjacent regions at prices that are coordinated by negotiations between adjacent regions. All the variables and constraints are the same as defined in [7].

Variables: Consider a power system consisting of two regions joined by a single tie-line. Define a vector of variables for each of the two regions. Between and common to the two regions there is an overlap region, with its own vector of border variables. Each region's vector consists of all the OPF variables that are relevant to that region but not already included in the border vector. The entries of the border vector are defined as follows. For each tie-line we must include a bus in the border region. If there is no bus already there, we create a "dummy bus." Associated with each dummy bus are the real and reactive power flows through the bus and the voltage magnitude and angle at the bus. That is, the border vector has four entries for each tie-line. In summary, each region's state vector consists of its core variables together with the border variables: the overlap variables are the border variables, while the remaining variables can be thought of as core variables for their respective regions.

Constraints: We assume that the constraints on the system involve the core variables of a single region together with the border variables, but never the core variables of both regions at once. That is, we assume that the constraints in each region involve only the core variables and the border variables for that region. With
this assumption, we can write the power flow constraints for each region as functions of that region's core and border variables only, and similarly for each region's inequality constraints. These functions represent the line flow, voltage, and contingency constraints in the individual regions. Define two sets, one per region, each containing the points satisfying that region's equality and inequality constraints. Then a feasible power flow solution is a point that lies in both sets.

A. OPF Formulation

With the above definitions of variables and constraints, the OPF problem can be written as

    (5)

where we assume that the cost functions of the two regions are convex approximations to the actual cost functions in each region and that there is a unique solution to (5). We decompose problem (5) into regions by duplicating the border variables and imposing a coupling constraint between the two copies.

First, define two copies of the border variables, assigned to the two regions, respectively. Then problem (5) is equivalent to:

    (6)

The quadratic term added to the objective does not affect the solution, since the coupling constraint that the two copies be equal makes the quadratic term zero at any solution; however, when we decompose the problem, this term significantly aids convergence [2].

B. Decomposition for OPF

Next we apply the three decomposition algorithms described in the previous section to obtain sub-problems for a distributed OPF implementation.

1) Algorithm APP: First, with the use of the auxiliary problem principle, we can solve (6) by solving a sequence of problems of the form:

    (7)

    (8)

where the superscript denotes the iteration index and the coefficients in the updates are positive constants. Some sufficient conditions for this iterative scheme to converge to a solution of (6) are presented in [7].

The initial conditions can be any convenient starting point, such as a previous solution or a flat start. The value of the Lagrange multiplier at a given iteration is an estimate of the cost of maintaining the coupling constraint between the border copies. If a coupled variable represents, for example, power flow from one region to the other along a particular line, then the corresponding multiplier is the "shadow cost" on the interchange of power along that line. If some region must import power to satisfy local demand, then the initial conditions for the border flows can be set to reflect the generation deficiency; however, this is not necessary for convergence, since the dummy generators can be arranged to supply the imports necessary for a feasible initial solution.

Notice that the border-variable terms in the objective of (7) can be interpreted as the sum of cost functions of generators placed at the border buses of the first region. The cost functions of these dummy generators include costs for real and reactive power generation, voltage support, and phase angle control. A similar interpretation applies to the terms in (8). The costs are quadratic and depend on the values of the Lagrange multipliers as well as on previous values of the iterates.

2) Algorithm PCPM: Similarly, using Algorithm-PCPM, we obtain the corresponding regional OPF problems.

3) Algorithm ADM: In the same manner, Algorithm-ADM produces the corresponding subproblems.

4) Distributed Algorithm: A natural implementation of the proposed OPF algorithms is given in Fig. 1. The Telemeter and Dispatch steps require intra-regional communication of data and control signals. The loop termination criterion requires global communication, while the Exchange step only requires communication between adjacent regions. In the case of multiple regions, each region solves an OPF for its core and border variables.

IV. CASE STUDIES

Several case studies were performed to demonstrate the proposed distributed OPF algorithms. The objectives of the case studies are, first, to assess the viability of the algorithms in a practical implementation and, second, to test and compare the overall performance of the algorithms. Performance comparisons are based on the cputimes and the number of iterations required for the desired accuracy.
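Before turning to the numerical results, the coordination of duplicated border variables described above can be illustrated in miniature. The sketch below is ours, not the paper's implementation: it applies ADM-style updates (minimize over one region's border copy, then the other's, then update the multiplier) to a toy problem in which each region's subproblem is a scalar quadratic with a closed-form minimizer, and the multiplier plays the role of the negotiated interchange price. The cost coefficients and the penalty constant c are illustrative.

```python
# Toy two-region coordination: minimize f_a(w_a) + f_b(w_b) subject to the
# coupling constraint w_a = w_b, where w_a and w_b are the two regions'
# copies of a single border variable.
#   f_a(w) = (w - 1)^2      (region a alone would choose w = 1)
#   f_b(w) = 2*(w - 3)^2    (region b alone would choose w = 3)
# The coupled optimum is w = 7/3 (set 2(w-1) + 4(w-3) = 0).

def solve_two_region_adm(c=1.0, iters=200, tol=1e-8):
    """Alternating-direction iterations with augmented Lagrangian
    f_a + f_b + lam*(w_a - w_b) + (c/2)*(w_a - w_b)^2."""
    w_a, w_b, lam = 0.0, 0.0, 0.0  # flat start: no interchange, zero price
    for _ in range(iters):
        # Region a: minimize (w_a-1)^2 + lam*w_a + (c/2)*(w_a - w_b)^2.
        w_a = (2.0 + c * w_b - lam) / (2.0 + c)
        # Region b: minimize 2*(w_b-3)^2 - lam*w_b + (c/2)*(w_a - w_b)^2,
        # using region a's fresh iterate (the sequential ADM step).
        w_b = (12.0 + lam + c * w_a) / (4.0 + c)
        # Price update on the coupling constraint w_a = w_b.
        lam += c * (w_a - w_b)
        # Border mismatch as the stopping criterion, as in Section IV-B.
        if abs(w_a - w_b) < tol:
            break
    return w_a, w_b, lam

w_a, w_b, lam = solve_two_region_adm()
print(w_a, w_b, lam)  # both copies near 7/3; price lam near -8/3
```

With these quadratics the coordinated solution can be checked by hand: at the optimum the price satisfies lam = -2*(7/3 - 1) = -8/3, so each region's first-order condition holds without the other's cost function, which is exactly what lets the subproblems be solved regionally.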
Fig. 1. Distributed implementation of parallel OPF.

TABLE I
CASE STUDY SYSTEMS

For the case studies, an optimization package, GAMS 2.25 [18], and a state-of-the-art interior-point OPF code (INTOPF) [19] were employed. GAMS (MINOS5 and CONOPT) was used to demonstrate the basic convergence properties of the algorithms, while INTOPF was used to estimate the speed-ups and efficiencies possible with the algorithms. Non-contingency-constrained AC OPF's were performed for all cases, with real and reactive generator limits and line and voltage constraints imposed. All GAMS computations were performed on a Sun Sparc-20 workstation, while parallel (distributed) computations with the INTOPF code were implemented on several Sparc-20 and Ultra-Sparc workstations.

A. Case Study Systems

Data from two IEEE Reliability Test Systems and eight Texas utilities were used to demonstrate the performance of the algorithms. Table I summarizes the test systems. The first column denotes the system identification number, which will be used throughout the paper instead of real names; the second column shows the total number of buses in each system; and the third and fourth columns show the number of regions and the number of core buses in each region. The fifth column shows the number of tie-lines that interconnect the regions, while the sixth column shows the total number of transmission lines in each complete system. The last column shows the total per unit load in the systems. The five smaller systems consist of two, three, or four copies of two IEEE Test Systems, while the four Texas systems use data from two to eight Texas utilities.

The objective to be minimized is the production cost for active and reactive power. The cost of reactive power is assumed to be a fraction of the active power cost for each generator, while real power costs were adopted from [20] and [21]. The constants in the coordination updates were tuned for each system to improve convergence [7].

In order to see how the algorithm responds to small changes in system status, we solved a base-case and several change-cases for each system. Each base-case was solved from a flat start with initially no interchange on any tie-line, while the change-cases were solved using the solution of the base-case as a starting point. The change-cases were as follows:
1. an increase in demand of 5% at all demand buses;
2. an increase in demand of 10% at all demand buses;
3. an outage of a single generator with capacity equal to approximately 2–3% of the total system demand.
The change-cases demonstrate the tracking behavior of the algorithm for an on-line application.

B. Stopping Criterion

We chose the maximum mismatch between the border variables as the stopping criterion. It is noted that the methods adopted in this study have very bad tail behavior. That is, while they reduce the mismatch significantly in the first few iterations, it can take many more iterations to drive the mismatch very close to zero.

To select a practical tolerance on the maximum mismatch, we experimented with the performance of the algorithm. We found that a tolerance of 0.03 per unit maximum mismatch yielded a solution with total costs within 0.1% of the optimal production costs from the serial algorithm. Typically, the mismatches on most buses were much smaller than 0.03 per unit. The mismatch tolerance of 0.03 per unit may seem large; however, the sensitivity of total dispatch costs to tie-line flows is relatively small when the system is close to being optimally dispatched.

Furthermore, the mismatch in net interchange was usually much smaller than 0.03 per unit when summed across all lines from one region to another. Therefore, in practice, any simple strategy such as splitting the difference will usually be adequate for scheduling near-to-optimal interchange levels.

C. Test Results

Selected case study results are presented in this section. To compare the overall performance of the algorithms, the total cputimes and iteration counts are tabulated. Then the speed-ups and efficiency of the three algorithms are discussed.

The cputimes presented here are the average of 3 to 5 repeated test runs for each case study system; thus, the presented results are not necessarily the best of each alternative algorithm. For the two largest case study systems, no. 8 and no. 9, we were unable to solve the undecomposed OPF's with GAMS. The test run results for the parallel OPF's are nevertheless provided for performance comparison.

It is noted that because our prototypical GAMS implementation is not efficient, the cputime to perform the calculations is not reflective of performance in a production environment. Nevertheless, the data are presented for completeness and also because they provide some qualitative information that is useful in judging the performance of an efficient implementation.
TABLE II
NUMBER OF ITERATIONS FOR PARALLEL OPF WITH GAMS: ALGORITHM-APP

TABLE III
NUMBER OF ITERATIONS FOR PARALLEL OPF WITH GAMS: ALGORITHM-PCPM

TABLE IV
NUMBER OF ITERATIONS FOR PARALLEL OPF WITH GAMS: ALGORITHM-ADM

TABLE V
CPUTIME FOR UNDECOMPOSED OPF WITH INTOPF (SEC)

TABLE VI
CUMULATIVE CPUTIME FOR PARALLEL OPF WITH INTOPF (SEC)

TABLE VII
SPEED-UP AND EFFICIENCY: ALGORITHM-APP

TABLE VIII
COMPARISON OF EFFICIENCY (%)

1) Progress Toward Optimum: The numbers of iterations required to satisfy the 0.03 per unit mismatch criterion are summarized in Tables II, III, and IV. Based on these tables, Algorithm-APP gives better results than Algorithm-ADM as system size increases. For systems no. 1, no. 4, no. 5, and no. 7, Algorithm-ADM shows convergence properties at least as good as those of Algorithm-APP. That is, for systems where the number of interconnected regions is small, say 2 or 3, Algorithm-ADM seems very competitive.

It is interesting that, as shown earlier, Algorithm-APP is inferior to Algorithm-ADM in total cputime but superior in total number of iterations. In an efficient implementation, we would expect the time per iteration to be almost independent of the algorithm. Therefore Algorithm-APP, which requires fewer iterations to converge, can be expected to perform better overall than Algorithm-PCPM and Algorithm-ADM.

2) Speed-Up and Efficiency [7]: The cputime results from the undecomposed and the parallel implementations of the INTOPF code are summarized in Tables V and VI, respectively, where all the cputimes include the overheads necessary for reading data and communicating among processors. As seen in Table V, the cputimes and the number of buses have an almost linear relationship. Table VI shows that the first iteration of the INTOPF algorithm takes much more cputime than each subsequent iteration. Table VII shows the measured cputime for the base-case for the serial and parallel implementations of Algorithm-APP, using INTOPF.

The estimated efficiencies (based on the ratio of serial to parallel cputime) for the larger systems are between about 50 and 75%, based on the 0.03 per unit tie-line mismatch criterion. The case study results show that almost all of the potential production cost savings are achieved within 3 or 4 iterations. If we terminate after 3 or 4 iterations, then the efficiency improves to 70 or 80%, with production costs still within 0.1% of optimal. The sizes of our test systems are modest and the ratio of the number of border to core variables is large. We expect better performance for larger systems with lower ratios of border to core variables.

Table VIII compares the estimated efficiencies of the algorithms for the base-case: Algorithm-APP dominates the other two alternatives in almost all cases, especially for large systems.

V. CONCLUSION AND FUTURE STUDY

We have presented three effective parallel algorithms that can achieve significant speed-up over serial implementations. In a distributed environment there are overheads that may reduce the possible speed-up. However, even if speed-ups of the OPF computation itself were less than ideal, there would still be three powerful incentives to explore a distributed implementation. First, institutional arrangements may prevent the pooling of data. Second, even if pooling of data were possible, communication bottlenecks at a central control center may prove a major obstacle for centralized multi-utility OPF. For real-time applications in particular, a distributed implementation using our approach will therefore be much more attractive than a central implementation.

We note that most traditional approaches to parallelizing OPF involve a master process assigning tasks to slave processes. Telemetered data is passed from the master process to the assigned slave process, making communication overhead heavy
for distributed implementation. For this reason, the traditional approaches are unlikely to be practical for on-line applications.

A distributed implementation has a third important advantage over a centralized implementation (whether serial or parallel). A communication failure between regions can be handled more gracefully by a group of decentralized processors than in a centralized implementation, because each regional processor can attend to the local needs of its region, perhaps with increased generation costs, even while inter-regional communication is interrupted.

In this paper, three decomposition algorithms based on the augmented Lagrangian method were introduced to implement the distributed OPF, namely Algorithm-APP, Algorithm-PCPM, and Algorithm-ADM.

Our future study is first to explore ways to improve convergence of the algorithms. The critical issue is how many iterations are necessary before the Lagrange multipliers and border variables converge. The quadratic term introduced in (6) and approximated in the algorithms is designed to tie the copies of the border variables together more strongly than a linear constraint alone. The reason is that the quadratic term strongly convexifies the problem; the effect is to enhance the rate of convergence. An important challenge is to theoretically analyze the improvement in convergence speed due to the quadratic term.

Clearly, careful choice of regions will also enhance the convergence of the algorithm. Since the inter-regional communication requirements will be relatively small under essentially any choice of regional decomposition, the main goal in choosing the regional decomposition will be to enhance convergence.

Finally, incorporation of contingency constraints will also be studied. We will investigate ways to represent security constraints and to solve the SCOPF's efficiently and reliably in a distributed manner.

ACKNOWLEDGMENT

We would like to thank Dr. Craig Chase and Mr. Yufeng Luo of the Department of Electrical and Computer Engineering at the University of Texas at Austin, and Dr. Jong-Bae Park of Anyang University, for comments and help during the course of this work. We would also like to thank the referees.

REFERENCES

[1] Task Force of the Computer and Analytical Methods Subcommittee of the Power Systems Engineering Committee, "Parallel processing in power systems computation," IEEE Transactions on Power Systems, vol. 7, no. 2, pp. 629–637, May 1992.
[2] G. B. Dantzig and P. Wolfe, "Decomposition principle for linear programs," Operations Research, vol. 8, January 1960.
[3] L. S. Lasdon, Optimization Theory for Large Systems. New York: The Macmillan Company, 1970.
[4] N. I. Deeb and S. M. Shahidehpour, "Decomposition approach for minimizing real power losses in power systems," IEE Proceedings, pt. C, vol. 138, no. 1, pp. 27–38, January 1991.
[5] N. I. Deeb and S. M. Shahidehpour, "Linear reactive power optimization in a large power network using the decomposition approach," IEEE Transactions on Power Systems, vol. 5, no. 2, pp. 428–438, May 1990.
[6] R. P. Sundarraj, S. Kingsley Gnanendran, and J. K. Ho, "Distributed price-directive decomposition applications in power systems," in IEEE Power Engineering Society 1994 Summer Meeting, San Francisco, CA, July 15–19, 1994, Paper 94 SM 596-7 PWRS.
[7] B. H. Kim and R. Baldick, "Coarse-grained distributed optimal power flow," IEEE Transactions on Power Systems, vol. 12, no. 2, pp. 932–939, August 1997.
[8] G. Cohen, "Optimization by decomposition and coordination: A unified approach," IEEE Transactions on Automatic Control, vol. AC-23, no. 2, pp. 222–232, April 1978.
[9] G. Cohen, "Auxiliary problem principle and decomposition of optimization problems," Journal of Optimization Theory and Applications, vol. 32, no. 3, pp. 277–305, November 1980.
[10] G. Cohen and D. L. Zhu, "Decomposition coordination methods in large scale optimization problems," Advances in Large Scale Systems, vol. 1, pp. 203–266, 1984.
[11] G. Chen, "Proximal and Decomposition Method in Convex Programming," Ph.D. thesis, University of Maryland, Baltimore, 1993.
[12] J. Eckstein, "Parallel alternating direction multiplier decomposition of convex programs," Journal of Optimization Theory and Applications, vol. 80, no. 1, pp. 39–63, January 1994.
[13] D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite-element approximation," Comp. Math. Appl., vol. 2, pp. 17–40, 1976.
[14] E. V. Tamminen, "Sufficient conditions for the existence of multipliers and Lagrangian duality in abstract optimization problems," Journal of Optimization Theory and Applications, vol. 82, no. 1, pp. 93–104, July 1994.
[15] P. Tseng, "Applications of a splitting algorithm to decomposition in convex programming and variational inequalities," SIAM Journal on Control and Optimization, vol. 29, no. 1, pp. 119–138, January 1991.
[16] J. Eckstein and D. P. Bertsekas, "On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators," Mathematical Programming, vol. 55, no. 3, pp. 293–318, 1992.
[17] R. T. Rockafellar, Convex Analysis. Princeton, NJ: Princeton University Press, 1970.
[18] A. Brooke, D. Kendrick, and A. Meeraus, GAMS User's Guide. Redwood City, CA: The Scientific Press, 1990.
[19] Y.-C. Wu, A. S. Debs, and R. E. Marsten, "A direct nonlinear predictor–corrector primal-dual interior point algorithm for optimal power flows," IEEE Transactions on Power Systems, vol. 9, no. 2, pp. 876–883, May 1994.
[20] A. J. Wood and B. F. Wollenberg, Power Generation, Operation, and Control, 2nd ed. New York: Wiley, 1996.
[21] M. L. Baughman et al., "Electric Utility Resource Planning and Production Costing Projects: Final Report," Center for Energy Studies, The University of Texas at Austin, 1993.

Balho H. Kim received his B.S.E.E. from Seoul National University, and his M.S. and Ph.D. from the University of Texas at Austin. He worked for Korea Electric Power Corporation from 1984 to 1990. His research fields are power system operation, pricing, and DSM. He is currently an assistant professor in the Department of Electrical Engineering at Hong-Ik University in Seoul, Korea.

Ross Baldick received his B.Sc. and B.E. from the University of Sydney, Australia, and his M.S. and Ph.D. from the University of California, Berkeley. He is currently an associate professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin.
