On Implementing 2D Rectangular Assignment Algorithms
DAVID F. CROUSE, Member, IEEE
Naval Research Laboratory
Washington, DC, USA

This paper reviews research into solving the two-dimensional (2D) rectangular assignment problem and combines the best methods to implement a k-best 2D rectangular assignment algorithm with bounded runtime. This paper condenses numerous results, as an understanding of the "best" algorithm, a strong polynomial-time algorithm with a low polynomial order (a shortest augmenting path approach), would require assimilating information from many separate papers, each making a small contribution. 2D rectangular assignment Matlab code is provided.

Manuscript received December 21, 2014; revised August 28, 2015, December 30, 2015; released for publication March 14, 2016.

DOI. No. 10.1109/TAES.2016.140952.

Refereeing of this contribution was handled by S. Maskell.

This research is supported by the Office of Naval Research through the Naval Research Laboratory (NRL) Base Program.

Author's address: Naval Research Laboratory, Code 534, 4555 Overlook Ave., SW, Washington, DC 20375-5320. E-mail: (david.crouse@nrl.navy.mil).

0018-9251/16/$26.00 © 2016 IEEE

INTRODUCTION

The two-dimensional (2D) assignment problem, also known as the linear sum assignment problem and the bipartite matching problem, arises in many contexts such as scheduling, handwriting recognition, and multitarget tracking, as discussed in [12]. Strong and weak polynomial time algorithms exist for solving the 2D assignment problem, unlike assignment problems involving more than two indices, such as the multiframe/S-dimensional assignment problem, which are NP complete [40, ch. 15.7].1 The execution time of strong polynomial algorithms scales polynomially with the size of the problem; the execution time of weak polynomial algorithms scales polynomially with the size of the problem but also depends on values within the problem, in some cases allowing for very slow worst case execution time depending on the particular values chosen. This paper considers the task of obtaining the best (lowest cost) and the k-best solutions to the 2D rectangular assignment problem.2 Only strong polynomial time algorithms are given serious consideration, as they are best suited for critical applications that cannot tolerate rare, very slow run times given certain inputs.

Given an N_R × N_C matrix C of costs (which might be positive, negative, or zero) with N_C ≥ N_R, the 2D rectangular assignment problem consists of choosing one element in each row and at most one element in each column such that the sum of the chosen elements is minimized or maximized. For example, a hotel might want to assign rooms to clients based upon the price that the clients have bid to stay in each room. If there are more rooms than clients, then the clients are the rows and some rooms will remain unassigned; if there are more clients than rooms, then the rooms are the rows and some clients will not be able to stay in the hotel.

Expressed mathematically, the 2D rectangular assignment problem for minimization is

X^* = \arg\min_{x} \sum_{i=1}^{N_R} \sum_{j=1}^{N_C} c_{i,j} x_{i,j}    (1)

subject to

\sum_{j=1}^{N_C} x_{i,j} = 1 \quad \forall i    (Every row is assigned to a column.)    (2)

\sum_{i=1}^{N_R} x_{i,j} \le 1 \quad \forall j    (Not every column is assigned to a row.)    (3)

x_{i,j} \in \{0, 1\} \quad \forall x_{i,j}    (Equivalent to x_{i,j} \ge 0 \; \forall x_{i,j}.)    (4)

1 The relationship between the complexity classes P and NP is a major unsolved problem in theoretical computer science for which a million dollar prize is offered [17]. Many believe that P ≠ NP, though no proof exists to date.

2 This paper is an extension of the second half of the conference publication [22].
IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 52, NO. 4 AUGUST 2016 1679
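The rectangular problem in (1)-(4) can be tried out directly. The sketch below uses SciPy's linear_sum_assignment (which, per its documentation, implements a rectangular shortest augmenting path method) rather than the paper's Matlab code, on a small example in which one column goes unassigned:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 2x3 rectangular cost matrix: N_R = 2 rows, N_C = 3 columns.
C = np.array([[4., 1., 3.],
              [2., 0., 5.]])

# Every row is assigned to exactly one column; one column stays unassigned,
# satisfying constraints (2) and (3).
row_ind, col_ind = linear_sum_assignment(C)
total = C[row_ind, col_ind].sum()
print(total)  # minimum total cost: 3.0
```

Here two distinct assignments attain the minimum cost of 3; the solver returns one of them.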
where min is replaced by max if one wishes to maximize the cost function, c_{i,j} is the element in row i and column j of the cost matrix C, and the matrix X is the set of all of the x_{i,j}. If x_{i,j} = 1, then the item in row i is assigned to the item in column j. Implicitly, the cost of not assigning a column to a row is zero.

In (4), it is indicated that the binary constraint on the x_{i,j} terms can be replaced by a nonnegativity constraint. This substitution is acceptable, as it has been proven that such a substitution does not change the optimal value of the 2D optimization problem [11, ch. 7.3, 7.8]. The optimal X satisfying all the constraints will still be such that all elements are binary. The substitution of inequality constraints for integer constraints is possible on families of optimization problems that are considered unimodular [7, ch. 5.5.1]. This substitution turns the 2D assignment problem into a linear programming problem.

This paper focuses on solving the problem in (1) when the cost of the globally optimal solution is finite. The solution is derived for the minimization problem where all of the costs in the matrix are positive, because any minimization problem with finite negative costs can be transformed to have all positive costs, and any maximization problem can be transformed into an equivalent minimization problem. Specifically, if one wishes to perform minimization on a matrix C̃ (where the element in the ith row and jth column is c̃_{i,j}) that might have negative elements, one can transform the matrix into a usable cost function C as

C = C̃ - \min_{i,j} c̃_{i,j}.    (5)

The requirement that the globally optimal solution have a finite cost ensures that the term \min_{i,j} c̃_{i,j} > -∞. However, the transformation in (5) does not preclude certain c̃_{i,j} values being set to ∞ to forbid certain assignments. Detecting whether a problem with positive, infinite costs is feasible (whether any finite-cost solution exists) will be subsequently discussed and is essential to implementing an efficient algorithm for finding the k-best assignments. Similarly, the problem of maximizing the cost of assignments on a matrix C̃ can be transformed into a problem of minimizing the cost of assignments with the matrix

C = -C̃ + \max_{i,j} c̃_{i,j},    (6)

under the assumption that none of the elements of C̃ is ∞, though elements of C̃ are allowed to be -∞ to forbid certain assignments.

Section II describes how the 2D rectangular assignment problem can be solved in polynomial time with an upper bound on the total number of instructions necessary to run to completion, and how such an algorithm can be implemented to quickly determine whether or not a cost matrix presents a feasible solution. Simulation examples demonstrating the runtime of the algorithm are also given. Section III then describes how a k-best 2D assignment algorithm can be implemented using the 2D rectangular assignment algorithm of Section II, where simulation examples are also presented. The results are summarized in Section IV. To facilitate the understanding and use of the algorithms described here, the Matlab code implementing the 2D rectangular shortest augmenting path algorithm is given in the Appendix.

II. A 2D RECTANGULAR ASSIGNMENT ALGORITHM WITH FEASIBILITY DETECTION

A. Problem Formulation and Background

One of the most frequently used techniques for solving the linear sum assignment problem is the auction algorithm, described in its basic form in [4], [11, ch. 7.8]. The basic form of the auction algorithm assumes that N_R = N_C, that is, that all items represented by rows must be assigned to all items represented by columns, which limits the scope of problems that can be solved by the algorithm. Generalizations of the basic auction algorithm are discussed in [5, 8, 9, 42]. However, the auction algorithm is not always the best solution. All formulations of the auction algorithm are weakly polynomial time algorithms. That means that the worst case computational complexity of the algorithms depends not only on the size of the problem (in this case on N_R and N_C), but also on the relative values of the elements of C. Given an appropriately degenerate C matrix, if one wants to be guaranteed the globally optimal solution, then an upper bound on the execution time of the algorithm can become arbitrarily long. Versions of the auction algorithm utilizing ε-scaling have the lowest theoretical bound, which does not always translate into fast execution times in practice [6, ch. 7.1.4], [10, ch. 5.4], as demonstrated in Subsection II-D.3

On the other hand, a number of other 2D assignment algorithms exist. Many of these have strong polynomial complexity; their worst case execution time scales polynomially, dependent only on the dimensions N_R and N_C and not on the actual values of the elements of C. The first such algorithm is often referred to as the "Hungarian algorithm"4 and is described in [14, ch. 4.2], among many other places. When considering a square cost matrix, N_R = N_C = N, the Hungarian algorithm has a complexity of O(N^4). Many of the most efficient 2D assignment algorithms tend to be variants of the Hungarian algorithm. For example, the algorithm of Jonker and Volgenant [32], which unbeknownst to many can be considered a particularly efficient variant of the Hungarian algorithm

3 It is not unusual for an algorithm with a low worst case bound to have poor average performance. For example, when considering linear programming, the popular simplex algorithm has an exponential worst case complexity [11, ch. 3.7], whereas the ellipsoid method is weakly polynomial in complexity [11, ch. 3.7]. However, the simplex method is generally much faster than the ellipsoid method [11, ch. 3.7].

4 The algorithm was first named the "Hungarian" algorithm by Kuhn [34], who based the approach on work done by Jenő Egerváry that was published in Hungarian.
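The transformations in (5) and (6) can be checked numerically. The following sketch (Python with SciPy, used for illustration rather than the paper's Matlab code) turns a maximization problem into a minimization problem via (6) and recovers the maximizing assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Utilities to be maximized; per (6), none of the entries is +inf.
C_tilde = np.array([[10., 3., 7.],
                    [4.,  9., 2.]])

# Transformation (6): minimize C = -C_tilde + max_{i,j} c_tilde_{i,j}.
C = -C_tilde + C_tilde.max()
row_ind, col_ind = linear_sum_assignment(C)
best = C_tilde[row_ind, col_ind].sum()
print(best)  # maximized utility: 19.0 (10 + 9)
```

Adding the constant max_{i,j} c̃_{i,j} makes all costs nonnegative without changing which assignment is optimal, since every feasible assignment picks exactly N_R elements.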
[14, ch. 4.4], has a complexity of O(N^3) [32]. A variant of the Jonker-Volgenant algorithm that has been generalized to rectangular cost matrices is often called the JVC algorithm, with the C standing for Castañon, who provided a generalized implementation of the algorithm to work with rectangular matrices and replaced the original initialization step with a few iterations of the auction algorithm, which is faster [27].5

A number of studies have been conducted comparing algorithms for 2D assignment in numerous applications, such as multiple target tracking. In [41], three 2D assignment algorithms are compared considering their performance in multiframe optimization, namely, the auction algorithm, the RELAX II algorithm, and the generalized signature method, with the auction algorithm performing the best. In [27], the JVC algorithm is compared with three variants of the Munkres algorithm [38],6 with the JVC algorithm performing the best. In [15], the JVC algorithm is compared with variants of the auction algorithm, with a scaled forward-reverse auction algorithm performing the best. Other studies have concluded that the JVC algorithm is often a better alternative to the auction algorithm. In [33], the Munkres, JVC, deepest hole,7 and auction algorithms are compared based on a measurement assignment problem for multiple target tracking, where the JVC algorithm is found to be the best overall, with the auction algorithm being faster on sparse problems. In the assignment problem, "sparse" means that many elements of C are not finite (certain rows cannot be assigned to certain columns).

However, it is noted that the auction algorithm in the simulations in [33] does not always produce optimal results. Such suboptimal solutions can arise when the parameter ε in the auction algorithm is not small enough. The auction algorithm is a type of ε-relaxation dual optimization technique [7, ch. 6.3.4]. When considering N_R = N_C = N, the accuracy of the cost function is within Nε of the optimal value in both the forward and reverse versions of the algorithm [9]. Though papers on the auction algorithm generally consider setting ε in view of integer values of c_{i,j}, nothing in the proof [4] of that accuracy bound requires the costs to be integers. Thus, to ensure convergence to an optimal solution, Nε should be less than the minimum nonzero difference between all pairs of elements in C. However, as ε decreases, the worst case computational complexity of the auction algorithm increases [4].

In [36], it is noted that the JVC algorithm had been implemented in a subsystem used in a (at that time) next generation helicopter for the U.S. Army. The JVC algorithm is compared with an unpublished, proprietary, heuristic algorithm called "competition" and is shown to be faster. However, the competition algorithm is shown to have fewer assignment errors. Note, however, that Jonker and Volgenant's algorithm [32] is guaranteed to converge to the globally optimal solution at each time-step. The source of the suboptimal convergence of the JVC algorithm used in [36] probably comes from the fact that the auction algorithm was used in an initialization step. The auction algorithm does not strictly guarantee complementary slackness (which shall subsequently be defined), as the Jonker-Volgenant algorithm requires, but only does so within a factor of ε. Thus, the accelerated initialization provided with the code in [27], which uses a fixed, heuristic value of ε for all problems, is probably the cause of the suboptimal results. Such problems will be avoided in the 2D assignment algorithm presented in this section.

In [35], the JVC algorithm, the auction algorithm, the Munkres algorithm, and a suboptimal greedy approach are compared in a tracking scenario. The Munkres algorithm is found to be significantly slower than the other methods, agreeing with the study done in [44] that found the auction algorithm to be faster than the Munkres algorithm. The JVC algorithm is found to be fast enough to negate any speed benefit from using a suboptimal greedy technique. In [28], it is also concluded that the speed of the JVC algorithm negated the need for greedy approximations.

In [35], the JVC algorithm is shown to be faster than the auction algorithm in all instances when implemented in C, and on dense problems (mostly finite elements in C) when implemented in Matlab. At first, this seems to contradict the results of [33], which deemed the auction algorithm superior on sparse problems. However, the focus of [33] was on sparse problems for target tracking applications, which at the time were considered difficult, because one did not always include missed detection hypotheses in the hypothesis matrix C. In such an instance, if the only thing that two targets could be assigned to was a single, common measurement, no feasible assignment would be possible and 2D assignment algorithms would fail.

Eight different assignment algorithms are compared with N_R = N_C in [14, ch. 4.10]. Variants of the Jonker-Volgenant algorithm are shown to perform the best on the majority of dense problems, and are competitive on sparse problems (when many elements of C are not finite, representing forbidden assignments). However, the Jonker-Volgenant algorithm variants are beaten on the most difficult problem by the cost scaling implementation [30] of the push-relabel algorithm [18, ch. 26.4], which performs poorly on one of the sparse problems. The cost scaling algorithm of [30] is an ε-scaling technique like that used in the auction algorithm. This paper avoids such algorithms, as the choice of ε is

5 In other words, the shortest augmenting path algorithm of Jonker and Volgenant was kick-started by a form of the auction algorithm. Jonker and Volgenant's algorithm does not actually have to be initialized, though that can speed it up.

6 The Munkres algorithm is an old O(N^4) version of the Hungarian algorithm.

7 The deepest hole algorithm is suboptimal. Given the speed of optimal 2D assignment algorithms, there is seldom need to use a suboptimal approach now.
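The rule of thumb above — that Nε must stay below the minimum nonzero difference between cost pairs to guarantee optimality — can be expressed as a small helper. The function and its name are illustrative assumptions, not code from the paper:

```python
import numpy as np

def epsilon_bound(C):
    """Strict upper bound on epsilon for auction-algorithm optimality:
    N * epsilon must be less than the minimum nonzero difference
    between any two finite elements of C, where N is the largest
    dimension of C."""
    vals = np.asarray(C, dtype=float)
    vals = vals[np.isfinite(vals)].ravel()
    diffs = np.abs(vals[:, None] - vals[None, :])
    d_min = diffs[diffs > 0].min()
    return d_min / max(np.shape(C))

print(epsilon_bound([[1., 2.], [4., 8.]]))  # 0.5: choose epsilon < 0.5
```

As the text notes, shrinking ε this way buys optimality at the price of a larger worst case iteration count.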
problem dependent, and an algorithm that provides the globally optimal solutions given noninteger costs in C without "tweaking" is desired. However, it has been noted that good implementations of the auction algorithm tend to be relatively insensitive to the range of costs in C [48]. In writing this paper, it was empirically observed that large magic squares8 tend to make particularly bad cost matrices, causing poor implementations of the auction algorithm to grind to a halt. Magic squares can be generated in Matlab using the magic command.

In general, the JVC algorithm has come to be considered good at solving assignment problems under most conditions. The only real area of contention is with sparse problems and with extremely large problems, where some authors have considered adaptively choosing the assignment technique (JVC or auction) based upon the sparsity of the problem [43]. As sparse problems are naturally faster than dense problems, such adaptive algorithmic switching will not be considered in this paper, with the focus being placed on the worst case (dense) scenario. When considering very large problems, the algorithm with the lowest strong polynomial computational complexity on sparse problems is the primal simplex method in [1], which makes use of dynamic trees [45] and Fibonacci heaps [18, ch. 20] to speed up the implementation. However, the algorithm can be difficult to implement, and its polynomial complexity on large, dense problems is no better than the Jonker-Volgenant algorithm.

The following subsection develops a modified version of the Jonker-Volgenant algorithm that can handle the case where N_R ≤ N_C, and that can detect when the assignment problem is infeasible, that is, when it is not possible to assign every row to a column keeping the cost function finite. Though an initialization stage could potentially speed up the algorithm, one was omitted both for brevity and because it is not necessary. The omission of an initialization stage based on the auction algorithm avoids many of the pitfalls of other implementations and guarantees convergence to a globally optimal solution. Despite the omission of an initialization step, the algorithm still executes quickly on the simulations in this paper in Subsection II-C. Additionally, omitting the initialization step allows for worst case execution times to be obtained, letting one determine whether the algorithm can be guaranteed to run in real-time on problems of certain sizes. The algorithm in the following subsection was designed with its use in a larger approach that obtains not just the best hypothesis, but the k-best hypotheses. The generalization to the k-best hypotheses is presented in Section III. The original version of the Jonker-Volgenant algorithm assumes that N_R = N_C. As such, some adaptations of the algorithm to the case where N_R ≤ N_C add "dummy rows" to make the cost matrix C square again. Though dummy rows prove necessary in Section III when finding the k-best associations, they are not necessary when finding only the most likely hypothesis.

B. A Modified Jonker-Volgenant Algorithm

The Jonker-Volgenant algorithm [32] uses a shortest augmenting path approach to perform dual-primal optimization to solve the assignment problem. Jonker and Volgenant's paper describes a solution to the assignment problem when N_R = N_C, meaning that the inequality constraint in (3) becomes an equality constraint. In other words, each target must be assigned to an event and each event can only be assigned to one target. Here, the more general case where N_R ≤ N_C is considered.

Let x be the vector obtained by stacking the columns of X. That is,

x = [x_{1,1}, x_{2,1}, ..., x_{N_R,1}, x_{1,2}, x_{2,2}, ..., x_{N_R,2}, x_{1,3}, ..., x_{N_R,N_C}]'.    (7)

Written in the more traditional vector notation used in the linear programming literature, the optimization problem can be stated as

x̂ = \arg\min_{x} c' x    (8)

subject to

A x = 1    (9)

B x ≤ 1    (10)

x ≥ 0    (11)

where c is the vector obtained by stacking the columns of C in the same manner. The matrices A and B are such that (9) and (10), respectively, represent the constraints in (2) and (3). That is,

A = [I_{N_R×N_R}  I_{N_R×N_R}  ...  I_{N_R×N_R}]    (12)

B = [ 1_{1×N_R}  0_{1×N_R}  ...  0_{1×N_R}
      0_{1×N_R}  1_{1×N_R}  ...  0_{1×N_R}
        ...        ...      ...  0_{1×N_R}
      0_{1×N_R}  0_{1×N_R}  ...  1_{1×N_R} ]    (13)

where I_{N_R×N_R}, 1_{1×N_R}, and 0_{1×N_R} are, respectively, the identity matrix and matrices of ones and zeros, all having the dimensionalities given by their respective subscripts. The matrix A is N_R × N_R N_C dimensional and B is N_C × N_R N_C dimensional. The (unknown) optimal solution to the optimization problem in (8) will be designated x*. Using this notation, the basic concepts behind dual optimization, which underlie the Jonker-Volgenant algorithm, are discussed.

The aforementioned linear programming problem is known as the primal problem. However, handling equality and inequality constraints is difficult, so a dual problem

8 An n × n magic square is a matrix containing the positive integers from 1 through n^2 such that all row sums, column sums, and main diagonal sums are equal [50].
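The constraint matrices (12) and (13) have a compact expression as Kronecker products. The following sketch (Python/NumPy, an illustration rather than anything from the paper) builds them that way and confirms that Ax and Bx reproduce the row and column sums of X:

```python
import numpy as np

NR, NC = 2, 3
# A = [I I ... I] as in (12); B selects each column's entries, as in (13).
A = np.kron(np.ones((1, NC)), np.eye(NR))   # NR x (NR*NC)
B = np.kron(np.eye(NC), np.ones((1, NR)))   # NC x (NR*NC)

# A feasible assignment: rows 0 and 1 take columns 1 and 0; column 2 unused.
X = np.array([[0., 1., 0.],
              [1., 0., 0.]])
x = X.flatten(order='F')                    # stack columns, as in (7)

print(A @ x)  # row sums [1. 1.]: constraint (9) holds with equality
print(B @ x)  # column sums [1. 1. 0.]: constraint (10) holds
```

Stacking with order='F' matches the column-major ordering of (7), which is why the Kronecker structure lines up with the row- and column-sum constraints.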
will be formulated. The function

g(u, v) = \min_{x, x \ge 0} \left[ \sum_{i=1}^{N_R} \sum_{j=1}^{N_C} c_{i,j} x_{i,j} + \sum_{i=1}^{N_R} u_i \left(1 - \sum_{j=1}^{N_C} x_{i,j}\right) + \sum_{j=1}^{N_C} v_j \left(1 - \sum_{i=1}^{N_R} x_{i,j}\right) \right]    (14a)

= \min_{x, x \ge 0} c' x + u' (1 - A x) + v' (1 - B x)    (14b)

= \min_{x, x \ge 0} (c - A' u - B' v)' x + u' 1 + v' 1    (14c)

is known as the dual cost function. The u and v variables are known as Lagrange multipliers or dual variables, and their use in eliminating constraints in optimization problems is known as Lagrangian relaxation [7, ch. 3]. There is one Lagrange multiplier variable per constraint that has been eliminated, except for the nonnegativity constraint on x, which will not be relaxed. Due to the extra degrees of freedom introduced by the dual variables, under the constraint that v ≤ 0, it can be shown that g(u, v) ≤ c' x* [11, ch. 4.1]. In other words, the dual cost function forms a lower bound on the value of the primal cost function at the globally optimal solution. The dual optimization problem seeks to find the values of u and v that maximize this lower bound.

To formulate the dual optimization problem, note that

(c - A' u - B' v)' x = \sum_{i=1}^{N_R} \sum_{j=1}^{N_C} (c_{i,j} - u_i - v_j) x_{i,j}.    (15)

Given that the binary constraint on x has been relaxed to a nonnegativity constraint, if c_{i,j} - u_i - v_j < 0, then x_{i,j} can be chosen arbitrarily large so that g(u, v) is arbitrarily small. Since the dual optimization problem concerns maximizing the dual cost function, it makes sense to only consider solutions greater than -∞ by introducing the constraint that c_{i,j} - u_i - v_j ≥ 0, or, expressed in vector form, that c - A' u - B' v ≥ 0. This constraint is known as a complementary slackness condition [11, ch. 4.3]. However, if c_{i,j} - u_i - v_j > 0, then the value of x_{i,j} that minimizes (15) is 0, implying that the globally minimum value of (15) is always zero if c_{i,j} - u_i - v_j ≥ 0. Put differently, given the complementary slackness condition, the following equation is true:

\min_{x: x \ge 0} (c - A' u - B' v)' x = 0.    (16)

Substituting this into the dual cost function of (14c), the dual optimization problem is

\{u^*, v^*\} = \arg\max_{u, v} u' 1 + v' 1    (17)

subject to

v ≤ 0    (18)

c - A' u - B' v ≥ 0    (Complementary slackness)    (19)

Because the binary constraint on x in the primal problem was replaced by a vector nonnegativity (inequality) constraint, the primal problem is a linear programming problem. For linear programming problems, the strong duality theorem says that the duality gap, that is, the expression c' x* - g(u*, v*), is zero [11, ch. 4.3]. Thus, solving the dual problem provides the value of c' x*, but does not directly provide the value of x*. Given (16), one can say that x_{i,j} = 0 if c_{i,j} - u_i - v_j > 0. However, this does not explicitly say which values of x should be one. It could be possible that multiple values of x, not all of which satisfy the constraints of the original problem, yield the same cost.

Consider, first, the case where N_R = N_C, meaning that the inequality constraint in (3) becomes an equality constraint. In this case, given the optimal dual solution, valid solutions for x are those satisfying the equality constraints. The satisfaction of the constraints means that the terms in (14b) involving dual variables disappear so that the dual and primal costs are equal, as expected. On the other hand, if N_R ≤ N_C, then by the strong duality theorem, it is known that the dual and primal cost functions should be equal at the globally optimal values u*, v*, and x*. To eliminate the terms in (14b) involving v, it is necessary that

v' (1 - B x) = 0.    (20)

Expressed in scalar form, this says that

v_j \left(1 - \sum_{i=1}^{N_R} x_{i,j}\right) = 0 \quad \forall j.    (21)

This requirement is also known as another complementary slackness condition [11, ch. 4.3], [7, ch. 3.3]. The complementary slackness theorem [11, ch. 4.3] springs logically from this. The complementary slackness theorem says that given vectors u, v, and x such that (16) and (20) hold, then u, v, and x are optimal solutions to both the primal as well as the dual optimization problems.

The complementary slackness theorem plays an important role in algorithms such as the Jonker-Volgenant algorithm that use shortest augmenting path techniques for solving the assignment problem, as well as in more general augmenting path methods, such as the Ford-Fulkerson and Edmonds-Karp algorithms [18, ch. 26.2], for use in general network optimization problems. Such algorithms solve the assignment problem by sequentially solving a series of assignment problems with N_R = 1, N_R = 2, et cetera. The complementary slackness theorem is used to verify that the globally optimal solution to each subproblem is obtained. Most presentations of shortest augmenting path algorithms, such as in [26, 46] and [14, ch. 4.4], relate them to minimum cost network flow problems from which the shortest path algorithm can be derived. Others, such as [24], simply provide the algorithm and then show that the complementary slackness conditions hold after each step. Due to the complexity of the minimum cost network flow problem, a direct derivation along those lines will be avoided.
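The unimodularity and strong duality claims above can be observed numerically by handing the LP relaxation (8)-(11) to a general solver. The sketch below uses SciPy's linprog with the HiGHS method (an illustration; the paper itself does not use an LP solver): the relaxed optimum comes out binary, and the dual objective u'1 + v'1 matches the primal cost, so the duality gap is zero.

```python
import numpy as np
from scipy.optimize import linprog

C = np.array([[4., 1., 3.],
              [2., 0., 5.]])
NR, NC = C.shape
c = C.flatten(order='F')                    # stack columns, as in (7)
A = np.kron(np.ones((1, NC)), np.eye(NR))   # equality constraints (9)
B = np.kron(np.eye(NC), np.ones((1, NR)))   # inequality constraints (10)

res = linprog(c, A_ub=B, b_ub=np.ones(NC), A_eq=A, b_eq=np.ones(NR),
              bounds=(0, None), method='highs')

u = res.eqlin.marginals                     # duals for (9)
v = res.ineqlin.marginals                   # duals for (10); v <= 0 as in (18)
print(res.fun)                              # primal cost c'x*
print(u.sum() + v.sum())                    # dual cost u'1 + v'1: the same
print(np.allclose(res.x, np.round(res.x)))  # relaxed solution is binary
```

That the LP vertex solution is integral is exactly the unimodularity argument of Section I; no rounding step is needed to recover a valid assignment.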
Fig. 1. (a) The bipartite graph corresponding to the regular and reduced cost matrices in (22) and (23). The partial assignment of two rows is marked with bold lines. Nodes for rows are on the left and columns are on the right. In the absence of rows t1 and t2, the partial assignment is globally optimal. (b) The minimum cost alternating path starting from node t2 when using the reduced costs of (23). Only forward arcs contribute to the cost, which is 21; all reverse arcs have reduced costs of 0. Other alternating paths starting at t2 are t2 → e3, which has a reduced cost of 23, and t2 → e1 → t3 → e2 → t4 → e4, which has a reduced cost of 28. The forward arcs form the new, larger assignment.

Shortest augmenting path algorithms for solving the 2D assignment problem can be decomposed into a few basic steps:

1) Initialize.
2) Find the shortest augmenting path.
3) Update the dual variables to assure complementary slackness.
4) Augment the previous solution with the shortest path.
5) If all rows have been assigned, then the problem is solved. Otherwise, go to step 2.

To understand such algorithms, the notion of an augmenting path must be defined. The concept comes from graph theory, and is easiest to explain with an example. Consider the following rectangular cost matrix:

C = [  ∞    ∞    ∞    ∞    3
       7    ∞   23    ∞    ∞
      17   24    ∞    ∞    ∞
       ∞    6   13   20    ∞ ]    (22)

The infinite entries represent forbidden assignments. Fig. 1(a) shows a graph representation of the structure of the cost matrix. The left-hand nodes in Fig. 1(a) represent the rows in the assignment matrix. The right-hand nodes represent the columns. A line is drawn between a node and a column if the cost of the association in C is finite. Illustrated in bold in the figure is a partial assignment: t3 → e1 and t4 → e2, meaning that the third and fourth rows are assigned to the first and second columns, respectively. In the Jonker-Volgenant algorithm, all partial assignments that are formed have the minimum cost for all of the rows included in the assignment, which need not be the same assignment present once all rows are assigned.

An alternating path is a path that starts at an unassigned row, in this case t1 or t2, and ends at an unassigned column. In between, the path must alternate between assigned rows and columns. In Fig. 1(a), there are four possible alternating paths. The first is t1 → e5, which illustrates that an alternating path can go directly between an unassigned row and column without visiting any assigned nodes. The second is t2 → e1 → t3 → e2 → t4 → e3, which begins at the unassigned node t2 and alternates back and forth between assigned rows and columns until reaching e3, which is unassigned. The third possible alternating path is the same as the second, except it ends at e4, which is unassigned. The fourth possible alternating path is t2 → e3.

A maximum cardinality bipartite match is the largest possible assignment of rows to columns such that no two rows are assigned to the same column and no two columns are assigned to the same row. It is considered regardless of the cost of the assignment. As discussed in [18, ch. 26.3] and [24], alternating paths play a role in determining the maximum bipartite match. In the rectangular assignment problem, the maximum bipartite match should be such that all rows are assigned. However, when finding the k-best hypotheses, it is possible that infeasible problems might be presented. Given any partial assignment, such as that shown in Fig. 1(a), a larger assignment can be achieved by finding any alternating path, and then augmenting with that path. Augmentation means that all parts of the path that go from rows to columns are the new assignments, any parts of the path that go from columns to rows are unassigned, and any assignments that do not overlap with the path remain unchanged. If no augmenting path can be found, then the maximum possible assignment of rows to columns has been found [24], [18, ch. 26.3]. If not all rows are assigned, then the assignment problem under consideration is infeasible.

Shortest augmenting path algorithms, such as the Jonker-Volgenant algorithm, solve the assignment problem by finding a series of minimum cost augmenting paths to iteratively increase the number of assigned objects. However, what is less intuitive is that dual variables must be maintained and the cost matrix must be modified using the dual variables after each assignment to ensure that a globally optimal solution is ultimately obtained. In the example at hand, for the given partial assignment illustrated in Fig. 1(a), the reduced cost matrix is

C̄ = [  ∞    ∞    ∞    ∞    3
       7    ∞   23    ∞    ∞
       0    7    ∞    ∞    ∞
       ∞    0    7   14    ∞ ]    (23)

How the costs in the matrix were adjusted using the dual variables u and v will be discussed shortly. One thing to note is that the zero entries in the matrix correspond to the chosen assignments. Using this reduced cost matrix, the
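The feasibility test described above — a problem is feasible exactly when a maximum cardinality matching covers every row — can be sketched with a simple augmenting path search (Kuhn's method). This Python sketch is an illustration rather than the paper's Matlab code, applied to the cost matrix (22):

```python
import math

INF = math.inf
# Cost matrix (22); infinite entries are forbidden assignments.
C = [[INF, INF, INF, INF, 3],
     [7,   INF, 23,  INF, INF],
     [17,  24,  INF, INF, INF],
     [INF, 6,   13,  20,  INF]]

def max_bipartite_match(C):
    """Maximum cardinality matching via augmenting paths (Kuhn's method).
    The assignment problem is feasible iff every row gets a column."""
    n_rows, n_cols = len(C), len(C[0])
    row4col = [-1] * n_cols          # row currently holding each column

    def try_augment(i, visited):
        # Try to place row i, recursively displacing rows along an
        # alternating path when their column is already taken.
        for j in range(n_cols):
            if C[i][j] != INF and j not in visited:
                visited.add(j)
                if row4col[j] == -1 or try_augment(row4col[j], visited):
                    row4col[j] = i
                    return True
        return False

    return sum(try_augment(i, set()) for i in range(n_rows))

size = max_bipartite_match(C)
print(size)  # 4 -> every row can be assigned: (22) is feasible
```

Deleting column e5 from (22), for instance, leaves t1 with no finite entry; the matching size then drops below the number of rows, flagging infeasibility without any cost computation.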
shortest augmenting path in Fig. 1(b) starting from node t2 is highlighted, and the new assignment, which is given by the parts of the path going from rows to columns, is illustrated.

In this example, it was decided that the shortest augmenting path starting at row t2 would be found. However, the shortest augmenting path among all possible augmenting paths is t1 → e5, which has a cost of 3. If one were to augment using that path, one would get new assignments of t3 → e1, t4 → e2, and t1 → e5. Because that path did not overlap with any already existing assignments, those assignments would remain unchanged. To solve the complete assignment problem, however, it does not matter whether a shortest augmenting path starting from t1 or t2 is found, because the algorithm provides the minimum cost assignment for the rows that have been chosen to be assigned [26], [14, ch. 4.4.2], as shall be elucidated when the dual update step is discussed. An important aspect of this realization is that if an augmenting path to add a given unassigned target to the partial assignment cannot be found, then the assignment problem is infeasible.9 The rapid identification of infeasible assignment problems is useful in efficiently implementing techniques that find the k-best assignments and is often overlooked in the literature.

The simplest way to show that augmenting with the shortest augmenting path algorithm produces an optimal assignment for the subproblem involving only the rows that one has chosen to add to the problem is to present the algorithm with its dual update step and show that the result satisfies the complementary slackness conditions for the dual problem that were previously mentioned. The procedure for finding the shortest augmenting path (given a partial assignment) that is commonly used in the assignment problem is the Dijkstra algorithm [25], [18, ch. 24.3], [11, ch. 7.9]. A particularly clear description of the Dijkstra algorithm is given in [11, ch. 7.9]. A modified version of the algorithm that properly utilizes and updates the dual variables for calculating the reduced costs is presented in [14, ch. 4.4], [32] and is given as steps 2-4 of a complete rectangular version of the 2D Jonker-Volgenant algorithm as follows.

1) Initialize. Initialize the NR × 1 vector u, the dual cost vector for the rows, and the NC × 1 vector v, the dual cost vector for the columns, to all zeros. The scalar value curRow will be the index of the current unassigned row that is to be assigned. Set curRow = 0 to select the first row to assign (indexation is assumed to start from 0). The NR × 1 vector col4row and the NC × 1 vector row4col will hold the values of the assigned indices. As nothing has been assigned yet, set all of their elements to –1. The set AC will be the set of all columns. Allocate NC × 1 space for a vector called path, which will specify which row is associated with which column in the minimum cost path.

2) Prepare for Augmentation. Set all elements of the NC × 1 vector shortestPathCosts equal to infinity. Set SR and SC equal to the empty sets. They will hold the row and column vertices, respectively, that have been reached by a shortest path emanating from row curRow. Set sink = –1, minVal = 0, and the current row to be assigned is set to i = curRow. The variable minVal will ultimately hold the cost of the shortest augmenting path that is found, and sink will ultimately be the index of the final column in the alternating path. The variable j will select the current column.

3) Find the Shortest Augmenting Path.
while sink = –1 do
    SR ← SR ∪ {i};
    for all j ∈ AC\SC such that minVal + C[i,j] – u[i] – v[j] < shortestPathCosts[j] do
        path[j] ← i;
        shortestPathCosts[j] ← minVal + C[i,j] – u[i] – v[j];
    end for
    j ← arg min shortestPathCosts[h] given h ∈ AC\SC; (If shortestPathCosts[j] = ∞, then infeasible!)
    SC ← SC ∪ {j};
    minVal ← shortestPathCosts[j];
    if row4col[j] = –1 then
        sink ← j;
    else
        i ← row4col[j];
    end if
end while

The ∪ operation means that the two sets are being merged. Thus, SR ← SR ∪ {i} means that row i is being added to the collection of rows that have been visited. A backslash means that the right-hand quantity is subtracted from the set. Vectors are indexed using brackets. The shortest augmenting path ends at column sink. The first row in the path is thus r = path[sink]. The next column is given by col4row[r]. The following row is then path[col4row[r]], and so on. The path is traced until row curRow, which began the path, is reached. After updating the dual variables, the assignments from the shortest augmenting path step must be saved for the next iteration.

4) Update the Dual Variables.
u[curRow] ← u[curRow] + minVal;
for all i ∈ SR\{curRow} do
    u[i] ← u[i] + minVal – shortestPathCosts[col4row[i]];
end for
for all j ∈ SC do
    v[j] ← v[j] – minVal + shortestPathCosts[j];
end for

9 When considering infeasible assignments as having infinite cost, an augmenting path can always be found, but it might have infinite cost. Since adding more rows (targets) to the problem can only increase the cost of the assignment problem, the total cost will always be infinite, so the algorithm can be stopped once it finds a single infeasible (infinite cost) path.

5) Augment the Previous Solution.
j ← sink;
do
    i ← path[j];
    row4col[j] ← i;
    temp ← col4row[i];
    col4row[i] ← j;
    j ← temp;
while i ≠ curRow

6) Loop.
curRow ← curRow + 1;
if curRow = NR then
    Exit.
else
    Go to step 2.
end if

By the nature of Dijkstra's shortest augmenting path algorithm, assuming that all entries in C are nonnegative, step 3 finds the minimum cost augmenting path for the reduced cost matrix C̄ such that c̄i,j = ci,j – ui – vj [11, ch. 7.9]. Equation (23) is the reduced cost matrix for (22) after the bottom two rows have been assigned. By the nature of augmenting path techniques for maximum bipartite matching, the association obtained by augmenting with the path that was found is feasible [18, ch. 26.3]. Additionally, it was proven in [14, ch. 4.4] that the resulting dual solution satisfies the complementary slackness condition in (19) and assures that ci,j – ui – vj ≥ 0. The satisfaction of the complementary slackness condition is why the reduced costs in (23) have zeros in the entries for the partial assignment. After updating the dual solution, zeros are placed in the locations of the new assignment by step 4. Note that the update also does not change the optimal solution to the subproblem (considering only the rows that have been assigned), meaning that the solution is optimal for the reduced primal. Thus, if NR = NC, once all rows have been assigned, the algorithm will have terminated with a solution that, based upon the duality theorem, is optimal. Another proof of the optimality of the solution is given in [26].

The assumption that NR = NC is important for assuring optimality of the solution, because it means that the complementary slackness constraint of (20), which pertains to the inequality constraints, need not be proven to have been fulfilled to assure optimality. What is of interest here, however, is the case where NR ≤ NC. Traditional approaches to the problem, such as that in [37] and [14, ch. 5.4.4], add NC – NR dummy rows to the cost matrix that gate with all events. If the dummy rows have costs that are significantly greater than maxi,j ci,j, then they will only be assigned to columns that would not have participated in the original assignment problem. However, it does not matter in which order the rows are added to the assignment problem. Thus, one could choose to add the dummy rows last. Consequently, if all of the dummy rows have zero cost and are added to the problem last, they cannot change any existing assignments. Thus, the dummy rows are not necessary, and the assignment algorithm will provide an optimal solution once all NR ≤ NC rows have been assigned. Consequently, one does not need to directly prove that the complementary slackness condition in (20) for the inequality constraints has been satisfied.

The fact that dummy rows are not necessary when NR < NC and one uses a shortest augmenting path algorithm has been previously considered. In [47], the idea of terminating after all rows have been assigned is mentioned. In [12], the rectangular assignment problem is considered in more detail, where it is also noted that the augmentation phase of the algorithm produces optimal partial assignments at each step. Related algorithms for more general network optimization problems, such as the push-relabel method [18, ch. 26.4], can also be stopped with optimal partial assignments. Auction algorithms for rectangular assignment problems without using dummy rows are presented in [8, 9].

C. Discussion of the Implementation

The Jonker-Volgenant algorithm [32] is typically implemented with an initialization step to accelerate the convergence rate. In its basic form, when considering the case where NR = NC = N, the initialization algorithm has a worst case complexity of O(N^3 R), where R is the range of the elements in C [14, ch. 4.4.4], [32]. However, it has been noted [14, ch. 4.4.4], [32] that a modified version of the initialization routine can be made to have an O(N^3) complexity. Castañón's modification to the algorithm [27] uses an initialization step that is similar to the auction algorithm with a fixed ε-scaling parameter, which means that one is not always guaranteed to obtain a globally optimal solution. The modified Jonker-Volgenant algorithm described in this paper does not use any initialization.

The median and worst case execution times of the implementation of the Jonker-Volgenant algorithm given in this paper are considered when the algorithm is run on random matrices where every element was chosen uniformly between 0 and 1. It does not matter whether the elements were chosen between 0 and 1 or between 0 and some other number, because unlike other assignment algorithms, simply multiplying the cost matrix by a positive constant does not change the computational complexity of the algorithm. Because fourth-generation programming languages, such as Matlab, are commonly used for prototyping algorithms, whereas third-generation languages, such as C and C++, are commonly used for more practical implementations that can be built into real systems,10 the computational speed of the algorithm is determined for two implementations in Matlab, two implementations in C, and one implementation in C++.

10 Though some systems might be implemented using field programmable gate arrays (FPGAs) that are programmed using hardware description languages, such as Verilog or the Very High Speed Integrated Circuits Hardware Description Language (VHDL), C is simpler to program, and a quick search online will reveal multiple programs that can convert C code into such hardware languages.
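Assembling steps 1-6 of Section II-B, the rectangular shortest augmenting path algorithm can be sketched as follows. This is an illustrative Python translation of the pseudocode, not the paper's Matlab/C/C++ implementations; variable names mirror the pseudocode, forbidden assignments are represented by infinite costs, and the matrix is assumed to have no more rows than columns.

```python
import math

def assign2d(C):
    """Minimize the cost of assigning each row of the NR x NC matrix C
    (NR <= NC) to a distinct column via successive shortest augmenting
    paths with dual-variable updates (steps 1-6 of the pseudocode).
    Returns (col4row, row4col, gain, u, v); -1 marks unassigned entries
    and gain is math.inf if the problem is infeasible."""
    nr, nc = len(C), len(C[0])
    u = [0.0] * nr                  # dual cost vector for the rows
    v = [0.0] * nc                  # dual cost vector for the columns
    col4row = [-1] * nr
    row4col = [-1] * nc
    for cur_row in range(nr):       # step 6 loops over the rows
        # Step 2: prepare for augmentation.
        shortest_path_costs = [math.inf] * nc
        path = [-1] * nc
        sr, sc = set(), set()       # visited row and column vertices
        sink, min_val, i = -1, 0.0, cur_row
        # Step 3: Dijkstra-style search for the shortest augmenting path.
        while sink == -1:
            sr.add(i)
            for j in range(nc):
                if j in sc:
                    continue
                r = min_val + C[i][j] - u[i] - v[j]
                if r < shortest_path_costs[j]:
                    shortest_path_costs[j] = r
                    path[j] = i
            j = min((h for h in range(nc) if h not in sc),
                    key=lambda h: shortest_path_costs[h])
            if math.isinf(shortest_path_costs[j]):
                return col4row, row4col, math.inf, u, v   # infeasible!
            sc.add(j)
            min_val = shortest_path_costs[j]
            if row4col[j] == -1:
                sink = j
            else:
                i = row4col[j]
        # Step 4: update the dual variables.
        u[cur_row] += min_val
        for i in sr:
            if i != cur_row:
                u[i] += min_val - shortest_path_costs[col4row[i]]
        for j in sc:
            v[j] += shortest_path_costs[j] - min_val
        # Step 5: augment the previous solution along the path.
        j = sink
        while True:
            i = path[j]
            row4col[j] = i
            col4row[i], j = j, col4row[i]
            if i == cur_row:
                break
    gain = sum(C[i][col4row[i]] for i in range(nr))
    return col4row, row4col, gain, u, v
```

Run on the example matrix (22), the sketch terminates after NR = 4 augmentations with the assignment t1 → e5, t2 → e1, t3 → e2, t4 → e3 and a total cost of 47.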
TABLE I
The Median and Worst Observed Execution Times of the Modified Jonker-Volgenant Algorithm without Initialization when Run on Random Matrices of the Sizes Indicated, Implemented in Matlab, C, and C++. In Matlab and C, the Algorithm was Implemented Either so that the Innermost Loop Scanned the Data Across Rows or Across Columns. The C++ Implementation, Which Forms the Basis of the First Step of the k-Best Rectangular 2D Assignment Algorithm of Section III, Scanned Only Across Rows. The Execution Times are Taken from 1000 Monte Carlo Runs. Note that the Execution Times for the 500 × 1000 Problem are Always Less Than Those for the 500 × 500 Problem and that the Median Execution Times of the Row-Wise Algorithms are Always Less Than Those of the Corresponding Column-Wise Algorithms

                 Matlab                 Matlab                 C                      C                      C++
Problem Size     Median    Worst Case   Median    Worst Case   Median     Worst Case  Median     Worst Case  Median     Worst Case
100 × 100        22.1 ms   63.4 ms      22.4 ms   68.4 ms      0.518 ms   1.34 ms     0.553 ms   1.28 ms     0.723 ms   2.20 ms
200 × 200        65.9 ms   139 ms       71.5 ms   145 ms       2.59 ms    5.07 ms     3.08 ms    9.18 ms     3.09 ms    6.12 ms
500 × 500        376 ms    697 ms       530 ms    892 ms       20.9 ms    54.2 ms     46.4 ms    163 ms      26.4 ms    62.3 ms
500 × 1000       165 ms    288 ms       137 ms    206 ms       14.7 ms    26.2 ms     23.0 ms    57.5 ms     18.1 ms    32.2 ms
3000 × 3000      20.6 s    25.4 s       49.2 s    63.3 s       1.90 s     3.56 s      9.81 s     1351 s      2.16 s     3.73 s
The C implementations of the algorithm mirror the Matlab implementations, except care is taken to allocate all memory outside of the loops. The reason two implementations are present each in C and in Matlab is because modern processors, such as the Intel Xeon E5645 [19], on which the simulations are run, contain sophisticated prefetch algorithms that try to fill the processor's cache with data (prefetch data) that it anticipates the program will need. However, if the program requests data from widely-separated places in memory, then the prefetch algorithms will perform poorly, leading to a large number of cache misses and slower execution time. The implementations of the 2D assignment algorithm in Matlab and C thus differ in the order in which the rows or columns of the assignment matrix are scanned.

The assignment matrices given to the different algorithms are generated in Matlab. The implementations in C and C++ are called from Matlab. Matlab stores matrices in memory with column-major ordering.11 Consequently, the Jonker-Volgenant algorithm implemented such that the shortest path portion scans across rows rather than columns (unlike the implementation described in Section II-B) would be expected to be faster. The C++ implementation of the 2D rectangular algorithm is shared by the k-best 2D rectangular assignment algorithm of Section III and only scans across rows in the innermost loop of the algorithm.

One benefit of the simple implementation of the Jonker-Volgenant algorithm without initialization is that the worst case execution time can be estimated without running many Monte Carlo runs. Ignoring the influence of background processes running on a computer, the execution time of the algorithm varies only depending upon how long it takes to find the shortest augmenting path each time. Thus, by modifying the termination condition to force the algorithm to take the maximum number of loops, modifying the elements in the loops to force the if-statement to always be true, and adjusting the code to make sure that no invalid memory locations are accessed, one can estimate the worst case execution time of the algorithm. Whereas such worst case execution times were given in the conference work preceding this paper [22], they are omitted here, as a truly firm bound is very processor specific, requiring one to force as many false branch predictions12 and cache misses on modern processors as possible for the bound to be valid.

The algorithms are run on random assignment matrices of varying sizes, as shown in Table I. All of the algorithms modify copies of the input matrices using (5) and (6) to guarantee that the matrices are appropriate for the algorithm. Only minimization is performed in the simulations. One thousand Monte Carlo runs are performed on a computer made by the Xi Corporation running Windows 7 with two Intel Xeon E5645 processors and 12 gigabytes (GB) of random access memory (RAM) in Matlab 2013b. In order to speed up the simulations, the parallel processing toolbox is used to run Monte Carlo runs across 12 processor cores simultaneously. (The computer has 24 cores in total.)

As can be seen, even for hundreds of targets, the execution time of the algorithms implemented in C is on the order of milliseconds. The row-wise implementations of the algorithms, which allow for fewer cache misses, are faster than the column-wise implementations, with the

12 When an "if" statement arises in the code, modern processors might
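The effect of Matlab's column-major storage on scan order can be made concrete with a small index-arithmetic sketch (illustrative Python, not part of the paper's implementations): element (i, j) of an NR × NC column-major matrix lies at flat offset i + j·NR, so an innermost loop that varies the row index i within a fixed column touches consecutive memory locations (stride 1), whereas varying the column index j jumps NR elements per step.

```python
def column_major_offset(i, j, num_rows):
    """Flat memory offset of element (i, j) in a column-major matrix,
    the storage order used by Matlab (and Fortran)."""
    return i + j * num_rows

num_rows, num_cols = 4, 3
# Varying the row index i within one column gives stride-1
# (prefetch-friendly) accesses...
down_a_column = [column_major_offset(i, 0, num_rows) for i in range(num_rows)]
# ...while varying the column index j within one row gives
# stride-num_rows accesses that defeat the prefetcher.
along_a_row = [column_major_offset(0, j, num_rows) for j in range(num_cols)]
```

This is why the variants whose innermost loop scans across the rows of the column-major assignment matrix incur fewer cache misses than those that scan across the columns.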
TABLE II
The Median and Worst-Observed Execution Times of the ε-Scaled Forward Auction Algorithm, the Forward Auction Algorithm with a Fixed ε Guaranteeing Global Convergence, and the Modified Jonker-Volgenant Algorithm of this Paper Implemented in Matlab Solving the Maximization Problem over 1002 Monte Carlo Runs, whereby the First Two Runs were not Counted, because they were Generally Significantly Slower (Presumably, Matlab Compiled and Optimized the Code in Those Steps). The Problem Size is the Size of the Random Matrices (4 × 4, 8 × 8, ...). The Auction Algorithms were Designed for Use with Integer Costs, so the Cost Matrix C was Randomly Chosen to Provide 9 Digits of Precision. In All Instances, the Modified Jonker-Volgenant Algorithm had Better Mean and Worst Case Performance

Problem Size    Median    Worst Case    Median    Worst Case    Median    Worst Case
difference increasing as a function of the dimensionality of the cost matrix. The Matlab implementations, with Matlab being an interpreted language rather than a compiled programming language, are the slowest of all.

The execution time for the rectangular assignment problem is less than that of the square assignment problem with the same number of rows. An explanation for this is that the extra columns decreased the likelihood that two rows would contest the same column.

The 3000 × 3000 matrix example is chosen to demonstrate how the need for parallelization has changed with advances in hardware and algorithms over the years. In 1991, a parallelized shortest augmenting path algorithm that ran over 14 processors (and was implemented in such a manner that the run time depended on the range of values of the costs) took 811 s to run in the worst case on a random 3000 × 3000 matrix [2]. That is about 427 times slower than the median run time of this algorithm. However, for such large problems, the quality of the assignment algorithm is more important than the hardware on which the algorithm is run. For example, if one were to solve the 3000 × 3000 assignment problem by evaluating all combinations via brute force, one would need to consider 3000! ≈ 4 × 10^9130 different possible assignments, which is not computationally feasible on modern hardware.

In the event that the algorithm given here is not fast enough to solve a particularly massive optimization problem within a desired time interval, then modifications for parallelization considered in [2, 48] can be used. Additional references for parallelization techniques applied to shortest augmenting path algorithms are given in [14, ch. 4.11.3]. One of the main arguments for using the auction algorithm over the shortest augmenting path algorithms is its ability to be more easily parallelized. Given well-structured problems, a well-implemented, highly parallelized auction algorithm can be faster than a shortest augmenting path algorithm for assignment in the average case [51]. However, the modified Jonker-Volgenant algorithm used in this paper is good on generic, unstructured problems.

D. Comparison to Auction Algorithms

To better understand why this paper focuses on shortest augmenting path 2D assignment algorithms rather than using variants of the auction algorithm, which are significantly more common in the literature, this subsection looks at specific scenarios where the auction algorithm can perform poorly. Here, all of the auction algorithm variants are implemented in Matlab. The variants considered are:

1) the forward auction algorithm using ε-scaling described in [6, ch. 7.1] using the open-source implementation for square matrices given in [3]. Like most versions of the auction algorithm in the literature, this is only suited for integer-valued cost matrices C. The ε-scaling causes this variant of the auction algorithm to have a particularly low computational complexity. The default heuristic method of initially setting the parameter ε used in the code is

    ε = max_{i,j} c_{i,j} / (N + 1)                  (24)

if the cost matrix C is an N × N matrix.

2) the basic forward auction algorithm for square matrices with a fixed ε set to guarantee an optimal solution. This is described in [6, ch. 7.2], among other sources. An optimal solution is guaranteed by setting the ε as

    ε = Δ_min / (1.01 N)                             (25)

where Δ_min is the smallest positive nonzero difference between entries in C. The 1.01 term could be any value larger than 1 to guarantee that the algorithm converges to the globally optimal solution.

Table II shows the runtimes of the different algorithms when run on random integer cost matrices with up to 9 digits of precision.
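The fixed complementary slackness parameter of (25) can be sketched as follows (illustrative Python; the 1.01 default follows the text, while the function name and the assumption of at least two distinct finite entries are this sketch's own):

```python
import math

def fixed_epsilon(C, factor=1.01):
    """Epsilon per (25): the smallest positive nonzero difference
    between distinct finite entries of the N x N cost matrix C,
    divided by factor*N, where factor > 1 guarantees convergence
    to the globally optimal solution. Infinite (forbidden) entries
    are ignored; C must contain at least two distinct finite values."""
    n = len(C)
    vals = sorted({c for row in C for c in row if math.isfinite(c)})
    # The smallest positive difference between any two distinct entries
    # is attained by some pair of consecutive sorted distinct values.
    delta_min = min(b - a for a, b in zip(vals, vals[1:]))
    return delta_min / (factor * n)
```

For example, for the 2 × 2 matrix [[1, 3], [6, 10]], the smallest positive difference between entries is 2, so the function returns 2/(1.01 · 2) ≈ 0.990.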
TABLE III
The Execution Times of the Auction Algorithms and the Modified Jonker-Volgenant Algorithm when Run in Matlab 2015b on Magic Matrices or Matrices Full of Ones of the Indicated Dimensions (32 × 32, 64 × 64, ...). The Forward Auction Algorithm Performs Significantly Worse than with Random Matrices in Table II as the Size Increases. It Can be Seen that the Values of the Matrices have an Effect on Runtime. This Changes with Sparsity, as Table IV Shows

Problem        ε-Scaled     Forward      Modified
Size           Auction      Auction      Jonker-Volgenant
32 (Magic)     11.3 ms      235 ms       1.87 ms
32 (Ones)      4.49 ms      8.17 ms      2.62 ms
64 (Magic)     33.8 ms      4.64 s       6.61 ms
64 (Ones)      9.04 ms      34.3 ms      10.0 ms
128 (Magic)    119 ms       93.6 s       31.8 ms
128 (Ones)     115 ms       147 ms       46.6 ms

TABLE IV
The Execution Times of the Auction Algorithms and the Modified Jonker-Volgenant Algorithm when Run in Matlab 2015b on Magic Matrices of the Indicated Dimensions where all Odd Numbered Entries were Marked as Impermissible Assignments. The Results are Compared with Matrices of all Ones with the Same Impermissible Entries. Compare with Table III to see the Effects of Sparsity on the Problem

Problem        ε-Scaled     Forward      Modified
Size           Auction      Auction      Jonker-Volgenant
32 (Magic)     14.3 ms      2.60 ms      1.34 ms
32 (Ones)      2.92 ms      4.72 ms      1.80 ms
64 (Magic)     42.2 ms      9.45 ms      4.17 ms
64 (Ones)      5.30 ms      17.58 ms     6.20 ms
128 (Magic)    155 ms       38.5 ms      18.0 ms
128 (Ones)     64.27 ms     76.5 ms      28.7 ms
by the cost of each solution. The solution S is associated with an empty list of constraints, CS.

3) (Record the Solution) Let S and CS be the lowest cost solution on the ordered list and its associated list of constraints. Remove the lowest cost solution from the list and record it as the kth solution. If k is the maximum number of solutions desired, then terminate the algorithm.

4) (Split the Solution) Solution S will be split into a number of disjoint subproblems, each with an increasing number of constraints. All subproblems inherit the constraint set CS of the parent solution, augmented as follows.
• The first subproblem has the constraint set CS augmented with the constraint that the first assignment in the solution to S that is not directly specified by a constraint in CS is forbidden. Forbidding an assignment is the same as replacing an entry in C with ∞. Solve this assignment problem and add its solution and constraint set to the ordered list.
• The second subproblem has the constraint set CS augmented with the constraint that the first assignment in S that is not forced due to a constraint in CS must be made and the second assignment in S not directly specified by a constraint in CS is forbidden. Adding a constraint that an assignment be made is the same as removing a row and column from C, since there is no longer a choice on that assignment. Forbidding an assignment is the same as replacing the corresponding element with ∞. Solve this assignment problem and add its solution and constraint set to the ordered list.
• The nth subproblem is generated by requiring that the first n – 1 assignments in S that are not strictly assigned due to constraints in CS be assigned and the nth assignment in S that is not due to a constraint in CS be forbidden. Each new problem will be solved and added to the ordered list along with its associated constraint set. If a problem is infeasible, then it will be discarded and will not be added to the list. This continues until solution S can no longer be split into subproblems.

5) (Loop) If the ordered list is empty, then all possible solutions to the problem have been found and the algorithm terminates. Otherwise, set k = k + 1 and go to step 3.

The description of Murty's algorithm given above makes use of an ordered list. However, any type of ordered data structure can be used. For example, in [31], the use of a binary search tree was suggested, and in [20], it was suggested that a heap be used.13 The Matlab implementation used for the simulations in this paper uses a binary heap as described in [49, ch. 6.4].

For square assignment matrices such that NR = NC = N, the complexity of Murty's algorithm when implemented with an O(N^3) assignment algorithm without using any particular optimizations is O(kN^4). However, multiple authors have noted [16, 20, 37] that when using a 2D assignment algorithm, such as that described in Section II, and a square cost matrix, a partial solution can be used to kickstart the assignment algorithm from the problem that is being split, bringing the computational complexity down to O(kN^3).

Let col4rowS, row4colS, uS, and vS contain the optimal solution and dual variables to problem S, which is being split in step 4 of Murty's algorithm. Every split hypothesis contains a new constraint for an assignment that is no longer allowed. What was realized in [16, 20, 37] is that the optimal solution to S with this assignment removed from col4rowS and row4colS, together with the unchanged dual solutions uS and vS, satisfies the complementary slackness condition in (19). Thus, uS and vS, along with col4rowS and row4colS with the forbidden assignment removed, can be the initial values to the shortest path algorithm described in the pseudocode in Section II-B to complete the assignment problem. The previous best solution is forbidden (by putting an infinite value in the appropriate entry in the cost matrix) and the assignments that are not allowed to change are not considered in the shortest path optimization (like removing rows and columns from the cost matrix). Thus, only one run of the shortest path algorithm is needed to update the solution.

The requirement of using that optimization, however, is that the assignment matrix be square, as there is no longer a guarantee that after the shortest path augmentation, the second complementary slackness condition in (20) will be fulfilled. Any rectangular cost matrix with NC > NR can be made square by adding zero-cost dummy rows, so this is not an issue. In [37], it was noted that no constraints should be applied to the dummy rows when splitting hypotheses in step 4 of Murty's algorithm, lest one obtain hypotheses that differ only in meaningless assignments of dummy rows to otherwise unassigned columns. This obviates the need for steps described in some implementations, such as in [13, ch. 6.5.2], where duplicate solutions must be removed.

This modified version of Murty's algorithm is not the only O(kN^3) complexity algorithm for finding the k-best hypotheses. Another is presented in [16] and [14, ch. 5.4.1]. That algorithm does not require any complicated inheritance of dual variables to assure its computational complexity, but it does appear to be a more complicated algorithm, requiring the use of a special algorithm to find second-best assignments, and it makes use of a tree data structure. For that reason, only Murty's algorithm is considered here.
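The splitting and ordered-list logic of Murty's algorithm can be sketched as follows (illustrative Python with a heap as the ordered list; for brevity each subproblem is re-solved by brute force over permutations rather than by the O(N^3) shortest augmenting path solver with dual inheritance that the paper uses, so this sketch has far worse complexity and is only suitable for tiny square matrices):

```python
import heapq
import itertools
import math

def best_assignment(C, forbidden, forced):
    """Brute-force minimum-cost full assignment of the square matrix C
    honoring the constraint sets; returns (cost, col4row tuple) or None
    if infeasible. A stand-in for a real assignment solver."""
    n = len(C)
    best = None
    for perm in itertools.permutations(range(n)):
        if any((i, j) in forbidden for i, j in enumerate(perm)):
            continue                       # violates a forbidden pair
        if any(perm[i] != j for i, j in forced):
            continue                       # violates a forced pair
        cost = sum(C[i][perm[i]] for i in range(n))
        if math.isinf(cost):
            continue                       # uses a forbidden (inf) entry
        if best is None or cost < best[0]:
            best = (cost, perm)
    return best

def murty(C, k):
    """Return up to k lowest-cost assignments of the square cost matrix
    C via Murty's partitioning (steps 3-5 above), using a heap."""
    first = best_assignment(C, frozenset(), frozenset())
    if first is None:
        return []
    counter = 0    # tie-breaker so the heap never compares sets
    queue = [(first[0], counter, first[1], frozenset(), frozenset())]
    out = []
    while queue and len(out) < k:
        cost, _, sol, forbidden, forced = heapq.heappop(queue)
        out.append((cost, sol))
        # Split: walk the assignments of sol not already forced,
        # forbidding each in turn while forcing its predecessors.
        new_forced = set(forced)
        forced_rows = {i for i, _ in forced}
        for i, j in enumerate(sol):
            if i in forced_rows:
                continue
            cand = best_assignment(C, forbidden | {(i, j)},
                                   frozenset(new_forced))
            if cand is not None:           # infeasible splits are dropped
                counter += 1
                heapq.heappush(queue, (cand[0], counter, cand[1],
                                       forbidden | {(i, j)},
                                       frozenset(new_forced)))
            new_forced.add((i, j))
    return out
```

For example, murty([[7, 51, 52], [50, 12, 90], [27, 77, 21]], 3) returns the three cheapest assignments, with costs 40, 91, and 122.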
13 A heap is a type of advanced data structure for which the complexity of adding and deleting elements is O(ln[n]) or O(1), depending upon the type of heap, where n is the number of items in the heap when the operation is performed; finding the minimum item also occurs in O(ln[n]) or O(1) [18].

B. Discussion of the Implementation

Though the previous subsection discussed enforcing constraints akin to setting elements in the cost matrix
TABLE V
The Median and Worst Case Execution Times to Generate the Given Number of Hypotheses Shown for Varying Problem Sizes with 1000 Monte Carlo
Runs in Matlab and C++. The Median Speed of the C++ Implementation is up to 379 Times Faster than the Matlab Implementation in the Scenarios
Considered
equal to zero, or removing rows and columns from the algorithm, despite its simplicity, is not always an ideal
cost matrix for assignments that are fixed, such an choice for performing 2D assignment if one requires
implementation is quite computationally inefficient, globally optimal solutions, due to difficulties associated
particularly for large cost matrices, requiring allocating with choosing the complementary slackness parameter. A
and copying large amounts of data for each cost matrix in complementary slackness parameter that is too large
addition to allocating space for the partial solutions that cannot guarantee a globally optimal solution; a parameter
are constrained to exist. A more efficient implementation that is too small can have a very slow rate of convergence,
never copies the cost matrix, but rather keeps track of the and adaptive methods for setting the complementary
partial assignment as well as which rows and columns are slackness parameter can be difficult to implement for use
fixed or forbidden. Such constraints can be stored in with arbitrary cost matrices. Moreover simulations
arrays. Moreover, an efficient implementation would demonstrated that a forward auction algorithm with
inherit the dual variables and partial solution from each scaling of the complementary slackness parameter was
problem as it splits to bring down the computational complexity to O(kN³). Doing that, however, would require the addition of dummy rows to a rectangular cost matrix when NR ≠ NC. Dummy rows were used in the implementation of the algorithm for this paper in Matlab and C++. C++ was used instead of C so that the priority_queue template class could be used as an ordered list. In Matlab, a binary heap class was created to function as an ordered list.
Table V shows the execution times for 1000 Monte Carlo runs of the two implementations of Murty's algorithm on assignment matrices of various sizes. As in Section II-C, the cost matrices were generated with elements drawn uniformly between 0 and 1. Both implementations of the algorithm scanned the assignment matrices row-wise to obtain the best computational performance. The results demonstrate that the algorithm can produce a large number of hypotheses on moderately sized problems in a fraction of a second.
IV. CONCLUSIONS
An overview of the literature covering 2D assignment algorithms for rectangular problems and for k-best assignment was given. It was determined that the auction algorithm is slower than a modified form of the Jonker-Volgenant algorithm. Consequently, the Jonker-Volgenant shortest augmenting path algorithm was chosen for implementation in this paper.
The implementation combines concepts from multiple papers in the literature to create an efficient algorithm that can be generalized to find the k-best 2D assignments rather than just the best 2D assignment. Whereas previous work has considered dual variable inheritance to improve k-best 2D assignment algorithms, this paper appears to be the only work that combines rapid infeasibility detection and dual variable inheritance to create a more efficient k-best 2D assignment algorithm. It was demonstrated that the order in which one scans the elements of the cost matrix matters, an effect attributable to the cache prediction logic built into the processor. This is an implementation aspect that is not necessarily obvious to those outside of computer science. Additionally, the execution time differences between the C and C++ implementations of the 2D assignment algorithm demonstrate the difference that the compiler and a few minor changes between languages can make.
Matlab code implementing the 2D rectangular shortest augmenting path algorithm is given in the Appendix. It has been tested in Matlab 2013b, 2014a, 2014b, and 2015a.
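The ordered list used by the C++ implementation of Murty's algorithm can be sketched with the priority_queue template class mentioned above. The Hypothesis record and its fields below are invented for this illustration; in a real k-best solver the record would carry the constrained subproblem data rather than an integer ID.

```cpp
#include <cassert>
#include <queue>
#include <vector>

//Hypothetical record pairing a candidate assignment's cost with the
//constrained subproblem it solves (abbreviated here to an integer ID).
struct Hypothesis {
    double cost;
    int subproblemID;
};

//Order the heap so that the LOWEST-cost hypothesis is popped first,
//which is what the ordered list in Murty's algorithm requires.
struct CostGreater {
    bool operator()(const Hypothesis& a, const Hypothesis& b) const {
        return a.cost > b.cost;
    }
};

using HypothesisList =
    std::priority_queue<Hypothesis, std::vector<Hypothesis>, CostGreater>;
```

A k-best loop would repeatedly pop the lowest-cost hypothesis from a HypothesisList, split it per Murty's algorithm, and push the resulting subproblems back onto the queue.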
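To make the shortest augmenting path approach concrete, the following is a compact C++ sketch of a Jonker-Volgenant-style solver for an nR × nC cost matrix with nR ≤ nC. The function name and interface are invented for this sketch; the infeasibility checks and other refinements of the Appendix code are omitted, so this is an illustrative minimal version rather than the paper's tuned implementation.

```cpp
#include <cassert>
#include <limits>
#include <vector>

//Returns col4row, where col4row[r] is the 0-based column assigned to
//row r. Requires C.size() <= C[0].size() and finite costs.
std::vector<int> assignRowsToCols(const std::vector<std::vector<double>>& C) {
    const int nR = (int)C.size();
    const int nC = (int)C[0].size();
    const double inf = std::numeric_limits<double>::infinity();
    //u and v are the dual variables (1-indexed; index 0 is a sentinel).
    std::vector<double> u(nR + 1, 0.0), v(nC + 1, 0.0);
    //row4col[j] is the row assigned to column j; 0 means unassigned.
    std::vector<int> row4col(nC + 1, 0), pred(nC + 1, 0);
    for (int i = 1; i <= nR; ++i) {//One augmentation per row.
        row4col[0] = i;
        int j0 = 0;//Current column in the Dijkstra-like scan.
        std::vector<double> shortestPathCost(nC + 1, inf);
        std::vector<bool> scanned(nC + 1, false);
        do {
            scanned[j0] = true;
            const int i0 = row4col[j0];
            int j1 = 0;
            double delta = inf;
            for (int j = 1; j <= nC; ++j) {
                if (scanned[j]) { continue; }
                const double reducedCost = C[i0 - 1][j - 1] - u[i0] - v[j];
                if (reducedCost < shortestPathCost[j]) {
                    shortestPathCost[j] = reducedCost;
                    pred[j] = j0;
                }
                if (shortestPathCost[j] < delta) {
                    delta = shortestPathCost[j];
                    j1 = j;
                }
            }
            //Dual update step over the partial shortest path tree.
            for (int j = 0; j <= nC; ++j) {
                if (scanned[j]) { u[row4col[j]] += delta; v[j] -= delta; }
                else { shortestPathCost[j] -= delta; }
            }
            j0 = j1;
        } while (row4col[j0] != 0);//Stop at an unassigned column.
        //Augment: flip the matching along the path that was found.
        do {
            const int j1 = pred[j0];
            row4col[j0] = row4col[j1];
            j0 = j1;
        } while (j0 != 0);
    }
    std::vector<int> col4row(nR, -1);
    for (int j = 1; j <= nC; ++j) {
        if (row4col[j] != 0) { col4row[row4col[j] - 1] = j - 1; }
    }
    return col4row;
}
```

The gain can be recovered by summing C[r][col4row[r]] over the rows; for the 3 × 3 example in the test, the minimum-cost assignment has gain 1 + 2 + 2 = 5.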
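The cache effect behind the scan-order observation can be seen in miniature with a generic C++ example (not the paper's benchmark): whichever index varies fastest should match the storage order, which is row-major in C/C++ and column-major in Matlab. Both functions below compute the same sum over a flat row-major matrix; only the access pattern differs.

```cpp
#include <cassert>
#include <vector>

//Sum a row-major nR x nC matrix by rows: consecutive iterations touch
//adjacent memory locations (unit stride), which hardware prefetchers
//handle well.
double sumRowWise(const std::vector<double>& M, int nR, int nC) {
    double s = 0.0;
    for (int r = 0; r < nR; ++r)
        for (int c = 0; c < nC; ++c)
            s += M[r * nC + c];
    return s;
}

//Sum the same matrix by columns: each iteration jumps nC elements
//ahead, which defeats sequential prefetching on large matrices.
double sumColWise(const std::vector<double>& M, int nR, int nC) {
    double s = 0.0;
    for (int c = 0; c < nC; ++c)
        for (int r = 0; r < nR; ++r)
            s += M[r * nC + c];
    return s;
}
```

On large matrices the unit-stride version typically runs noticeably faster even though both return the same value.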
APPENDIX. MATLAB CODE FOR A RECTANGULAR SHORTEST AUGMENTING PATH 2D ASSIGNMENT
ALGORITHM
Below is code for a 2D assignment algorithm in Matlab. Code for 2D and k-best 2D assignment is available online at
https://github.com/DavidFCrouse/Tracker-Component-Library.
function [col4row, row4col, gain, u, v]=assign2D(C,maximize)
%%ASSIGN2D Solve the two-dimensional assignment problem with a
% rectangular cost matrix C, scanning row-wise using a shortest
% augmenting path algorithm.
%
%INPUTS: C A numRowXnumCol cost matrix that does not contain
% any NaNs and where the largest finite element minus
% the smallest element is a finite quantity (does not
% overflow).
% maximize If true, the minimization problem is transformed
% into a maximization problem. The default if this
% parameter is omitted is false.
%
%OUTPUTS: col4row A numRowX1 vector where the entry in each element
% is an assignment of the element in that row to a
% column. 0 entries signify unassigned rows.
% row4col A numColX1 vector where the entry in each element
% is an assignment of the element in that column to a
% row. 0 entries signify unassigned columns.
% gain The sum of the values of the assigned elements in
% C.
% u The dual variable for the columns.
% v The dual variable for the rows.
%
%DEPENDENCIES: None
%
%If the number of rows is <= the number of columns, then every row is
%assigned to one column; otherwise every column is assigned to one row. The
%assignment minimizes the sum of the assigned elements (the gain).
%During minimization, assignments can be forbidden by placing Inf in
%elements. During maximization, assignment can be forbidden by placing -Inf
%in elements. The cost matrix cannot contain any -Inf elements during
%minimization nor any +Inf elements during maximization to try to force an
%assignment. If no complete assignment can be made with finite cost,
%then col4row and row4col are empty and gain is set to -1.
%
%Note that the dual variables produced by a shortest path assignment
%algorithm that scans by row are not interchangeable with those of a
%shortest path assignment algorithm that scans by column. Matlab stores
%matrices column-major. Additionally, the dual variables are only valid for the
%transformed cost matrix on which optimization is actually performed, which
%is not necessarily the original cost matrix provided.
%
%October 2013 David F. Crouse, Naval Research Laboratory, Washington D.C.
if(nargin<2)
maximize=false;
end
numRow=size(C, 1);
numCol=size(C, 2);
didFlip=false;
if(numCol>numRow)
C=C';
temp=numRow;
numRow=numCol;
numCol=temp;
didFlip=true;
end
%The cost matrix must have all non-negative elements for the assignment
%algorithm to work. This shifts the elements to be non-negative. The
%delta is added back in when computing the gain at the end.
if(maximize== true)
CDelta=max(max(C));
C=-C + CDelta;
else
CDelta=min(min(C));
C=C-CDelta;
end
%These store the assignment as it is made.
col4row=zeros(numRow, 1);
row4col=zeros(numCol, 1);
u=zeros(numCol, 1);%The dual variable for the columns
v=zeros(numRow, 1);%The dual variable for the rows.
%Initially, none of the columns are assigned.
for curUnassCol=1:numCol
%This finds the shortest augmenting path starting at column
%curUnassCol and returns the last node in the path.
[sink,pred,u,v]=ShortestPath(curUnassCol,u,v,C,col4row,row4col);
%If the problem is infeasible, mark it as such and return.
if(sink== 0)
col4row=[];
row4col=[];
gain=-1;
return;
end
%Update the assignment along the augmenting path.
j=sink;
while(1)
i=pred(j);
col4row(j)=i;
h=row4col(i);
row4col(i)=j;
j=h;
if(i== curUnassCol)
break;
end
end
end
%Calculate the gain that should be returned.
if(nargout>2)
gain=0;
for curCol=1:numCol
gain=gain + C(row4col(curCol),curCol);
end
%Adjust the gain for the initial offset of the cost matrix.
if(maximize== true)
gain=-gain + CDelta*numCol;
else
gain=gain + CDelta*numCol;
end
end
if(didFlip== true)
Authorized licensed use limited to: Purdue University. Downloaded on November 12,2024 at 15:40:21 UTC from IEEE Xplore. Restrictions apply.
temp=row4col;
row4col=col4row;
col4row=temp;
temp=u;
u=v;
v=temp;
end
end
function [sink, pred, u, v]=ShortestPath(curUnassCol,u,v,C,col4row,row4col)
%This assumes that unassigned columns go from 1:numUnassigned
numRow=size(C, 1);
numCol=size(C, 2);
pred=zeros(numRow, 1);%The predecessor column of each row in the path.
%Initially, none of the rows and columns have been scanned.
%This will store a 1 in every column that has been scanned.
ScannedCols=zeros(numCol,1);
%This will store a 1 in every row that has been scanned.
ScannedRow=zeros(numRow,1);
Row2Scan=1:numRow;%Rows left to scan.
numRow2Scan=numRow;
sink=0;
delta=0;
curCol=curUnassCol;
shortestPathCost=ones(numRow,1)*inf;
while(sink== 0)
%Mark the current column as having been visited.
ScannedCols(curCol)=1;
%Scan all of the rows that have not already been scanned.
minVal=inf;
for curRowScan=1:numRow2Scan
curRow=Row2Scan(curRowScan);
reducedCost=delta + C(curRow,curCol)-u(curCol)-v(curRow);
if(reducedCost<shortestPathCost(curRow))
pred(curRow)=curCol;
shortestPathCost(curRow)=reducedCost;
end
%Find the minimum cost row among those that have yet to be
%fully scanned.
if(shortestPathCost(curRow)<minVal)
minVal=shortestPathCost(curRow);
closestRowScan=curRowScan;
end
end
if(~isfinite(minVal))
%If the minimum cost row is not finite, then the problem is
%not feasible.
sink=0;
return;
end
closestRow=Row2Scan(closestRowScan);
%Add the row to the list of scanned rows and delete it from
%the list of rows to scan.
ScannedRow(closestRow)=1;
numRow2Scan=numRow2Scan-1;
Row2Scan(closestRowScan)=[];
delta=shortestPathCost(closestRow);
%If we have reached an unassigned row.
if(col4row(closestRow)== 0)
sink=closestRow;
else
curCol=col4row(closestRow);
end
end
%Dual Update Step
%Update the dual variable for the initial unassigned column.
u(curUnassCol)=u(curUnassCol) + delta;
%Update the dual variables for the rest of the scanned columns in the
%augmenting path.
sel=(ScannedCols~=0);
sel(curUnassCol)=0;
u(sel)=u(sel) + delta-shortestPathCost(row4col(sel));
%Update the dual variables for the scanned rows in the augmenting path.
sel=ScannedRow~=0;
v(sel)=v(sel)-delta + shortestPathCost(sel);
end

REFERENCES

[1] Akgül, M. A genuinely polynomial primal simplex algorithm for the assignment problem. Discrete Applied Mathematics, 45, 2 (1993), 93–115.
[2] Balas, E., Miller, D., Pekny, J., and Toth, P. A parallel shortest augmenting path algorithm for the assignment problem. Journal of the Association for Computing Machinery, 38, 4 (Oct. 1991), 985–1004.
[3] Bernard, F. Fast linear assignment problem using auction algorithm. Nov. 13, 2014. [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/48448-fast-linear-assignment-problem-using-auction-algorithm
[4] Bertsekas, D. P. The auction algorithm: A distributed relaxation method for the assignment problem. Annals of Operations Research, 14, 1 (Dec. 1988), 105–123.
[5] Bertsekas, D. P. Auction algorithms for network flow problems: A tutorial introduction. Computational Optimization and Applications, 1, 1 (Oct. 1992), 7–66.
[6] Bertsekas, D. P. Network Optimization: Continuous and Discrete Models. Belmont, MA: Athena Scientific, 1998.
[7] Bertsekas, D. P. Nonlinear Programming, 2nd ed. Belmont, MA: Athena Scientific, 2003.
[8] Bertsekas, D. P., and Castañón, D. A. A forward/reverse auction algorithm for asymmetric assignment problems. Computational Optimization and Applications, 1, 3 (Dec. 1992), 277–297.
[9] Bertsekas, D. P., Castañón, D. A., and Tsaknakis, H. Reverse auction and the solution of inequality constrained assignment problems. SIAM Journal on Optimization, 3, 2 (May 1993), 268–297.
[10] Bertsekas, D. P., and Tsitsiklis, J. N. Parallel and Distributed Computation: Numerical Methods. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[11] Bertsimas, D., and Tsitsiklis, J. N. Introduction to Linear Optimization. Belmont, MA: Athena Scientific/Dynamic Ideas, 1997.
[12] Bijsterbosch, J., and Volgenant, A. Solving the rectangular assignment problem and applications. Annals of Operations Research, 181, 1 (Dec. 2010), 443–462.
[13] Blackman, S. S., and Popoli, R. Design and Analysis of Modern Tracking Systems. Norwood, MA: Artech House, 1999.
[14] Burkard, R., Dell'Amico, M., and Martello, S. Assignment Problems. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2009.
[15] Castañón, D. A. New assignment algorithms for data association. In Proceedings of SPIE: Signal and Data Processing of Small Targets Conference, Orlando, FL, Apr. 20, 1992, 313–323.
[16] Chegireddy, C. R., and Hamacher, H. W. Algorithms for finding k-best perfect matchings. Discrete Applied Mathematics, 18, 2 (Nov. 1987), 155–165.
[17] Cook, S. The P versus NP problem. In The Millennium Prize Problems, J. Carlson, A. Jaffe, and A. Wiles (Eds.). Providence, RI: The American Mathematical Society for the Clay Mathematics Institute, 2006.
[18] Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. Introduction to Algorithms, 2nd ed. Cambridge, MA: The MIT Press, 2001.
[19] Intel Corporation. Intel(R) 64 and IA-32 architectures optimization reference manual. Intel Corporation, Tech. Rep. 248966-026, Apr. 2012. [Online]. Available: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html
[20] Cox, I. J., and Miller, M. L. On finding ranked assignments with application to multi-target tracking and motion correspondence. IEEE Transactions on Aerospace and Electronic Systems, 32, 1 (Jan. 1995), 486–489.
[21] Cox, I. J., Miller, M. L., Danchick, R., and Newman, G. E. A comparison of two algorithms for determining ranked assignments with application to multitarget tracking and motion correspondence. IEEE Transactions on Aerospace and Electronic Systems, 33, 1 (Jan. 1997), 295–301.
[22] Crouse, D. F. Advances in displaying uncertain estimates of multiple targets. In Proceedings of SPIE: Signal Processing, Sensor Fusion, and Target Recognition XXII, Baltimore, MD, Apr. 2013.
[23] Crouse, D. F., and Willett, P. Identity variance for multi-object estimation. In Proceedings of SPIE: Signal and Data Processing of Small Targets, Vol. 8137, San Diego, CA, Aug. 25, 2011.
[24] Derigs, U. The shortest augmenting path method for solving assignment problems. Annals of Operations Research, 4, 1 (Dec. 1985), 57–102.
[25] Dijkstra, E. W. A note on two problems in connection with graphs. Numerische Mathematik, 1, 1 (Dec. 1959), 269–271.
[26] Dorhout, B. Experiments with some algorithms for the linear assignment problem. Stichting Mathematisch Centrum, Amsterdam, The Netherlands, Tech. Rep., Nov. 1970.
[27] Drummond, O., Castañón, D. A., and Bellovin, M. Comparison of 2-D assignment algorithms for sparse, rectangular, floating point, cost matrices. Journal of the SDI Panels on Tracking, 4 (1990), 81–97.
[28] Fitzgerald, R. J. Performance comparisons of some association algorithms. In ONR/GTRI Workshop on Target Tracking and Sensor Fusion, Key West, FL, June 22-23, 2004.
[29] Gadaleta, S., Herman, S., Miller, S., Obermeyer, F., Slocumb, B., Poore, A., and Levedahl, M. Short-term ambiguity assessment to augment tracking data association information. In Proceedings of the 8th International Conference on Information Fusion, Vol. 1, Philadelphia, PA, July 25-29, 2005, 691–698.
[30] Goldberg, A. V., and Kennedy, R. An efficient cost scaling algorithm for the assignment problem. Mathematical Programming, 71, 2 (Dec. 1995), 153–177.
[31] Hamacher, H. W., and Queyranne, M. k-best solutions to combinatorial optimization problems. Annals of Operations Research, 4, 1 (Dec. 1985), 123–145.
[32] Jonker, R., and Volgenant, A. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38, 4 (Mar. 1987), 325–340.
[33] Kadar, I., Eadan, E. R., and Gassnet, R. R. Comparison of robustized assignment algorithms. In Proceedings of SPIE: Signal Processing, Sensor Fusion, and Target Recognition VI, Vol. 3068, Orlando, FL, Apr. 21, 1997, 240–249.
[34] Kuhn, H. W. The Hungarian method for the assignment problem. Naval Research Logistics, 2, 1–2 (Mar. 1955), 83–97.
[35] Levedahl, M. Performance comparison of 2-D assignment algorithms for assigning truth objects to measured tracks. In Proceedings of SPIE: Signal and Data Processing of Small Targets, Vol. 4048, Orlando, FL, Apr. 24, 2000, 380–389.
[36] Malkoff, D. B. Evaluation of the Jonker-Volgenant-Castanon (JVC) assignment algorithm for track association. In Proceedings of SPIE: Signal Processing, Sensor Fusion, and Target Recognition VI, Vol. 3068, Orlando, FL, Apr. 21, 1997, 228–239.
[37] Miller, M. L., Stone, H. S., and Cox, I. J. Optimizing Murty's ranked assignment method. IEEE Transactions on Aerospace and Electronic Systems, 33, 3 (July 1997), 851–862.
[38] Munkres, J. Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics, 5, 1 (Mar. 1957), 32–38.
[39] Murty, K. G. An algorithm for ranking all the assignments in order of increasing cost. Operations Research, 16, 3 (May-June 1968), 682–687.
[40] Papadimitriou, C. H. Combinatorial Optimization: Algorithms and Complexity. Englewood Cliffs, NJ: Prentice-Hall, 1982.
[41] Pattipati, K., and Deb, S. Comparison of assignment algorithms with applications to the passive sensor data association problem. In Proceedings of the IEEE International Conference on Control and Applications, Jerusalem, Israel, Apr. 3-6, 1989, 317–322.
[42] Pattipati, K. R., Deb, S., Bar-Shalom, Y., and Washburn, R. B., Jr. A new relaxation algorithm and passive sensor data association. IEEE Transactions on Automatic Control, 37, 2 (Feb. 1992), 198–213.
[43] Popp, R. L., Pattipati, K. R., and Bar-Shalom, Y. Dynamically adaptable m-best 2-D assignment algorithm and multilevel parallelization. IEEE Transactions on Aerospace and Electronic Systems, 35, 4 (Oct. 1999), 1145–1160.
[44] Rink, K. A., and O'Conner, D. A. Use of the auction algorithm for target object mapping. Lincoln Laboratory, Cambridge, MA, Tech. Rep. 1044, Feb. 9, 1998.
[45] Sleator, D. D., and Tarjan, R. E. A data structure for dynamic trees. Journal of Computer and System Sciences, 26, 3 (June 1983), 362–391.
[46] Tomizawa, N. On some techniques useful for solution of transportation network problems. Networks, 1, 2 (1971), 173–194.
[47] Volgenant, A. Linear and semi-assignment problems: A core-oriented approach. Computers and Operations Research, 23, 10 (Oct. 1996), 917–932.
[48] Wang, Z. The shortest augmenting path algorithm for bipartite network problems. Ph.D. dissertation, Southern Methodist University, Dallas, TX, May 19, 1990.
[49] Weiss, M. A. Data Structures and Algorithm Analysis in C++, 2nd ed. Reading, MA: Addison-Wesley, 1999.
[50] Weisstein, E. W. Magic square. 2012. MathWorld. [Online]. Available: http://mathworld.wolfram.com/MagicSquare.html
[51] Zaki, H. A. A comparison of two algorithms for the assignment problem. Computational Optimization and Applications, 4, 1 (Jan. 1995), 23–45.
David Frederic Crouse (S'05–M'12) received B.S., M.S., and Ph.D. degrees in
electrical engineering in 2005, 2008, and 2011 from the University of Connecticut
(UCONN). He also received a B.A. degree in German from UCONN, for which he spent
a year at the Ruprecht-Karls-Universität in Heidelberg, Germany.
He is currently employed at the Naval Research Laboratory in Washington, D.C.,
serves as an associate editor for the IEEE Aerospace and Electronic Systems
Magazine, and has shared online a library of reusable algorithms for target trackers
called the Tracker Component Library. His interests lie in the areas of stochastic signal
processing and tracking.
1696 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 52, NO. 4 AUGUST 2016