RPT Manual 1.12
Tugdual Le Bouffant¹
Nils T Siebel¹
Stephen Cook²
Steve Maybank¹

¹ Computational Vision Group
² Applied Software Engineering Research Group
Department of Computer Science
The University of Reading

November 2002

¹ Funding provided by the European Union, grant ADVISOR (IST-1999-11287).
² Funding provided by the EPSRC, grant DESEL (GR/N01859).
Tugdual Le Bouffant¹, Nils T Siebel¹∗†, Stephen Cook², Steve Maybank¹
Abstract
The Reading People Tracker has been maintained at the Department’s
Computational Vision Group for the past 3 years. During this time,
it has undergone major changes. The software has been completely
restructured, adapted for multiple cameras and multiple trackers per
camera, and integrated into the automated surveillance system ADVISOR.
Part of this process has been the creation of extensive documentation
from the source code, of which this manual is the most important part.
The manual is mostly written for programmers and covers a range of
issues, from architectural descriptions up to detailed documentation on
the software process now to be followed for all maintenance work on the
tracker.
Keywords. Software Process Documentation, Design Patterns, Software Architecture, Reference Manual, People Tracking.
¹ Computational Vision Group, ² Applied Software Engineering Research Group, Department of Computer Science, The University of Reading
∗ Correspondence to: Nils T Siebel, Computational Vision Group, Department of Computer Science, The University of Reading, PO Box 225, Whiteknights, Reading RG6 6AY, United Kingdom.
† E-Mail: nts@ieee.org
Contents

1 Introduction
  1.1 History
  1.2 The Need to Redesign the Code
  1.3 Contributions
3 Design Patterns
  3.1 Requirements for the People Tracker
  3.2 The way to use Classes
    3.2.1 Architectural Pattern
    3.2.2 Idioms and Rules
  3.3 Patterns Implementation
    3.3.1 Inputs
    3.3.2 ReadingPeopleTracker
    3.3.3 PeopleTracker
    3.3.4 Camera
    3.3.5 Tracking
    3.3.6 Results
6 Libraries Description
7 Reverse engineering of the RPT code using Rational Rose's C++ Analyser Module
  7.1 Rational Rose C++ Analyser
  7.2 Error Messages
References
1 Introduction
This document gives an overview of the Reading People Tracker, developed in the Computational Vision Group, Department of Computer Science, at the University of Reading. The aim of this documentation is to provide information to help future development of the People Tracker. Most of the material presented in this section and section 2 is taken from [1].
1.1 History
The original People Tracker was written by Adam Baumberg at the University of
Leeds in 1993–1995 using C++, running under IRIX on an SGI computer. It was a
research and development system within the VIEWS project and a proof of concept
for a PhD thesis [2]. The main focus during development was on functionality and
experimental features which represented the state-of-the-art in people tracking at
that time. Only a few software design techniques were deployed during the initial
code development. The only documentation generated was a short manual on how
to write a program using the People Tracking module.
In 1995–1998 the People Tracker was used in a collaboration between the Uni-
versities of Leeds and Reading in the IMV project. The software was changed at
the University of Reading to inter-operate with a vehicle tracker which ran on a
Sun/Solaris platform [3]. Little functionality was changed or added during
this time and no new documentation was created.
Starting in 2000, the People Tracker has been changed for its use within the
ADVISOR system shown in Figure 8. This new application required a number of
major changes on different levels.
• The People Tracker has to be fully integrated within the ADVISOR system.
• It has to run multiple trackers, for video input from multiple cameras (original
software: one tracker, one camera input).
• The ADVISOR system requires the People Tracker to operate in realtime. Previ-
ously, the People Tracker read video images from hard disk, which meant there
were no realtime requirements.
• Within ADVISOR, the People Tracker has to run autonomously once it has been
set up, without requiring input from an operator and without the possibility of writing any message to the screen or displaying any image.
The status of the existing People Tracker was evaluated in relation to the new require-
ments. It was observed that the system had significant deficiencies which hindered
the implementation of the required new functionality:
• The heavy use of global variables and functions meant that multiple cameras
could not be used.
• There were very few comments in the source code, which made it difficult to
read and understand.
In the years 2000 and 2001, the People Tracker was reverse-engineered and re-
engineered. The aim of the reverse engineering/re-engineering step was to recover all the software engineering artefacts, such as the requirements and the design documents.
From the source code, the class diagram was obtained by using the tool Rational Rose
2000e [4]. The analysis of the class diagram revealed that many class names did not
reflect the inheritance hierarchy, a number of classes were found to be redundant,
and many classes had duplicated functionality. The following correctional steps were
performed:
• Global variables and functions were eliminated and their functionality dis-
tributed into both existing and new classes. Exceptions were global helper
functions like min(), max() etc., which were extracted and moved into one C++
module.
• Functionality duplicated in similar classes was filtered out and moved into newly created base classes.
• Functionality was re-distributed between classes and logical modules.
• Functionality was re-distributed between methods.
• The previous version of the code contained many class implementations in the
header files; they were moved to the implementation (.cc) files.
• From both static analysis and dynamic analysis [7], a requirements document and UML artefacts such as the Use Case, component and package level sequence diagrams [8] were obtained. The UML diagrams are shown in Figures 3 through 7.
In the final step, the remaining part of the required new functionality was incorporated into the re-engineered People Tracker. This includes the addition of separate processing threads for each video input, addressing the synchronisation and timing requirements etc. A newly created master scheduler manages all processing in order to guarantee realtime performance with multiple video inputs. This version of the People Tracker incorporates most of the functionality needed for its use within ADVISOR, as well as improvements to the people tracking algorithms which make it appropriate for the application [9]. The module has been validated against test data. Currently, the final stage of system integration is being undertaken, and this document, together with a User Manual to be written, completes the documentation.
1.3 Contributions
The following people have contributed to the Reading People Tracker up to the
current version 1.12 of the software (dated Mar 11 2002).
1. Design / Coding
2. Coding
• Philip Elliott (PTE), The University of Reading.
• Rémi Belin (RB), Université de Toulon et du Var.
• Tugdual Le Bouffant (TLB), The University of Reading.
3. Documentation
• PCA: Principal Component Analysis and shape modelling, used by the Active
Shape Tracker. PCA is used to generate the space of pedestrian outlines from
training examples.
• data: set of standard data features (skeleton, profiles, human features, etc.).
• tracking: this library contains all the tracker, detector and camera classes.
Figure 1: Software Packages of the People Tracker
• Use Case Model: deals with the actions performed by the system under the
actors’ interactions. Who are the users of the system (the actors), and what
are they trying to do?
• Robustness Diagram: defines the types of object from the use case model.
What objects are needed for each use case?
• Sequence Diagram: represents the object flow. How do the objects collaborate
within each use case?
• Developer: interacts with the system in Standalone mode to achieve a particular goal, namely the result of the Track New People Position use case.
Figure 2: Use Cases for the People Tracker (in Standalone/Development Mode)
Use Case: Track New People Position.
Actors: Decompress JPEG Image, Internal Motion Detector, Predict Old Object
Position, Determine Tracking Parameters and Generate Tracking Result.
Description: This is the central task of the system; it generates the result of the tracking of a new person's position. Not replicated.
The goal is to Track New People Position, with Video and Motion Detection as inputs and the result as output sent to the Behaviour Analysis.
Figure 3: Use Cases for the People Tracker (as a Subsystem of ADVISOR)
Actors: Behaviour Analysis Module, Generate Tracking Results.
Description: The results generated are encoded into XML format to be transported via Ethernet to another module, which uses these results to perform the behaviour analysis.
• Entity object, which represents a result and which will be used for external tasks.
2.2.4 Sequence Diagram
This diagram is also called an interaction diagram. It shows the flow of requests between objects. Interaction modelling is the phase in which the threads that weave the objects together are built. At this stage it is possible to see how the system performs useful behaviour. The sequence diagrams for the People Tracker are shown in Figures 6 and 7.
3 Design Patterns
Design patterns systematically name, motivate, and explain a general design that addresses a recurring design problem in object-oriented systems. They describe the problem, the solution, when to apply the solution, and its consequences. They also give implementation hints and examples. The solution is a general arrangement of objects and classes that solves the problem in a particular context.
Figure 7: ADVISOR Sequence Diagram (at a Higher Level)
These two modes of operation are controlled at compile time using preprocessor #defines. ADVISOR mode is enabled if and only if THALES_WRAPPER is defined. Image output is disabled if NO_DISPLAY is defined, and it has to be defined in ADVISOR mode.
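A minimal sketch of how this compile-time constraint can be checked is given below; the guard shown here is illustrative and not taken from the RPT sources:

#ifdef THALES_WRAPPER
#ifndef NO_DISPLAY
#error "NO_DISPLAY must be defined in ADVISOR mode (no image output possible)"
#endif
#endif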
• Input: the video capture and motion detection subsystems are provided by Thales and INRIA, respectively. These subsystems send XML data and the images to the People Tracker:
2. Video image
3. Background image
• Output: the result of the people tracking is sent to the behaviour analysis subsystem provided by INRIA. The result sent by the People Tracker is an XML file containing the information on tracked people.
Physical Architecture
Figure 8 shows the overall system layout, with individual subsystems for tracking,
detection and analysis of events, storage and human-computer interface. Each of
these subsystems is implemented to run in realtime on off-the-shelf PC hardware,
with the ability to process input from a number of cameras simultaneously. The
connections between the different subsystems are realised by Ethernet. As we have
already seen, this subsystem has one input and one output.
Naming Conventions
• Class names and their associated files are written with the first letter of each word component in uppercase, e.g. ActiveShapeTracker. Underscores '_' are not used in class names. Two different filename extensions are used for C++ files: header declarations (.h) and implementation definitions (.cc). The following examples demonstrate the naming scheme (a short illustrative skeleton follows this list):
– ClassName
– associated files ClassName.h for the class declaration and ClassName.cc
for its implementation.
– method_name() (within a class)
– a_local_variable (global variables are forbidden)
– get_a_variable(), “Accessor”
– set_a_variable(), “Modifier”
• All main() programs and test cases have to be situated in the progs/ subdi-
rectory.
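As an illustration, a made-up class following these conventions might look as follows (FrameCounter is not an actual RPT class):

// File: FrameCounter.h (the implementation would go into FrameCounter.cc)
class FrameCounter
{
public:
    unsigned int get_frame_count() const   // "Accessor"
    {
        return frame_count;
    }
    void set_frame_count(unsigned int new_frame_count)   // "Modifier"
    {
        frame_count = new_frame_count;
    }
private:
    unsigned int frame_count;   // member variable: lowercase with underscores
};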
Include Files
• Use the directive #include "filename.h" for user-prepared include files
(i.e. those which are part of our own code).
• Use the directive #include <filename.h> for include files from system li-
braries.
• Every include file must contain a mechanism that prevents multiple inclusions of the file. The “guard definition” has to be written in uppercase letters; it has to begin and end with 2 underscores, and the terms have to be separated by single underscores. For example, at the beginning of the include file ActiveShapeTracker.h you will see

#ifndef __ACTIVE_SHAPE_TRACKER_H__
#define __ACTIVE_SHAPE_TRACKER_H__

and at the end of the file the matching #endif.
• Make minimal use of #define macros; use inline functions instead. The normal reason for declaring a function inline is to improve performance; correct use of inline functions may also lead to reduced code size. Small functions, such as access functions which return the value of a member of the class, and so-called forwarding functions which invoke another function, should normally be inline (a combined sketch of these idioms follows this list).
• The difference between int &a (or int *a) and int& a is that the first form associates the & (or *, for that matter) with the variable, and the second with the type. From the compiler's point of view there is no difference between them. Associating the & or * with the type name reflects the desire of some programmers for C++ to contain a separate pointer type. However, neither the & nor the * is distributive over a list of variables. See for example int *a, b; here b is declared as an integer (not a pointer to an integer); only a is a pointer to an integer. For these reasons, never write int& a, only int &a.
• The use of the unsigned type: the unsigned type is used to avoid inconsistencies and error generation. It is widely used in the matrix library (e.g. you cannot specify a negative number of columns for a matrix calculation). Always use unsigned for variables which cannot reasonably have negative values.
• The use of const: this qualifier has to be used when you want to keep the value of a variable; it makes it impossible to overwrite the value. A member function that does not affect the state of an object is to be declared const. All trackers and detectors have to be derived from BaseTracker, using pure virtual methods.
• All types used by more than one class have to be defined in tracker_defines_type_and_helpers, for instance the image source type.
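The following minimal sketch combines these idioms (inline accessors, unsigned for quantities which cannot be negative, const member functions); the Matrix2D class and its members are illustrative and not the actual RPT matrix classes:

class Matrix2D
{
public:
    // unsigned: a matrix cannot have a negative number of rows or columns
    Matrix2D(unsigned int n_rows, unsigned int n_cols);

    // small inline accessors; const: they do not affect the state of the object
    inline unsigned int get_n_rows() const { return n_rows; }
    inline unsigned int get_n_cols() const { return n_cols; }

private:
    unsigned int n_rows;
    unsigned int n_cols;
};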
Classes
• Header files for classes that are only accessed via pointers (*) or references (&) shall not be #included; use forward declarations instead. For example, in ActiveShapeTracker.h one finds the forward declarations

class Image;
class ActiveModel;
...

instead of

#include "Image.h"
#include "ActiveModel.h"
...

(the full #includes are then only needed in the .cc file; a sketch combining the rules in this list follows below).
• Use base classes where useful and possible. A base class is a class from which no object is created; it is only used as a base class for the derivation of other classes. For an example, see the BaseTracker class.
• The public, protected and private sections of a class are to be declared in that order. By placing the public section first, everything that is of interest to a user is gathered at the beginning of the class definition. The protected section may be of interest to designers considering inheriting from the class. The private section contains details that have the least general interest.
– public members of a class are member data and functions which are accessible everywhere by specifying an instance of the class and the name.
– protected members are variables or functions which are accessible by specifying the name within member functions of derived classes.
– private members are variables or functions which are only accessible inside the class.
• A friend function has access to all private and protected members of the
class for which it is a friend.
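A short sketch combining these rules is given below; ExampleTracker and its members are made up for illustration and are not actual RPT classes:

class Image;   // forward declaration: Image is only used via a pointer here

class ExampleTracker
{
public:        // of interest to every user: listed first
    ExampleTracker();
    virtual ~ExampleTracker();
    virtual void process_frame(Image *current_image) = 0;   // pure virtual
protected:     // of interest when deriving from this class
    Image *previous_image;
private:       // implementation details: least general interest
    int internal_state;
};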
Use and Update the Source Code
The Reading People Tracker source code is shared between several developers. To access the source code it is necessary to use CVS (Concurrent Versions System), which permits a group of people to work on the same code at the same time. There is a master repository of the source code, which is used as follows:
• cvs update source_name: this command fetches the up-to-date source code.
• cvs commit source_name: this command applies the changes made in the code to the master repository of the source code.
• The comment first describes the method created and its return and parameter values.
• Any implementation which is more than a very few lines of code has to be
written in the .cc file, not in the .h file.
///////////////////////////////////////////////////////////////////////////////
//                                                                           //
//  int PeopleTracker::get_state()   check whether there is an error         //
//                                                                           //
//  returns the number of active cameras (>= 0) if no error can be detected  //
//          < 0 (value of state variable, qv) if there is a known error      //
//                                                                           //
///////////////////////////////////////////////////////////////////////////////
int PeopleTracker::get_state()
{
    if (state == 0)   // everything OK?
    {
        // count number of active cameras (ie. existing and enabled `Camera's)
        unsigned int number_of_active_cameras = 0;
        unsigned int index;   // loop index over the cameras
        // (the counting loop over the cameras is elided in this excerpt)
        return number_of_active_cameras;
    }
    else
    {
        assert (state < 0);   // (state != 0) means error and this should be < 0
        return state;         // this will hold an error value < 0
    }
}
3.3.1 Inputs
The Inputs class, shown in Figure 9, sets up the video input, the external motion input and the camera calibration. This class collects all inputs for a given frame and returns the actual new frame id.
1. CONSTRUCTOR:
Inputs::Inputs(ConfigurationManager *configuration_manager) creates a new configuration manager for the Inputs configuration.
(Figure 9: the Inputs class with its collaborators ConfigurationManager, PnmSource and BufferedSlaveImageSource.)
2. DESTRUCTOR:
Inputs::~Inputs() deletes the old sources.
3. Setup Inputs:
void Inputs::setup_inputs()
• Check for the special filename “slave”: use buffered slave input, but fed from a file source. In order to do that, we determine and create the file source first, to get the image dimensions. Then we open the slave source.
• Open input_source, which reads images from hard disk. The following image formats are recognised: JPEG, PPM, PGM (PNM formats may be ‘compress’ed or ‘gzip’ed).
• Create the slave source if we need it. Open the slave source to be fed from input_source. This feed contains the image dimensions necessary to instantiate the BufferedSlaveImageSource class.
• Check for the special filename “slave”: use buffered slave input which is fed with input by Thales' Wrapper. Get the image dimensions.
• Create a NumberedFileSource which reads XML data from hard disk. Set up the file source for the slave; this is a NumberedFileSource.
6. Proceed Inputs:
frame_id_t Inputs::proceed_to_frame(...) proceeds all inputs to the given frame_id_t next_frame_id, if possible, and returns the actual new frame id. First the video_image_source is proceeded to the next frame with an id >= the given one. Then video_image_source is used as the definitive source for the frame id, and all other inputs are proceeded to it. The rationale is that if there is no video image for a given frame, no other source needs to provide data for that frame.
• Get the next video image and check for frame number wraparound, wrapping around the input sources (e.g. frame_id 999999 → 000000 within ADVISOR).
• Choose the latest background image; if the input comes from a slave source, read until the buffer is empty.
3.3.2 ReadingPeopleTracker
The main() program for the Reading People Tracker. The diagram below shows the flow of actions performed in the tracking task. When the ReadingPeopleTracker is launched, it instantiates a PeopleTracker object which starts the camera processing threads.
(Diagram: the ReadingPeopleTracker main program starts a thread which runs the PeopleTracker.)
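A minimal sketch of this flow is shown below; the constructor and get_state() signatures are those documented in this section, while the argument handling is illustrative:

#include "PeopleTracker.h"

int main(int argc, char *argv[])
{
    if (argc < 2)
        return 1;   // a top level configuration file name is expected

    // instantiates the Camera objects, which in turn start their threads
    PeopleTracker people_tracker(argv[1]);

    if (people_tracker.get_state() < 0)   // < 0 signals a set-up error
        return 1;

    // ... tracking runs in the per-camera threads ...
    return 0;
}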
3.3.3 PeopleTracker
The PeopleTracker class handles and schedules trackers and outputs tracking re-
sults. It instantiates Camera objects which in turn hold everything associated with
the respective camera: Inputs, Calibration, the actual Tracking class which generates
and stores results, a Configuration class etc. The PeopleTracker lets each Camera
class start a thread. These threads wait for the PeopleTracker to supply the input (images, XML motion data). After tracking, they signal that they have new results. The PeopleTracker then extracts the results from each Camera and fuses the data from all cameras in order to write the person/object tracks out in XML format.
(Diagram: the PeopleTracker class with its ConfigurationManager and its XML outputs.)
• CONSTRUCTOR:
PeopleTracker::PeopleTracker(char *toplevel_config_filename)
toplevel_config_filename is the base for all configuration file names. The other configuration file names are generated by appending the camera number (e.g. LUL-conf1 -> LUL-conf1-01 for the first camera) and the initials of the tracking modules (e.g. LUL-conf1-01-RT for the RegionTracker of the first camera). The constructor sets a state variable to the number of active cameras if there is no error during set-up; otherwise the state is set to a value < 0. Use get_state() to query this state.
3.3.4 Camera
The Camera object holds everything associated with a camera: Inputs, Calibration,
the actual Tracking class which generates and stores results, a ConfigurationMan-
ager class etc. The PeopleTracker class can start a thread for each Camera class instantiated; the thread waits for input to be fed by the parent. Results from Tracking are generated by the thread, and the availability of new Results is signalled.
(Diagram: the Camera class with its ConfigurationManager and Results.)
1. CONSTRUCTOR:
Camera::Camera(char *camera_config_filename, bool quiet_mode) sets up all inputs, the output movie and the trackers.
2. Processing:
void *Camera::do_processing(void *unused) This is the threaded method which waits for data and does all the processing. The following steps are taken for each frame:
• calculate the frame id,
• proceed the inputs,
• mark the results as not yet finished,
• get the new data set (RegionSet),
• put the background image into the result,
• run the trackers,
• update the display,
• draw the result into the image.
3. Start thread:
pthread_t Camera::start_thread() starts a thread which does all processing as data arrives, and returns the thread id (see the sketch after this list).
4. Calculate next frame id:
void Camera::calculate_next_frame_id().
5. Get new data sets:
inline void Camera::get_new_data_sets() gets the next image / XML RegionSet from each of the 4 inputs as necessary; the Inputs class will set the pointers to NULL if they are not available.
6. Register configuration parameters:
void Camera::register_configuration_parameters().
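The following is a hedged sketch of how start_thread() can hand do_processing() to a POSIX thread; pthread_create() cannot take a non-static member function directly, so the trampoline function shown here is an assumption which may differ from the actual RPT implementation:

#include <pthread.h>

// assumed trampoline: forwards the thread entry point to the member function
static void *camera_thread_entry(void *camera)
{
    return static_cast<Camera *>(camera)->do_processing(NULL);
}

pthread_t Camera::start_thread()
{
    pthread_t thread_id;
    pthread_create(&thread_id, NULL, camera_thread_entry, this);
    return thread_id;   // the caller can use this id, e.g. to join the thread
}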
3.3.5 Tracking
The Tracking class is the nucleus of the tracking task. All the different trackers and detectors are launched from this class. The trackers enabled in the tracking configuration file are created: a new configuration file is generated for each tracker before the trackers are run.
(Diagram: the Tracking class with its ConfigurationManager, the MotionDetector and the RegionTracker.)
1. CONSTRUCTOR:
Tracking::Tracking(ConfigurationManager *configuration_manager)
2. DESTRUCTOR:
Tracking::~Tracking() deletes old configurations.
3. Setup trackers:
void Tracking::setup_trackers(Inputs *inputs,
unsigned char *camera_configuration_filename_base) generates the tracker configuration file names by taking the camera configuration file name and appending a suffix, e.g. “-MD” for the Motion Detector module (generally, the uppercase initials of the module).
4. Run trackers:
void Tracking::run_trackers(Inputs *inputs, Results *results)
3.3.6 Results
The Results class is used as a storage class for tracking results. The results from tracking are added by the individual trackers. This class provides accessors and modifiers for all the tracking results:
inline TrackedObjectSet *get_tracked_objects()
{
    return tracked_objects;
}

inline void set_tracked_objects(TrackedObjectSet *new_tracked_objects)
{
    tracked_objects = new_tracked_objects;
}

inline Image *get_motion_image()
{
    return motion_image;
}

inline void set_motion_image(Image *new_image)
{
    motion_image = new_image;
}

inline Image *get_background_image()
{
    return background_image;
}

inline void set_background_image(Image *new_image)
{
    background_image = new_image;
}

inline Image *get_difference_image()
{
    return difference_image;
}

inline void set_difference_image(Image *new_image)
{
    difference_image = new_image;
}

inline Image *get_thresholded_difference_image()
{
    return thresholded_difference_image;
}

inline void set_thresholded_difference_image(Image *new_image)
{
    thresholded_difference_image = new_image;
}

inline Image *get_filtered_difference_image()
{
    return filtered_difference_image;
}

inline void set_filtered_difference_image(Image *new_image)
{
    filtered_difference_image = new_image;
}
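As a usage illustration, a tracker might store and retrieve its output through these accessors and modifiers as follows; the function and variable names are made up for the example:

void store_tracking_output(Results *results,
                           Image *difference_image,
                           TrackedObjectSet *tracked_objects)
{
    results->set_difference_image(difference_image);   // "Modifier"
    results->set_tracked_objects(tracked_objects);     // "Modifier"

    // a later stage reads the stored results back via the "Accessor"
    TrackedObjectSet *current_objects = results->get_tracked_objects();
    (void) current_objects;   // (silences an unused variable warning in this sketch)
}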
4. Build the PCA shape model (using process_sequence) and output it to the current model file,
6. Run the tracker with the current model on the segmented training images. We consider each training image as a sub-sequence of “n” identical images. The tracker is run on each sub-sequence so that the final shape has had a chance to converge onto the precise training image shape.
8. goto 3.
(The algorithm was kindly provided by Adam Baumberg in a personal communication on Wed Jun 5 2002.)
Step 5 is needed to ensure that the fit to the training shapes does not degrade over time; however, for a large training set and a couple of iterations it is not critical. There used to be a short program called add_noise.cc that did this; however, this program is not part of the current distribution.
• The get_current method, defined in the ImageSource class, gets the current image or frame.
• The get_next method gets the next image or frame. On the first call it returns the first image; get_current may not be called before get_next. The get_next method modifies the value of the current pointer (see the sketch after this list).
• internal for Standalone mode: files stored on the hard drive (video images),
• Calibration inputs, if there are any.
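A hedged sketch of the get_current()/get_next() protocol follows; the processing call is illustrative, and it is assumed that a NULL return signals the end of the input:

Image *frame = image_source->get_next();   // first call: returns the first image
while (frame != NULL)                      // assumption: NULL signals end of input
{
    process_frame(frame);                  // illustrative per-frame processing
    frame = image_source->get_next();      // advances the current pointer
}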
Results
All data are exchanged via a central class, Results. The results are the outputs of the ReadingPeopleTracker. These results contain:
• Images: motion image, background image, difference image and thresholded difference image.
• TrackedObjectSet:
– can contain invisible (not detected) hypotheses or predictions,
– tracking result/output:
∗ only Profile,
∗ only older result,
∗ only visible.
– Observation class: each detected region or profile is associated with a new Observation object which dates the frames of the region or profile. For example:
class Profile → class ProfileSet : Observation,
class Region → class RegionSet : Observation,
class NewX → class NewXSet : Observation.
4.3.3 Detection
• Measurements: each picture is compared to the original or the updated background.
• No “memory”: a blob is simply detected at the instant t.
For these reasons there is no need for a post_process_frame(), as there is nothing to clean up.
4.3.4 Tracking
The tracking module includes the detection module.
• Over time: the aim is to follow one or many blobs, using track_old_objects().
• Measurements: the measurements are done in a loop which predicts the position of an object for the next frame. The previous results are stored in current. It is checked whether the prediction was right. The prediction may come from:
– the module itself (AST),
– other modules (RT).
• MotionDetector,
• RegionTracker,
• ActiveShapeTracker,
• HeadDetector.
Figure 14 shows the organisation of the tracking and detection modules and how
these modules interact with each other from the input to the output. The video
input is taken by the MotionDetector module. This module applies some filtering
to the image and obtains detected regions. The detected regions are used as input for the RegionTracker module.
(Figure 14: data flow between the modules. The MotionDetector applies me[di]an filtering, filtering and differencing to the video image and maintains the background image; the RegionTracker matches processed, predicted, static and new regions, removes doubles and revises predictions; the HeadDetector and the ActiveShapeTracker work on the identified regions and produce the tracked profiles.)
• Then the program tries to match these regions with the human features data.
• When a static region is detected, the program adds the region to the back-
ground image.
The HeadDetector module takes its input from the regions identified by the RegionTracker module. The image difference obtained in the MotionDetector is used as input to the ActiveShapeTracker module. The identified regions from the RegionTracker module are used by the HeadDetector and ActiveShapeTracker modules. The HeadDetector module then provides information about the mean profile to the ActiveShapeTracker module:
• First the module makes some hypotheses on the profile of the detected region, using data from RT and HD,
• then the module tries to fit a shape onto this region,
• when the new profile is identified, the output confirms the others on the tracking output.
and their configuration file names. For example, the name of the top level configuration file is TLC-00, and this file contains the names of the camera configuration files TLC-00.PTLF0n.C0n (n is the camera number, 1 to 4). Each camera configuration file contains information on the trackers used and their configuration file names; for example, the RegionTracker file name will be TLC-00.PTLF0n.C0n-RT.
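For a hypothetical two-camera set-up, this naming scheme yields a configuration file hierarchy like the following:

TLC-00                  top level configuration file
TLC-00.PTLF01.C01       configuration file for camera 1
TLC-00.PTLF01.C01-RT    RegionTracker of camera 1
TLC-00.PTLF02.C02       configuration file for camera 2
TLC-00.PTLF02.C02-RT    RegionTracker of camera 2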
6 Libraries Description
This section deals with the 7 libraries and the most prominent classes of the Reading People Tracker. It contains information about the class definitions and descriptions of their behaviour.
• The PCA library:
– PCAclass.cc: defines the PCAclass, a class for carrying out Principal Component Analysis on arbitrary data.
– MeanProfile.cc: defines the MeanProfile class, which calculates an updated mean profile.
– HMetric.cc: defines the HMetric class.
– ProfileSequence.cc: defines the ProfileSequence class, which allows a number of manipulations of ProfileSets stored in a file. It is only used by the process_sequence program to generate and manipulate PCA models.
– NagMatrix.cc: defines the NagMatrix class for manipulating (2D) matrices (uses either the NAG or the BLAS/LAPACK library).
– NagVector.cc: defines the NagVector class for manipulating realno n-vectors.
– SplineMatrix.cc: defines the SplineMatrix class, with methods to convert a Region Boundary to spline form.
– SplineWeights.cc: defines the SplineWeights class.
MinimumSource, MiddleSource, NoisySource, NeighbourSource, RepeatSource, RotateSource, ResampleSource, VFlipSource, ThresholdSource, ThresholdedDifferenceSource, SobelEdgeSource, SobelSource, SubImageSource, SimpleDiffSource and SkipSource classes. This file is not included directly; it is included by PipeSource.h.
– PnmSource.cc: defines the PnmSource class and a utility function to read a PNM file header; it returns 5 for P5 etc., and 0 on error. NOTE: it does not open or close the file, and it leaves the file pointer at the first data byte.
– PnmStream.h: defines the PnmStream class, which is given a Unix command that generates a stream of PNM images on stdout, or simply an input stream containing PNM images. The class connects to the stream to generate images accessed using the get_next() function; note that the get_next() method is implemented in PgmSource.cc.
– RGB32Image.cc: defines the RGB32Image class, a 24-bit (plus alpha) colour image class derived from the generic Image class in Image.h/Image.cc.
– XanimStream.h: defines the XanimStream class.
ForegroundEdgeDetector class, MovingEdgeDetector class (which looks at spatial and temporal derivatives but does not assume that a background image is available), SimpleEdgeDetector class, SobelDetector class and GenericEdgeDetector class.
– HumanFeatureTracker.cc (structure copied from ActiveShapeTracker.cc): defines the HumanFeatureTracker class, a tracker class which tracks human features such as the head etc.
– Inputs.cc: defines the Inputs class.
– MotionDetector.cc: defines the MotionDetector class, with support for an external motion image source; it implements the concept for the creation of the motion image: colour filtering techniques, a pre-calculated difference image and dilation of the motion image.
– OcclusionHandler.h: defines the OcclusionHandler base class to handle occlusion.
– OcclusionImage.cc: defines the OcclusionImage class.
– PeopleTracker.cc: defines the PeopleTracker class, which handles and schedules trackers and outputs tracking results.
– RegionTracker.cc: defines the RegionTracker class, which tracks regions from frame to frame.
– ScreenOutput.cc
– SkeletonTracker.cc (structure copied from HumanFeatureTracker.h): defines the SkeletonTracker class, which tracks a human's “skeleton”.
– TrackedObject.cc: a storage class for tracked objects (person, group, car, other), holding all data from all trackers.
– TrackedObjectSet.cc: defines the TrackedObjectSet class, a list to hold ‘TrackedObject’s.
– Tracking.cc
• The XML library:
– BufferedSlaveXMLRegionSource.cc: defines the BufferedSlaveXMLRegionSource class, which defines an interface to XML Region data as defined by the XML Schema namespace http://www.cvg.cs.reading.ac.uk/ADVISOR (the current name). An instance of this class will read and buffer XML data given through handle_new_blob_data(...) until it is queried by get_next(). get_next() will parse the XML data, return a RegionSet and delete the XML buffer (a usage sketch follows this list).
– XMLRegionHandler.cc: defines the XMLRegionHandler class, which defines an interface to XML Region data as defined by the XML Schema namespace http://www.cvg.cs.reading.ac.uk/ADVISOR (the current name). Similar in design to our ImageSource classes, some methods are pure virtual and the class should therefore not be instantiated directly. The XMLRegionHandler class is derived from the XML SAX2 DefaultHandler class, which is designed to ignore all requests; we only redefine the methods that we need, so that there is little overhead.
– XMLRegionSource.cc (abstract): this class defines an interface to XML Region data as defined by the XML Schema namespace http://www.cvg.cs.reading.ac.uk/ADVISOR (the current name). Similar in design to our ImageSource classes, some methods are pure virtual and the class should therefore not be instantiated directly. The XMLRegionSource class uses the XMLRegionHandler class to extract data from the XML structures.
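The following is a hedged usage sketch of the buffering protocol described for BufferedSlaveXMLRegionSource above; the parameter list of handle_new_blob_data(...) is not documented here, so the one shown is an assumption:

// feed and query a BufferedSlaveXMLRegionSource (parameters assumed)
RegionSet *receive_regions(BufferedSlaveXMLRegionSource *region_source,
                           char *xml_data, unsigned int xml_data_length)
{
    // buffer the incoming XML region data until it is queried
    region_source->handle_new_blob_data(xml_data, xml_data_length);

    // get_next() parses the buffered XML, returns a RegionSet
    // and deletes the internal XML buffer
    return region_source->get_next();
}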
Figure 16: File Name Case Sensitivity configuration (Edit/Preferences...)
The next step is to set the path map of the file locations in order to indicate to the C++ Analyser where the files to be analysed are situated on the hard drive.
Figure 18: Path map user data configuration (Edit/Path Map...)
Create a new project: when the analyser is configured properly, create a new project (File/New) and add to it all the files to analyse.
Include all files (source code and libraries): this step is an important one; add to the project all the files to be analysed, including all the libraries (#include <...> and #include "...").
Define variables: as the program uses some global variables, you have to define them in the analyser.
Figure 20: Global variable definitions (Edit/Defined Symbols...)
Launch the analysis process: when you have included all the files and defined all the variables, you can run the analysis: Action/Analyse.
• Cannot find anything named ...: you probably have not included all the libraries.
• The inclusion of this library introduces a circular dependency: declare the file as Type 2; go to Edit, then Type 2 Contexts..., and edit the context, including the file which generates the error.
Figure 21: Editing the Type 2 inclusions (Edit/Type 2 Contexts...)
References
[1] M. Satpathy, N. T. Siebel, and D. Rodríguez, “Maintenance of object oriented systems through re-engineering: A case study,” in Proceedings of the IEEE International Conference on Software Maintenance (ICSM 2002), Montréal, Canada, pp. 540–549, October 2002.
[4] Rational Software Corporation, Cupertino, USA, Rational Rose 2000e, 2000.
[6] D. Gries, The Science of Programming. New York, USA: Springer-Verlag, 1981.
[8] J. Rumbaugh, I. Jacobson, and G. Booch, The Unified Modeling Language Ref-
erence Manual. Reading, USA: Addison-Wesley, 1999.
[9] N. T. Siebel and S. Maybank, “Fusion of multiple tracking algorithms for robust people tracking,” in Proceedings of the 7th European Conference on Computer Vision (ECCV 2002), København, Denmark (A. Heyden, G. Sparr, M. Nielsen, and P. Johansen, eds.), vol. IV, pp. 373–387, May 2002.
[13] N. T. Siebel, Designing and Implementing People Tracking Applications for Au-
tomated Visual Surveillance. PhD thesis, Department of Computer Science, The
University of Reading, Reading, UK, January 2003. To appear.