PAYROLL MANAGEMENT SYSTEM
(A C++ Project)
SUBMITTED BY:
ACKNOWLEDGEMENT
1. Preface
2. System Study
2.1. Introduction
2.2. Feasibility Study
2.3. System Overview
3. System Analysis
3.1. Importance of Computerized
PAYROLL MANAGEMENT System
3.2. About the Project
3.3. Functional Requirements
4. System Design
4.1. System Development Cycle
4.2. Context Level DFD
4.3. DFD for Payroll System
4.4. Search Process
5. Data Dictionary
5.1. Physical Design
5.2. Source Code
6. Testing
6.1. Testing Phases
6.2. Verification & Validation
6.3. Reports
7. System Implementation
8. Users Manual
8.1. Operational Instructions for the User
8.2. Introduction to Various Operations
9. Bibliography
PROBLEM DEFINITION
1.2 NEED:
I have designed the proposed system in C++ to automate the payroll
process.
The complete set of rules and procedures related to payroll and to
generating reports is called the PAYROLL MANAGEMENT SYSTEM. My project
gives a brief idea of automated payroll activities.
The following points give detailed information on the need for the
proposed system:
Efficiency: The basic needs of the project are accuracy and efficiency. The
project should be efficient enough that, whenever an employee is added, his
record can be added, deleted, and displayed, and his payslip generated.
Security: Security is the main criterion for the proposed system, since
illegal access may corrupt the database. So security has to be built into
this project.
OBJECTIVE
During the past several decades the personnel function has been
transformed from a relatively obscure record-keeping staff role into a
central, top-level management function. Many factors have influenced
this transformation, such as technological advances, professionalism,
and the general recognition of human beings as the most important
resource.
1. To propose a set of solutions that can realize the project goal. These
solutions are usually descriptions of what the new system should look
like.
BENEFITS
Increasing income, or
Decreasing costs, or
Both
Technical Feasibility:
Technical Feasibility includes existing and new H/W and S/W requirements
that are required to operate the project on the platform Turbo C++. The basic
S/W requirement is TURBO C++ in which the front end of the hospital
management project has been done. The basic entry forms are developed in
TURBO C++ and the data is stored in the FILES.
Operational Feasibility:
Operational feasibility is mainly concerned with issues like whether the
system will be used if it is developed and implemented. Whether there will
be resistance from users that will effect the possible application benefits?
The essential questions that help in testing the technical feasibility of a
system are following:
Legal Feasibility:
Alternatives:
Costs:
According to the Payroll System, Rs. 250 is paid per day for a single
employee.
Amount collected from the employees this year
= 12 * (40 * (250 + 450) + 150 * 30 * 30 + 250000)
= Rs. 49,56,000
Now, using the Net Present Value method for cost-benefit analysis, we have:
Net Present Value (origin) = Benefits - Costs
= 1,48,68,000 - 4,91,000
= Rs. 1,43,77,000
1st year:
Investment = 4,91,000
Benefit = 49,56,000
2nd year:
Investment = 4,91,000
Benefit = 1,04,12,000
3rd year:
Investment = 4,91,000
Benefit = 1,58,59,000
From cost and benefit analysis we have found that the project is
economically feasible since it is showing great gains (approx. above
3000%).
After economic feasibility, operational feasibility is assessed. Here the
major issue is to see, if the system is developed, what the likelihood is
that it will be implemented and put into operation. Will there be any
resistance from its users?
It is clear that the new automated system will work more efficiently and
faster. So the users will certainly accept it. Also they are being actively
involved in the development of the new system. So our system is
operationally feasible.
After the feasibility study has been done and it is found to be feasible, the
management has approved this project.
The analyst needs data about the requirements and demands of the project
undertaken, and the techniques employed to gather this data are known as
fact-finding techniques. There are various kinds of techniques; the most
popular among them are interviews, questionnaires, record reviews, CASE
tools, and the personal observations made by the analyst himself.
Interviews
One very essential aspect of conducting the interview is that the interviewer
should first establish a rapport with the interviewee. It should also be taken
into account that the interviewee may or may not be a technician and the
analyst should prefer to use day to day language instead of jargon and
technical terms.
The advantage of the interview is that the analyst has a free hand and can
extract almost all the information from the concerned people, but as it is a
very time-consuming method, he should also employ other means such as
questionnaires, record reviews, etc. This may also help the analyst to
verify and validate the information gained. Interviewing should be
approached logically, and from a general point of view the following
guidelines can be very beneficial for a successful interview:
1. Set the stage for the interview.
2. Establish rapport; put the interviewee at ease.
3. Phrase questions clearly and succinctly.
4. Be a good listener; avoid arguments.
5. Evaluate the outcome of the interview.
The interviews are of the two types namely structured and unstructured.
I. Structured Interview
Structured interviews are those where the interviewee is asked a standard set
of questions in a particular order. All interviewees are asked the same set
of questions. The questions are further divided into two kinds of formats for
conducting this type of interview.
Questionnaires:
Questionnaires are another way of information gathering where the potential
users of the system are given questionnaires to be filled up and returned to
the analyst.
Questionnaires are useful when the analyst needs to gather information from
a large number of people, since it is not possible to interview each
individual. They are also useful when time is very short. If the analyst
guarantees the anonymity of the respondent, then the respondent answers the
questionnaire very honestly and critically.
The analyst should design and frame questionnaires sensibly, with a clear
objective, so as to do justice to the cost incurred in their development and
distribution.
Record Reviews
Records and reports are the collection of information and data accumulated
over time by the users about the system and its operations. These can also
shed light on the requirements of the system and the modifications it has
undergone. Records and reports may have a limitation if they are not up-to-
date or if some essential links are missing; all the changes the system
undergoes may not be recorded. The analyst may scrutinize the records either
at the beginning of his study, which may give him a fair introduction to the
system and make him familiar with it, or at the end, which will provide
the analyst with a comparison between what exactly is/was desired from the
system and its current working.
On-Site Observation
On-site observation is one of the most effective tools available to the
analyst, where the analyst personally goes to the site and discovers the
functioning of the system. As an observer, the analyst can gain first-hand
knowledge of the activities, operations, and processes of the system
on-site; hence here the role of the analyst is that of an information
seeker. This information is very meaningful, as it is unbiased and has been
gathered directly by the analyst. This exposure also sheds some light on the
actual workings of the system as compared to what has already been
documented, and thus the analyst gets closer to the system. This technique
is also time-consuming, and the analyst should not jump to conclusions or
draw inferences from small samples of observation; rather, the analyst
should be more thorough.
Analyst: I'll come straight to the point. Don't hesitate; you can be
as open as you want. There are no restrictions.
Administrator: I'll give you my whole contribution.
1. What are your expectations of the new (computerized) system? Rate
the following on a scale of 1-4, giving a low value for low priority.
(a) better cataloguing
(b) better managing of users
(c) better accounts and employee management
(d) computer awareness
(e) any other________________
2. Report/Details Functions
Transaction System
Decision Support System
Transaction System:
A transaction is a record of some well-defined, single, and usually small
occurrence in a system. Transactions are input into the computer to
update the database files. The system checks the entered data for accuracy:
numeric data must appear in numeric fields and character data in
character fields. Once all the checks are made, the transaction is used to
update the database. Transactions can be input in on-line mode or batch
mode. In on-line mode, transactions are entered and the database is updated
almost instantaneously. In batch mode, transactions are collected into
batches, which may be held for a while and input later.
The user can store information as required, which can be used for
comparison with other reports.
FUNCTION DETAILS
The basic objective of the PAYROLL MANAGEMENT SYSTEM is to
generalize and simplify the monthly or day-to-day activities of payroll,
like admission of a new employee, payroll and payslip generation for a
particular employee, reports on the number of employees, deletion of
employee records, etc., which have to be performed repeatedly on a regular
basis. Providing an efficient, fast, reliable, and user-friendly system is
the basic motto behind this exercise.
Let us now discuss how different functions handle the structure and data
files:
This is the function used to open a new record for an employee so that
he/she can be assigned a separate record. On that screen, an EMPLOYEE
number is generated automatically. After opening a new record for the
employee, a CODE is finally assigned to the EMPLOYEE.
This function is used to enroll an employee in our company after entering
all his personal details like name, address, phone, and sex, including date
of joining, whether or not he has his own conveyance, and his salary.
2. Function EDIT( )
4. Function DISPLAY_RECORD()
1999 Standard C is not widespread yet, so please do not require its features
in programs. It is ok to use its features if they are present. However, it is
easy to support pre-standard compilers in most programs, so if you know
how to do that, feel free. If a program you are maintaining has such support,
you should try to keep it working.
int
foo (int x, int y)
...

int
foo (x, y)
     int x, y;
...
You need such a declaration anyway, in a header file, to get the benefit of
prototypes in all the files where the function is called. And once you have
the declaration, you normally lose nothing by writing the function definition
in the pre-standard style.
This technique does not work for integer types narrower than int. If you
think of an argument as being of a type narrower than int, declare it as
int instead.
There are a few special cases where this technique is hard to use. For
example, if a function argument needs to hold the system type dev_t, you
run into trouble, because dev_t is shorter than int on some machines; but
you cannot use int instead, because dev_t is wider than int on some
machines. There is no type you can safely use on all machines in a non-
standard definition. The only way to support non-standard C and pass such
an argument is to check the width of dev_t using Autoconf and choose the
argument type accordingly. This may not be worth the trouble.
Conditional Compilation
if (HAS_FOO)
...
else
...
instead of:
#ifdef HAS_FOO
...
#else
...
#endif
A modern compiler such as GCC will generate exactly the same code in both
cases, and we have been using similar techniques with good success in
several projects.
While this is not a silver bullet solving all portability problems, following
this policy would have saved the GCC project alone many person-hours, if
not days, per year.
#ifdef REVERSIBLE_CC_MODE
#define HAS_REVERSIBLE_CC_MODE 1
#else
#define HAS_REVERSIBLE_CC_MODE 0
#endif
Source-file-name:lineno: message
If you want to mention the column number, use one of these formats:
Source-file-name:lineno:column: message
Source-file-name:lineno.column: message
Line numbers should start from 1 at the beginning of the file, and column
numbers should start from 1 at the beginning of the line. (Both of these
conventions are chosen for compatibility.) Calculate column numbers
assuming that space and all ASCII printing characters have equal width and
assuming tab stops every 8 columns.
The string message should not begin with a capital letter when it follows a
program name and/or file name. Also, it should not end with a period.
Error messages from interactive programs, and other messages such as usage
messages, should start with a capital letter. But they should not end with a
period.
FUNCTIONAL REQUIREMENT
The platform is the hardware and software combination that the
Client/Server runs on. While hardware systems vary widely in features and
capabilities, certain common features are needed for the operating system
software.
HARDWARE SPECIFICATIONS
Hardware is a set of physical components, which performs the functions of
applying appropriate, predefined instructions. In other words, one can say
that electronic and mechanical parts of computer constitute hardware.
Video displays
Earlier, the IBM-compatible computers had a simple text-only monochrome
for the video display. Now, they use the advanced high-resolution color
displays. For Client/Server systems one should have VGA or better video
display.
In the following table, TLA stands for the various types of adapters that
can be used with IBM-compatible PCs, and the standard resolution for each
one of them is given.
Disk Drives
Each client computer must have enough disk space available to store the
client portion of the software and any data files that need to be stored
locally.
It is best to provide a local disk drive for each client computer. However
Client/Server applications can use the diskless workstations for which the
only disk access is the disk storage located on a network file server. The
hard disk drive at database server should be at least of the capacity 4.1 GB.
But it is recommended to have one of capacity 8.2 GB.
Mouse
A mouse is a must for the client software running under Windows
OS or any other graphical environment.
Keyboard
Each client must have a 104-key extended keyboard.
SOFTWARE REQUIREMENTS
The software is a set of procedures of coded information, or a program,
which, when fed into the computer hardware, enables the computer to perform
various tasks. Software is like the current inside a wire: it cannot be
seen, but its effect can be felt.
[Figure: System Development Life Cycle - Initial Investigation; Feasibility
Study; Requirement Determination; Requirement Analysis; Design of
Information System; Test Plan; Physical System Configuration (Hardware
Study, Schedule, Budget, Data); System Evaluation.]
[Context-Level DFD: the EMPLOYEE entity supplies a CODE to the PAYROLL
MANAGEMENT SYSTEM, whose outputs are the deleted employee record and the
generated payslip.]
DATA FLOW DIAGRAM
OPENING AN EMPLOYEE RECORD
[DFD: 1. EMPLOYEE - generate a new CODE number; 1.1 Display form; 1.2 Get
details; 1.3 Open new code; 1.4 Update the employee FILE.]
DATA FLOW DIAGRAM
ADMISSION OF A NEW EMPLOYEE
[DFD: 1. EMPLOYEE - assign a new code number; 1.1 Display form; 1.2 Get
employee details; 1.3 Generate; 1.4 Update and display; data stored in the
FILE.]
DATA FLOW DIAGRAM
RECORD MODIFICATION
[DFD: 1. USER - read the employee code and scan the record; 2. Show the
details of the record from the FILE; 3. Modify the details of the record and
update the FILE.]
DATA FLOW DIAGRAM
DELETION OF EMPLOYEE
[DFD: 1. EMPLOYEE - scan the employee number; 1.1 Display form; 1.2 Get
details; 1.4 Update the employee FILE.]
DATA FLOW DIAGRAM
LISTING OF EMPLOYEES
[DFD: 1. EMPLOYEE - read the code number; 2. Select record from the
database FILE; 3. Copy selected record; 4. Compute total; 5. Select record;
6. Copy selected record; 7. Compute total; 8. Generate list; final output
to screen/printer (OUTPUT UNIT).]
DATA FLOW DIAGRAM
GENERATING PAYSLIP OF EMPLOYEE
[DFD: 1. MANAGEMENT - read the employee number from the FILE; 2. Check the
record; 3. Compute the pay; 4. Update and close the database; payslip/cash
issued to the EMPLOYEE.]
DATA FLOW DIAGRAM
LIST OF ALL RECORDS
[DFD: 1. MANAGEMENT - read the request; 2. Select record from the FILE;
3. Copy selected record; 4. Compute total; 5. Select record; 7. Copy
selected record and compute; 8. Generate total list; final output to
screen/printer (OUTPUT UNIT).]
System Design
The design document that we will develop during this phase is the blueprint
of the software. It describes how the solution to the customer's problem is
to be built. Since a solution to a complex problem isn't usually found on
the first try, iterations are most likely required. This is true for
software design as well. For this reason, any design strategy, design
method, or design language must be flexible and must easily accommodate
changes due to iterations in the design. Any technique or design needs to
support and guide the partitioning process in such a way that the resulting
sub-problems are as independent as possible from each other and can be
combined easily into the solution to the overall problem. Sub-problem
independence and easy combination of their solutions reduce the complexity
of the problem. This is the objective of the partitioning process.
Partitioning or decomposition during design involves three types of
decisions:
Define the boundaries along which to break;
Determine into how many pieces to break; and
Identify the proper level of detail at which design should stop and
implementation should start.
Basic design principles enable the software engineer to navigate the design
process; the following list, adapted and extended, suggests a set of
principles for software design:
The design should be free from "tunnel vision." A good designer should
consider alternative approaches, judging each based on the requirements of
the problem and the resources available to do the job.
The design should be traceable to the analysis model. Because a single
element of the design model often traces to multiple requirements, it is
necessary to have a means for tracking how requirements have been satisfied
by the design model.
The design should not reinvent the wheel. Systems are constructed using
a set of design patterns, many of which have likely been encountered before.
These patterns should always be chosen as an alternative to reinvention.
Time is short and resources are limited! Design time should be invested in
representing truly new ideas and integrating those patterns that already exist.
The design should "minimize the intellectual distance" between the software
and the problem as it exists in the real world. That is, the structure of the
software design should (whenever possible) mimic the structure of the
problem domain.
The design should exhibit uniformity and integration. A design is uniform if
it appears that one person developed the entire thing. Rules of style and
format should be defined for a design team before design work begins. A
design is integrated if care is taken in defining interfaces between design
components.
The design activity begins when the requirements document for the software
to be developed is available. This may be the SRS for the complete system,
as is the case if the waterfall model is being followed or the requirements for
the next "iteration" if the iterative enhancement is being followed or the
requirements for the prototype if the prototyping is being followed. While
the requirements specification activity is entirely in the problem domain,
design is the first step in moving from the problem domain toward the
solution domain. Design is essentially the bridge between requirements
specification and the final solution for satisfying the requirements.
The design of a system is essentially a blueprint or a plan for a solution for
the system. We consider a system to be a set of components with clearly
defined behavior that interacts with each other in a fixed defined manner to
produce some behavior or services for its environment. A component of a
system can be considered a system, with its own components. In a software
system, a component is a software module.
The design process for software systems, often, has two levels. At the first
level, the focus is on deciding which modules are needed for the system, the
specifications of these modules, and how the modules should be
interconnected. This is what is called the system design or top-level design.
In the second level, the internal design of the modules, or how the
specifications of the module can be satisfied, is decided. This design level is
often called detailed design or logic design. Detailed design essentially
expands the system design to contain a more detailed description of the
processing logic and data structures so that the design is sufficiently
complete for coding.
Because the detailed design is an extension of system design, the system
design controls the major structural characteristics of the system. The system
design has a major impact on the testability and modifiability of a system,
and it impacts its efficiency. Much of the design effort for designing
software is spent creating the system design.
The input to the design phase is the specifications for the system to be
designed. Hence, a reasonable entry criterion can be that the specifications
are stable and have been approved, hoping that the approval mechanism will
ensure that the specifications are complete, consistent, unambiguous, etc.
The output of the top-level design phase is the architectural design or the
system design for the software system to be built. This can be produced with
or without using a design methodology. A reasonable exit criterion for the
phase could be that the design has been verified against the input
specifications and has been evaluated and approved for quality.
A design can be object-oriented or function-oriented. In function-oriented
design, the design consists of module definitions, with each module
supporting a functional abstraction. In object-oriented design, the modules
in the design represent data abstractions (these abstractions are discussed
in more detail later). Among the function-oriented methods for design, one
particular methodology, the structured design methodology, is described in
some detail. In a function-oriented design approach, a system is viewed as a
transformation function, transforming the inputs to the desired outputs. The
purpose of the design phase is to specify the components of this
transformation function, so that each component is also a transformation
function. Hence, the basic output of the system design phase, when a
function-oriented design approach is being followed, is the definition of
all the major data structures in the system, all the major modules of the
system, and how the modules interact with each other.
Once the designer is satisfied with the design he has produced, the
design is to be precisely specified in the form of a document. To specify the
design, specification languages are used. Producing the design specification
is the ultimate objective of the design phase. The purpose of this design
document is quite different from that of the design notation. Whereas a
design represented using the design notation is largely to be used by the
designer, a design specification has to be so precise and complete that it can
be used as a basis of further development by other programmers. Generally,
design specification uses textual structures, with design notation helping in
understanding.
Scheduling
Scheduling of a software project does not differ greatly from scheduling of
any multi- task engineering effort. Therefore, generalized project scheduling
tools and techniques can be applied with little modification to software
projects.
Program evaluation and review technique (PERT) and critical path method
(CPM) are two project scheduling methods that can be applied to software
development. Both techniques are driven by information already developed
in earlier project planning activities.
Estimates of Effort
A decomposition of the product function
The selection of the appropriate process model and task set
Decomposition of tasks
Interdependencies among tasks may be defined using a task network. Tasks,
sometimes called the project Work Breakdown Structure (WBS), are defined
for the product as a whole or for individual functions.
Both PERT and CPM provide quantitative tools that allow the software
planner to (1) determine the critical path, the chain of tasks that
determines the duration of the project; (2) establish "most likely" time
estimates for individual tasks by applying statistical models; and
(3) calculate "boundary times" that define a "time window" for a particular
task.
Boundary time calculations can be very useful in software project
scheduling. Slippage in the design of one function, for example, can retard
further development of other functions. The important boundary times that
may be discerned from a PERT or CPM network are: (1) the earliest time that
a task can begin, when preceding tasks are completed in the shortest
possible time; (2) the latest time for task initiation before the minimum
project completion time is delayed; (3) the earliest finish, the sum of the
earliest start and the task duration; (4) the latest finish, the latest
start time added to the task duration; and (5) the total float, the amount
of surplus time or leeway allowed in scheduling tasks so that the network
critical path is maintained on schedule. Boundary time calculations lead to
a determination of the critical path and provide the manager with a
quantitative method for evaluating progress as tasks are completed.
Both PERT and CPM have been implemented in a wide variety of
automated tools that are available for the personal computer. Such tools are
easy to use and make the scheduling methods described previously available
to every software project manager.
//**********************************************************
//                     PROJECT PAYROLL
//**********************************************************

//**********************************************************
//                  INCLUDED HEADER FILES
//**********************************************************
#include <iostream.h>
#include <fstream.h>
#include <process.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <ctype.h>
#include <conio.h>
#include <dos.h>
//**********************************************************
//       THIS CLASS CONTAINS ALL THE DRAWING FUNCTIONS
//**********************************************************
class LINES
{
public :
void LINE_HOR(int, int, int, char) ;
void LINE_VER(int, int, int, char) ;
void BOX(int,int,int,int,char) ;
void CLEARUP(void) ;
void CLEARDOWN(void) ;
} ;
//**********************************************************
//    THIS CLASS CONTROLS ALL THE FUNCTIONS IN THE MENU
//**********************************************************
class MENU
{
public :
void MAIN_MENU(void) ;
private :
void EDIT_MENU(void) ;
void INTRODUCTION(void) ;
} ;
//**********************************************************
// THIS CLASS CONTROLS ALL THE FUNCTIONS RELATED TO EMPLOYEE
//**********************************************************
class EMPLOYEE
{
public :
void NEW_EMPLOYEE(void) ;
void MODIFICATION(void) ;
void DELETION(void) ;
void DISPLAY(void) ;
void LIST(void) ;
void SALARY_SLIP(void) ;
private :
void ADD_RECORD(int, char[], char[],
char[], int, int, int, char[], char, char, char,
float, float) ;
void MODIFY_RECORD(int, char [], char
[], char [], char [], char, char, char, float,
float) ;
void DELETE_RECORD(int) ;
int LASTCODE(void) ;
int CODEFOUND(int) ;
int RECORDNO(int) ;
int FOUND_CODE(int) ;
void DISPLAY_RECORD(int) ;
int VALID_DATE(int, int, int) ;
} ;
//**********************************************************
// THIS FUNCTION CONTROLS ALL THE FUNCTIONS IN THE MAIN MENU
//**********************************************************

//**********************************************************
// THIS FUNCTION CONTROLS ALL THE FUNCTIONS IN THE EDIT MENU
//**********************************************************

//**********************************************************
//         THIS FUNCTION DRAWS THE HORIZONTAL LINE
//**********************************************************

//**********************************************************
//          THIS FUNCTION DRAWS THE VERTICAL LINE
//**********************************************************

//**********************************************************
//               THIS FUNCTION DRAWS THE BOX
//**********************************************************

//**********************************************************
//   THIS FUNCTION CLEARS THE SCREEN LINE BY LINE UPWARD
//**********************************************************

//**********************************************************
//  THIS FUNCTION CLEARS THE SCREEN LINE BY LINE DOWNWARD
//**********************************************************
void LINES :: CLEARDOWN(void)
{
for (int i=1; i<=25; i++)
{
delay(20) ;
gotoxy(1,i) ; clreol() ;
}
}
//**********************************************************
//  THIS FUNCTION ADDS THE GIVEN DATA TO THE EMPLOYEE FILE
//**********************************************************

//**********************************************************
// THIS FUNCTION MODIFIES THE GIVEN DATA IN THE EMPLOYEE FILE
//**********************************************************

//**********************************************************
//  THIS FUNCTION DELETES THE RECORD IN THE EMPLOYEE FILE
//  FOR THE GIVEN EMPLOYEE CODE
//**********************************************************

//**********************************************************
//  THIS FUNCTION RETURNS 0 IF THE GIVEN CODE IS NOT FOUND
//**********************************************************

//**********************************************************
//  THIS FUNCTION RETURNS THE RECORD NO. OF THE GIVEN CODE
//**********************************************************

//**********************************************************
//    THIS FUNCTION DISPLAYS THE LIST OF THE EMPLOYEES
//**********************************************************
void EMPLOYEE :: LIST(void)
{
clrscr() ;
int row = 6 , found=0, flag=0 ;
char ch ;
gotoxy(31,2) ;
cout <<"LIST OF EMPLOYEES" ;
gotoxy(30,3) ;
cout <<"~~~~~~~~~~~~~~~~~~~" ;
gotoxy(1,4) ;
cout <<"CODE NAME                     PHONE    DOJ         DESIGNATION      GRADE SALARY" ;
gotoxy(1,5) ;
cout <<"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" ;
fstream file ;
file.open("EMPLOYEE.DAT", ios::in) ;
file.seekg(0,ios::beg) ;
while (file.read((char *) this,
sizeof(EMPLOYEE)))
{
flag = 0 ;
delay(20) ;
found = 1 ;
gotoxy(2,row) ;
cout <<code ;
gotoxy(6,row) ;
cout <<name ;
gotoxy(31,row) ;
cout <<phone ;
gotoxy(40,row) ;
cout <<dd <<"/" <<mm <<"/" <<yy ;
gotoxy(52,row) ;
cout <<desig ;
gotoxy(69,row) ;
cout <<grade ;
if (grade != 'E')
{
gotoxy(74,row) ;
cout <<basic ;
}
else
{
gotoxy(76,row) ;
cout <<"-" ;
}
if ( row == 23 )
{
flag = 1 ;
row = 6 ;
gotoxy(1,25) ;
cout <<"Press any key to continue or Press <ESC> to exit" ;
ch = getch() ;
if (ch == 27)
break ;
clrscr() ;
gotoxy(31,2) ;
cout <<"LIST OF EMPLOYEES" ;
gotoxy(30,3) ;
cout <<"~~~~~~~~~~~~~~~~~~~" ;
gotoxy(1,4) ;
cout <<"CODE NAME                     PHONE    DOJ         DESIGNATION      GRADE SALARY" ;
gotoxy(1,5) ;
cout <<"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" ;
}
else
row++ ;
}
if (!found)
{
gotoxy(5,10) ;
cout <<"\7Records not found" ;
}
if (!flag)
{
gotoxy(1,25) ;
cout <<"Press any key to continue..." ;
getche() ;
}
file.close () ;
}
//**********************************************************
//   THIS FUNCTION DISPLAYS THE RECORD OF THE EMPLOYEES
//**********************************************************

//**********************************************************
//      THIS FUNCTION GIVES DATA TO ADD TO THE FILE
//**********************************************************
ecode = LASTCODE() + 1 ;
if (ecode == 1)
{
ADD_RECORD(ecode, "null", "null", "null", 0, 0, 0, "null", 'n', 'n', 'n', 0.0, 0.0) ;
DELETE_RECORD(ecode) ;
}
gotoxy(21,5) ;
cout <<ecode ;
do
{
valid = 1 ;
gotoxy(5,25) ; clreol() ;
cout <<"Enter the name of the Employee" ;
gotoxy(20,7) ; clreol() ;
gets(ename) ;
strupr(ename) ;
if (ename[0] == '0')
return ;
if (strlen(ename) < 1 || strlen(ename) > 25)
{
valid = 0 ;
gotoxy(5,25) ; clreol() ;
cout <<"\7Enter correctly (Range: 1..25)" ;
getch() ;
}
} while (!valid) ;
do
{
valid = 1 ;
gotoxy(5,25) ; clreol() ;
cout <<"Enter Address of the Employee" ;
gotoxy(20,8) ; clreol() ;
gets(eaddress) ;
strupr(eaddress) ;
if (eaddress[0] == '0')
return ;
if (strlen(eaddress) < 1 || strlen(eaddress) > 30)
{
valid = 0 ;
gotoxy(5,25) ; clreol() ;
cout <<"\7Enter correctly (Range: 1..30)" ;
getch() ;
}
} while (!valid) ;
do
{
valid = 1 ;
gotoxy(5,25) ; clreol() ;
cout <<"Enter Phone no. of the Employee or Press <ENTER> for none" ;
gotoxy(20,9) ; clreol() ;
gets(ephone) ;
if (ephone[0] == '0')
return ;
if ((strlen(ephone) < 7 && strlen(ephone) > 0) || (strlen(ephone) > 9))
{
valid = 0 ;
gotoxy(5,25) ; clreol() ;
cout <<"\7Enter correctly" ;
getch() ;
}
} while (!valid) ;
if (strlen(ephone) == 0)
strcpy(ephone,"-") ;
char tday[3], tmonth[3], tyear[5] ;
int td ;
do
{
valid = 1 ;
do
{
gotoxy(5,25) ; clreol() ;
cout <<"ENTER DAY OF JOINING" ;
gotoxy(13,13) ; clreol() ;
gets(tday) ;
td = atoi(tday) ;
d = td ;
if (tday[0] == '0')
return ;
} while (d == 0) ;
do
{
gotoxy(5,25) ; clreol() ;
cout <<"ENTER MONTH OF JOINING" ;
gotoxy(13,14) ; clreol() ;
gets(tmonth) ;
td = atoi(tmonth) ;
m = td ;
if (tmonth[0] == '0')
return ;
} while (m == 0) ;
do
{
gotoxy(5,25) ; clreol() ;
cout <<"ENTER YEAR OF JOINING" ;
gotoxy(13,15) ; clreol() ;
gets(tyear) ;
td = atoi(tyear) ;
y = td ;
if (tyear[0] == '0')
return ;
} while (y == 0) ;
if (d>31 || d<1)
valid = 0 ;
else
if (((y%4)!=0 && m==2 && d>28) ||
((y%4)==0 && m==2 && d>29))
valid = 0 ;
else
if ((m==4 || m==6 || m==9 || m==11) &&
d>30)
valid = 0 ;
else
if (y<1990 || y>2020)
valid = 0 ;
if (!valid)
{
valid = 0 ;
gotoxy(5,25) ; clreol() ;
cout <<"\7Enter correctly" ;
getch() ;
gotoxy(13,14) ; clreol() ;
gotoxy(13,15) ; clreol() ;
}
} while (!valid) ;
do
{
valid = 1 ;
gotoxy(5,25) ; clreol() ;
cout <<"Enter Designation of the Employee" ;
gotoxy(20,17) ; clreol() ;
gets(edesig) ;
strupr(edesig) ;
if (edesig[0] == '0')
return ;
if (strlen(edesig) < 1 || strlen(edesig) > 15)
{
valid = 0 ;
gotoxy(5,25) ; clreol() ;
cout <<"\7Enter correctly (Range: 1..15)" ;
getch() ;
}
} while (!valid) ;
do
{
gotoxy(5,25) ; clreol() ;
cout <<"Enter Grade of the Employee (A,B,C,D,E)" ;
gotoxy(20,18) ; clreol() ;
egrade = getche() ;
egrade = toupper(egrade) ;
if (egrade == '0')
return ;
} while (egrade < 'A' || egrade > 'E') ;
if (egrade != 'E')
{
gotoxy(5,19) ;
cout <<"House (y/n) : " ;
gotoxy(5,20) ;
cout <<"Conveyance (y/n) : " ;
gotoxy(5,22) ;
cout <<"Basic Salary : " ;
do
{
gotoxy(5,25) ; clreol() ;
cout <<"ENTER IF HOUSE ALLOWANCE IS ALLOTTED TO EMPLOYEE OR NOT" ;
gotoxy(22,19) ; clreol() ;
ehouse = getche() ;
ehouse = toupper(ehouse) ;
if (ehouse == '0')
return ;
} while (ehouse != 'Y' && ehouse != 'N') ;
do
{
gotoxy(5,25) ; clreol() ;
cout <<"ENTER IF CONVEYANCE ALLOWANCE IS ALLOTTED TO EMPLOYEE OR NOT" ;
gotoxy(22,20) ; clreol() ;
econv = getche() ;
econv = toupper(econv) ;
if (econv == '0')
return ;
} while (econv != 'Y' && econv != 'N') ;
}
do
{
valid = 1 ;
gotoxy(5,25) ; clreol() ;
cout <<"ENTER LOAN AMOUNT IF ISSUED" ;
gotoxy(22,21) ; clreol() ;
gets(t1) ;
t2 = atof(t1) ;
eloan = t2 ;
if (eloan > 50000)
{
valid = 0 ;
gotoxy(5,25) ; clreol() ;
cout <<"\7SHOULD NOT BE GREATER THAN 50000" ;
getch() ;
}
} while (!valid) ;
if (egrade != 'E')
{
do
{
valid = 1 ;
gotoxy(5,25) ; clreol() ;
cout <<"ENTER BASIC SALARY OF THE EMPLOYEE" ;
gotoxy(22,22) ; clreol() ;
gets(t1) ;
t2 = atof(t1) ;
ebasic = t2 ;
if (t1[0] == '0')
return ;
if (ebasic > 50000)
{
valid = 0 ;
gotoxy(5,25) ; clreol() ;
cout <<"\7SHOULD NOT BE GREATER THAN 50000" ;
getch() ;
}
} while (!valid) ;
}
gotoxy(5,25) ; clreol() ;
do
{
gotoxy(5,24) ; clreol() ;
cout <<"Do you want to save (y/n) " ;
ch = getche() ;
ch = toupper(ch) ;
if (ch == '0')
return ;
} while (ch != 'Y' && ch != 'N') ;
if (ch == 'N')
return ;
ADD_RECORD(ecode, ename, eaddress, ephone, d, m, y, edesig, egrade, ehouse, econv, eloan, ebasic) ;
}
//**********************************************************
// THIS FUNCTION GIVES THE CODE FOR THE DISPLAY OF THE RECORD
//**********************************************************

//**********************************************************
// THIS FUNCTION GIVES DATA FOR THE MODIFICATION OF THE
// EMPLOYEE RECORD
//**********************************************************

//**********************************************************
// THIS FUNCTION GIVES THE CODE NO. FOR THE DELETION OF THE
// EMPLOYEE RECORD
//**********************************************************

//**********************************************************
// THIS FUNCTION RETURNS 0 IF THE GIVEN DATE IS INVALID
//**********************************************************

//**********************************************************
// THIS FUNCTION PRINTS THE SALARY SLIP FOR THE EMPLOYEE
//**********************************************************

//**********************************************************
// MAIN FUNCTION CALLING THE MAIN MENU
//**********************************************************
void main(void)
{
MENU menu ;
menu.MAIN_MENU() ;
}
TESTING
Functional Testing:
In functional testing the structure of the program is not considered.
Test cases are decided solely on the basis of requirements or
specifications of the program or module and the internals of the
module or the program are not considered for selection of test cases.
Due to its nature, functional testing is often called black box testing.
Equivalence partitioning is a technique for determining which classes
of input data have common properties. A program should behave in a
comparable way for all members of an equivalence partition. Note that
there are both input and output equivalence partitions; correct and
incorrect inputs also form partitions.
The equivalence partitions may be identified by using the program
specification or user documentation and by the tester using
experience, to predict which classes of input value are likely to detect
errors. For example, if an input specification states that the range of
some input values must be a 5-digit integer, that is, between 10000
and 99999, equivalence partitions might be those values less than
10000, values between 10000 and 99999 and values greater than
99999. Similarly, if four to eight values are to be input, equivalence
partitions are less than four, between four and eight and more than
eight.
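The partition analysis above translates directly into test cases: one representative value per partition. The validator below (`isValidCode`) is a hypothetical stand-in for any routine that must accept only 5-digit integers, written here only as a sketch of the technique:

```cpp
#include <cassert>

// Hypothetical validator for the 5-digit input described above:
// it should accept only values between 10000 and 99999.
bool isValidCode(long v)
{
    return v >= 10000 && v <= 99999;
}

// One representative per equivalence partition is enough:
// below the range, inside the range, and above the range.
bool checkPartitions()
{
    return !isValidCode(9999)      // partition: value < 10000
        &&  isValidCode(50000)     // partition: 10000..99999
        && !isValidCode(100000);   // partition: value > 99999
}
```

Each of the three calls exercises one partition; boundary values such as 10000 and 99999 are natural additions to this set.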
In the structural approach, test cases are generated based on the
actual code of the program or module to be tested. This structural
approach is sometimes called "glass box testing."
The basis for deciding test cases in functional testing is the
requirements or specifications of the system or module. For the entire
system, the test cases are designed from the
requirements specification document for the system. For modules
created during design, test cases for functional testing are decided
from the module specifications produced during the design.
The most obvious functional testing procedure is exhaustive testing,
which as we have stated, is impractical. One criterion for generating
test cases is to generate them randomly. This strategy has little
chance of resulting in a set of test cases that is close to optimal (i.e.,
that detects the maximum errors with minimum test cases). Hence,
we need some other criterion or rule for selecting test cases. There
are no formal rules for designing test cases for functional testing. In
fact, there are no precise criteria for selecting test cases. However,
there are a number of techniques or heuristics that can be used to
select test cases that have been found to be very successful in
detecting errors. Here we mention some of these techniques.
Cause-Effect Graphing
One weakness with the equivalence class partitioning and boundary
value methods is that they consider each input separately. That is,
both concentrate on the conditions and classes of one input. They do
not consider combinations of input circumstances that may form
interesting situations that should be tested. One way to exercise
combinations of different input conditions is to consider all valid
combinations of the equivalence classes of input conditions. This
simple approach will result in an unusually large number of test
cases, many of which will not be useful for revealing any new errors.
For example, if there are n different input conditions, such that any
combination of the input conditions is valid, we will have 2^n test cases.
Causes:
c1. Command is credit
c2. Command is debit
c3. Account number is valid
c4. Transaction_amt is valid
Effects:
e1. Print "invalid command"
e2. Print "invalid account number"
e3. Print "debit amount not valid"
e4. Credit the account
e5. Debit the account
Let us illustrate this technique with a small example. Suppose that for
a bank database there are two commands allowed:
credit acct-number transaction_amount
debit acct-number transaction_amount
The requirements are that if the command is credit and the acct-
number is valid, then the account is credited. If the command is debit,
the acct-number is valid, and the transaction_amount is valid (less
than the balance), then the account is debited. If the command is not
valid, the account number is not valid, or the debit amount is not
valid, a suitable message is generated. We can identify the following
causes and effects from these requirements. The cause-effect graph is
shown in the figure. In the graph, the cause-effect relationship of this
example is captured. For all effects, one can easily determine the
causes each effect depends on and the exact nature of the
dependency. For example, according to this graph, the effect E5
depends on the causes c2, c3, and c4 in a manner such that the effect
E5 is enabled when all c2, c3, and c4 are true. Similarly, the effect E2 is
enabled if c3 is false.
From this graph, a list of test cases can be generated. The basic
strategy is to set an effect to 1 and then set the causes that enable
this effect. The combination of causes forms the test case. A cause
may be set to false, true, or don't care (in the case when the effect
does not depend at all on the cause). To do this for all the effects, it is
convenient to use a decision table. The decision table for this
example is shown in the figure.
This table lists the combinations of conditions to set different effects.
Each combination of conditions in the table for an effect is a test
case. Together, these condition combinations check for various
effects the software should display. For example, to test for the effect
E3, c2 has to be set to true and c4 to false. That is, to test the effect
"Print debit amount not valid," the test case should be: the command is
debit (setting c2 to true), the account number is valid (setting c3 to
true), and the transaction amount is not valid (setting c4 to false).
[Figure: cause-effect graph connecting causes c1..c4 to effects E1..E5]
SNo.   1   2   3   4   5
C1     0   1   x   x   1
C2     0   x   1   1   x
C3     x   0   1   1   1
C4     x   x   0   1   1
E1     1
E2         1
E3             1
E4                     1
E5                 1
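The decision table can be exercised mechanically. The sketch below assumes a hypothetical `effect()` routine, standing in for the bank-command processor, that reports which effect (E1..E5) a combination of the causes c1..c4 produces; each test case then corresponds to one column of the table:

```cpp
#include <cassert>
#include <string>

// Hypothetical re-creation of the bank example: returns the effect
// (E1..E5) produced for a given combination of the causes c1..c4.
std::string effect(bool isCredit, bool isDebit, bool acctValid, bool amtValid)
{
    if (!isCredit && !isDebit) return "E1";   // invalid command
    if (!acctValid)            return "E2";   // invalid account number
    if (isDebit && !amtValid)  return "E3";   // debit amount not valid
    if (isCredit)              return "E4";   // credit the account
    return "E5";                              // debit the account
}
```

For instance, the column enabling E3 sets c2 true and c4 false, which is exactly the call `effect(false, true, true, false)`.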
Special Cases
It has been seen that programs often produce incorrect behavior
when inputs form some special cases. The reason is that in
programs, some combinations of inputs need special treatment, and
providing proper handling for these special cases is easily
overlooked. For example, in an arithmetic routine, if there is a division
and the divisor is zero, some special action has to be taken, which
could easily be forgotten by the programmer. These special cases
form particularly good test cases, which can reveal errors that will
usually not be detected by other test cases.
Special cases will often depend on the data structures and the
function of the module. There are no rules to determine special
cases, and the tester has to use his intuition and experience to
identify such test cases. Consequently, determining special cases is
also called error guessing.
The tester's psychology is particularly important for error guessing. The tester
should play the "devil's advocate" and try to guess the incorrect
assumptions that the programmer could have made and the
situations the programmer could have overlooked or handled
incorrectly. Essentially, the tester is trying to identify error prone
situations. Then, test cases are written for these situations. For
example, in the problem of finding the number of different words in a
file (discussed in earlier chapters) some of the special cases can be:
file is empty, only one word in the file, only one word in a line, some
empty lines in the input file, presence of more than one blank
between words, all words are the same, the words are already sorted,
and blanks at the start and end of the file.
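For the word-counting example mentioned above, those special cases translate into concrete tests. The function below is only a minimal stand-in (a simple whitespace-separated notion of "word" is assumed), not the actual program discussed in earlier chapters:

```cpp
#include <cassert>
#include <sstream>
#include <set>
#include <string>

// Counts the number of *different* words in a text, where a word is
// any maximal run of non-whitespace characters (an assumption made
// here purely for illustration).
int countDistinctWords(const std::string &text)
{
    std::istringstream in(text);
    std::set<std::string> words;   // set keeps only distinct words
    std::string w;
    while (in >> w)
        words.insert(w);
    return (int)words.size();
}
```

The special cases from the text become test inputs: an empty file, a single word, all words the same, multiple blanks between words, and blanks at the start and end.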
Incorrect assumptions are usually made because the specifications
are not complete or the writer of specifications may not have stated
some properties, assuming them to be obvious. Whenever there is
reliance on tacit understanding rather than explicit statement of
specifications, there is scope for making wrong assumptions.
Frequently, wrong assumptions are made about the environments.
However, it should be pointed out that special cases depend heavily
on the problem, and the tester should really try to "get into the shoes"
of the designer and coder to determine these cases.
Structural Testing
A complementary approach to testing is sometimes called structural
or White box or Glass box testing. The name contrasts with black box
testing because the tester can analyse the code and use knowledge
about it and the structure of a component to derive the test data. The
advantage of structural testing is that test cases can be derived
systematically and test coverage measured. The quality assurance
mechanisms, which are set up to control testing, can quantify what
level of testing is required and what has been carried out. In the
previous section, we discussed functional testing, which is concerned
with the function that the tested program is supposed to perform and
does not deal with the internal structure of the program responsible
for actually implementing that function. Thus, functional testing is
concerned with functionality rather than implementation of the
program. Various criteria for functional testing were discussed earlier.
Structural testing, on the other hand, is concerned with testing the
implementation of the program. The intent of structural testing is not
to exercise all the different input or output conditions (although that
may be a by-product) but to exercise the different programming
structures and data structures used in the program.
To test the structure of a program, structural testing aims to achieve
test cases that will force the desired coverage of different structures.
Various criteria have been proposed for this. Unlike the criteria for
functional testing, which are frequently imprecise, the criteria for
structural testing are generally quite precise as they are based on
program structures, which are formal and precise. Here we will
discuss three different approaches to structural testing: control flow-
based testing, data flow-based testing, and mutation testing.
Control Flow-Based Criteria
Before we consider the criteria, let us precisely define a control flow
graph for a program. Let the control flow graph (or simply flow graph)
of a program P be G. A node in this graph represents a block of
statements that is always executed together, i.e., whenever the first
statement is executed, all other statements are also executed. An
edge (i, j) (from node i to node j) represents a possible transfer of
control after executing the last statement of the block represented by
node i to the first statement of the block represented by node j. A
node corresponding to a block, whose first statement is the start
statement of P, is called the start node of G, and a node
corresponding to a block whose last statement is an exit statement is
called an exit node. A path is a finite sequence of nodes (n1, n2, ..., nk),
k > 1, such that there is an edge (ni, ni+1) for all nodes ni in the
sequence (except the last node nk). A complete path is a path whose
first node is the start node and the last node is an exit node.
Now, let us consider control flow-based criteria. Perhaps, the simplest
coverage criteria is statement coverage, which requires that each
statement of the program be executed at least once during testing. In
other words, it requires that the paths executed during testing include
all the nodes in the graph. This is also called the all-nodes criterion.
This coverage criterion is not very strong, and can leave errors
undetected. For example, if there is an if statement in the program
without having an else clause, the statement coverage criterion for
this statement will be satisfied by a test case that evaluates the
condition to true. No test case is needed that ensures that the
condition in the if statement evaluates to false. This is a serious
shortcoming because decisions in programs are potential sources of
errors. As an example, consider the following function to compute the
absolute value of a number:
int xyz(int y)
{
    if (y >= 0)
        y = 0 - y;
    return (y);
}
This program is clearly wrong. Suppose we execute the function with
the set of test cases {y = 0} (i.e., the set has only one test case). The
statement coverage criterion will be satisfied by testing with this set,
but the error will not be revealed.
A little more general coverage criterion is branch coverage, which
requires that each edge in the control flow graph be traversed at least
once during testing. In other words, branch coverage requires that
each decision in the program be evaluated to true and false values at
least once during testing. Testing based on branch coverage is often
called branch testing. The 100% branch coverage criterion is also
called the all-edges criterion. Branch coverage implies statement
coverage, as each statement is a part of some branch. In other
words, Cbranch => Cstmt. In the preceding example, a set of test
cases satisfying this criterion will detect the error.
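To see the difference between the two criteria on the example above: with a single test case such as y = 0, every statement executes and the output happens to be correct, so the fault stays hidden; any branch-coverage set must also drive the condition false (say, y = -5), for which the function returns -5 instead of the expected 5. A minimal sketch:

```cpp
#include <cassert>

// The deliberately wrong absolute-value function from the text:
// it negates when y >= 0 instead of when y < 0.
int xyz(int y)
{
    if (y >= 0)
        y = 0 - y;
    return y;
}

// Statement coverage alone: {y = 0} executes every statement,
// yet the output (0) is coincidentally correct, hiding the bug.
bool statementCoverageHidesBug() { return xyz(0) == 0; }

// Branch coverage forces the condition to evaluate to false as well;
// with y = -5 the function returns -5 rather than the expected 5.
bool branchCoverageRevealsBug()  { return xyz(-5) != 5; }
```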
The trouble with branch coverage comes if a decision has many
conditions in it (consisting of a Boolean expression with Boolean
operators and and or). In such situations, a decision can evaluate to
true and false without actually exercising all the conditions. For
example, consider the following function that checks the validity of a
data item. The data item is valid if it lies between 0 and 100.
int check(int y)
{
    if ((y >= 0) && (y <= 200))
        return True;
    else
        return False;
}
The module is incorrect, as it is checking for y < 200 instead of 100
(perhaps, a typing error made by the programmer). Suppose the
module is tested with the following set of test cases: {y = 5, y = -5}.
The branch coverage criterion will be satisfied for this module by this
set. However, the error will not be revealed, and the behavior of the
module is consistent with its specifications for all test cases in this
set. Thus, the coverage criterion is satisfied, but the error is not
detected. This occurs because the decision evaluates to true and
false solely because of the condition (y >= 0). The condition (y <= 200)
never evaluates to false during this test, hence the error in this
condition is not revealed.
This problem can be resolved by requiring that all conditions evaluate
to true and false. However, situations can occur where a decision
may not get both true and false values even if each individual
condition evaluates to true and false. An obvious solution to this
problem is to require decision/condition coverage, where all the
decisions and all the conditions in the decisions take both true and
false values during the course of testing.
Studies have indicated that there are many errors whose presence is
not detected by branch testing because some errors are related to
some combinations of branches and their presence is revealed by an
execution that follows the path that includes those branches. Hence,
a more general coverage criterion is one that requires all possible
paths in the control flow graph be executed during testing. This is
called the path coverage criterion or the all-paths criterion, and the
testing based on this criterion is often called path testing. The
difficulty with this criterion is that programs that contain loops can
have an infinite number of possible paths. Furthermore, not all paths
in a graph may be "feasible" in the sense that there may not be any
inputs for which the path can be executed. It should be clear that
Cpath => Cbranch.
As the path coverage criterion leads to a potentially infinite number of
paths, some efforts have been made to suggest criteria between the
branch coverage and path coverage. The basic aim of these
approaches is to select a set of paths that ensure branch coverage
criterion and try some other paths that may help reveal errors. One
method to limit the number of paths is to consider two paths as same,
if they differ only in their sub-paths that are caused due to the loops.
Even with this restriction, the number of paths can be extremely
large.
Another such approach based on the cyclomatic complexity has been
proposed namely, the test criterion. The test criterion is that if the
cyclomatic complexity of a module is V, then at least V distinct paths
must be executed during testing. We have seen that cyclomatic
complexity V of a module is the number of independent paths in the
flow graph of a module. As these are independent paths, all other
paths can be represented as a combination of these basic paths.
These basic paths are finite, whereas the total number of paths in a
module having loops may be infinite.
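As a quick illustration of this criterion: for a connected flow graph with E edges and N nodes, the cyclomatic complexity is V = E - N + 2, so an if-else (four nodes, four edges) yields V = 2, i.e., two independent paths to execute. A small helper, not part of the payroll project:

```cpp
#include <cassert>

// Cyclomatic complexity of a flow graph with one connected component:
// V = E - N + 2, where E is the number of edges and N the number of nodes.
int cyclomatic(int edges, int nodes)
{
    return edges - nodes + 2;
}
```

For example, a simple if-else graph (entry decision, then-block, else-block, join) has 4 nodes and 4 edges, giving V = 2; adding further decisions raises V, and hence the minimum number of distinct paths the criterion requires.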
It should be pointed out that none of these criteria is sufficient to
detect all kind of errors in programs. For example, if a program is
missing out some control flow paths that are needed to check for a
special value (like pointer equals nil and divisor equals zero), then
even executing all the paths will not necessarily detect the error.
Similarly, if the set of paths is such that they satisfy the all-path
criterion but exercise only one part of a compound condition, then the
set will not reveal any error in the part of the condition that is not
exercised. Hence, even the path coverage criterion, which is the
strongest of the criteria we have discussed, is not strong enough to
guarantee detection of all the errors.
[Figure: inclusion hierarchy of the data flow-based criteria - all-paths, all-uses, all-defs, all-p-uses, all-edges]
It should be quite clear that all-paths will include all-uses and all other
structure based criteria. All-uses, in turn, include all-p-uses, all defs,
and all-edges. However, all-defs does not include all-edges (nor does
the reverse hold). The reason is that all-defs focuses on all
definitions getting used, while all-edges is focusing on all decisions
evaluating to both true and false. For example, a decision may
evaluate to true and false in two different test cases, but the use of a
definition of a variable x may not have been exercised. Hence, the all-
defs and all-edges criteria are, in some sense, incomparable.
Inclusion does not imply that one criterion is always better than
another. At best, it means that if the test case generation strategy for
two criteria C1 and C2 is similar, and if C1 includes C2, then statistically
speaking, the set of test cases satisfying C1 will be better than a set
of test cases satisfying C2. The experiments reported show that no
one criterion (out of a set of control flow-based and data flow-based
criteria) does significantly better than another, consistently. However,
it does show that testing done by using all-branch or all-uses
criterion, generally, does perform better than randomly selected test
cases.
System Testing
Software is only one element of a larger computer-based system.
Ultimately, software is incorporated with other system elements, and a
series of system integration and validation tests is conducted.
These tests fall outside the scope of software engineering process
and are not conducted solely by the software developer.
Mutation Testing
Mutation testing is another structural testing technique that differs
fundamentally from the approaches discussed earlier. In control flow-
based and data flow-based testing, the focus was on which paths to
execute during testing. Mutation testing does not take a path-based
approach. Instead, it takes the program and creates many mutants of
it, by making simple changes to the program. The goal of testing is to
make sure that during the course of testing, each mutant produces an
output different from the output of the original program. In other
words, the mutation-testing criterion does not say that the set of test
cases must be such that certain paths are executed; instead, it
requires the set of test cases to be such that they can distinguish
between the original program and its mutants.
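A minimal sketch of the idea: a mutant produced by the common "replace + with -" mutation operator, and a predicate telling whether a test case distinguishes (kills) the mutant. The function names are hypothetical and not part of the payroll system:

```cpp
#include <cassert>

// Original function and a hypothetical mutant created by the
// mutation operator "replace + with -".
int addBonus(int salary, int bonus)        { return salary + bonus; }
int addBonus_mutant(int salary, int bonus) { return salary - bonus; }

// A test case "kills" the mutant if the two versions disagree on it.
bool kills(int salary, int bonus)
{
    return addBonus(salary, bonus) != addBonus_mutant(salary, bonus);
}
```

A test set containing only bonus = 0 would leave this mutant alive, so the mutation criterion forces the set to include at least one case with a non-zero bonus.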
Test Plan Activities During Testing
A test plan is a general document for the entire project that defines
the scope, approach to be taken, and the schedule of testing as well
as identifies the test items for the entire testing process and the
personnel responsible for the different activities of testing. The test
planning can be done well before the actual testing commences and
can be done in parallel with the coding and design phases. The
inputs for forming the test plan are: (1) project plan, (2) requirements
document, and (3) system design document. The project plan is
needed to make sure that the test plan is consistent with the overall
plan for the project and the testing schedule matches that of the
project plan. The requirements document and the design document
are the basic documents used for selecting the test units and
deciding the approaches to be used during testing. A test plan should
contain the following:
Test unit specification.
Features to be tested.
Approach for testing.
Test deliverables.
Schedule.
Personnel allocation.
One of the most important activities of the test plan is to identify the
test units. A test unit is a set of one or more modules, together with
associated data, that are from a single computer program and that
are the objects of testing. A test unit can occur at any level and can
contain from a single module to the entire system. Thus, a test unit
may be a module, a few modules, or a complete system.
Unit Testing
Unit testing comprises the set of tests performed by an individual
programmer prior to integration of the unit into a larger system. The
situation is illustrated as follows:
Coding and debugging -> Unit Testing -> Integration Testing

N (decisions)   P (paths)
0               2
1               4
2               8
10              2048
P = 2^(N+1)
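The relation between the number of decisions N and the number of paths P tabulated above (the listed values 2, 4, 8, and 2048 satisfy P = 2^(N+1), which is assumed here) can be checked programmatically:

```cpp
#include <cassert>

// Number of paths P for N decisions, assuming the relation
// P = 2^(N+1) that the tabulated values follow.
long paths(int n)
{
    return 1L << (n + 1);   // 2^(N+1)
}
```

Even ten decisions already yield over two thousand paths, which is why exhaustive path testing of a unit is rarely practical.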
Integration Testing
Bottom-up integration is the traditional strategy to integrate the
components of a software system into a functioning whole. Bottom-
up integration consists of unit testing, followed by subsystem testing,
followed by testing of the entire system. Unit testing has the goal of
discovering errors in the individual modules of the system. Modules
are tested in isolation from one another in an artificial environment
known as a test harness, which consists of the driver programs and
data necessary to exercise the modules. Unit testing should be as
exhaustive as possible to ensure that each representative handled by
each module has been tested. Unit testing is eased by a system
structure that is composed of small, loosely coupled modules.
A subsystem consists of several modules that communicate with
each other through well-defined interfaces. Normally, a subsystem
implements a major segment of the total system, and the primary
purpose of subsystem testing is to verify operation of the interfaces
between modules in the subsystem; both control and data interfaces
must be exercised. Large systems may require several levels of
subsystem testing: lower-level subsystems are successively combined
to form higher-level subsystems. In most software systems, exhaustive
testing of subsystem capabilities is not feasible due to the combinational
complexity of the module interfaces; therefore, test cases must be
carefully chosen to exercise the interfaces in the desired manner.
System testing is concerned with subtleties in the interfaces, decision
logic, control flow, recovery procedures, throughput, capacity, and
timing characteristics of the entire system. Careful test planning is
required to determine the extent and nature of system testing to be
performed and to establish criteria by which the results will be
evaluated.
Disadvantages of bottom-up testing include the necessity to write and
debug test harness for the modules and subsystems, and the level of
complexity that results from combining modules and subsystems into
larger and larger units. The extreme case of complexity results when
each module is unit tested in isolation and all modules are then combined
at once, in the big-bang approach to integration testing. The main problem
with big-bang integration is the
difficulty of isolating the sources of error.
Test harnesses provide data environments and calling sequences for
the routines and subsystems that are being tested in isolation. Test
harness preparation can amount to 50 per cent or more of the coding
and debugging effort for a software product.
Top-down integration starts with the main routine and one or two
immediately subordinate routines in the system structure. After
this top-level skeleton has been thoroughly tested, it
becomes the test harness for its immediately subordinate routines.
Top-down integration requires the use of program stubs to simulate
the effect of lower-level routines that are called by those being
tested.
Regression Testing
When some errors occur in a program then these are rectified. For
rectification of these errors, changes are made to the program. Due
to these changes some other errors may be incorporated in the
program. Therefore, all the previous test cases are tested again.
This type of testing is called regression testing.
In a broader context, successful tests (of any kind) result in the
discovery of errors, and errors must be corrected. Whenever software
is corrected, some aspect of the software configuration (the program,
its documentation, or the data that supports it) is changed.
Regression testing is the activity that helps to ensure that changes
(due to testing or for other reasons) do not introduce unintended
behavior or additional errors.
Regression testing may be conducted manually, by re-executing a
subset of all test cases or using automated capture/playback tools.
Capture/playback tools enable the software engineer to capture test
cases and results for subsequent playback and comparison.
The regression test suite (the subset of tests to be executed) contains
three different classes of test cases:
A representative sample of tests that will exercise all software
functions.
Additional tests that focus on software functions that are likely to
be affected by the change.
Tests that focus on the software components that have been
changed.
As integration testing proceeds, the number of regression tests
can grow quite large.
Therefore, the regression test suite should be designed to include
only those tests that address one or more classes of errors in each of
the major program functions. It is impractical and inefficient to re-
execute every test for every program function once a change has
occurred.
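The three classes above suggest a simple selection rule: keep a test if it falls in at least one class. A hypothetical sketch (the structure and field names are illustrative, not from the payroll project):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical regression-suite selection following the three classes
// listed above.
struct TestCase
{
    std::string name;
    bool representative;    // exercises all software functions
    bool affectedByChange;  // targets a function likely affected by the change
    bool coversChanged;     // targets a changed component
};

// Keep only the tests that belong to at least one of the three classes.
std::vector<TestCase> selectRegressionSuite(const std::vector<TestCase> &all)
{
    std::vector<TestCase> suite;
    for (const TestCase &t : all)
        if (t.representative || t.affectedByChange || t.coversChanged)
            suite.push_back(t);
    return suite;
}
```

Tests matching none of the classes are dropped, which keeps the suite from re-executing every test for every function after each change.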
Levels of Testing
Now let us turn our attention to the testing process. We have seen that
faults can occur during any phase in the software development cycle.
Verification is performed on the output of each phase, but some faults
are likely to remain undetected by these methods. These faults will
be eventually reflected in the code. Testing is usually relied on to
detect these faults, in addition to the faults introduced during the
coding phase itself. Due to this, different levels of testing are used in
the testing process; each level of testing aims to test different aspects
of the system.
[Figure 6.8: Levels of testing mapped to development phases — client
needs are checked by acceptance testing, requirements by system
testing, design by integration testing, and code by unit testing.]
The basic levels are unit testing, integration testing, system testing
and acceptance testing. These different levels of testing attempt to
detect different types of faults. The relation between the faults
introduced in different phases and the different levels of testing is
shown in Figure 6.8.
The first level of testing is called unit testing. In this, different
modules are tested against the specifications produced during design
for the modules. Unit testing is, essentially, for verification of the code
produced during the coding phase, hence the goal is to test the
internal logic of the modules. It is typically done by the programmer of
the module. A module is considered for integration and use by others
only after it has been unit tested satisfactorily. Due to its close
association with coding, the coding phase is frequently called coding
and unit testing. As the focus of this testing level is on testing the
code, structural testing is best suited for this level. In fact, as
structural testing is not very suitable for large programs, it is used
mostly at the unit testing level.
The next level of testing is often called integration testing. In this,
many unit-tested modules are combined into subsystems, which are
then tested. The goal here is to see if the modules can be integrated
properly. Hence, the emphasis is on testing interfaces between
modules. This testing activity can be considered testing the design.
The next levels are system testing and acceptance testing. Here the
entire software system is tested. The reference document for this
process is the requirements document, and the goal is to see if the
software meets its requirements. This is essentially a validation
exercise, and in many situations, it is the only validation activity.
Acceptance testing is sometimes performed with realistic data of the
client to demonstrate that the software is working satisfactorily.
Testing here focuses on the external behavior of the system; the
internal logic of the program is not emphasized. Consequently,
mostly functional testing is performed at these levels.
OPERATIONAL INSTRUCTION
FOR THE USER
1. If the computer is off, turn on the power switches of the computer
and the printer.
2. The system will check the RAM for defects, and also check the
connections to the keyboard, disk drive, etc., to see whether they are
functional.
3. When the system is ready it will BOOT or load the operating
system into the memory from the hard disk.
4. Copy the floppy (i.e. the A: drive) to the hard disk (i.e. the C: drive).
This will copy all the required files from the A: drive to the C: drive.
5. PAYROLL.EXE will display a password screen for authorization and
then the Main Menu screen.
6. Before exiting from the Main Menu, the user can try all the
required options.
7. Exit from the Main Menu by selecting the EXIT option.
8. This project is a program written in TURBO C++ for the PAYROLL
Management System. Using it, a user, a factory, or another department
will be able to maintain the payroll records of the employees of the
department.
INSTALLATION PROCEDURE
The following steps are used to install the PAYROLL Management
System application at the user's site.
1. Create a directory on the hard disk (C: drive) with any name.
2. Insert into the A: drive the floppy disk that contains the software
files, i.e. the EXE file, the DAT file (database file), the header
files and the CPP files.
3. Copy all files from the A: drive into the specified directory on
the C: drive.
4. Run the PAYROLL.EXE file. This will start the Payroll
Management System software.
5. There is no need for development software such as Turbo C++,
because the EXE file is self-executing.
6. In order to start the application immediately after booting, add an
entry for the directory to the AUTOEXEC.BAT file, write the name
of PAYROLL.EXE, and save the file.
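The AUTOEXEC.BAT entry described in step 6 might look like the following. This is a sketch: the directory name C:\PAYROLL is an assumption, and should match whatever directory was created in step 1.

```
REM Hypothetical AUTOEXEC.BAT entry -- adjust C:\PAYROLL to the
REM directory created during installation.
CD C:\PAYROLL
PAYROLL.EXE
```

With these two lines at the end of AUTOEXEC.BAT, DOS changes into the installation directory and launches the application as the last step of the boot sequence.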
BIBLIOGRAPHY