System Design Page
Design
The most challenging phase of the system life cycle is system design. The term design
describes both the final system and the process by which it is developed. It refers to the technical
specifications that will be applied in implementing the candidate system, and it also covers the
construction and testing of programs.
System design is a solution, a “how to” approach to the creation of a new system. This
important phase is composed of several steps. It provides the understanding and procedural details
necessary for implementing the system recommended in the feasibility study. Emphasis is on
translating the performance requirements into design specifications.
The first step is to determine how the output is to be produced and in what format.
Samples of the output and input are also presented. Second, input data and master files (database)
have to be designed to meet the requirements of the proposed output. The operational (processing)
phases are handled through program construction and testing, including a list of the programs needed
and their documentation. Finally, details related to justification of the system and an estimate of the impact
of the candidate system on the user and the organization are documented and evaluated by
management. The design phase covers:
Output Design
Input Design
Data Design
Process Design
Output Design
Computer output is the most important and direct source of information to the user.
Efficient, intelligible output design improves the system’s relationship with the user and
helps in decision making. A major form of output is hard copy from the printer.
Output from a computer system is required to communicate the results of processing to users.
Designing computer output should proceed in an organized, well-thought-out manner. The right
output must be developed while ensuring that each output element is designed so that the user will
find the system easy to use effectively. The term output applies to any information produced by an
information system, whether printed or displayed.
In addition to deciding on the output device, the systems analyst must consider the print
format and the editing for the final printout. The task of output preparation is critical, requiring
skill and the ability to align user requirements with the capabilities of the system in operation. Each
output should carry a name or title.
In online applications, the layout sheet for displayed output is similar to the layout chart
used for designing input. In these cases, the output forms are similar to the input forms. Other types
of application output, such as reports used to make decisions, must be designed carefully.
Input Design
Inaccurate input data are the most common cause of errors in data processing. Errors
entered by data entry operators can be controlled by input design. Input design is the process of
converting user-originated inputs to a computer-based format. In the system design phase, the
expanded data flow diagram identifies logical data flows, data stores, sources and destinations. A
system flowchart specifies master files (database), transaction files and computer programs.
Input Media:
In this project, earlier stages identified the data that is input to the transactions. The next
step is to decide what media should be used for the input. Since this is an online data entry project, we need
computer-based online forms as the media for input entry. There are three approaches for data
entry with forms: menus, formatted forms, and prompts. We adopted the formatted-form
approach for entering data. A formatted form is a preprinted form or a template that requests the
user to enter data in appropriate locations. It is a fill-in-the-blank type. The form is flashed on the
screen as a unit. The cursor is usually positioned at the first blank. After the user responds by
filling in the appropriate information, the cursor automatically moves to the next field, and so on.
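The fill-in-the-blank behavior described above can be sketched as a small routine that walks the form field by field, validating each entry before moving on. The field names and validation rules below are hypothetical, chosen only for illustration; the project itself used GUI forms.

```python
# Minimal sketch of a fill-in-the-blank formatted form: fields are
# visited in order and each entry is validated before the "cursor"
# moves to the next blank. Field names and rules are assumptions.

FIELDS = [
    ("employeeid", str.isdigit),   # numeric id only
    ("firstname", str.isalpha),    # letters only
    ("lastname", str.isalpha),
]

def fill_form(answers):
    """Walk the form field by field, validating each entry."""
    record = {}
    for (name, valid), value in zip(FIELDS, answers):
        if not valid(value):
            raise ValueError(f"invalid value for {name}: {value!r}")
        record[name] = value
    return record
```

In a real data-entry screen the same per-field check would run as the cursor leaves each blank, so bad data is caught at the point of entry rather than at save time.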
Form Types:
There are three types of forms, classified by what they do in the system: action
forms – to perform some action such as storing, modifying, and deleting data; memory forms – to
perform extraction and display operations on existing historical data; and report forms – to
generate decision-support data from existing records. We used report forms as output forms and
action forms as input forms.
Form Layout:
When a form is designed, a list is prepared of all the items to be included on the form and the
maximum space to be reserved for each. The form user should check the list to make sure it has
the required details:
Title
Data Zoning
Rules and Captions
Design Considerations:
In designing these forms we took care of several attributes, mentioned below:
- Physical factors
- Field positions
- Use of instructions
- Form title
- Online help for data entry and status information
The following diagram describes the sample form layout we used to design forms in our project.
[Form layout diagram: record/data zone, navigation controls, action commands, and online help.]
Database Design
Database design is the process of developing database structures to hold data to cater to
user requirements. The final design must satisfy user needs in terms of completeness, integrity,
performance and other factors. For a large enterprise, the database design will turn out to be an
extremely complex task leaving a lot to the skill and experience of the designer. A number of tools
and techniques, including computer-assisted techniques, are available to facilitate database design.
The primary input to the database design process is the organization’s statement of
requirements. Poor definition of these requirements is a major cause of poor database design,
resulting in databases of limited scope and utility which are unable to adapt to changes.
The major step in database design is to identify the entities and relationships that reflect the
organization’s data naturally. The objective of this step is to specify the conceptual structure of the
data.
There are several methodologies to model the data logically. We adopted ER modeling as
our data modeling technique. The ER model is a technique for the analysis and logical modeling of a
system’s data requirements. It uses three basic concepts: entities, attributes and relationships.
Entity:
An entity is a distinguishable object. Entities are classified into regular entities and weak
entities. A weak entity is an entity that is existence-dependent on some other entity, i.e. it does not
exist if that other entity does not exist. A regular entity is one that is not existence-dependent on
any other entity.
Attribute:
Entities have properties known as attributes. All entities of a given type have certain kinds
of properties in common. Each kind of property draws its values from a corresponding value set.
Properties can be of various types: simple or composite, key, single- or multi-valued, missing, and
base or derived.
Relation:
A relationship is an association among entities. A relationship can be one-to-one, one-to-many,
or many-to-many. The cardinality of a relationship refers to the number of entity instances that can
participate in it.
In our project we have identified entities, attributes for those entities, and relationships between
them.
Entities:
Administrator
Department
Designation
Employee
Client
Project
Project assigned
Daily time sheet
Candidate
Appointment
Post interview
Attributes:
Administrator – adminid, firstname, lastname, username, adminpwd, addeddate
Department – DepartmentID, DepartmentCode, DepartmentName, CurrentStrength, MaxNoEmp,
MinSal, MaxSal, MinAge, MaxAge
Designation – DesignationID, DesignationCode, DesignationName
Project assigned – assignmentid, employeeid, projectid, designationid, isactive
Daily time sheet – dailytimesheetid, employeeid, projectid, entrydate, hours, minutes
Candidate – candidateid, candidatecode, resumedate, firstname, lastname, dob, sex, address,
phoneno, emailid, college, stream, percentage, location, nationality, hobby, skill, experience,
achievements, department, designation
Post interview – postinterviewid, candidateid, postinterviewcode, interviewlevel, description,
offeredsalary, negotiated, selectionstatus, interviewercode, interviewername,
interviewerdesignation, authority, interviewdate
Appointment – appointmentid, candidateid, appointmentno, issuedate, joiningdate,
probationperiod, salary, postappfor
Relationships:
Administrator controls Department
Department has Designation
Administrator deals with Project assigned and Daily time sheet
Administrator selects Candidate
Candidate faces interview (Post interview)
A candidate selected at the post-interview stage gets an Appointment
[ER diagrams omitted; the entity, attribute and relationship labels above are what was
recoverable from them.]
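The entity and relationship structure above can be illustrated in code. The sketch below models two of the entities as Python dataclasses, with the “Department has Designation” relationship as a contained list; the snake_case field names are an adaptation of the attribute labels, not the project’s actual table definitions.

```python
# Illustrative sketch of two ER entities and their "has" relationship;
# the real system stores these as database tables.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Designation:
    designation_id: int
    designation_code: str
    designation_name: str

@dataclass
class Department:
    department_id: int
    department_code: str
    department_name: str
    max_no_emp: int
    # one-to-many "Department has Designation" relationship
    designations: List[Designation] = field(default_factory=list)

dept = Department(1, "HR", "Human Resources", 25)
dept.designations.append(Designation(10, "MGR", "Manager"))
```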
Data Dictionary:
A data dictionary is a catalogue – a repository – of the elements in a system. As the name
suggests, these elements center around data and the way they are structured to meet user requirements
and organization needs. In a data dictionary you will find a list of all the elements composing the
data flowing through the system.
Normalization:
Normalization is the process of refining the data model built by the ER diagram. The
normalization technique logically groups the data over a number of tables with minimum
redundancy of data. The entities or tables resulting from normalization contain data items with
minimum redundancy.
The goal of relational database design is to generate a set of relation schemes that allow us
to store information with minimum redundancy of data and to retrieve information easily
and efficiently. The approach followed is to design schemas that are in an appropriate normal form. The
next step is to examine the database for redundancy and, if necessary, change the schemas to non-
redundant forms. This non-redundant model is then converted into a database definition, which
achieves the objective of the database design phase. We defined the database from the above ER model
by normalizing it to third normal form (3NF). We will show the definitions of those database tables later.
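As an illustration of the 3NF outcome, the Department/Designation part of the model might be defined as below. The table names follow the HRMStbl naming seen in the DFDs and the columns follow the attribute list above, but the foreign key and exact definitions are assumptions, not the project’s actual schema.

```python
# 3NF sketch: every non-key column depends only on the key, and the
# designation refers to its department through a foreign key instead
# of repeating department data. Schema details are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE HRMStblDepartment (
    DepartmentID   INTEGER PRIMARY KEY,
    DepartmentCode TEXT UNIQUE NOT NULL,
    DepartmentName TEXT NOT NULL,
    MaxNoEmp       INTEGER
);
CREATE TABLE HRMStblDesignation (
    DesignationID   INTEGER PRIMARY KEY,
    DesignationCode TEXT UNIQUE NOT NULL,
    DesignationName TEXT NOT NULL,
    DepartmentID    INTEGER REFERENCES HRMStblDepartment(DepartmentID)
);
""")
conn.execute("INSERT INTO HRMStblDepartment VALUES (1, 'HR', 'Human Resources', 25)")
conn.execute("INSERT INTO HRMStblDesignation VALUES (10, 'MGR', 'Manager', 1)")
row = conn.execute("""
    SELECT d.DepartmentName, g.DesignationName
    FROM HRMStblDesignation g JOIN HRMStblDepartment d USING (DepartmentID)
""").fetchone()
```

Because the department name lives in one place, renaming a department is a single-row update rather than a change to every designation record.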
Process Design
Structured design is a data flow based methodology. The approach begins with a system
specification that identifies inputs and outputs and describes the functional aspects of the system.
The next step is the definition of the modules and their relationships to one another in a form
called a structure chart, using a data dictionary, DFD, and other structured tools.
Structured design partitions a program into small, independent modules. They are arranged
in a hierarchy that approximates a model of the business area and is organized in a top – down
manner.
A data flow diagram (DFD) shows the flow of data through a system – whether manual
or automated – including the processes, stores of data, and delays in the system. Data flow
diagrams are the central tools and the basis from which other components are developed. The
transformation of data from input to output, through processes, may be described logically and
independently of the physical components.
[Context-level DFD: the Administrator sends request data to the HRM System and receives report
data; the Employee submits leave applications and receives leave approvals; the system reads from
and writes to the data store.]
Level one DFD for Administrator:
[The Administrator exchanges department, designation, employee, project-assignment and
quotation data, and receives report data, through processes such as Leave Management (1.4),
Project Management (1.5) and Recruitment (1.6); the data stores include HRMStblDepartment,
HRMStblDesignation, HRMStblEmployee, HRMStblFreshers and an assignment table.]
Level one DFD for Employee:
[The Employee submits leave applications and time-sheet details and receives leave approval and
time status through Leave Management (2.1) and Time Management (2.2); the data stores are
HRMStblLeaveApplication and HRMStblDailyTimeSheet.]
Modules:
Well-structured designs improve the maintainability of a system. A structured system is
one that is developed from the top down and is modular, that is, broken down into manageable
components. In this project we modularized the system so that the modules have minimal effect on
each other.
– Administrative Module deals with department, employee, client and almost all the
master data, and even manages the projects. It is the main part of this software.
– Profile Management deals with the creation of profiles for new employees along with
updating existing profiles. An employee can be searched on the basis of their id and
name.
– Employee Recruitment deals with registration of new applicants and short-listing of
applicants for technical rounds and then HR rounds, and at last it produces the final
list of selected candidates.
– Payroll Management is our final module, which deals with the employee agreement and
salary details.
Functional Flowchart:
A system consists of many different activities or processes, and one process may contain
several individual processes. We often show these relationships in a functional flowchart.
[Functional flowchart: the Human Resource Management System is divided into an Administrator
side and an Employee side. The Administrator side covers Master Management (Department,
Designation, Employee), Employee Management, Leave Management, Project Management
(Project, Project Assignment) and Recruitment (Fresher); the Employee side covers Time
Management and Leave Management.]
Use–Case Diagram:
The use-case diagram shows the way a normal user will interact with the software.
[Use-case diagram: for masters such as Designation, Employee and Fresher, the user can add new
records, search and reset; the user can also view project status.]
User Training
It focuses on two factors:
User capabilities
Nature of the system being installed
The user may range from naive to sophisticated. Naive users have a fear of exposure to a
new system. Therefore, formal user training is required, with training aids like:
User manual
User-friendly screen
Data dictionary
Proper flow of system
Post Implementation
Operational systems are quickly taken for granted, yet every system requires periodic evaluation
after implementation. A post-implementation review measures the system’s performance against
predefined requirements. Unlike system testing, which determines where the system fails so that
necessary adjustments can be made, a post-implementation review determines how well the system
continues to meet performance specifications. It is conducted after the fact, once the design and
conversions are complete, and it also provides information to determine whether major redesign is
necessary. A post-implementation review is an evaluation of a system in terms of the extent to
which the system accomplishes its stated objectives and whether actual project costs exceeded
initial estimates. It is usually a review of major problems that need correcting, including those that
surfaced during the implementation phase. The primary responsibility for initiating the review lies
with the user.
Testing
Introduction
Software testing is the process of executing a program or system with the intent of finding
errors. It involves any activity aimed at evaluating an attribute or capability of a program or
system and determining that it meets its required results. Software is not unlike other physical
processes where inputs are received and outputs are produced. Where software differs is in the
manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways.
By contrast, software can fail in many bizarre ways, and detecting all of the different failure modes for
software is generally infeasible. Unlike most physical systems, most of the defects in software are
design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear;
generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped,
the design defects, or bugs, will be buried in and remain latent until activation. Software bugs
will almost always exist in any software module of moderate size: not because programmers are
careless or irresponsible, but because the complexity of software is generally intractable and
humans have only a limited ability to manage complexity.
It is also true that for any complex system, design defects can never be completely ruled
out. Discovering the design defects in software is equally difficult, for the same reason of
complexity. Because software and other digital systems are not continuous, testing boundary values
is not sufficient to guarantee correctness. All the possible values would need to be tested and verified,
but complete testing is infeasible. Exhaustively testing a simple program to add only two integer
inputs of 32 bits (yielding 2^64 distinct test cases) would take hundreds of years, even if tests were
performed at a rate of thousands per second. Obviously, for a realistic software module, the
complexity can be far beyond this example. If inputs from the real world are
involved, the problem gets worse, because timing, unpredictable environmental effects and
human interactions are all possible input parameters under consideration.
Testing activities are done in all phases of the lifecycle in an iterative software development
approach; however, the emphasis on testing activities varies between phases. This procedure
explains the focus of testing in the inception, elaboration, construction and transition phases. In the
inception phase, most of the requirements capturing is done and the test plan is developed. In the
elaboration phase, most of the design is developed and test cases are written. The construction phase
mainly focuses on the development of components and units, and unit testing is the focus of this phase.
The transition phase is about deploying the software in the user community, and most of the system
testing and acceptance testing is done in this phase.
Purpose
The main purposes of this procedure are:
To carry out comprehensive testing of the system/product and its individual components in
order to ensure that the developed system/product conforms to the user requirements/ design.
To verify the proper integration of all components of the software.
To verify that all requirements have been correctly implemented.
To identify and ensure defects are addressed prior to the deployment of the software.
Test Planning
The initial test plan addresses system test planning; over the elaboration, construction and
transition phases this plan is updated to cater to the other testing requirements of those phases, such
as unit and integration testing. The plan covers:
Scope of testing
Methodology to be used for testing
Types of tests to be carried out
Resource & system requirements
A tentative Test Schedule
Identification of the various forms to be used to record test cases and test results
Testing is usually performed for the following purposes:
To improve quality
Quality means conformance to the specified design requirements. Being correct, the minimum
requirement of quality, means performing as required under specified circumstances. Debugging, a
narrow view of software testing, is performed heavily by the programmer to find design defects.
The imperfection of human nature makes it almost impossible to make a moderately
complex program correct the first time. Finding the problems and getting them fixed is the purpose of
debugging in the programming phase.
As the topic Verification and Validation indicates, another important purpose of testing is
verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the
V&V process. Testers can make claims based on interpretations of the testing results: either
the product works under certain situations or it does not. We can also compare the quality of
different products built to the same specification, based on results from the same test.
Testing Methods Used For Project
There is a plethora of testing methods and techniques, serving multiple purposes in
different life-cycle phases. Classified by purpose, software testing can be divided into correctness
testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, it
can be divided into requirements-phase testing, design-phase testing, program-phase testing,
evaluation of test results, installation-phase testing, acceptance testing and maintenance testing.
By scope, software testing can be categorized as unit testing, component testing, integration
testing, and system testing.
Correctness testing
Correctness is the minimum requirement of software, the essential purpose of testing. Correctness
testing will need some type of oracle, to tell the right behavior from the wrong one. The tester may
or may not know the inside details of the software module under test, e.g. control flow, data flow,
etc. Therefore, either a white-box point of view or black-box point of view can be taken in testing
software. We must note that the black-box and white-box ideas are not limited to correctness
testing only.
Black-box testing
The black-box approach is a testing method in which test data are derived from the specified
functional requirements without regard to the final program structure. It is also termed data-driven,
input/output-driven or requirements-based testing. Because only the functionality of the software
module is of concern, black-box testing also mainly refers to functional testing, a testing method
emphasizing the execution of the functions and examination of their input and output data. The tester
treats the software under test as a black box: only the inputs, outputs and specification are visible,
and the functionality is determined by observing the outputs for corresponding inputs. In testing,
various inputs are exercised and the outputs are compared against the specification to validate
correctness. All test cases are derived from the specification; no implementation details of the
code are considered.

It is obvious that the more of the input space we have covered, the more problems we will find,
and therefore the more confident we will be about the quality of the software. Ideally, we would be
tempted to test the input space exhaustively. But as stated above, exhaustively testing the
combinations of valid inputs is impossible for most programs, let alone considering invalid inputs,
timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in
functional testing. To make things worse, we can never be sure whether the specification is correct
or complete. Due to limitations of the language used in specifications (usually natural language),
ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may
still fail to write down all the possible cases in the specification. Sometimes the specification itself
becomes an intractable problem: it is not possible to specify precisely every situation that can be
encountered using limited words. And people can seldom specify clearly what they want; they
usually can tell whether a prototype is, or is not, what they want only after it has been finished.
Specification problems contribute approximately 30 percent of all bugs in software.

The research in black-box testing mainly focuses on how to maximize the effectiveness of testing
with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the
input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of
the common techniques. If we have partitioned the input space and assume all the input values in a
partition are equivalent, then we only need to test one representative value in each partition to
sufficiently cover the whole input space. Domain testing partitions the input domain into regions
and considers the input values in each domain an equivalence class. Domains can be exhaustively
tested and covered by selecting one or more representative values in each domain. Boundary
values are of special interest: experience shows that test cases that explore boundary conditions
have a higher payoff than test cases that do not. Boundary value analysis requires one or more
boundary values to be selected as representative test cases. The difficulty with domain testing is
that incorrect domain definitions in the specification cannot be efficiently discovered, and good
partitioning requires knowledge of the software structure. A good testing plan will not only contain
black-box testing, but also white-box approaches, and combinations of the two.
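Partitioning and boundary-value analysis can be sketched with a rule drawn from the data model above: the Department entity carries MinAge/MaxAge attributes, so suppose a candidate’s age must lie in an assumed range of 18 to 60. The limits and the rule itself are illustrative assumptions.

```python
# Equivalence partitions for an assumed age rule (18..60): below range,
# in range, above range. Boundary values sit at the partition edges.
MIN_AGE, MAX_AGE = 18, 60

def age_is_valid(age):
    return MIN_AGE <= age <= MAX_AGE

# One representative per partition plus the boundary values.
cases = {
    17: False,   # lower boundary - 1 (below-range partition)
    18: True,    # lower boundary
    35: True,    # representative of the in-range partition
    60: True,    # upper boundary
    61: False,   # upper boundary + 1 (above-range partition)
}
for age, expected in cases.items():
    assert age_is_valid(age) == expected
```

Five cases stand in for the whole integer input space: each partition is assumed homogeneous, and the boundary pairs catch the classic off-by-one mistakes (writing `<` where `<=` was meant).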
White-box testing
Contrary to black-box testing, the software is viewed as a white box, or glass box, in white-box
testing, as the structure and flow of the software under test are visible to the tester. Testing plans
are made according to the details of the software implementation, such as the programming
language, logic, and style. Test cases are derived from the program structure. White-box testing is
also called glass-box testing, logic-driven testing or design-based testing.

There are many techniques available in white-box testing, because the problem of intractability is
eased by specific knowledge of and attention to the structure of the software under test. The
intention of exhausting some aspect of the software is still strong in white-box testing, and some
degree of exhaustion can be achieved, such as executing each line of code at least once (statement
coverage), traversing every branch statement (branch coverage), or covering all the possible
combinations of true and false condition predicates (multiple-condition coverage). Control-flow
testing, loop testing, and data-flow testing all map the corresponding flow structure of the software
onto a directed graph. Test cases are carefully selected based on the criterion that all the nodes or
paths are covered or traversed at least once. By doing so we may discover unnecessary "dead"
code, code that is of no use or never gets executed at all, which cannot be discovered by functional
testing.

In mutation testing, the original program code is perturbed and many mutated programs are
created, each containing one fault. Each faulty version of the program is called a mutant. Test data
are selected based on their effectiveness in failing the mutants: the more mutants a test case can
kill, the better the test case is considered. The problem with mutation testing is that it is too
computationally expensive to use.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many
of the testing strategies mentioned above may not be safely classified as black-box or white-box
testing. This is also true for transaction-flow testing, syntax testing, finite-state testing, and many
other testing strategies not discussed in this text. One reason is that all of the above techniques
need some knowledge of the specification of the software under test. Another reason is that the
idea of specification itself is broad: it may contain any requirement, including the structure,
programming language, and programming style, as part of the specification content.

We may be reluctant to consider random testing a testing technique, since the test case selection is
simple and straightforward: the cases are randomly chosen. Yet studies indicate that random
testing is more cost-effective for many programs; some very subtle errors can be discovered at low
cost, and it is not inferior in coverage to other carefully designed testing techniques. One can also
obtain a reliability estimate from random testing results based on operational profiles. Effectively
combining random testing with other testing techniques may yield more powerful and cost-
effective testing strategies.
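Branch coverage, the simplest of the criteria above, can be shown in miniature. The leave-approval rule below is hypothetical, invented only so the function has exactly two branches to cover.

```python
# Hypothetical leave-approval rule used only to illustrate branch coverage.
def can_approve_leave(days_requested, balance):
    if days_requested <= balance:   # branch 1: enough leave balance
        return "approved"
    else:                           # branch 2: not enough balance
        return "rejected"

# One test case per branch gives 100% branch coverage of this function.
assert can_approve_leave(3, 10) == "approved"   # exercises branch 1
assert can_approve_leave(5, 2) == "rejected"    # exercises branch 2
```

Note that a single test case would give 100% statement coverage of the `if` line while leaving one branch untested, which is why branch coverage is the stronger criterion.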
Performance testing
Not all software systems have specifications on performance explicitly. But every system will have
implicit performance requirements. The software should not take infinite time or infinite resource
to execute. "Performance bugs" sometimes are used to refer to those design problems in software
that cause the system performance to degrade. Performance has always been a great concern and a
driving force of computer evolution. Performance evaluation of a software system usually
includes resource usage, throughput, stimulus-response time, and queue lengths detailing the
average or maximum number of tasks waiting to be serviced by selected resources. Typical
resources that need to be considered include network bandwidth, CPU cycles, disk
space, disk access operations, and memory usage. The goals of performance testing can be
performance bottleneck identification, performance comparison and evaluation, and so on. The
typical method of performance testing is to use a benchmark: a program, workload or trace
designed to be representative of typical system usage.
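A benchmark need not be elaborate. The sketch below times a hypothetical record-formatting step with the standard library’s `timeit` module and derives a throughput figure; the workload is an assumption standing in for a representative trace.

```python
# Micro-benchmark sketch using only the standard library: measure the
# throughput of a hypothetical record-formatting step.
import timeit

def format_record(emp_id, name):
    return f"{emp_id:06d}|{name.upper()}"

# Time 10,000 calls and report calls per second.
runs = 10_000
elapsed = timeit.timeit(lambda: format_record(42, "smith"), number=runs)
throughput = runs / elapsed
```

Repeating the measurement and comparing the figures before and after a change is the usual way such a benchmark is used to locate a performance regression.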
Reliability testing
Software reliability refers to the probability of failure-free operation of a system. It is related to
many aspects of software, including the testing process. Directly estimating software reliability by
quantifying its related factors can be difficult. Testing is an effective sampling method to measure
software reliability. Guided by the operational profile, software testing (usually black-box testing)
can be used to obtain failure data, and an estimation model can be further used to analyze the data
to estimate the present reliability and predict future reliability. Therefore, based on the estimation,
the developers can decide whether to release the software, and the users can decide whether to
adopt and use the software. Risk of using software can also be assessed based on reliability
information. Some advocate that the primary goal of testing should be to measure the dependability
of the tested software. There is agreement on the intuitive meaning of dependable software: it does
not fail in unexpected or catastrophic ways. Robustness testing and stress testing are variants of
reliability testing based on this simple criterion. The robustness of a software component is the
degree to which it can function correctly in the presence of exceptional inputs or stressful
environmental conditions. Robustness testing differs from correctness testing in the sense that the
functional correctness of the software is not of concern; it only watches for robustness problems
such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, so
robustness testing can be made more portable and scalable than correctness testing, and this
research has drawn more and more interest recently, much of it using commercial operating
systems as targets. Stress testing, or load testing, is often used to test the
whole system rather than the software alone. In such tests the software or system is exercised
at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of
activity, and sustained high loads.
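The simple robustness oracle described above, “it must not crash or hang”, can be sketched directly: feed exceptional inputs and check only that the component fails cleanly. The parser below is a hypothetical stand-in for a real input routine.

```python
# Robustness sketch: feed exceptional inputs to a hypothetical parser
# and check only that it fails cleanly (raises ValueError) rather than
# crashing; functional correctness is not the concern here.
def parse_employee_id(text):
    if not isinstance(text, str) or not text.isdigit():
        raise ValueError(f"bad employee id: {text!r}")
    return int(text)

exceptional_inputs = ["", "abc", "12.5", None, "-1"]
for bad in exceptional_inputs:
    try:
        parse_employee_id(bad)
        raise AssertionError(f"accepted bad input {bad!r}")
    except ValueError:
        pass  # clean, expected failure: robust behavior
```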
Security testing
Software quality, reliability and security are tightly coupled. Intruders can exploit flaws in
software to open security holes. With the development of the Internet, software security problems
are becoming even more severe. Many critical software applications and services have integrated
security measures against malicious attacks.
Maintenance
Definition
Maintenance is a very important task and is often poorly managed. The time and effort spent in
maintaining software and keeping it operational accounts for about 40% to 70% of the total cost of
the life cycle.
“Software maintenance is the activity that includes error corrections, enhancements of capabilities,
deletion of obsolete capabilities and optimization.” Basically, any work done to change the
software after it is in operation is considered to be maintenance. Its purpose is to preserve the value
of the software.
Categories
Corrective Maintenance
It means modifications made to the software to correct the defects. Defects can result from design
errors, logic errors, coding errors, data processing errors and system performance errors.
Adaptive Maintenance
Perfective Maintenance
Process
The process of maintenance for given software can be divided into four stages.
Models
The models that present for the maintenance of the Software are –
Quick-Fix Model
Iterative Enhancement Model
Reuse Oriented Model
Boehm’s Model
Boehm’s Model
This model is based on a closed loop of activities, which involve economic principles as these help
in improving productivity in maintenance. The basic motive in this model is that “the whole
process of maintenance is driven or initiated by decision making done by management who studies
the objectives against the constraints present.”
[Boehm’s maintenance model, a closed loop: management decisions lead to proposed changes,
approved changes and change implementations, producing new versions of the software; evaluation
of the results obtained from the software in use feeds back into management decisions.]
COST INCURRED OR COST MODEL USED
COCOMO was first published in 1981 in Barry W. Boehm’s book Software Engineering Economics
as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63
projects at TRW Aerospace, where Barry Boehm was Director of Software Research and Technology
in 1981. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and
programming languages ranging from assembly to PL/I. These projects were based on the waterfall
model of software development, which was the prevalent software development process in 1981.
References to this model typically call it COCOMO 81.

In 1997 COCOMO II was developed, and it was finally published in 2001 in the book Software
Cost Estimation with COCOMO II. COCOMO II is the successor of COCOMO 81 and is better
suited for estimating modern software development projects. It provides more support for modern
software development processes and an updated project database. The need for the new model
came as software development technology moved from mainframe and overnight batch processing
to desktop development, code reusability and the use of off-the-shelf software components. This
section refers to COCOMO 81.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level,
Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs,
but its accuracy is limited due to its lack of factors to account for differences in project attributes
(cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed
COCOMO additionally accounts for the influence of individual project phases.
Basic COCOMO
Basic COCOMO is a static, single-valued model that computes software development effort (and
cost) as a function of program size expressed in estimated lines of code. COCOMO applies to three
classes of software projects:
Organic projects - are relatively small, simple software projects in which small teams with
good application experience work to a set of less than rigid requirements.
Semi-detached projects - are intermediate (in size and complexity) software projects in
which teams with mixed experience levels must meet a mix of rigid and less than rigid
requirements.
Embedded projects - are software projects that must be developed within a set of tight
hardware, software, and operational constraints.
E = a_b (KLOC)^(b_b)
D = c_b (E)^(d_b)
P = E / D
where E is the effort applied in person-months, D is the development time in chronological
months, KLOC is the estimated number of delivered lines of code for the project
(expressed in thousands), and P is the number of people required. The coefficients a_b, b_b, c_b
and d_b are given in the following table:

Software project   a_b   b_b    c_b   d_b
Organic            2.4   1.05   2.5   0.38
Semi-detached      3.0   1.12   2.5   0.35
Embedded           3.6   1.20   2.5   0.32
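The Basic COCOMO equations can be applied directly in code. The coefficients below are the published COCOMO 81 values; the 32-KLOC organic project is an arbitrary example size chosen for illustration.

```python
# Basic COCOMO 81: effort E = a_b * KLOC^b_b (person-months),
# schedule D = c_b * E^d_b (chronological months), staffing P = E / D.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # chronological months
    people = effort / duration    # average staff size
    return effort, duration, people

# Example: a 32-KLOC organic project.
effort, duration, people = basic_cocomo(32, "organic")
```

For the 32-KLOC organic example this gives an effort of roughly 91 person-months over about 14 months, i.e. an average team of six to seven people.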