NOTE: Some answers have been simplified or shortened. For detailed answers, please refer to the PDF notes.
UNIT I
QUESTIONS CARRYING TWO MARKS
7) What is the problem of change and rework in software? [can write in own words]
Uncovering requirements that were not clear earlier is not the only reason for change and rework.
Software development of large and complex systems can take a few years. As time passes, the needs of
the clients change. This change of needs during development also leads to rework. This change and rework
is a major contributor to the software crisis.
Software is Expensive: The main reason for the high cost of software is that software development is still
labor-intensive. For example, software that costs more than a million dollars may run on hardware that
costs at most tens of thousands of dollars. This shows that not only is software very expensive, it also forms
the major component of the total cost of an automated system, with the hardware forming a very small
component.
Late and Unreliable: The software industry has gained a reputation for not being able to deliver on time
and within budget. A project whose budget and schedule are totally out of control is called a runaway
project. Unreliability means the software does not do what it is supposed to do, or does something it is
not supposed to do.
Maintenance and Rework: Once the software is delivered to the customer, it enters the maintenance
phase. All systems need maintenance. Software needs to be maintained not because some of its
components wear out and need to be replaced, but because there are often residual errors remaining in
the system that must be removed as they are discovered. These errors, once discovered, need to be
removed, leading to the software getting changed. This is called corrective maintenance.
The software must also adapt to fit new environments. The maintenance due to this is called adaptive
maintenance.
Clients frequently discover additional requirements they had not specified earlier. This leads to
requirements getting changed even when development may have proceeded to the coding or testing
phase. This leads to rework: the requirements and the code all have to be changed to accommodate the
new or changed requirements.
2) Define Software Engineering. Explain the various problems faced in software engineering. (5)
According to the IEEE, software engineering is the systematic approach to the development, operation,
maintenance and retirement of software. There is another definition of software engineering, which states
that "software engineering is the application of science and mathematics by which the capabilities of
computer equipment are made useful to man via computer programs, procedures and associated
documentation".
To control the software crisis, some methodical approach to software development is needed; software
engineering, as defined above, provides such an approach. The major problems faced in software
engineering are:
i) Scale: A fundamental problem of software engineering is the problem of scale. Developing a
very large system requires a very different set of methods compared to developing a small
system. In other words, the methods that are used for developing small systems generally do
not scale up to large systems.
ii) Quality and Productivity: Like all engineering disciplines, software engineering is driven by
three major factors: cost, schedule, and quality. The cost of developing a system is the cost
of the resources used for the system, which in the case of software are the manpower, hardware,
software, and other support resources. The manpower component is predominant, as software
development is highly labor-intensive. Schedule is an important factor in many projects.
Some business systems must be built within a short cycle time; in such applications, a
reduced cycle time is highly desirable even if the cost becomes higher. One of the major driving
factors in any production discipline is quality. Clearly, developing methods that produce
high-quality software is another fundamental goal of software engineering. We can view the
quality of a software product as having three dimensions: Product Operation, Product Transition
and Product Revision.
iii) Consistency and Repeatability: Consistency of performance is an important factor for any
organization; it allows an organization to predict the project's outcome with reasonable
accuracy and improve its processes to produce higher-quality products. To achieve consistency,
some standardized procedures must be followed.
iv) Change: As businesses change, they require the software supporting them to change as well.
Overall, as the world changes faster, software has to change faster. Rapid change has a special
impact on software: because software lacks the physical properties that make change harder,
much more change is expected of it. Therefore, one challenge for software engineering is to
accommodate and embrace change.
Product operation deals with quality factors such as correctness, reliability and efficiency. Product
transition deals with quality factors such as portability and interoperability. Product revision deals with
aspects related to the modification of programs, including factors like maintainability and testability.
Correctness is the extent to which a program satisfies its specifications.
Reliability is the property that defines how well the software meets its requirements.
Efficiency is a factor in all issues relating to the execution of the software. It includes considerations such
as response time, memory requirements and throughput.
Usability is the effort required to learn and operate the software properly; it is an important property that
emphasizes human aspects of the system.
Maintainability is the effort required to locate and fix errors in the programs. Testability is the effort
required to test and ensure that the system or module performs the intended operation.
Flexibility is the effort required to modify an operational program ( to enhance the functionality).
Portability is the effort required to transfer the software from one hardware configuration to another.
Reusability is the extent to which parts of a software can be used in other related applications.
Inter-operability is the effort required to couple the system with other systems.
A phased development process allows proper checking for quality and progress at defined points during
the development. Without this, one would have to wait until the end to see what software has been
produced.
Different phases can have different activities. However, in general, we can say that any problem solving in
software engineering must consist of these activities:
➢ Requirement Analysis: Requirement analysis is done in order to understand the problem to be solved.
The requirement analysis emphasizes identifying 'what' is needed from the system, and not 'how' the
system will achieve its goals.
➢ Software Design: The purpose of the design phase is to plan a solution for the problem specified by the
requirement document. The design phase takes us towards 'how' to satisfy the needs. The output of this
phase is the design document, which is the blueprint or plan for the solution and is used later during
implementation, testing and maintenance.
Design activity is divided into two phases: system design and detailed design. System design aims to
identify the modules that should be included in the system and how these modules will interact with each
other to produce the desired results. During detailed design, the internal logic of each of the modules
specified during system design is decided.
➢ Coding: The coding phase affects both testing and maintenance profoundly. Well-written code can
reduce the testing and maintenance effort. Because the testing and maintenance costs of software are
much higher than the coding cost, the goal of coding should be to reduce the testing and maintenance
effort. Simplicity and clarity should be strived for during the coding phase.
➢ Testing: The starting point of testing is unit testing. Here each module is tested separately. This test is
often performed by the coder after coding the module. The purpose is to execute different parts of the
modules to detect coding errors. After this, the module is integrated to form sub-systems and then to form
the entire system. During integration of modules, integration testing is done to detect design errors. After
the system is put together, system testing is performed. Here, the system is tested against the
requirements to see whether all the requirements are met or not. Finally, acceptance testing is performed
using the user's real-world data to demonstrate the system to the user.
➢ Managing the Process: The development process does not specify how to allocate resources to its
different activities. Nor does it specify the schedule for each activity, how to divide work within a phase,
how to ensure that each phase is being done properly etc. Without properly handling these issues, it is
unlikely that cost and quality objectives can be met. These types of issues are properly handled by project
management.
➢ Predictability
Predictability of a process determines how accurately the outcome of following a process in a project can
be predicted before the project is completed. Predictability is a fundamental property of any process.
Effective management of quality assurance activities largely depends on the predictability of the process. A
predictable process is also said to be under statistical control.
An approximate distribution of effort across the development phases:
Requirements 10%
Design 10%
Coding 30%
Testing 50%
➢ Support Change
As organizations and businesses change, the software supporting the business has to change. Hence, any
model that builds software and makes change very hard will not be suitable in many situations. Change
also takes place while development is going on. After all, the needs of the customer may change during the
course of the project. And if the project is of any significant duration, considerable changes can be
expected.
➢ Early Defect Removal and Defect Prevention
The greater the delay in detecting an error, the more expensive it is to correct. An error that occurs in the
requirements phase, if corrected during acceptance testing, can cost about 100 times more than correcting
it in the requirements phase itself. Hence it is better to provide support for early defect removal and defect
prevention.
6) With the help of a diagram explain the working of the waterfall model. (5)
Waterfall model is the simplest model which states that the phases are organized in a linear order. In this
model, a project begins with feasibility analysis. On successfully demonstrating the feasibility of a project,
the requirement analysis and project planning begins. The design starts after the requirement analysis is
complete and the coding begins after the design is complete, once the programming is complete, the code
is integrated and testing is done. On successful completion of testing, the system is installed. After this, the
regular operations and maintenance take place, as shown in the waterfall model figure (refer to the PDF notes for the diagram).
Each phase begins soon after the completion of the previous phase. Verification and validation activities
are to be conducted to ensure that the output of a phase is consistent with the overall requirements of the
system. At the end of every phase there will be an output. Outputs of earlier phases can be called work
products, and they are in the form of documents like the requirement document and design document. The
output of the project is not just the final program along with the user manuals but also the requirement
document, design document, project plan, test plan and test results.
Limitations of the waterfall model:
1. The waterfall model assumes that the requirements of a system can be frozen before the design begins. It is
difficult to state all the requirements before starting a project.
2. Freezing the requirements usually requires choosing the hardware as well. A large project might take a few
years to complete. If the hardware is selected early, then given the speed at which hardware technology
changes, it becomes very difficult to accommodate technological changes.
3. Waterfall model stipulates that the requirements be completely specified before the rest of the
development can proceed. In some situations, it might be desirable to produce a part of the system and
then later enhance the system. This can’t be done if waterfall model is used.
4. It is a document driven model which requires formal documents at the end of each phase. This approach
is not suitable for interactive applications.
5. In an interesting analysis it is found that, the linear nature of the life cycle leads to “blocking states” in
which some project team members have to wait for other team members to complete the dependent task.
The time spent in waiting can exceed the time spent in productive work.
8) Explain the working of an iterative enhancement model with the help of a diagram. (5)
This model tries to combine the benefits of both the prototyping and waterfall models. The basic idea is
that software should be developed in increments, each increment adding some functional capability to the
system. This process is continued until the full system is implemented. An advantage of this approach is
that it results in better testing, because testing each increment is likely to be easier than testing the entire
system. As with prototyping, the increments provide feedback from the client, which will be useful for
implementing the final system, and they help the client state the final requirements.
Here a project control list is created. It contains all the tasks to be performed to obtain the final
implementation and the order in which each task is to be carried out. Each step consists of removing the
next task from the list; designing, coding, testing, and implementing it; analyzing the partial system
obtained after the step; and updating the list based on the analysis. These three phases are called the
design phase, the implementation phase, and the analysis phase. The process is iterated until the project
control list becomes empty. At this point, the final implementation of the system is available.
The first version contains some capability. Based on the feedback from the users and experience with the
current version, a list of additional features is generated. And then more features are added to the next
versions. This type of process model will be helpful only when the system development can be broken
down into stages.
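The control-list loop described above can also be sketched in code. The following is a minimal illustration in Python; the function names and the queue structure are invented for the example and are not part of the notes.

from collections import deque

def iterative_enhancement(initial_tasks, design, implement, analyse):
    # Ordered project control list of tasks still to be done.
    control_list = deque(initial_tasks)
    system = []                            # the partial system built so far
    while control_list:                    # iterate until the list becomes empty
        task = control_list.popleft()      # remove the next task from the list
        plan = design(task)                # design phase
        system.append(implement(plan))     # implementation phase (code and test)
        new_tasks = analyse(system)        # analysis of the partial system
        control_list.extend(new_tasks)     # update the list after the analysis
    return system                          # final implementation of the system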
Prototyping Model
The goal of prototyping is to overcome the limitations of the waterfall model. Here a throwaway prototype
is built to understand the requirements. This prototype is developed based on the currently known
requirements. Development of the prototype undergoes design, coding and testing, but each of these
phases is not done very thoroughly or formally. By using the prototype, the client can get an actual feel of
the system, because interaction with the prototype enables the client to better understand the system.
This results in more stable requirements that change less frequently. Prototyping is very useful if there is
no manual process or existing system which helps to determine the requirements.
Initially, a preliminary version of the requirement specification document is developed, and the end-users
and clients are allowed to use the prototype. Based on their experience with the prototype, they provide
feedback to the developers and are allowed to suggest changes, if any. Based on the feedback, the
prototype is modified to incorporate the suggested changes, and the clients use it again. This process is
repeated until no further change is suggested. This model is helpful when the customer is not able to state
all the requirements. Because the prototype is throwaway, only minimal documentation is needed during
prototyping; for example, a design document, test plan, etc. are not needed for the prototype.
10) Explain the spiral model with the help of a diagram. (5) [diagram too complicated to draw, so skipped;
refer to the PDF notes for it]
As the name suggests, the activities of this model can be organized like a spiral that has many cycles. The
radial dimension represents the cumulative cost incurred in accomplishing the steps done so far, and the
angular dimension represents the progress made in completing each cycle of the spiral.
Each cycle in the spiral begins with the identification of objectives for that cycle; the different alternatives
that are possible for achieving the objectives and the constraints that exist. This is the first quadrant of the
cycle. The next step is to evaluate different alternatives based on the objectives and constraints. The focus
is based on the risks. Risks reflect the chances that some of the objectives of the project may not be met.
The next step is to develop strategies that resolve the uncertainties and risks. This step may involve
activities like prototyping. The risk-driven nature of the spiral model allows it to suit a wide range of
applications. An important feature of the spiral model is that each cycle of the spiral is completed by a
review that covers all the products developed during that cycle, including plans for the next cycle.
In a typical application of the spiral model, one might start with an extra round zero, in which the feasibility
of the basic project objectives is studied. In round one, a concept of operation might be developed; the
risks here are typically whether or not the goals can be met within the constraints. In round two, the
top-level requirements are developed. In succeeding rounds, the actual development may be done. In a
project where risks are high, this model is preferable.
Advantages of Spiral Model
1) This model has a realistic approach because software evolves as the model progresses.
2) Because prototyping is also involved, risk analysis becomes easier.
3) If used properly, risks can be reduced before they become problematic.
Disadvantages of Spiral Model
1) It is difficult to convince the customers that the evolutionary approach is controllable.
2) It demands considerable risk-assessment expertise and depends heavily on this expertise for the
success.
12) Briefly explain the various activities of software configuration management process. (5)
Changes continuously take place in a software project—changes due to the evolution of work products as
the project proceeds, changes due to defects (bugs) being found and then fixed, and changes due to
requirement changes. All these are reflected as changes in the files containing source, data, or
documentation. Configuration management (CM) or software configuration management (SCM) is the
discipline for systematically controlling the changes that take place during development. The IEEE defines
SCM as "the process of identifying and defining the items in the system, controlling the change of these
items throughout their life cycle, recording and reporting the status of items and change requests, and
verifying the completeness and correctness of items". Though all three are types of changes, changes due
to product evolution and changes due to bug fixes can be, in some sense, treated as a natural part of the
project itself which have to be dealt with even if the requirements do not change.
Software configuration management is a process largely independent of the development process, because
most development models look at the macro picture and not at changes to individual files. CM is essential
to satisfy one of the basic objectives of a project—delivery of a high-quality software product to the client.
➢ CM Functionality
To better understand CM, let us consider some of the functionality that a project requires from the CM
process.
• Give latest version of a program. Suppose that a program has to be modified. Clearly, the modification
has to be carried out in the latest copy of that program; otherwise, changes made earlier may be lost. A
proper CM process will ensure that the latest version of a file can be obtained easily.
• Undo a change or revert to a specified version. A change is made to a program, but later it becomes
necessary to undo this change request. Similarly, a change might be made to many programs to implement
some change request and later it may be decided that the entire change should be undone. The CM
process must allow this to happen smoothly.
• Prevent unauthorized changes or deletions. A programmer may decide to change some programs, only to
discover that the change has adverse side effects. The CM process ensures that unapproved changes are
not permitted.
• Gather all sources, documents, and other information for the current system. All sources and related files
are needed for releasing the product. The CM process must provide this functionality. All sources
and related files of a working system are also sometimes needed for reinstallation.
➢ CM Mechanisms
The main purpose of CM is to provide various mechanisms that can support the functionality needed by a
project to handle the types of scenarios discussed above that arise due to changes. The mechanisms
commonly used to provide the necessary functionality include the following
• Configuration identification and baselining
• Version control or version management
• Access control
A software configuration item (SCI), or item, is a document or an artifact that is explicitly placed under
configuration control and that can be regarded as a basic unit for modification. As the project proceeds,
hundreds of changes are made to these configuration items. Without periodically combining proper
versions of these items into a defined state of the system, it becomes very hard to construct the system
from the different versions of the many SCIs. For this reason, baselines are established. A baseline, once
established, captures a logical state of the system and forms the basis of change thereafter. A baseline also
forms a reference point in the development of a system. A baseline essentially is an arrangement of a set of
SCIs; that is, a baseline is a set of SCIs and the relationships between them.
An SCI X is said to depend on another SCI Y, if a change to Y might require a change to be made to X for X to
remain correct or for the baselines to remain consistent. A change request, though, might require changes
be made to some SCIs; the dependency of other SCIs on the ones being changed might require that other
SCIs also need to be changed. Clearly, the dependency between the SCIs needs to be properly understood
and documented.
Most CM systems also provide means for access control. To understand the need for access control, let us
understand the life cycle of an SCI. Typically, while an SCI is under development and is not visible to other
SCIs, it is considered to be in the working state. An SCI in the working state is
not under SCM and can be changed freely. Once the developer is satisfied that the SCI is stable enough for
it to be used by others, the SCI is given for review, and the item enters the state "under review." Once an
item is in this state, it is considered as "frozen," and any changes made to a private
copy that the developer may have made are not recognized. After a successful review the SCI is entered
into a library, after which the item is formally under SCM. The basic purpose of this review is to make sure
that the item is of satisfactory quality and is needed by others, though the exact nature of
review will depend on the nature of the SCI and the actual practice of SCM.
➢ CM Process
The CM process defines the set of activities that need to be performed to control change. As with most
activities in project management, the first stage in the CM process is planning. Then the process has to be
executed, generally by using some tools. Finally, because any CM plan requires some discipline
from the project personnel in terms of storing items in proper locations and making changes properly,
monitoring the status of the configuration items and performing CM audits are the other activities in
the CM process.
Planning for configuration management involves identifying the configuration items and specifying the
procedures to be used for controlling and implementing changes to these configuration items. Identifying
configuration items is a fundamental activity in any type of CM.
To facilitate proper naming of configuration items, the naming conventions for CM items are decided
during the CM planning stages. In addition to naming standards, version numbering must be planned.
When a configuration item is changed, the old item is not replaced with the new copy; instead, the old
copy is maintained and a new one is created. This approach results in multiple versions of an item, so
policies for version number assignment are needed. If a CM tool is being used, then sometimes the
tool handles the version numbering.
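As a rough illustration of this idea, the toy Python sketch below keeps every copy of an item and assigns increasing version numbers; the class, its methods, and the file name are assumptions made for the example and do not represent any real CM tool.

class ItemStore:
    def __init__(self):
        self._versions = {}                      # item name -> list of stored copies

    def check_in(self, name, contents):
        # The old copies are kept, not replaced; a new version number is assigned.
        history = self._versions.setdefault(name, [])
        history.append(contents)
        return len(history)                      # version number of the new copy

    def get_latest(self, name):
        # CM functionality: give the latest version of an item.
        return self._versions[name][-1]

    def revert_to(self, name, version):
        # Undo changes by checking in a specified earlier version again.
        return self.check_in(name, self._versions[name][version - 1])

store = ItemStore()
store.check_in("pay.c", "first draft")           # becomes version 1
store.check_in("pay.c", "second draft")          # becomes version 2
assert store.get_latest("pay.c") == "second draft"
store.revert_to("pay.c", 1)                      # version 3 holds the version-1 contents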
The configuration controller (CC) is responsible for the implementation of the CM plan. Where there are
large teams or where two or more teams/groups are involved in the development of the same or different
portions of the software or interfacing systems, it may be necessary to have a configuration control board
(CCB). This board includes representatives from each of the teams. A CCB (or a CC) is considered essential
for CM, and the CM plan must clearly define the roles and responsibilities of the CC/CCB. These duties will
also depend on the type of file system and the nature of CM tools being used.
The Capability Maturity Model (CMM) defines five levels of process maturity. The initial process (level 1) is essentially an ad hoc process that has no formalized method for any activity.
Basic project controls for ensuring that activities are being done properly, and that the project plan is being
adhered to, are missing. Success in such organizations depends solely on the quality and capability of
individuals. The process capability is unpredictable as the process constantly changes. Organizations at this
level can benefit most by improving project management, quality assurance, and change control.
In a repeatable process (level 2), policies for managing a software project and procedures to implement
those policies exist. That is, project management is well developed in a process at this level. Some of the
characteristics of a process at this level are: project commitments are realistic and based on past
experience with similar projects, cost and schedule are tracked and problems resolved when they arise,
formal configuration control mechanisms are in place, and software project standards are defined and
followed. Essentially, results obtained by this process can be repeated as the project
planning and tracking is formal.
At the defined level (level 3) the organization has standardized a software process, which is properly
documented. A software process group exists in the organization that owns and manages the process. In
the process each step is carefully defined with verifiable entry and exit criteria, methodologies
for performing the step, and verification mechanisms for the output of the step. In this process both the
development and management processes are formal.
At the managed level (level 4) quantitative goals exist for process and products. Data is collected from
software processes, which is used to build models to characterize the process. Hence, measurement plays
an important role in a process at this level. Due to the models built, the organization has
a good insight into the process capability and its deficiencies. The results of using such a process can be
predicted in quantitative terms.
At the optimizing level (level 5), the focus of the organization is on continuous process improvement. Data
is collected and routinely analyzed to identify areas that can be strengthened to improve quality or
productivity. New technologies and tools are introduced and their effects measured in an
effort to improve the performance of the process. Best software engineering and management practices
are used throughout the organization.
UNIT - II
QUESTIONS CARRYING TWO MARKS
4) What are data source and sink? How are they represented in a DFD?
A source is an entity outside the system that generates data consumed by the system, and a sink is an
entity outside the system that consumes data produced by the system. In a DFD, sources and sinks are
shown as labeled rectangles (for example, in the payroll DFD discussed later, the worker is both the source
of the timesheet and the sink for the paycheck).
5) Give any two symbols used in DFD with their purpose.
The processes are shown by named circles and data flows are represented by named arrows entering or
leaving the bubbles.
The desirable characteristics of a good SRS are:
1. Correct
2. Complete
3. Unambiguous
4. Verifiable.
5. Consistent
6. Ranked for importance and/or stability
7. Modifiable
8. Traceable
An SRS should specify requirements in the following areas:
1. Functionality
2. Performance
3. Design constraints imposed on an implementation
4. External interface
Another important purpose of developing an SRS is to help the clients understand their own needs. As
we mentioned earlier, for software systems that are not just automating an existing manual system,
requirements have to be visualized and created. Even where the primary goal is to automate an existing
manual system, new requirements emerge, as the introduction of software offers new potential for
features such as providing new services, performing activities in a different manner, and collecting data
that were either impossible or infeasible without a software system. In order to satisfy the client, which is
the basic quality objective of software development, the client has to be made aware of these potentials
and aided in visualizing and conceptualizing the needs and requirements of his organization. The process of
developing an SRS usually helps in this, as it forces the client and the users to think, visualize, interact, and
discuss with others (including the requirements analyst) to identify the requirements.
The requirements phase typically consists of three basic activities: problem or requirement analysis,
requirement specification, and requirements validation. The first activity deals with understanding the
problem, the goals, the constraints, etc. Problem analysis starts with some general “statement of need” or
a high level “problem statement”. During analysis the problem domain and the environment are modeled
in an effort to understand the system behavior, constraints on the system, its inputs and outputs, etc. The
basic purpose of this activity is to obtain a thorough understanding of what the software needs to provide.
The understanding obtained by problem analysis forms the basis of the second activity-requirements
specification.
As analysis produces large amounts of information and knowledge with possible redundancies; properly
organizing and describing the requirements is an important goal of this activity. The final activity focuses
on validating that what has been specified in the SRS are indeed all the requirements of the software and
making sure that the SRS is of good quality. The requirements process terminates with the production of
the validated SRS. In most real systems, there is considerable overlap and feedback between these
activities. So, some parts of the system are analyzed and then specified while the analysis of the other
parts is going on.
As shown in the figure, from the specification activity we may go back to the analysis activity. This happens
as frequently some parts of the problem are analyzed and then specified before other parts are analyzed
and specified. Furthermore, the process of specification frequently shows shortcomings in the knowledge
of the problem, thereby necessitating further analysis. Once the specification is “complete” it goes through
the validation activity. This activity may reveal problems in the specification itself, which requires going
back to the specification step, or may reveal shortcomings in the understanding of the problem, which
requires going back to the analysis activity.
In this DFD there is one basic input data flow, the weekly timesheet, which originates from the source
worker. The basic output is the paycheck, the sink for which is also the worker. In this system, first the
employee's record is retrieved, using the employee ID, which is contained in the timesheet. From the
employee record, the rate of payment and overtime are obtained. These rates and the regular and over-
time hours (from the timesheet) are used to compute the pay. After the total pay is determined, taxes are
deducted. To compute the tax deduction, information from the tax-rate file is used. The amount of tax
deducted is recorded in the employee and company records. Finally, the paycheck is issued for the net pay.
The amount paid is also recorded in the company records. All external files, such as the employee record,
company record, and tax rates, are shown as labeled straight lines. The need for multiple data flows by a
process is represented by a "*" between the data flows; this symbol represents the AND relationship.
In the DFD, the process "weekly pay" needs both the data flows "hours" and "pay rate". Similarly, the OR
relationship is represented by a "+" between the data flows.
• An SRS is correct if every requirement included in the SRS represents something required in the final
system. An SRS is complete if everything the software is supposed to do, and the responses of the
software to all classes of input data, are specified in the SRS. Completeness and correctness go hand in
hand.
• An SRS is unambiguous if and only if every requirement stated has one and only one interpretation.
Requirements are often written in a natural language, which is inherently ambiguous. If the
requirements are specified in a natural language, the SRS writer should ensure that there is no
ambiguity. One way to avoid ambiguity is to use some formal requirement specification language. The
major disadvantage of using formal languages is the large effort needed to write the SRS and the
increased difficulty in understanding formally stated requirements, especially by clients.
• An SRS is verifiable if and only if every stated requirement is verifiable. A requirement is verifiable if
there exists some cost-effective process that can check whether the final software meets that
requirement. Unambiguity is essential for verifiability. Verification of requirements is often done
through reviews.
• An SRS is consistent if there is no requirement that conflicts with another. This can be explained with
the help of an example: suppose that there is a requirement stating that process A occurs before
process B. But another requirement states that process B starts before process A. This is the situation
of inconsistency. Inconsistencies in SRS can be a reflection of some major problems.
• Generally, all the requirements for software need not be of equal importance. Some are critical.
Others are important but not critical. An SRS is ranked for importance and/or stability if for each
requirement the importance and the stability of the requirement are indicated. Stability of a
requirement reflects the chances of it being changed. Writing SRS is an iterative process.
• An SRS is modifiable if its structure and style are such that any necessary change can be made easily
while preserving completeness and consistency. The presence of redundancy is a major obstacle to
modifiability, as it can easily lead to errors. For example, assume that a requirement is stated in two
places and that the requirement later needs to be changed. If only one occurrence of the requirement is
modified, the resulting SRS will be inconsistent.
• An SRS is traceable if the origin of each requirement is clear and if it facilitates the referencing of each
requirement in future development. Forward traceability means that each requirement should be
traceable to some design and code elements. Backward traceability requires that it is possible to trace
the design and code element to the requirements they support.
2. Performance Requirements
This part of an SRS specifies the performance constraints on the software system. All the requirements
relating to the performance characteristics of the system must be clearly specified. There are two types of
performance requirements: static and dynamic.
Static requirements are those that do not impose constraints on the execution characteristics of the system.
Dynamic requirements specify constraints on the execution behaviour of the system.
3. Design constraints
There are a number of factors in the client’s environment that may restrict the choices of the designer.
Such factors include standards that must be followed, resource limits, operating environment, reliability
and security requirements, and policies that may have an impact on the design of the system. An SRS
should identify and specify all such constraints.
➢ Standard compliance : This specifies the requirements for the standards that the system must follow.
➢ Hardware Limitations : The software may have to operate on some existing or pre-determined
hardware, thus imposing restrictions on the design. Hardware limitations can include the type of machines
to be used, the operating system available on the system, and the languages supported.
➢ Reliability and Fault Tolerance : Fault tolerance requirements often make the system more complex
and expensive. Reliability requirements are very important for critical applications.
➢ Security : Security requirements place restrictions on the use of certain commands, control access to
data, provide different kinds of access requirements for different people, require the use of passwords and
cryptographic techniques, and maintain a log of activities in the system.
1. Structured English
Natural languages have been widely used for specifying requirements. The major advantage of using a
natural language is that both client and supplier understand the language. Initially, since the software
systems were small, requirements were verbally conveyed using the natural language. Later, as software
requirements grew more complex, requirements were specified in a written form, rather than orally, but
the means for the expression stayed the same.
The use of natural languages has some drawbacks:-
o By the very nature of a natural language, written requirements will be imprecise and ambiguous
o Efforts to be more precise and complete result in voluminous requirement specification documents, as
natural languages are quite verbose
Due to these drawbacks there is an effort to move from natural languages to formal languages for
requirement specification. However, natural languages are still widely used and are likely to be used in the
near future.
2. Regular Expressions
Regular expressions are used to specify the structure of symbol strings formally. String specifications are
useful for specifying such things as input data, command sequence and contents of a message. Regular
expressions are useful for such cases. Regular expressions can be considered as grammar for specifying the
valid sequences in a language and can be automatically processed. They are routinely used in compiler
construction for recognition of symbols and tokens.
There are a few basic constructs allowed in regular expressions:
1. Atoms: the basic symbols or alphabet of the language.
2. Composition: formed by concatenating two regular expressions.
3. Alternation: specifies the either/or relationship.
4. Closure: specifies the repeated occurrence of a regular expression.
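As an informal illustration only, these four constructs can be mapped onto the regular expression syntax of a programming language; the command names used below (OPEN, READ, WRITE, CLOSE) are invented for the example.

import re

# Atoms:       the basic symbols, here the command names OPEN, READ, WRITE, CLOSE.
# Composition: concatenation, e.g. OPEN followed by the body followed by CLOSE.
# Alternation: the either/or relationship, written with "|".
# Closure:     repeated occurrence, written with "*".
valid_session = re.compile(r"OPEN(READ|WRITE)*CLOSE")

print(bool(valid_session.fullmatch("OPENREADREADWRITECLOSE")))  # True: a valid command sequence
print(bool(valid_session.fullmatch("READCLOSE")))               # False: the session was never opened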
Two modules are considered independent if one can function completely without the presence of other.
Obviously, if two modules are independent, they are solvable and modifiable separately. However, all the
modules in a system cannot be independent of each other, as they must interact so that together they
produce the desired external behavior of the system.
The more connections between modules, the more dependent they are in the sense that more knowledge
about one module is required to understand or solve the other module. Hence, the fewer and simpler the
connections between modules, the easier it is to understand one without understanding the other.
Coupling between modules is the strength of interconnection between modules, or a measure of
interdependence among modules.
To solve and modify a module separately, we would like the module to be loosely coupled with other
modules. The choice of modules decides the coupling between modules. Coupling is an abstract concept
and is not easily quantifiable. So, no formulas can be given to determine the coupling between two
modules. However, some major factors can be identified as influencing coupling between modules.
Among them the most important are the type of connection between modules, the complexity of the
interface, and the type of information flow between modules. Coupling increases with the complexity and
obscurity of the interface between modules. To keep coupling low, we would like to minimize the number
of interfaces per module and the complexity of each interface. An interface of a module is used to pass
information to and from other modules. Complexity of the interface is another factor affecting coupling.
The more complex each interface is, the higher will be the degree of coupling. The type of information flow
along the interfaces is the third major factor affecting coupling. There are two kinds of information that
can flow along an interface: data or control. Passing or receiving control information means that the action
of the module will depend on this control information, which makes it more difficult to understand the
module and provide its abstraction. Transfer of data information means that a module passes as input
some data to another module and gets in return some data as output.
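The two kinds of flow can be contrasted with a small invented example (a sketch only; the function and parameter names are assumptions, not from the notes).

def pay_with_data_coupling(hours, rate):
    # Only data crosses the interface; the caller needs no knowledge of the internals.
    return hours * rate

def pay_with_control_coupling(hours, rate, mode):
    # The 'mode' flag is control information: the caller must know how the module
    # behaves for each flag value, which increases coupling.
    if mode == "overtime":
        return hours * rate * 1.5
    return hours * rate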
The levels of cohesion, from lowest to highest, are:
• Coincidental
• Logical
• Temporal
• Procedural
• Communicational
• Sequential
• Functional
• Co-incidental cohesion - It is unplanned and random cohesion, which might be the result of breaking the
program into smaller modules for the sake of modularization. Because it is unplanned, it may cause
confusion for the programmers and is generally not accepted.
• Logical cohesion - When logically categorized elements are put together into a module, it is called logical
cohesion.
• Temporal Cohesion - When elements of module are organized such that they are processed at a similar
point in time, it is called temporal cohesion.
• Procedural cohesion - When elements of module are grouped together, which are executed sequentially
in order to perform a task, it is called procedural cohesion.
• Communicational cohesion - When elements of module are grouped together, which are executed
sequentially and work on same data (information), it is called communicational cohesion.
• Sequential cohesion - When elements of module are grouped because the output of one element serves
as input to another and so on, it is called sequential cohesion.
• Functional cohesion - It is considered to be the highest degree of cohesion, and it is highly expected.
Elements of module in functional cohesion are grouped because they all contribute to a single well-defined
function. It can also be reused.
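A small invented contrast may make the two extremes concrete (illustrative assumptions only, not taken from the notes).

def net_pay(gross, tax_rate):
    # Functional cohesion: every statement contributes to the single, well-defined
    # function of computing the net pay.
    tax = gross * tax_rate
    return gross - tax

def misc_utilities(text, numbers, path):
    # Coincidental cohesion: unrelated actions grouped only to form a module.
    banner = text.upper()                 # string formatting
    total = sum(numbers)                  # arithmetic
    is_log = path.endswith(".log")        # file-name handling
    return banner, total, is_log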
For example, if the invocation of modules C and D in module A depends on the outcome of some decision,
that is represented by a small diamond in the box for A, with the arrows joining C and D coming out of this
diamond, as shown in Figure.
11) Explain the different types of modules used in Structure Chart. (5)
Modules in a system can be categorized into a few classes. Some modules obtain information from their
subordinates and then pass it to their superordinate. This kind of module is an input module. Similarly,
there are output modules that take information from their superordinate and pass it on to their
subordinates. As the names suggest, the input and output modules are typically used for the input and
output of data. The input modules get the data from the sources and get it ready to be processed, and the
output modules take the output produced and prepare it for proper presentation to the environment.
Then some modules exist solely for the sake of transforming data into some other form. Such a module is
called a transform module. Most of the computational modules typically fall in this category. Finally, there
are modules whose primary concern is managing the flow of data to and from different subordinates. Such
modules are called coordinate modules. The structure chart representation of the different types of
modules is shown in Figure 5.3. A module can perform functions of more than one type of module.
3) First-Level Factoring
Having identified the central transforms and the most abstract input and output data items, we are ready
to identify some modules for the system. We first specify the main module, whose purpose is to invoke the
subordinates. The main module is therefore a coordinate module. For each of the most abstract input data items,
an immediate subordinate module to the main module is specified. Each of these modules is an input
module, whose purpose is to deliver to the main module the most abstract data item for which it is
created.
Similarly, for each most abstract output data item, a subordinate module that is an output module that
accepts data from the main module is specified. Each of the arrows connecting these input and output
subordinate modules is labeled with the respective abstract data item flowing in the proper direction.
Finally, for each central transform, a module subordinate to the main one is specified. These modules will
be transform modules, whose purpose is to accept data from the main module and then return the
appropriate data to the main module. The data items coming to a transform module from the main
module are on the incoming arcs of the corresponding transform in the data flow diagram. The data items
returned are on the outgoing arcs of that transform. Note that here a module is created for a transform,
while input/output modules are created for data items.
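A rough sketch of the structure produced by first-level factoring is given below; the module names and the data are invented, and the point is only the division of roles among coordinate, input, transform, and output modules.

def get_hours():                    # input module: delivers the most abstract input data item
    return [40, 45, 38]

def compute_pay(hours):             # transform module: accepts data and returns the transformed data
    return [h * 10 for h in hours]

def print_paychecks(pay):           # output module: presents the most abstract output data item
    print(pay)

def main():                         # coordinate (main) module: only invokes its subordinates
    hours = get_hours()
    pay = compute_pay(hours)
    print_paychecks(pay)

main()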
The strategy requires the designer to exercise sound judgment and common sense. The basic objective is
to make the program structure reflect the problem as closely as possible. Here we mention some heuristics
that can be used to modify the structure, if necessary.
Module size is often considered the indication of module complexity. In terms of the structure of the
system, very large modules may not be implementing a single function and can therefore be broken into
many modules, each implementing a different function. On the other hand, modules that are too small
may not require any additional identity and can be combined with other modules.
However, the decision to split a module or combine different modules should not be based on size alone.
Cohesion and coupling of modules should be the primary guiding factors. A module should be split into
separate modules only if the cohesion of the original module was low, the resulting modules have a higher
degree of cohesion, and the coupling between modules doesn’t increase. Similarly, two or more modules
should be combined only if the resulting module has a high degree of cohesion and the coupling of the
resulting module is not greater than the coupling of the sub-modules. In general, a module should contain
between 5 and 100 LOC; modules above 100 LOC or below 5 LOC are not desirable.
Another factor to be considered is “fan-in” and “fan-out” of modules. Fan-in of a module is the number of
arrows coming towards the module indicating the number of superordinates. Fan-out of a module is the
number of arrows going out of that module; indicating the number of subordinates for that module. A
very high fan-out is not desirable, as it means that the module has to control and coordinate too many
modules. Whenever possible, fan-in should be maximized. In general, the fan-out should not be more than
6.
Another important factor that should be considered is the correlation of the scope of effect and the scope
of control. The scope of effect of a decision is the collection of all the modules that contain any processing
that is conditional on that decision or whose invocation depends on the outcome of the decision. The scope
of control of a module is the module itself and all its subordinates. The system is usually simpler when the
scope of effect of a decision is a subset of the scope of control of the module in which the decision is located.
An example of a design expressed in PDL (Program Design Language) is given below:
minmax (infile)
    ARRAY a
    DO UNTIL end of input
        READ an item into a
    ENDDO
    max, min := first item of a
    DO FOR each item in a
        IF max < item THEN set max to item
        IF min > item THEN set min to item
    ENDDO
END
PDL description of the minmax program.
Notice that in the PDL program we have the entire logic of the procedure, but little about the details of
implementation in a particular language. To implement this in a language, each of the PDL statements will
have to be converted into programming language statements. With PDL, a design can be expressed in
whatever level of detail that is suitable for the problem. One way to use PDL is to first generate a rough
outline of the entire solution at a given level of detail. When the design is agreed on at this level, more details can be added.
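For instance, the minmax PDL above might be translated into a concrete language roughly as follows (a sketch only; the assumption that the input file holds one number per line is not part of the PDL).

def minmax(infile):
    with open(infile) as f:                            # READ items until end of input
        a = [float(line) for line in f if line.strip()]
    smallest = largest = a[0]                          # max, min := first item of a
    for item in a:                                     # DO FOR each item in a
        if largest < item:                             # IF max < item THEN set max to item
            largest = item
        if smallest > item:                            # IF min > item THEN set min to item
            smallest = item
    return smallest, largest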
The structured outer syntax of PDL also encourages the use of structured language constructs while
implementing the design. The basic constructs of PDL are similar to those of a structured language.
PDL provides an IF construct which is similar to the if-then-else construct of Pascal. Conditions and the
statements to be executed need not be stated in a formal language. For a general selection, there is a CASE
statement. The DO construct is used to indicate repetition. The construct is indicated by:
DO iteration-criteria
    one or more statements
ENDDO
The iteration criteria can be chosen to suit the problem, and unlike a formal programming language, they
need not be formally stated; examples are the criteria used in the minmax program above, such as "DO UNTIL end of input" and "DO FOR each item in a".
A procedure is a finite sequence of well-defined steps or operations, each of which requires a finite
amount of memory and time to complete.
The starting step in the design of algorithms is statement of the problem. The problem for which an
algorithm is being devised has to be precisely and clearly stated and properly understood by the person
responsible for designing the algorithm. The next step is development of a mathematical model where one
has to select the mathematical structures that are best suited for the problem.
The next step is the design of the algorithm- the data structure and program structure are decided. Once
the algorithm is designed, correctness should be verified.
The most common method for designing algorithms or the logic for a module is to use the stepwise
refinement technique.
The stepwise refinement technique breaks the logic design problem into a series of steps, so that the
development can be done gradually.
The process starts by converting the specifications of the module into an abstract description of an
algorithm containing a few abstract statements.
In each step, one or several statements in the algorithm developed so far are decomposed into more
detailed instructions.
The successive refinement terminates when all instructions are sufficiently precise that they can easily be
converted into programming language statements. The stepwise refinement technique is a top-down
method for developing detailed design.
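As a small invented illustration of stepwise refinement, consider a module that counts word frequencies; the refinement steps are shown as comments and the final, directly programmable step as code.

# Step 0 (abstract):  read the words; count each word; report the counts.
# Step 1 (refined):   "read the words" becomes "read each line of a file and split it";
#                     "count each word" becomes "keep a dictionary of counts".
# Step 2 (final, precise enough to convert directly into programming language statements):
def word_counts(path):
    counts = {}
    with open(path) as f:
        for line in f:
            for word in line.split():
                counts[word] = counts.get(word, 0) + 1
    return counts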
The finite state modeling of objects is an aid to understand the effect of various operations defined on the
class on the state of the object. To develop the logic of operations, regular approaches for algorithm
development can be used. The model can also be used to validate if the logic for an operation is correct.
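A toy sketch of a finite state model is given below; the object, its states, and its operations are assumptions chosen only to show how the model constrains which operations are legal in which state.

class File:
    def __init__(self):
        self.state = "closed"                # initial state of the object

    def open(self):
        assert self.state == "closed", "open is defined only in the closed state"
        self.state = "open"                  # transition: closed -> open

    def read(self):
        assert self.state == "open", "read is defined only in the open state"
        return "data"                        # the state is unchanged by read

    def close(self):
        assert self.state == "open", "close is defined only in the open state"
        self.state = "closed"                # transition: open -> closed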
Design Walkthroughs
A design walkthrough is a manual method of verification. The definition and use of walkthroughs change
from organization to organization. A design walkthrough is done in an informal meeting called by the
designer or the leader of the designer's group. The walkthrough group is usually small and contains, along
with the designer, the group leader and/or another designer of the group.
The designer might just get together with a colleague for the walkthrough or the group leader might
require the designer to have the walkthrough with him. In a walkthrough the designer explains the logic
step by step, and the members of the group ask questions, point out possible errors or seek clarification. A
beneficial side effect of walkthroughs is that in the process of articulating and explaining the design in
detail, the designer himself can uncover some of the errors. Walkthroughs are essentially a form of peer
review. Due to their informal nature, they are usually not as effective as design reviews.
It should be kept in mind that the aim of the meeting is to uncover design errors, not to fix them. Fixing is
done later. Also, the psychological frame of mind should be healthy, and the designer should not be put in
a defensive position. The meeting should end with a list of action items, to be acted on later by the
designer.
Consistency Checkers
Design reviews and walkthroughs are manual processes; the people involved in the review or walkthrough
determine the errors in the design. If the design is specified in PDL or some other formally defined design
language, it is possible to detect some design defects by using consistency checkers.
Consistency checkers are essentially compilers that take as input the design specified in a design language
(PDL in our case). Clearly, they cannot produce executable code because the inner syntax of PDL allows
natural language and many activities are specified in the natural language. A consistency checker can
ensure that any modules invoked or used by a given module actually exist in the design and that the
interface used by the caller is consistent with the interface definition of the called module.
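As a rough analogy (this fragment is illustrative and uses assumed names), a C compiler performs a similar
interface-consistency check when prototypes are used:

#include <stdio.h>

/* Interface definition of the called module. */
int compute_interest(int principal, int rate);

int compute_interest(int principal, int rate)
{
    return principal * rate / 100;
}

int main(void)
{
    /* Consistent use: the call matches the declared interface above.
       A call such as compute_interest(1000, 5, 12) would be flagged by the
       compiler, because the number of arguments disagrees with the interface. */
    printf("%d\n", compute_interest(1000, 5));
    return 0;
}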
6) What are the activities that are undertaken during critical design review? (5)
The purpose of critical design review is to ensure that the detailed design satisfies the specification laid
down by system design. It is very desirable to detect and remove design error, as the cost of removing
them later can be considerably more than the cost of removing them at design time. Detecting errors in
detailed design is the aim of critical design review.
The critical design review process is similar to the other reviews, in that groups of people get together to
discuss the design with the aim of revealing design errors or undesirable properties. The review group
includes, besides the author of the detailed design, a member of the system design team, the programmer
responsible for ultimately coding the module(s) under review, and an independent software quality
engineer.
It should be kept in mind that the aim of the meeting is to uncover design errors, not to fix them. Fixing is
done later. Also, the psychological frame of mind should be healthy, and the designer should not be put in
a defensive position. The meeting should end with a list of action items, to be acted on later by the
designer.
A Sample Checklist
Does each of the modules in the system design exist in detailed design?
Are there analyses to demonstrate the performance requirement can be met?
Are all the assumptions explicitly stated and are they acceptable?
Are all relevant aspects of system design reflected in detailed design?
7) Write a note on
i. Design Walkthroughs ii. Consistency Checkers (6)
Design Walkthroughs: A design walkthrough is a manual method of verification. The definition and use of
walkthroughs change from organization to organization. A design walkthrough is done in an informal
meeting called by the designer or the leader of the designer's group. The walkthrough group is usually
small and contains, along with the designer, the group leader and/or another designer of the group.
Consistency Checkers : Consistency checkers are essentially compilers that take as input the design
specified in a design language (PDL in our case). Clearly, they cannot produce executable code because the
inner syntax of PDL allows natural language and many activities are specified in the natural language. A
consistency checker can ensure that any modules invoked or used by a given module actually exist in the
design and that the interface used by the caller is consistent with the interface definition of the called
module.
A program has a static structure as well as a dynamic structure. The static structure is the structure of the
text of the program, which is usually just a linear organization of the statements of the program. The dynamic
structure of the program is the sequences of statements executed during the execution of the program. In
other words, both the static structure and the dynamic behavior are sequences of statements; where the
sequence representing the static structure of a program is fixed, the sequence of statements it executes
can change from execution to execution.
It will be easier to understand the dynamic behavior if the structure in the dynamic behavior resembles the
static structure. The closer the correspondence between execution and text structure, the easier the
program is to understand, and the more different the structure during execution, the harder it will be to
argue about the behavior from the program text. The goal of structured programming is to ensure that the
static structure and the dynamic structures are the same. That is, the objective of structured programming
is to write programs so that the sequence of statements executed during the execution of a program is the
same as the sequence of statements in the text of that program. As the statements in a program text are
linearly organized, the objective of structured programming becomes developing programs whose control
flow during execution is linearized and follows the linear organization of the program text. Clearly, no
meaningful program can be written as a sequence of simple statements without any branching or
repetition. In structured programming, a statement is not a simple assignment statement; it is a structured
statement. The key property of a structured statement is that it has a single entry and a single exit. That is,
during execution, the execution of the (structured) statement starts from one defined point and the
execution terminates at one defined point. With single-entry and single-exit statements, we can view a
program as a sequence of (structured) statements. And if all statements are structured statements, then
during execution, the sequence of execution of these statements will be the same as the sequence in the
program text. Hence, by using single-entry and single-exit statements, the correspondence between the
static and dynamic structures can be obtained. The most commonly used single-entry and single-exit
statements are:
Selection: if B then S1 else S2
    if B then S1
Iteration: While B do S
    Repeat S until B
Sequencing: S1; S2; S3; ...
It can be shown that these three basic constructs are sufficient to program any conceivable algorithm.
Modern languages have other such constructs that help linearize the control flow of a program, which
makes it easier to understand a program. Hence, programs should be written so that, as far as possible,
single-entry, single-exit control constructs are used. The basic goal, as we have tried to emphasize, is to
make the logic of the program simple to understand. The basic objective of using structured constructs is
to linearize the control flow so that the execution behavior is easier to understand. In linearized control
flow, if we understand the behavior of each of the basic constructs properly, the behavior of the program
can be considered a composition of the behaviors of the different statements. Overall, it can be said that
structured programming, in general, leads to programs that are easier to understand than unstructured
programs.
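For example (a small sketch with assumed names), a routine built only from sequencing, selection, and
iteration keeps its execution order aligned with its textual order:

#include <stdio.h>

/* Counts the non-negative numbers in an array using only single-entry,
   single-exit constructs: sequencing, iteration (while) and selection (if). */
int count_non_negative(const int a[], int n)
{
    int i = 0;                 /* sequencing: simple statements one after another */
    int count = 0;
    while (i < n) {            /* iteration: while B do S */
        if (a[i] >= 0)         /* selection: if B then S1 */
            count = count + 1;
        i = i + 1;
    }
    return count;              /* single exit point */
}

int main(void)
{
    int data[] = { 3, -1, 0, 7, -4 };
    printf("%d\n", count_non_negative(data, 5));   /* prints 3 */
    return 0;
}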
For example, a ledger in an accountant's office has some defined uses: debit, credit, check the current
balance, etc. An operation where all debits are multiplied together and then divided by the sum of all
credits is typically not performed. So, any information in the problem domain typically has a small number
of defined operations performed on it.
Information hiding can reduce the coupling between modules and make the system more maintainable.
With information hiding, the impact on the modules using the data needs to be evaluated only when the
data structure or its access functions are changed. Otherwise, as the other modules are not directly
accessing the data, changes in these modules will have little direct effect on other modules using the data.
Also, when a data structure is changed, the effect of the change is generally limited to the access functions
if information hiding is used. Otherwise, all modules using the data structure may have to be changed.
Information hiding is also an effective tool for managing the complexity of developing software. As we
have seen, whenever possible, problem partitioning must be used so that concerns can be separated and
different parts solved separately.
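A minimal sketch of information hiding in C (the module and function names are assumed for illustration):
the ledger data is kept private to one file, and other modules use only its access functions, so a change in
the underlying data structure stays confined to this file.

/* ledger.c : the ledger data structure is hidden inside this module;
   clients see only the access functions (normally declared in a header). */
#include <stdio.h>

static double balance = 0.0;              /* not visible outside this file */

void credit(double amount) { balance += amount; }
void debit(double amount)  { balance -= amount; }
double current_balance(void) { return balance; }

int main(void)                            /* a client uses only the access functions */
{
    credit(500.0);
    debit(120.0);
    printf("%.2f\n", current_balance());  /* prints 380.00 */
    return 0;
}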
11) Explain common errors that occur during Coding (5) [for detailed ans refer notes]
➢ Memory Leaks
A memory leak is a situation where the memory is allocated to the program which is not freed
subsequently. This error is a common source of software failures which occurs frequently in the languages
which do not have automatic garbage collection (like C, C++). A software system with memory leaks keeps
consuming memory, till at some point of time the program may come to an exceptional halt because of the
lack of free memory. An example of this error is:
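A typical C fragment with this error might look like the following (an illustrative sketch, since the original
example is not reproduced here):

#include <stdlib.h>

void process_record(void)
{
    char *buffer = malloc(1024);   /* memory allocated to the program ...          */
    if (buffer == NULL)
        return;
    /* ... buffer is used here ... */
    return;                        /* ... but never freed: free(buffer) is missing,
                                      so every call leaks 1024 bytes */
}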
➢ Freeing an Already Freed Resource
In general, in programs, resources are first allocated and then freed. For example, memory is first allocated
and then deallocated. This error occurs when the programmer tries to free the already freed resource. The
impact of this common error can be catastrophic.
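A small illustrative sketch of this error in C (names assumed):

#include <stdlib.h>

void release_buffer(void)
{
    char *p = malloc(64);
    if (p == NULL)
        return;
    free(p);          /* resource freed once: correct */
    /* free(p); */    /* freeing it again would be the error described above;
                         the behaviour is undefined and can corrupt the heap */
}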
➢ NULL Dereferencing
This error occurs when we try to access the contents of a location that points to NULL. This is a commonly
occurring error which can bring a software system down. It is also difficult to detect, as the NULL
dereferencing may occur only along some paths and only under certain conditions.
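For example (an illustrative sketch), a missing check before dereferencing a pointer that may be NULL:

#include <stdio.h>

int first_char(const char *s)
{
    /* Without this check, *s would dereference NULL on the error path
       and typically crash the program. */
    if (s == NULL)
        return -1;
    return *s;
}

int main(void)
{
    char *line = NULL;                /* e.g., an allocation or lookup that failed */
    printf("%d\n", first_char(line)); /* prints -1 instead of crashing */
    return 0;
}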
12) Write a note on the importance of Comments and Layout in Coding (4)
➢ Commenting and Layout
Comments are textual statements that are meant for the program reader to aid the understanding of code.
The purpose of comments is not to explain in English the logic of the program—if the logic is so complex
that it requires comments to explain it, it is better to rewrite and simplify the code instead. Some
guidelines for commenting and layout are (a short illustrative fragment follows the list):
• Single line comments for a block of code should be aligned with the code they are meant for.
• There should be comments for all major variables explaining what they represent.
• A block of comments should be preceded by a blank comment line with just "/*" and ended with a line
containing just "*/".
• Trailing comments after statements should be short, on the same line, and shifted far enough to separate
them from statements.
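Putting these guidelines together, a commented fragment might look like this (illustrative only):

#include <stdio.h>

int main(void)
{
    /*
     * total_marks : running sum of the marks entered so far
     * num_students: number of students processed
     */
    int total_marks = 0;
    int num_students = 0;
    int marks;

    /* read marks until a negative value (or end of input) is seen */
    while (scanf("%d", &marks) == 1 && marks >= 0) {
        total_marks += marks;       /* accumulate the mark just read */
        num_students++;
    }
    printf("%d students, %d marks in total\n", num_students, total_marks);
    return 0;
}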
13) Explain static analysis and its uses. (5)[detailed ans in notes]
➢ Static Analysis
Analysis of programs by methodically analyzing the program text is called static analysis. Static analysis is
usually performed mechanically with the aid of software tools. During static analysis the program itself is
not executed, but the program text is the input to the tools. The aim of the static analysis tools is to detect
errors or potential errors, or to generate information about the structure of the program that can be useful
for documentation or understanding of the program. An advantage is that static analysis sometimes detects
the errors themselves, not just the presence of errors as in testing. This saves the effort of tracing the error
from the data that reveals the presence of errors. Furthermore, static analysis
can provide "warnings" against potential errors and can provide insight into the structure of the program.
It is also useful for determining violations of local programming standards, which the standard compilers
will be unable to detect. Extensive static analysis can considerably reduce the effort later needed during
testing.
Examples of data flow anomalies are unreachable code, unused variables, and unreferenced labels.
Unreachable code is that part of the code to which there is not a feasible path; there is no possible
execution in which it can be executed. Technically this is not an error, and a compiler will at most generate
a warning.
Unreferenced labels and unused variables are like unreachable code in that they are technically not errors,
but often are symptoms of errors; thus their presence often implies the presence of errors. Data flow
analysis is usually performed by representing a program as a graph, sometimes called the flow graph.
14) List out the various items in the checklist while reviewing the code. (5)
A Sample Checklist
• Does each of the modules in the system design exist in detailed design?
• Are there analyses to demonstrate the performance requirement can be met?
• Are all the assumptions explicitly stated and are they acceptable?
• Are all relevant aspects of system design reflected in detailed design?
• Are all the data formats consistent with the system design?
Code reading involves careful reading of the code by the programmer to detect any discrepancies between
the design specifications and the actual implementation. It involves determining the abstraction of a
module and then comparing it with its specifications. The process of code reading is best done by reading
the code inside-out, starting with the innermost structure of the module. First determine its abstract
behavior and specify the abstraction. Then the higher-level structure is considered, with the inner
structure replaced by its abstraction. This process is continued until we reach the module or program being
read. At that time the abstract behavior of the program/module will be known, which can then be
compared to the specifications to determine any discrepancies. Code reading is very useful and can detect
errors often not revealed by testing. Reading in the manner of stepwise abstraction also forces the
programmer to code in a manner conducive to this process, which leads to well-structured programs. Code
reading is sometimes called desk review.
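For instance (a constructed example), the innermost structure is read first and replaced by its abstraction:

/* Innermost structure of the module being read: */
int largest(const int a[], int n)
{
    int max = a[0];                /* candidate for the largest value */
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}
/* Abstraction recorded by the reader: "largest returns the biggest of a[0..n-1]".
   The enclosing code is then read with this abstraction substituted, and the
   resulting behaviour is compared with the module's specification. */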
➢ Static Analysis
Analysis of programs by methodically analyzing the program text is called static analysis. Static analysis is
usually performed mechanically with the aid of software tools. During static analysis the program itself is not
executed, but the program text is the input to the tools. The aim of the static analysis tools is to detect
errors or potential errors or to generate information about the structure of the program that can be useful
for documentation or understanding of the program. An advantage is that static analysis sometimes
detects the errors themselves, not just the presence of errors, as in testing. This saves the effort of tracing
the error from the data that reveals the presence of errors. Furthermore, static analysis can provide
"warnings" against potential errors and can provide insight into the structure of the program. It is also
useful for determining violations of local programming standards, which the standard compilers will be
unable to detect. Extensive static analysis can considerably reduce the effort later needed during testing.
Data flow anomalies are "suspicious" use of data in a program. In general, data flow anomalies are
technically not errors, and they may go undetected by the compiler. However, they are often a symptom of
an error, caused due to carelessness in typing or error in coding. At the very least, presence of data flow
anomalies implies poor coding. Hence, if a program has data flow anomalies, they should be properly
addressed.
Figure 8.2. A code segment with a data flow anomaly:
    x = a;
    /* x does not appear in any right-hand side before the next assignment */
    x = b;
An example of the data flow anomaly is the live variable problem, in which a variable is assigned some
value but then the variable is not used in any later computation. Such an assignment to the variable is
clearly redundant. Another simple example of this is having two assignments to a variable without using
the value of the variable between the two assignments. In this case the first assignment is redundant. For
example, consider the simple case of the code segment shown in Figure 8.2. Clearly, the first assignment
statement is useless. Perhaps the programmer meant to say y := b in the second statement, and mistyped y
as x. In that case, detecting this anomaly and directing the programmer's attention to it can save
considerable effort in testing and debugging. In addition to revealing anomalies, data flow analysis can
provide valuable information for documentation of programs. For example, data flow analysis can provide
information about which variables are modified on invoking a procedure in the caller program and the
value of the variables used in the called procedure (this can also be used to make sure that the interface of
the procedure is minimum, resulting in lower coupling). This information can be useful during maintenance
to ensure that there are no undesirable side effects of some modifications to a procedure.
Other examples of data flow anomalies are unreachable code, unused variables, and unreferenced labels.
Unreachable code is that part of the code to which there is not a feasible path; there is no possible
execution in which it can be executed. Technically this is not an error, and a compiler will at most generate
a warning. The program behavior during execution may also be consistent with its specifications. However,
often the presence of unreachable code is a sign of lack of proper understanding of the program by the
programmer, which suggests that the presence of errors is likely. Often, unreachable code comes into
existence when an existing program is modified. In that situation unreachable code may signify undesired
or unexpected side effects of the modifications. Unreferenced labels and unused variables are like
unreachable code in that they are technically not errors, but often are symptoms of errors; thus their
presence often implies the presence of errors. Data flow analysis is usually performed by representing a
program as a graph, sometimes called the flow graph.
UNIT IV
QUESTIONS CARRYING TWO MARKS
2) Define Testing.
Software Testing is a method to check whether the actual software product matches the expected
requirements and to ensure that the software product is defect free.
➢ Presence of an error implies that a failure must have occurred and observance of a failure implies
that a fault must be present in the system.
➢ Testing only reveals the presence of failure; the actual faults are identified by debugging.
2) What is test oracle? Explain with diagram. (4)
A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of
the output of the program for the test cases.
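A minimal sketch of how an oracle is used (the names are assumed; in practice the oracle is often a human
judgement, a table of expected outputs, or an independent reference implementation): the same test input
is given to the program under test and to the oracle, and the two results are compared.

#include <stdio.h>

/* Program under test (assumed example). */
int abs_under_test(int x) { return x >= 0 ? x : -x; }

/* Test oracle: an independent way of obtaining the expected result. */
int oracle(int x) { return x < 0 ? -x : x; }

int main(void)
{
    int tests[] = { -5, 0, 7 };
    for (int i = 0; i < 3; i++) {
        int got = abs_under_test(tests[i]);
        int expected = oracle(tests[i]);
        printf("input %d: %s\n", tests[i],
               got == expected ? "pass" : "FAIL");  /* a mismatch signals a failure */
    }
    return 0;
}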
Getting an ideal test criterion is not generally possible so that more practical properties of test criteria
have been proposed. Some axioms capturing some of the desirable properties of test criteria have been
proposed.
A) Applicability Axiom: it states that for every program there exists a test case set T that satisfies the
criterion.
B) Anti-extensionality Axiom: it states that there are programs P and Q, both implementing the same
specification, such that a test set T satisfies the criterion for P but does not satisfy the criterion for Q.
C) Anti-decomposition Axiom: it states that there exist a program P and its component Q such that a test
case set T satisfies the criterion for P, T' is the set of values that variables can assume on entering Q for
some test case in T, and T' does not satisfy the criterion for Q.
D) Anti-composition Axiom: it states that there exist programs P and Q such that T satisfies the criterion
for P and the outputs of P for T satisfy the criterion for Q, but T does not satisfy the criterion for P;Q.
Getting a criterion that is reliable and valid and that can be satisfied by a manageable number of test cases
is usually not possible. Even when the criterion is specified, generating test cases to satisfy a criterion is not
simple. In general, generating test cases for most of the criteria cannot be automated.
• Without looking at the internal structure of the program, it is impossible to determine equivalence
classes. The equivalence class partitioning method tries to approximate this ideal. An equivalence class
is formed of the inputs for which the behavior of the system is specified or expected to be similar. Each
group of inputs for which the behavior is expected to be different from others is considered a separate
equivalence class.
• For example, the specifications of a module that determines the absolute value for integers specify
one behavior for positive integers and another for negative integers.
In this case, we will form two equivalence classes: one for the positive integers and one for the negative
integers.
For example, if an input condition specifies a range of values (say, 0 ≤ count ≤ Max), then form a valid
equivalence class with that range and two invalid equivalence classes, one with values less than the lower
bound of the range (i.e., count < 0) and the other with values higher than the upper bound (count > Max).
• If the entire range of an input will not be treated in the same manner, then the range should be split into
two or more equivalence classes. Also, for each valid equivalence class, one or more invalid equivalence
classes should be identified.
Example:
In the case of ranges, for boundary value analysis it is useful to select the boundary elements of the range
and an invalid value just beyond the two ends (for the two invalid equivalence classes). So, if the range is
0.0 ≤ x ≤ 1.0, then the test cases are 0.0 and 1.0 (valid inputs), and -0.1 and 1.1 (invalid inputs).
Similarly, if the input is a list, attention should be focused on the first and last elements of the list. We
should also consider the outputs for boundary value analysis. If an equivalence class can be identified in
the output, we should try to generate test cases that will produce the output that lies at the boundaries of
the equivalence classes. Furthermore, we should try to form test cases that will produce an output that
does not lie in the equivalence class.
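Using the range above, the boundary-value test inputs can be listed directly (a small sketch; the checking
function is an assumed example):

#include <stdio.h>

/* Assumed function under test: accepts x only if 0.0 <= x <= 1.0. */
int in_range(double x) { return x >= 0.0 && x <= 1.0; }

int main(void)
{
    double valid_boundaries[]   = { 0.0, 1.0 };   /* boundary elements of the valid class   */
    double invalid_boundaries[] = { -0.1, 1.1 };  /* just beyond each end (invalid classes) */

    for (int i = 0; i < 2; i++)
        printf("%.1f -> %d\n", valid_boundaries[i], in_range(valid_boundaries[i]));
    for (int i = 0; i < 2; i++)
        printf("%.1f -> %d\n", invalid_boundaries[i], in_range(invalid_boundaries[i]));
    return 0;
}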
Each condition forms a node in the cause-effect graph. The conditions should be stated such that they can
be set to either true or false.
The cause-effect graph for this example is as shown below.
In the above graph, the cause-effect relationship of this example is captured. For all effects, one can easily
determine the causes each effect depends on and the exact nature of the dependency. For example,
according to this graph the effect e5 (credit account) depends on the causes c2 (command is debit),
c3 (account number is valid), and c4 (transaction_amt is valid), in such a manner that the effect e5 (credit
account) is enabled when all of c2, c3, and c4 are true. Similarly, the effect e2 (print "invalid
account_number") is enabled if c3 is false.
From this graph, a list of test cases can be generated. The basic strategy is to set an effect to 1 and then set
the causes that enable this condition; this combination of causes forms the test case. A cause may be set to
false, true, or don't care (in the case when the effect does not depend at all on the cause). To do this for all
the effects, it is convenient to use a decision table.
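For instance, based only on the dependencies stated above, two columns of such a decision table might
look like this (an illustrative sketch):

                                        Test case 1   Test case 2
  c2 (command is debit)                      1             x
  c3 (account number is valid)               1             0
  c4 (transaction_amt is valid)              1             x
  e5 (credit account)                        1             0
  e2 (print "invalid account_number")        0             1

Here x means "don't care". Each column is then turned into a concrete test case by choosing input values
that set the causes to the listed truth values.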
Cause-effect graphing, beyond generating high-yield test cases, also aids the understanding of the
functionality of the system, because the tester must identify the distinct causes and effects. There are
methods of reducing the number of test cases generated by proper traversing of the graph. Once the
causes and effects are listed and their dependencies specified, much of the remaining work can also be
automated.
Example: consider the following function to compute the absolute value of a number.
int abs(int x)
{
    if (x >= 0)
        x = 0 - x;
    return x;
}
This program is clearly wrong. Suppose we execute the function with the test case set {x = 0}. The statement
coverage criterion will be satisfied by testing with this set, but the error will not be revealed.
The relationship between the various data flow criteria and control flow criteria is given in fig:-
11) Write a note on WinRunner/ Write the important aspects of WinRunner. (5)
WinRunner is Mercury’s legacy automated testing tool.
WinRunner is a test automation tool, designed to help customers save testing time and effort by
automating the manual testing process.
• Automated testing with WinRunner addresses the problems of manual testing, speeding up the testing
process.
• You can create test scripts that check all aspects of your application, and then run these tests on each
new build.
• As WinRunner runs tests, it simulates a human user by moving the mouse cursor over the application,
clicking Graphical User Interface (GUI) objects, and entering keyboard input.
Features
➢ WinRunner is:
• a functional regression testing tool
• Windows platform dependent
• only for Graphical User Interface (GUI) based applications
• based on the Object Oriented Technology (OOT) concept
• only for static content
• a record/playback tool
➢ WinRunner environment:
• Windows - C++, Visual Basic, Java, PowerBuilder, Stingray, Smalltalk
• Web - web applications
• Other technologies - SAP, Siebel, Oracle, PeopleSoft, ActiveX
Features of SilkTest:
• SilkTest has an in-built customizable recovery system. So, while automated testing is in progress, even if
the application fails in between, the test automatically continues without halting.
• It has an object-oriented scripting language called 4Test (which means that, using a script written in
4Test, an application can be tested on different platforms).
• Test planning, management and reporting can be done through integration with other tools from Segue
Software.
TestDirector is Mercury Interactive's software test management tool. It helps quality assurance
personnel plan and organize the testing process. With TestDirector you can create a database of manual
and automated tests, build test cycles.
➢ The Apache JMeter™ application is open source software, a 100% pure Java application designed to
load test functional behavior and measure performance. It was originally designed for testing web
applications but has since expanded to other test functions.
➢ Apache JMeter may be used to test performance both on static and dynamic resources and on dynamic
web applications. It can be used to simulate a heavy load on a server, group of servers, network or
object, to test its strength or to analyze overall performance under different load types. Among the
request types it can simulate are:
❖ FTP
❖ HTTP REQUEST
❖ JDBC REQUEST
❖ JAVA OBJECT REQUEST
❖ LDAP REQUEST(LIGHTWEIGHT DIRECTORY ACCESS PROTOCOL)
SQA Robot is a tool from IBM Rational for carrying out functional/regression testing.