
SOFTWARE ENGINEERING [BCA] SDMCBM

NOTE: Some answers have been simplified or shortened. For detailed answers please refer pdf notes.

UNIT I
QUESTIONS CARRYING TWO MARKS

1) Give IEEE definition of software.


IEEE defines software as the collection of computer programs, procedures, rules, and associated
documentation and data. This definition clearly states that software is not just programs, but includes all
the associated documentation and data.

2) Give IEEE definition of software engineering.


According to IEEE, software engineering is a systematic approach to the development, operation,
maintenance and retirement of software. There is another definition of software engineering, which states
that "Software engineering is the application of science and mathematics by which the capabilities of
computer equipment are made useful to man via computer programs, procedures and associated
documentation".

3) Mention the problems of software.


• Software is expensive
• Late delivery and unreliability
• Maintenance and rework
4) Differentiate program and programming system product.

5) Justify why software is expensive.


The main reason for the high cost of software is that software development is still labor-intensive. As the
main cost of producing software is the manpower employed, the cost of developing software is generally
measured in terms of person-months of effort spent in development, and productivity is frequently
measured in the industry in terms of DLOC per person-month.
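As an illustration of this measure, consider a hypothetical project (all figures below are invented, not taken from the notes):

```python
# Hypothetical figures for illustration: a project that delivers
# 20 KDLOC, built by a team of 5 people over 12 months.
delivered_loc = 20_000      # DLOC: delivered lines of code
team_size = 5
duration_months = 12

effort = team_size * duration_months   # effort in person-months
productivity = delivered_loc / effort  # DLOC per person-month

print(effort)               # 60 person-months
print(round(productivity))  # 333 DLOC per person-month
```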

6) Expand LOC, DLOC, KDLOC.


LOC - Lines of Code
DLOC - Delivered Lines of Code
KDLOC - Kilo Delivered Lines of Code

7) What is the problem of change and rework in software? [can write in own words]
Uncovering requirements that were not clear earlier is not the only reason for change and rework.
Software development of large and complex systems can take a few years, and as time passes, the needs of
the clients change. This change of needs during development also leads to rework. Change and rework
are a major contributor to the software crisis.

8) Mention the problems of software engineering.


• Scale
• Quality and Productivity
• Consistency and Repeatability
• Change

9) What is corrective maintenance?


Software needs to be maintained not because some of its components wear out and need to be
replaced, but because there are often residual errors remaining in the system that must be removed
as they are discovered. These errors, once discovered, need to be removed, leading to the software getting
changed. This is sometimes called corrective maintenance.


10) What is adaptive maintenance


If the operating environment of the software changes, then the software must be modified to suit the needs
of the changed environment. The software must acquire some new qualities to fit the new environment.
Maintenance due to this is called adaptive maintenance.

11) Define Software Process.


A software process is a set of activities, together with proper ordering, to build high-quality software with
low cost and a small cycle time.

12) Mention three quality dimensions of a software product.


• Product operation
• Product transition
• Product revision

13) List any four quality factors of software engineering.


• Correctness
• Reliability
• Usability
• Testability
• Flexibility

14) Define Maintainability and Testability.


Maintainability is the effort required to locate and fix errors in the programs.
Testability is the effort required to test and ensure that the system or module performs the intended
operation.

15) Define Portability and Reusability.


Portability is the effort required to transfer the software from one hardware configuration to another.
Reusability is the extent to which parts of a software can be used in other related applications.

16) Define Reliability and Usability.


Reliability is the property that defines how well the software meets its requirements.
Usability is the effort required to learn and operate the software properly; it is an important property that
emphasizes human aspects of the system.

17) Write the three major components of SCM.


• CM Functionality
• CM Mechanism
• CM Process

18) Mention various phases of development process


• Requirement specification for understanding and clearly stating the problem.
• Design for deciding a plan for the solution.
• Coding for implementing the planned solution.
• Testing for verifying the programs.

19) What is Unit testing?


The starting point of testing is unit testing. Here each module is tested separately. This test is often
performed by the coder after coding the module. The purpose is to execute the different parts of the module
to detect coding errors. After this, the modules are integrated to form sub-systems and then to form the
entire system.
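A minimal sketch of unit testing, using Python's built-in unittest framework; the add() function stands in for a module under test and is invented for illustration:

```python
import io
import unittest

def add(a, b):
    """The hypothetical 'module' under test, taken in isolation."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test executes a different part of the module to detect coding errors.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, 1), 0)

# Run the unit tests for this single module, before integration.
suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.wasSuccessful())  # True: the module passes its unit tests
```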


20) What are work products?


A work product is a tangible artifact used during a software development project, for instance
requirements specifications, class model diagrams, and use case specifications.

21) What is Acceptance testing?


Acceptance Testing: This is a type of testing done by users, customers, or other authorised entities to
determine whether the application/software meets their needs and business processes. It is the most
important phase of testing, as it decides whether the client approves the application/software or not.

22) Define Product metrics and Process metrics.


Product metrics are used to quantify characteristics of the product being developed, i.e., the software.
Process metrics are used to quantify characteristics of the process being used to develop the software.
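The distinction can be made concrete with two commonly used metrics; the numbers below are invented for illustration (defect density is a product metric, defect removal efficiency a process metric):

```python
# Hypothetical figures for a delivered system of 25 KDLOC.
size_kdloc = 25
defects_before_release = 180   # found and fixed during development
defects_after_release = 20     # reported by users after delivery

# Product metric: defect density of the delivered software.
defect_density = defects_after_release / size_kdloc       # defects per KDLOC

# Process metric: how effective the process was at removing defects.
total_defects = defects_before_release + defects_after_release
removal_efficiency = defects_before_release / total_defects

print(defect_density)       # 0.8 defects per KDLOC
print(removal_efficiency)   # 0.9
```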

LONG ANSWER QUESTIONS

1) Define Software. Briefly explain the software problem. (5)


IEEE defines software as the collection of computer programs, procedures, rules, and associated
documentation and data. This definition clearly states that software is not just programs, but includes all
the associated documentation and data.

Software is Expensive: The main reason for the high cost of software is that software development is still
labor-intensive. Software that costs more than a million dollars can run on hardware that costs at most
tens of thousands of dollars. This clearly shows that not only is software very expensive, it indeed forms
the major component of the total automated system, with the hardware forming a very small component.
Late and Unreliable: The software industry has gained a reputation of not being able to deliver on time
and within budget. A project where the budget and the schedule are totally out of control is called a
runaway project. Unreliability means the software does not do what it is supposed to do, or does
something it is not supposed to do.

Maintenance and Rework: Once the software is delivered to the customer, it enters the maintenance
phase. All systems need maintenance. Software needs to be maintained not because some of its
components wear out and need to be replaced, but because there are often residual errors
remaining in the system that must be removed as they are discovered. These errors, once discovered, need
to be removed, leading to the software getting changed. This is sometimes called corrective maintenance.
The software must also acquire new qualities to fit a new environment; maintenance due to this is
called adaptive maintenance.
Clients frequently discover additional requirements they had not specified earlier. This leads to
requirements getting changed when the development may have proceeded to the coding or testing
phase, which leads to rework. The requirements and the code all have to be changed to accommodate the
new or changed requirements.

2) Define Software Engineering. Explain the various problems faced in software engineering. (5)
According to IEEE, software engineering is a systematic approach to the development, operation,
maintenance and retirement of software. There is another definition of software engineering, which states
that "Software engineering is the application of science and mathematics by which the capabilities of
computer equipment are made useful to man via computer programs, procedures and associated
documentation".


To control the software crisis, some methodical approach is needed for software development.
i) Scale: A fundamental problem of software engineering is the problem of scale. Developing a
very large system requires a very different set of methods compared to developing a small
system. In other words, the methods that are used for developing small systems generally do
not scale up to large systems.
ii) Quality and Productivity: Like all engineering disciplines, software engineering is driven by
three major factors: cost, schedule and quality. The cost of developing a system is the cost
of the resources used for the system, which in the case of software are the manpower, hardware,
software and other support resources. The manpower component is predominant, as
software development is highly labor-intensive. Schedule is an important factor in many
projects. For some business systems, the software must be built with a small cycle time; in
such applications a reduced cycle time is highly desirable even if the cost becomes higher.
One of the major factors in any production discipline is quality. Clearly, developing methods
that produce high-quality software is another fundamental goal of software engineering. We
can view the quality of a software product as having three dimensions: Product Operation,
Product Transition and Product Revision.

iii) Consistency and Repeatability: Consistency of performance is an important factor for any
organization; it allows an organization to predict the project's outcome with reasonable
accuracy and improve its processes to produce higher-quality products. To achieve consistency,
some standardized procedures must be followed.
iv) Change: As businesses change, they require the software supporting them to change. Overall, as
the world changes faster, software has to change faster. Rapid change has a special impact on
software. As software is easy to change due to its lack of physical properties that might make
changing harder, the expectations for change are much higher for software. Therefore, one
challenge for software engineering is to accommodate and embrace change.

3) Explain any four quality attributes of software engineering. (4)


The quality of a software product has three dimensions: Product Operation, Product Transition and
Product Revision.


Product operation deals with quality factors such as correctness, reliability and efficiency. Product
transition deals with quality factors such as portability and interoperability. Product revision deals with
aspects related to modification of programs, including factors like maintainability and testability.
Correctness is the extent to which a program satisfies its specifications.
Reliability is the property that defines how well the software meets its requirements.
Efficiency is a factor in all issues relating to the execution of the software. It includes considerations such
as response time, memory requirements and throughput.
Usability is the effort required to learn and operate the software properly; it is an important property that
emphasizes human aspects of the system.
Maintainability is the effort required to locate and fix errors in the programs. Testability is the effort
required to test and ensure that the system or module performs the intended operation.
Flexibility is the effort required to modify an operational program (to enhance its functionality).
Portability is the effort required to transfer the software from one hardware configuration to another.
Reusability is the extent to which parts of a software can be used in other related applications.
Inter-operability is the effort required to couple the system with other systems.

4) List and Explain different phases of phased development process. (5)

A phased development process allows proper checking of quality and progress at several defined points
during the development. Without this, one would have to wait until the end to see what software has been
produced.
Different phases can have different activities. However, in general, we can say that any problem solving in
software engineering must consist of these activities:

• Requirement specification for understanding and clearly stating the problem.


• Design for deciding a plan for the solution.
• Coding for implementing the planned solution.
• Testing for verifying the programs.

➢ Requirement Analysis: Requirement analysis is done in order to understand the problem to be solved.
The requirement analysis emphasizes identifying "what" is needed from the system, and not "how" the
system will achieve its goals.

➢ Software Design: The purpose of the design phase is to plan a solution for the problem specified by the
requirement document. The design phase takes us towards "how" to satisfy the needs. The output of this
phase is the design document, which is the blueprint or plan for the solution and is used later during
implementation, testing and maintenance.


Design activity is divided into two phases: system design and detailed design. System design aims to
identify the modules that should be included in the system and how these modules will interact with each
other to produce the desired results. During detailed design, the internal logic of each of the modules
specified during system design is decided.

➢ Coding: The coding phase affects both testing and maintenance profoundly. Well-written code can
reduce the testing and maintenance effort. Because the testing and maintenance costs of software are
much higher than the coding cost, the goal of coding should be to reduce the testing and maintenance
effort. Simplicity and clarity should be strived for during the coding phase.

➢ Testing: The starting point of testing is unit testing. Here each module is tested separately. This test is
often performed by the coder after coding the module. The purpose is to execute the different parts of the
module to detect coding errors. After this, the modules are integrated to form sub-systems and then the
entire system. During integration of modules, integration testing is done to detect design errors. After
the system is put together, system testing is performed: the system is tested against the requirements to
see whether all the requirements are met. Finally, acceptance testing is performed with the user's
real-world data to demonstrate the system to the user.

➢ Managing the Process: The development process does not specify how to allocate resources to different
activities in the process. Nor does it specify the schedule for each activity, how to divide work within a
phase, or how to ensure that each phase is being done properly. Without properly handling these issues, it
is unlikely that the cost and quality objectives can be met. These issues are handled by project
management.

5) Briefly explain the various characteristics of a software process. (5)


Some characteristics of the software processes are listed below.

➢ Predictability
Predictability of a process determines how accurately the outcome of following the process in a project can
be predicted before the project is completed. Predictability is a fundamental property of any process.
Effective management of quality assurance activities largely depends on the predictability of the process. A
predictable process is also said to be under statistical control.

➢ Support Maintainability and Testability


Over the life of software, the maintenance costs generally exceed the development costs. One of the
important objectives of the development project should therefore be to produce software that is easy to
maintain, and the process used should ensure this maintainability. A process consists of phases and
generally includes requirements, design, coding, and testing phases. Of the development cost, an example
distribution of effort across the different phases could be:


Requirements 10%
Design 10%
Coding 30%
Testing 50%
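Applied to a hypothetical project of 100 person-months (an assumed figure, not from the notes), this distribution gives:

```python
total_effort = 100  # person-months, assumed for illustration
distribution_percent = {"Requirements": 10, "Design": 10,
                        "Coding": 30, "Testing": 50}

# Effort per phase, in person-months.
effort_by_phase = {phase: total_effort * pct / 100
                   for phase, pct in distribution_percent.items()}

print(effort_by_phase["Testing"])  # 50.0 -- testing dominates the cost
```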

➢ Support Change
As organizations and businesses change, the software supporting the business has to change. Hence, any
model that builds software in a way that makes change very hard will not be suitable in many situations.
Change also takes place while development is going on. After all, the needs of the customer may change
during the course of the project, and if the project is of any significant duration, considerable changes can
be expected.
➢ Early Defect Removal and Defect Prevention
The greater the delay in detecting an error, the more expensive it is to correct. An error that occurs in the
requirements phase, if corrected only during acceptance testing, can cost about 100 times more than
correcting it in the requirements phase itself. Hence it is better to provide support for early defect removal
and defect prevention.

➢ Process Improvement and Feedback


Improving quality and reducing cost are the fundamental goals of the software engineering
process. This requires evaluating the existing process and understanding the weaknesses in the
process.

6) With the help of a diagram explain the working of the waterfall model. (5)
The waterfall model is the simplest model; it states that the phases are organized in a linear order. In this
model, a project begins with feasibility analysis. On successfully demonstrating the feasibility of the
project, requirement analysis and project planning begin. Design starts after the requirement analysis is
complete, and coding begins after the design is complete. Once the programming is complete, the code is
integrated and testing is done. On successful completion of testing, the system is installed. After this,
regular operation and maintenance take place, as shown in the figure (next page).
Each phase begins soon after the completion of the previous phase. Verification and validation activities
are conducted to ensure that the output of a phase is consistent with the overall requirements of the
system. At the end of every phase there is an output. The outputs of earlier phases are called work
products, and they take the form of documents like the requirement document and the design document.
The output of the project is not just the final program along with the user manuals, but also the
requirement document, design document, project plan, test plan and test results.


7) Explain the limitation of the waterfall model. (3)

1. Waterfall model assumes that requirements of a system can be frozen before the design begins. It is
difficult to state all the requirements before starting a project.

2. Freezing the requirements usually requires choosing the hardware. A large project might take a few
years to complete. If the hardware is selected early, then, given the speed at which hardware technology
changes, it will be very difficult to accommodate technological change.

3. Waterfall model stipulates that the requirements be completely specified before the rest of the
development can proceed. In some situations, it might be desirable to produce a part of the system and
then later enhance the system. This can’t be done if waterfall model is used.

4. It is a document driven model which requires formal documents at the end of each phase. This approach
is not suitable for interactive applications.

5. In an interesting analysis, it was found that the linear nature of the life cycle leads to "blocking states",
in which some project team members have to wait for other team members to complete dependent tasks.
The time spent waiting can exceed the time spent on productive work.

6. Client gets a feel about the software only at the end.



8) Explain the working of an iterative enhancement model with the help of a diagram. (5)

This model tries to combine the benefits of both the prototyping and waterfall models. The basic idea is
that software should be developed in increments, each increment adding some functional capability to the
system. This process is continued until the full system is implemented. An advantage of this approach is
that it results in better testing, because testing each increment is likely to be easier than testing the entire
system. As with prototyping, the increments provide feedback from the client, which is useful for
implementing the final system and helps the client state the final requirements.

Here a project control list is created. It contains all the tasks to be performed to obtain the final
implementation and the order in which each task is to be carried out. Each step consists of removing the
next task from the list; designing, coding, and testing to implement it; analysing the partial system
obtained after the step; and updating the list after the analysis. These three phases are called the design
phase, implementation phase and analysis phase. The process is iterated until the project control list
becomes empty, at which point the final implementation of the system is available.

The first version contains some capability. Based on the feedback from the users and experience with the
current version, a list of additional features is generated. And then more features are added to the next
versions. This type of process model will be helpful only when the system development can be broken
down into stages.
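The project-control-list cycle described above can be sketched as a loop; the task names are invented for illustration:

```python
# Tasks to be performed to obtain the final implementation, in order.
project_control_list = ["basic editor", "spell check", "print support"]
system_increments = []  # the partial system grows with each step

while project_control_list:             # iterate until the list is empty
    task = project_control_list.pop(0)  # remove the next task from the list
    # design phase: plan the increment for this task
    # implementation phase: code and test the increment
    system_increments.append(task)
    # analysis phase: examine the partial system; newly discovered
    # tasks would be added back onto the project control list here

print(system_increments)  # ['basic editor', 'spell check', 'print support']
```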

9) Explain the prototyping model. (5)

The goal of prototyping is to overcome the limitations of the waterfall model. Here a throwaway prototype
is built to understand the requirements. This prototype is developed based on the currently known
requirements. Development of the prototype undergoes design, coding and testing, but each of these
phases is not done very thoroughly or formally. By using the prototype, the client can get an actual feel of
the system, because interaction with the prototype enables the client to better understand the system.
This results in more stable requirements that change less frequently. Prototyping is very useful when
there is no manual process or existing system to help determine the requirements.

Initially, a preliminary version of the requirement specification document is developed, and the end-users
and clients are allowed to use the prototype. Based on their experience with the prototype, they provide
feedback to the developers and are allowed to suggest changes. Based on the feedback, the prototype is
modified to incorporate the changes suggested, and the clients are again allowed to use it. This process is
repeated until no further changes are suggested. This model is helpful when the customer is not able to
state all the requirements. Because the prototype is thrown away, only minimal documentation is needed
during prototyping; for example, the design document, test plan, etc. are not needed for the prototype.


10) Explain the spiral model with the help of a diagram. (5)[diagram too complicated to draw, so
skipped, refer pdf notes for it]
As the name suggests, the activities of this model can be organized like a spiral that has many cycles. The
radial dimension represents the cumulative cost incurred in accomplishing the steps done so far, and the
angular dimension represents the progress made in completing each cycle of the spiral.
Each cycle in the spiral begins with the identification of the objectives for that cycle, the different
alternatives that are possible for achieving the objectives, and the constraints that exist. This is the first
quadrant of the cycle. The next step is to evaluate the different alternatives based on the objectives and
constraints; the focus of this evaluation is on the risks. Risks reflect the chances that some of the
objectives of the project may not be met. The next step is to develop strategies that resolve the
uncertainties and risks; this step may involve activities such as prototyping. The risk-driven nature of the
spiral model makes it suitable for a wide range of applications. An important feature of the spiral model is
that each cycle of the spiral is completed by a review that covers all the products developed during that
cycle, including plans for the next cycle.
In a typical application of the spiral model, one might start with an extra round zero, in which the
feasibility of the basic project objectives is studied. In round one a concept of operation might be
developed; the risks here are typically whether or not the goals can be met within the constraints. In
round two, the top-level requirements are developed. In succeeding rounds the actual development may
be done. In a project where risks are high, this model is preferable.
Advantages of Spiral Model
1) This model has a realistic approach because software evolves as the model progresses.
2) Because prototypes are also involved, risk analysis is easier.
3) If used properly, risks can be reduced before they become problematic.
Disadvantages of Spiral Model
1) It is difficult to convince the customers that the evolutionary approach is controllable.
2) It demands considerable risk-assessment expertise and depends heavily on this expertise for success.

11) Briefly explain the phases of management process. (5)

12) Briefly explain the various activities of software configuration management process. (5)
Changes continuously take place in a software project—changes due to the evolution of work products as
the project proceeds, changes due to defects (bugs) being found and then fixed, and changes due to
requirement changes. All these are reflected as changes in the files containing source, data, or
documentation. Configuration management (CM) or software configuration management (SCM) is the
discipline for systematically controlling the changes that take place during development. The IEEE defines
SCM as "the process of identifying and defining the items in the system, controlling the change of these
items throughout their life cycle, recording and reporting the status of items and change requests, and
verifying the completeness and correctness of items". Though all three are types of changes, changes due
to product evolution and changes due to bug fixes can, in some sense, be treated as a natural part of the
project itself, which has to be dealt with even if the requirements do not change.


Software configuration management is a process independent of the development process, largely because
most development models look at the macro picture and not at changes to individual files. CM is essential
to satisfy one of the basic objectives of a project: delivery of a high-quality software product to the client.
➢ CM Functionality
To better understand CM, let us consider some of the functionality that a project requires from the CM
process.
• Give the latest version of a program. Suppose a program has to be modified. Clearly, the modification
has to be carried out in the latest copy of that program; otherwise, changes made earlier may be lost. A
proper CM process will ensure that the latest version of a file can be obtained easily.
• Undo a change or revert to a specified version. A change is made to a program, but later it becomes
necessary to undo this change. Similarly, a change might be made to many programs to implement
some change request, and later it may be decided that the entire change should be undone. The CM
process must allow this to happen smoothly.
• Prevent unauthorized changes or deletions. A programmer may decide to change some programs, only to
discover that the change has adverse side effects. The CM process ensures that unapproved changes are
not permitted.
• Gather all sources, documents, and other information for the current system. All sources and related files
are needed for releasing the product, and all sources and related files of a working system are also
sometimes needed for reinstallation. The CM process must provide this functionality.
➢ CM Mechanisms
The main purpose of CM is to provide various mechanisms that can support the functionality needed by a
project to handle the types of scenarios discussed above, which arise due to changes. The mechanisms
commonly used to provide the necessary functionality include the following:
• Configuration identification and baselining
• Version control or version management
• Access control
A software configuration item (SCI), or item, is a document or an artifact that is explicitly placed under
configuration control and that can be regarded as a basic unit for modification. As the project proceeds,
hundreds of changes are made to these configuration items. Without periodically combining proper
versions of these items into a state of the system, it becomes very hard to assemble the system from the
different versions of the many SCIs. For this reason, baselines are established. A baseline, once established,
captures a logical state of the system and forms the basis of change thereafter. A baseline also forms a
reference point in the development of a system. A baseline essentially is an arrangement of a set of SCIs;
that is, a baseline is a set of SCIs and the relationships between them.
An SCI X is said to depend on another SCI Y if a change to Y might require a change to be made to X for X to
remain correct or for the baselines to remain consistent. A change request might require changes to be
made to some SCIs; the dependency of other SCIs on the ones being changed might require that those
other SCIs also be changed. Clearly, the dependencies between the SCIs need to be properly understood
and documented.
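Dependency tracking of this kind can be sketched as a graph traversal; the item names and the dependencies below are invented for illustration:

```python
from collections import deque

# depends_on[X] = the SCIs that X depends on (hypothetical items).
depends_on = {
    "design.doc": ["requirements.doc"],
    "module_a.c": ["design.doc"],
    "module_b.c": ["design.doc", "module_a.c"],
    "test_plan.doc": ["requirements.doc"],
}

def affected_by(changed, depends_on):
    """Return all SCIs that transitively depend on `changed`."""
    # Invert the map: dependents[Y] = items that depend directly on Y.
    dependents = {}
    for item, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(item)
    # Breadth-first traversal from the changed item.
    seen, queue = set(), deque([changed])
    while queue:
        item = queue.popleft()
        for dep in dependents.get(item, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(affected_by("requirements.doc", depends_on)))
# ['design.doc', 'module_a.c', 'module_b.c', 'test_plan.doc']
```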


Most CM systems also provide means for access control. To understand the need for access control, let us
consider the life cycle of an SCI. Typically, while an SCI is under development and is not visible to others, it
is considered to be in the working state. An SCI in the working state is not under SCM and can be changed
freely. Once the developer is satisfied that the SCI is stable enough to be used by others, the SCI is given
for review, and the item enters the state "under review". Once an item is in this state, it is considered
"frozen", and any changes made to a private copy that the developer may have made are not recognized.
After a successful review the SCI is entered into a library, after which the item is formally under SCM. The
basic purpose of this review is to make sure that the item is of satisfactory quality and is needed by
others, though the exact nature of the review will depend on the nature of the SCI and the actual practice
of SCM.
➢ CM Process
The CM process defines the set of activities that need to be performed to control change. As with most
activities in project management, the first stage in the CM process is planning. Then the process has to be
executed, generally by using some tools. Finally, since any CM plan requires some discipline from the
project personnel in terms of storing items in proper locations and making changes properly, monitoring
the status of the configuration items and performing CM audits are other activities in the CM process.
Planning for configuration management involves identifying the configuration items and specifying the
procedures to be used for controlling and implementing changes to these configuration items. Identifying
configuration items is a fundamental activity in any type of CM.
To facilitate proper naming of configuration items, the naming conventions for CM items are decided
during the CM planning stages. In addition to naming standards, version numbering must be planned.
When a configuration item is changed, the old item is not replaced with the new copy; instead, the old
copy is maintained and a new one is created. This approach results in multiple versions of an item, so
policies for version number assignment are needed. If a CM tool is being used, then sometimes the
tool handles the version numbering.
The configuration controller (CC) is responsible for the implementation of the CM plan. Where there are
large teams or where two or more teams/groups are involved in the development of the same or different
portions of the software or interfacing systems, it may be necessary to have a configuration control board
(CCB). This board includes representatives from each of the teams. A CCB (or a CC) is considered essential
for CM, and the CM plan must clearly define the roles and responsibilities of the CC/CCB. These duties will
also depend on the type of file system and the nature of CM tools being used.
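The versioning policy described above, where the old copy is kept and a new version is created on each change, can be sketched in Python. This is a minimal in-memory model for illustration; real CM tools implement it with repositories:

```python
class VersionedItem:
    """A configuration item whose old copies are kept on every change."""

    def __init__(self, name, contents):
        self.name = name
        self.versions = [contents]  # version numbers start at 1

    def change(self, new_contents):
        # The old item is not replaced; a new version is created instead.
        self.versions.append(new_contents)

    def get(self, version=None):
        # Latest version by default; otherwise the requested version number.
        return self.versions[-1] if version is None else self.versions[version - 1]

srs = VersionedItem("srs.txt", "draft 1")
srs.change("draft 2")
print(len(srs.versions), srs.get(1), srs.get())  # 2 draft 1 draft 2
```

Because every change appends rather than overwrites, any earlier baseline can still be reconstructed from the stored versions.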

13) Explain the software configuration item (SCI). (5)


A Software configuration item (SCI) is a document or an artifact that is explicitly placed under configuration
control and that can be regarded as a basic unit for modification. As the project proceeds, hundreds of
changes are made to these configuration items. Without periodically combining proper versions of these
items into a state of the system, it will become very hard to get the system from the different versions of
the many SCIs. For this reason, baselines are established. A baseline, once established, captures a logical
state of the system, and forms the basis of change thereafter. A baseline also forms a reference point in


the development of a system. A baseline essentially is an arrangement of a set of SCIs. That is, a baseline is
a set of SCIs and the relationship between them.
An SCI X is said to depend on another SCI Y, if a change to Y might require a change to be made to X for X to
remain correct or for the baselines to remain consistent. A change request, though, might require changes
be made to some SCIs; the dependency of other SCIs on the ones being changed might require that other
SCIs also need to be changed. Clearly, the dependency between the SCIs needs to be properly understood
and documented. Most CM systems also provide means for access control. To understand the need for
access control, let us understand the life cycle of an SCI. Typically, while an SCI is under development and is
not visible to other SCIs, it is considered to be in the working state. An SCI in the working state is not under
SCM and can be changed freely. Once the developer is satisfied that the SCI is stable enough to be used by
others, the SCI is given for review, and the item enters the state "under review." Once an item is in this
state, it is considered "frozen," and any changes made to a private copy that the developer may have made
are not recognized. After a successful review the SCI is entered into a library, after which the item is
formally under SCM. The basic purpose of this review is to make sure that the item is of satisfactory quality
and is needed by others, though the exact nature of the review will depend on the nature of the SCI and
the actual practice of SCM.

14) Explain the SCM life cycle of an item. (5)


The main purpose of CM is to provide various mechanisms that can support the functionality needed by a
project to handle the types of scenarios discussed above that arise due to changes. The mechanisms
commonly used to provide the necessary functionality include the following:
• Configuration identification and baselining
• Version control or version management
• Access control
A software configuration item (SCI), or item, is a document or an artifact that is explicitly placed under
configuration control and that can be regarded as a basic unit for modification. As the project proceeds,
hundreds of changes are made to these configuration items. Without periodically combining proper
versions of these items into a state of the system, it will become very hard to get the system from the
different versions of the many SCIs. For this reason, baselines are established. A baseline, once established,
captures a logical state of the system, and forms the basis of change thereafter. A baseline also forms a
reference point in the development of a system. A baseline essentially is an arrangement of a set of SCIs.
That is, a baseline is a set of SCIs and the relationship between them.
An SCI X is said to depend on another SCI Y, if a change to Y might require a change to be made to X for X to
remain correct or for the baselines to remain consistent. A change request, though, might require changes
be made to some SCIs; the dependency of other SCIs on the ones being changed might require that other
SCIs also need to be changed. Clearly, the dependency between the SCIs needs to be properly understood
and documented.


Most CM systems also provide means for access control. To understand the need for access control, let us
understand the life cycle of an SCI. Typically, while an SCI is under development and is not visible to other
SCIs, it is considered to be in the working state. An SCI in the working state is not under SCM and can be
changed freely. Once the developer is satisfied that the SCI is stable enough to be used by others, the SCI is
given for review, and the item enters the state "under review." Once an item is in this state, it is considered
"frozen," and any changes made to a private copy that the developer may have made are not recognized.
After a successful review the SCI is entered into a library, after which the item is formally under SCM. The
basic purpose of this review is to make sure that the item is of satisfactory quality and is needed by others,
though the exact nature of the review will depend on the nature of the SCI and the actual practice of SCM.
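The life cycle just described (working, then under review, then formally under SCM) can be sketched as a small state machine in Python; the class and method names are illustrative:

```python
from enum import Enum

class State(Enum):
    WORKING = "working"            # under development; not under SCM, freely changeable
    UNDER_REVIEW = "under review"  # frozen; private changes are not recognized
    BASELINED = "baselined"        # entered into the library; formally under SCM

class SCI:
    def __init__(self, name):
        self.name = name
        self.state = State.WORKING

    def submit_for_review(self):
        # The developer judges the item stable enough to be used by others.
        if self.state is State.WORKING:
            self.state = State.UNDER_REVIEW

    def review_passed(self):
        # After a successful review the item enters the library.
        if self.state is State.UNDER_REVIEW:
            self.state = State.BASELINED

item = SCI("design-doc")
item.submit_for_review()
item.review_passed()
print(item.state.value)  # baselined
```

The guards on each transition mirror the rule that an item can only move forward through the states in the order given.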

15) Write a note on Capability Maturity Model. (5)


Once some process improvement takes place, the process state may change, and a new set of possibilities
may emerge. This concept of introducing changes in small increments based on the current state of the
process has been captured in the Capability Maturity Model (CMM) framework. The CMM framework
provides a general roadmap for process improvement.
Software process capability describes the range of expected results that can be achieved by following the
process. The process capability of an organization determines what can be expected from the organization
in terms of quality and productivity. The goal of process improvement is to improve
the process capability. A maturity level is a well-defined evolutionary plateau towards achieving a mature
software process. Based on the empirical evidence found by examining the processes of many
organizations, the CMM suggests that there are five well-defined maturity levels for a software process.
These are initial (level 1), repeatable (level 2), defined (level 3), managed (level 4), and optimizing (level 5). The CMM framework
says that as process improvement is best incorporated in small increments, processes go from their current
levels to the next higher level when they are improved. Hence, during the course of
process improvement, a process moves from level to level until it reaches level 5.


The initial process (level 1) is essentially an ad hoc process that has no formalized method for any activity.
Basic project controls for ensuring that activities are being done properly, and that the project plan is being
adhered to, are missing. Success in such organizations depends solely on the quality and capability of
individuals. The process capability is unpredictable as the process constantly changes. Organizations at this
level can benefit most by improving project management, quality assurance, and change control.
In a repeatable process (level 2), policies for managing a software project and procedures to implement
those policies exist. That is, project management is well developed in a process at this level. Some of the
characteristics of a process at this level are: project commitments are realistic and based on past
experience with similar projects, cost and schedule are tracked and problems resolved when they arise,
formal configuration control mechanisms are in place, and software project standards are defined and
followed. Essentially, results obtained by this process can be repeated as the project
planning and tracking is formal.
At the defined level (level 3) the organization has standardized a software process, which is properly
documented. A software process group exists in the organization that owns and manages the process. In
the process each step is carefully defined with verifiable entry and exit criteria, methodologies
for performing the step, and verification mechanisms for the output of the step. In this process both the
development and management processes are formal.
At the managed level (level 4) quantitative goals exist for process and products. Data is collected from
software processes, which is used to build models to characterize the process. Hence, measurement plays
an important role in a process at this level. Due to the models built, the organization has
a good insight of the process capability and its deficiencies. The results of using such a process can be
predicted in quantitative terms.
At the optimizing level (level 5), the focus of the organization is on continuous process improvement. Data
is collected and routinely analyzed to identify areas that can be strengthened to improve quality or
productivity. New technologies and tools are introduced and their effects measured in an
effort to improve the performance of the process. Best software engineering and management practices
are used throughout the organization.


UNIT - II
QUESTIONS CARRYING TWO MARKS

1) Give IEEE definition of software requirements.


1. A condition or capability needed by a user to solve a problem or achieve an objective.
2. A condition or capability that must be met or possessed by a system to satisfy a contract, standard,
specification, or other formally imposed document.

2) Mention the basic activities involved in requirement process.


The requirements phase typically consists of three basic activities: problem or requirement analysis,
requirement specification, and requirements validation.

3) Mention the three basic approach to problem analysis.


➢ Informal approaches based on structured communication and interaction.
➢ Conceptual modeling based approaches
➢ Prototyping

4) What are data source and sink? How to represent them in DFD?
A source is a net originator of data and a sink is a net consumer of data; both are typically outside the main
system of study. In a DFD, a source or sink is represented by a named rectangle.

5) Give any two symbols used in DFD with their purpose.
The processes are shown by named circles (bubbles) and data flows are represented by named arrows
entering or leaving the bubbles.


6) List some common errors while designing the DFD.


1. Unlabeled data flows.
2. Missing data flows; information required by a process is not available.
3. Extraneous data flows; some information is not being used in the process.
4. Consistency not maintained during refinement.
5. Missing processes.
6. Contains some control information.

7) What is a Data Dictionary?


A data dictionary in software engineering is a file or a set of files that contains metadata about the data in
a system (records about the data items and other objects in the database), such as data ownership,
relationships of the data to other objects, and some other data.
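As an illustration, a fragment of a data dictionary for a payroll system could record the composition and origin of each data item; all the entry and field names below are hypothetical:

```python
# Each entry describes one data item: what it is composed of and where it comes from.
data_dictionary = {
    "weekly_timesheet": {
        "composed_of": ["employee_name", "employee_id", "regular_hours", "overtime_hours"],
        "source": "worker",
    },
    "pay_rate": {
        "composed_of": ["regular_rate", "overtime_rate"],
        "source": "employee_record",
    },
}

# Look up the metadata recorded for an item.
print(data_dictionary["weekly_timesheet"]["source"])  # worker
```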

8) Mention the two approaches to Prototyping.


There are two approaches to prototyping: throwaway and evolutionary.

9) What are Throwaway and Evolutionary Prototyping?


In the throwaway approach the prototype is constructed with the idea that it will be discarded after the
analysis is complete and the final system will be built from scratch. In the evolutionary approach, the
prototype is built with the idea that it will eventually be converted into the final system.

10) Mention any four characteristics of an SRS.[any 4]

1. Correct
2. Complete
3. Unambiguous
4. Verifiable.
5. Consistent
6. Ranked for important/stability
7. Modifiable
8. Traceable

11) Mention the components of an SRS.

1. Functionality
2. Performance
3. Design constraints imposed on an implementation
4. External interface

12) List different Design Constraints


• Standard compliance
• Hardware limitations
• Reliability and fault tolerance
• Security

13) Define Coupling and Cohesion?


Cohesion - a measure of how closely the elements within a module are related to each other; high
cohesion means all functionally related elements are grouped together.
Coupling - a measure of the strength of interconnection (communication) between different modules.


14) What is Design Methodology?


A design methodology is a systematic approach to creating a design. For example, the Structured Design
Methodology (SDM) views every software system as having some inputs that are converted into the
desired outputs by the software system.

15) Mention different types of errors that occurs in an SRS.


• Omission
• Inconsistency
• Incorrect fact
• Ambiguity

16) What are the factors that influencing Coupling?


Coupling is affected by the type of connections between modules, interface complexity, information flow
between module connections, and binding time of module connections.

17) What is Functional Abstraction?


With functional abstraction, a module is specified by the function it performs, while the details of how it
performs that function remain hidden. Functional abstraction can be generalized to collections of
subprograms referred to as 'groups'; within these groups there exist routines which may be visible or
hidden.

18) What are top down and bottom-up design approaches?


Top-down design takes the whole software system as one entity and then decomposes it into more than
one sub-system or component based on some characteristics.
The bottom-up design model starts with the most specific and basic components. It proceeds to compose
higher-level components by using basic or lower-level components.

19) Mention different types of modules in a Structure Chart.


• Input module
• Output module
• Coordinate module
• Transform module
• Composite module

20) Define Most Abstract Input and Most Abstract Output?


The most abstract input data elements (MAI) are those data elements in the data flow diagram that are
furthest removed from the physical inputs but can still be considered inputs to the system.
The most abstract output data elements (MAO) are the data elements that are furthest removed from the
actual outputs but can still be considered outgoing.

LONG ANSWER QUESTIONS

1) Explain need for SRS. (4)


The origin of most software systems is in the need of a client, who either wants to automate an existing
manual system or desires a new software system. The software system itself is created by the developer.
Finally, the completed system will be used by the end users. Thus, there are three major parties interested
in a new system: the client, the user, and the developer. Somehow the requirements for the system that
will satisfy the needs of the clients and the concerns of the users have to be communicated to the
developer. The problem is that the client usually does not understand software or the software
development process, and the developer often does not understand the client's problem and application
area. This causes a communication gap between the parties involved in the development project. A basic
purpose of the software requirements specification is to bridge this communication gap. The SRS is the
medium through which the client and user needs are accurately specified; indeed, the SRS forms the basis
of software development. A good SRS should satisfy all the parties, which is sometimes very hard to
achieve, and involves trade-offs and persuasion.


Another important purpose of developing an SRS is helping the clients understand their own needs. As
mentioned earlier, for software systems that are not just automating existing manual systems,
requirements have to be visualized and created. Even where the primary goal is to automate an existing
manual system, new requirements emerge as the introduction of software offers new potential for
features, such as providing new services, performing activities in a different manner, and collecting data
that were either impossible or infeasible without a software system. In order to satisfy the client, which is
the basic quality objective of software development, the client has to be made aware of these potentials
and aided in visualizing and conceptualizing the needs and requirements of his organization. The process
of developing an SRS usually helps in this, as it forces the client and the users to think, visualize, interact,
and discuss with others (including the requirements analyst) to identify the requirements.

2) Explain the activities of Requirement Process with a block diagram. (5)

The requirements phase typically consists of three basic activities: problem or requirement analysis,
requirement specification, and requirements validation. The first aspect, deals with understanding the
problem, the goals, the constraints, etc. Problem analysis starts with some general “statement of need” or
a high level “problem statement”. During analysis the problem domain and the environment are modeled
in an effort to understand the system behavior, constraints on the system, its inputs and outputs, etc. The
basic purpose of this activity is to obtain a thorough understanding of what the software needs to provide.
The understanding obtained by problem analysis forms the basis of the second activity-requirements
specification.

As analysis produces large amounts of information and knowledge with possible redundancies, properly
organizing and describing the requirements is an important goal of this activity. The final activity focuses
on validating that what has been specified in the SRS are indeed all the requirements of the software and
making sure that the SRS is of good quality. The requirements process terminates with the production of
the validated SRS. In most real systems, there is considerable overlap and feedback between these
activities. So, some parts of the system are analyzed and then specified while the analysis of the other
parts is going on.

As shown in the figure, from the specification activity we may go back to the analysis activity. This happens
as frequently some parts of the problem are analyzed and then specified before other parts are analyzed
and specified. Furthermore, the process of specification frequently shows shortcomings in the knowledge
of the problem, thereby necessitating further analysis. Once the specification is “complete” it goes through
the validation activity. This activity may reveal problems in the specification itself, which requires going
back to the specification step, or may reveal shortcomings in the understanding of the problem, which
requires going back to the analysis activity.


3) Explain Data Flow Diagram with example.


A DFD shows the flow of data through a system. It views a system as a function that transforms the inputs
into desired outputs. The DFD aims to capture the transformations that take place within a system to the
input data so that eventually the output data is produced. The agent that performs the transformation of
data from one state to another is called a process (or a bubble). A DFD shows the movement of data
through the different transformations or processes in the system. The processes are shown by named
circles and data flows are represented by named arrows entering or leaving the bubbles. A rectangle
represents a source or sink and is a net originator or consumer of data. A source or a sink is typically
outside the main system of study.

In this DFD there is one basic input data flow, the weekly timesheet, which originates from the source
worker. The basic output is the paycheck, the sink for which is also the worker. In this system, first the
employee's record is retrieved, using the employee ID, which is contained in the timesheet. From the
employee record, the rate of payment and overtime are obtained. These rates and the regular and over-
time hours (from the timesheet) are used to compute the pay. After the total pay is determined, taxes are
deducted. To compute the tax deduction, information from the tax-rate file is used. The amount of tax
deducted is recorded in the employee and company records. Finally, the paycheck is issued for the net pay.
The amount paid is also recorded in company records. All external file such as employee record, company
record, and tax rates are shown as a labeled straight line. The need for multiple data flows represented by
a process is represented by a “*” between the data flows. This symbol represents the AND relationship.

In the DFD, for the process "weekly pay" the data flow "hours" and "pay rate" both are needed, as shown
in the DFD. Similarly, the OR relationship is represented by a "+" between the data flows.

4) Explain the characteristics of an SRS. (5)


Characteristics of an SRS
A good SRS is:
1. Correct
2. Complete
3. Unambiguous
4. Verifiable.
5. Consistent
6. Ranked for important/stability
7. Modifiable
8. Traceable


• An SRS is correct if every requirement included in the SRS represents something required in the final
system. An SRS is complete if everything the software is supposed to do and the responses of the
software to all classes of input data are specified in the SRS. Completeness and correctness go hand-in-
hand.

• An SRS is unambiguous if and only if every requirement stated has one and only one interpretation.
Requirements are often written in natural language, which are inherently ambiguous. If the
requirements are specified using natural language, the SRS writer should ensure that there is no
ambiguity. One way to avoid ambiguity is to use some formal requirement specification language. The
major disadvantage of using formal languages is large effort is needed to write an SRS and increased
difficulty in understanding formally stated requirements especially by clients.

• An SRS is verifiable if and only if every stated requirement is verifiable. A requirement is verifiable if
there exists some cost-effective process that can check whether the final software meets that
requirement. Unambiguity is essential for verifiability. Verification of requirements is often done
through reviews.

• An SRS is consistent if there is no requirement that conflicts with another. This can be explained with
the help of an example: suppose that there is a requirement stating that process A occurs before
process B. But another requirement states that process B starts before process A. This is the situation
of inconsistency. Inconsistencies in SRS can be a reflection of some major problems.

• Generally, all the requirements for software need not be of equal importance. Some are critical.
Others are important but not critical. An SRS is ranked for importance and/or stability if for each
requirement the importance and the stability of the requirement are indicated. Stability of a
requirement reflects the chances of it being changed. Writing SRS is an iterative process.

• An SRS is modifiable if its structure and style are such that any necessary change can be made easily
while preserving completeness and consistency. Presence of redundancy is a major difficulty to
modifiability as it can easily lead to errors. For example, assume that a requirement is stated in two
places and that requirement later need to be changed. If only one occurrence of the requirement is
modified, the resulting SRS will be inconsistent

• An SRS is traceable if the origin of each requirement is clear and if it facilitates the referencing of each
requirement in future development. Forward traceability means that each requirement should be
traceable to some design and code elements. Backward traceability requires that it is possible to trace
the design and code element to the requirements they support.

5) Explain the various components of an SRS. (5)


Completeness of specifications is difficult to achieve and even more difficult to verify. Having guidelines
about what different things an SRS should specify will help in completely specifying the requirements.
Some of the system properties that an SRS must address are:
1. Functionality
2. Performance
3. Design constraints imposed on an implementation
4. External interface


Conceptually, an SRS should have these components.


1. Functional requirements
Functional requirements specify which outputs should be produced from the given inputs. They describe
the relationship between the input and output of the system. For each functional requirement, a detailed
description of all data inputs and their source, the unit of measure, and the range of valid inputs must be
specified.

2. Performance Requirements
This part of an SRS specifies the performance constraints on the software system. All the requirements
relating to the performance characteristics of the system must be clearly specified. There are two types of
performance requirements: static and dynamic.
Static requirements are those that do not impose constraints on the execution characteristics of the
system. Dynamic requirements specify constraints on the execution behaviour of the system.

3. Design constraints
There are a number of factors in the client's environment that may restrict the choices of the designer.
Such factors include standards that must be followed, resource limits, operating environment, reliability
and security requirements, and policies that may have an impact on the design of the system. An SRS
should identify and specify all such constraints.
➢ Standard compliance : This specifies the requirements for the standards that the system must follow.
➢ Hardware Limitations : The software may have to operate on some existing or pre-determined
hardware, thus imposing restrictions on the design. Hardware limitations can include the type of machines
to be used, operating system available on the system, languages supported.
➢ Reliability and Fault Tolerance : Fault tolerance requirements often make the system more complex
and expensive. Reliability requirements are very important for critical applications.
➢ Security : Security requirements place restrictions on the use of certain commands, control access to
data, provide different kinds of access requirements for different people, require the use of passwords and
cryptography techniques, and maintain a log of activities in the system.

6) Write a note on specification languages for an SRS. (6)


Requirement specification necessitates the use of some specification language. The language should
possess many of the desired qualities of the SRS, such as modifiability, understandability, and unambiguity.
In addition, we want the language to be easy to learn and use. For example, to avoid ambiguity, it is best to
use some formal language. But for ease of understanding, a natural language might be preferable. Some of
the most commonly used languages for requirement specification are:-
o Structured English
o Regular expressions
o Decision tables
o Finite state automata

1. Structured English
Natural languages have been widely used for specifying requirements. The major advantage of using a
natural language is that both client and supplier understand the language. Initially, since the software
systems were small, requirements were verbally conveyed using the natural language. Later, as software
requirements grew more complex, requirements were specified in a written form, rather than orally, but
the means for the expression stayed the same.
The use of natural languages has some drawbacks:-
o By the very nature of a natural language, written requirements will be imprecise and ambiguous
o Efforts to be more precise and complete result in voluminous requirement specification documents, as
natural languages are quite verbose


Due to these drawbacks there is an effort to move from natural languages to formal languages for
requirement specification. However, natural languages are still widely used and are likely to be used in the
near future.

2. Regular Expressions
Regular expressions are used to specify the structure of symbol strings formally. String specifications are
useful for specifying such things as input data, command sequence and contents of a message. Regular
expressions are useful for such cases. Regular expressions can be considered as grammar for specifying the
valid sequences in a language and can be automatically processed. They are routinely used in compiler
construction for recognition of symbols and tokens.
There are few basic constructs allowed in regular expressions:-
1. Atoms : the basic symbol or alphabet of a language.
2. Composition : formed by concatenating two regular expressions.
3. Alternation : Specifies the either/or relationship.
4. Closure : specifies the repeated occurrence of a regular expression.
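The four constructs above can be seen in a small Python sketch; the command names and the pattern itself are illustrative, not taken from any particular specification:

```python
import re

# atoms:       the literal symbols 'login', 'read', 'write', 'logout'
# composition: concatenation, e.g. 'login;' followed by commands followed by 'logout'
# alternation: read|write -- either command is valid at that point
# closure:     (...)* -- the bracketed sequence may repeat zero or more times
pattern = re.compile(r"^login;((read|write);)*logout$")

print(bool(pattern.match("login;read;write;logout")))  # True  -- valid sequence
print(bool(pattern.match("login;delete;logout")))      # False -- unknown symbol
```

Such a pattern can serve as a precise, automatically checkable specification of a valid command sequence, which is exactly the use of regular expressions described above.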


7) Explain the general structure of an SRS document. (6)


8) What is coupling? Explain the factors that affect Coupling. (4)

Two modules are considered independent if one can function completely without the presence of other.
Obviously, if two modules are independent, they are solvable and modifiable separately. However, all the
modules in a system cannot be independent of each other, as they must interact so that together they
produce the desired external behavior of the system.

The more connections between modules, the more dependent they are in the sense that more knowledge
about one module is required to understand or solve the other module. Hence, the fewer and simpler the
connections between modules, the easier it is to understand one without understanding the other.
Coupling between modules is the strength of interconnection between modules or a measure of
independence among modules.

To solve and modify a module separately, we would like the module to be loosely coupled with other
modules. The choice of modules decides the coupling between modules. Coupling is an abstract concept
and is not easily quantifiable. So, no formulas can be given to determine the coupling between two
modules. However, some major factors can be identified as influencing coupling between modules.

Among them the most important are the type of connection between modules, the complexity of the
interface, and the type of information flow between modules. Coupling increases with the complexity and
obscurity of the interface between modules. To keep coupling low we would like to minimize the number
of interfaces per module and the complexity of each interface. An interface of a module is used to pass
information to and from other modules. The complexity of the interface is another factor affecting coupling.

The more complex each interface is, the higher will be the degree of coupling. The type of information flow
along the interfaces is the third major factor affecting coupling. There are two kinds of information that
can flow along an interface: data or control. Passing or receiving control information means that the action
of the module will depend on this control information, which makes it more difficult to understand the
module and provide its abstraction. Transfer of data information means that a module passes as input
some data to another module and gets in return some data as output.
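The difference between passing control information and passing data can be sketched in Python; the functions below are illustrative, not from the text:

```python
# Control coupling: the caller passes a flag that selects the callee's internal
# behaviour, so the caller must know something about how the callee works.
def make_report(records, report_type):
    if report_type == "summary":
        return f"{len(records)} records"
    return ", ".join(str(r) for r in records)

# Data coupling: each module receives only the data it needs and returns data,
# so the modules can be understood and modified separately.
def summarize(records):
    return f"{len(records)} records"

def detail(records):
    return ", ".join(str(r) for r in records)

print(summarize([10, 20, 30]))  # 3 records
```

Splitting `make_report` into `summarize` and `detail` removes the control flag from the interface and lowers the coupling between caller and callee.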

9) List and explain different levels of Cohesion. (7)


Cohesion is the concept that tries to capture this intra-module. With cohesion we are interested in
determining how closely the elements of a module are related to each other.
There are several levels of Cohesion:

• Coincidental
• Logical
• Temporal
• Procedural
• Communicational
• Sequential
• Functional

• Co-incidental cohesion - It is unplanned and random cohesion, which might be the result of breaking the
program into smaller modules for the sake of modularization. Because it is unplanned, it may cause
confusion for programmers and is generally not accepted.
• Logical cohesion - When logically categorized elements are put together into a module, it is called logical
cohesion.
• Temporal Cohesion - When elements of module are organized such that they are processed at a similar
point in time, it is called temporal cohesion.
• Procedural cohesion - When elements of module are grouped together, which are executed sequentially
in order to perform a task, it is called procedural cohesion.
• Communicational cohesion - When elements of module are grouped together, which are executed
sequentially and work on same data (information), it is called communicational cohesion.
• Sequential cohesion - When elements of module are grouped because the output of one element serves
as input to another and so on, it is called sequential cohesion.
• Functional cohesion - It is considered to be the highest degree of cohesion, and it is highly expected.
Elements of module in functional cohesion are grouped because they all contribute to a single well-defined
function. It can also be reused.
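To make the two ends of the scale concrete, here is a hypothetical C sketch: mean is functionally cohesive (every statement contributes to one function), while misc is coincidentally cohesive. Both functions are illustrative, not from the notes.

```c
#include <stddef.h>

/* Functional cohesion: every statement contributes to one
   well-defined function -- computing the mean of an array. */
double mean(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return n ? sum / (double)n : 0.0;
}

/* Coincidental cohesion: unrelated actions lumped into one module
   only for the sake of modularization -- hard to name, reuse, or
   reason about. */
void misc(double *a, size_t n, int *counter, char *buf) {
    for (size_t i = 0; i < n; i++) a[i] *= 2.0;  /* scale some data  */
    (*counter)++;                                /* bump a counter   */
    buf[0] = '\0';                               /* clear a buffer   */
}
```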

10) Explain Structure Chart with an example. (6)


A structure chart (module chart, hierarchy chart) is a graphic depiction of the decomposition of a problem.
It is a tool to aid in software design. It is particularly helpful on large problems.
A Structure Chart (SC) in software engineering and organizational theory is a chart that shows the
breakdown of a system to its lowest manageable levels. They are used in structured programming to
arrange program modules into a tree. Each module is represented by a box, which contains the module's
name. The tree structure visualizes the relationships between modules.
A structure chart illustrates the partitioning of a problem into subproblems and shows the hierarchical
relationships among the parts. A classic "organization chart" for a company is an example of a structure
chart.
The top of the chart is a box representing the entire problem, the bottom of the chart shows several boxes
representing the less complicated subproblems.
A structure chart is NOT a flowchart. It has nothing to do with the logical sequence of tasks. It does NOT
show the order in which tasks are performed. It does NOT illustrate an algorithm.
A structure chart is a top-down modular design tool, constructed of squares representing the different
modules in the system, and lines that connect them. The lines represent the connection and or ownership
between activities and sub-activities as they are used in organization charts.


For example, if the invocation of modules C and D in module A depends on the outcome of some decision,
that is represented by a small diamond in the box for A, with the arrows joining C and D coming out of this
diamond, as shown in Figure.

11) Explain the different types of modules used in Structure Chart. (5)
Modules in a system can be categorized into a few classes. Some modules obtain information from their
subordinates and then pass it to their superordinate. This kind of module is an input module. Similarly,
there are output modules that take information from their superordinate and pass it on to their
subordinates. As the names suggest, the input and output modules are typically used for input and output
of data. The input modules get the data from the sources and get it ready to be processed, and the output
modules take the output produced and prepare it for proper presentation to the environment.
Then some modules exist solely for the sake of transforming data into some other form. Such a module is
called a transform module. Most of the computational modules typically fall in this category. Finally, there
are modules whose primary concern is managing the flow of data to and from different subordinates. Such
modules are called coordinate modules. The structure chart representation of the different types of
modules is shown in Figure 5.3. A module can perform functions of more than one type of module.

Figure 5.3: Different types of modules



12) Write a note on SDM strategy. (5)


Structured Design Methodology (SDM) views every software system as having some inputs that are
converted into the desired outputs by the software system.
In properly designed systems, it is often the case that a module with a subordinate does not perform much
computation. The bulk of actual computation is performed by its subordinates, and the module itself
largely coordinates the data flow between the subordinates to get the computation done. The
subordinates in turn can get the bulk of their work done by their subordinates until the "atomic" modules,
which have no subordinates, are reached. Factoring is the process of decomposing a module so that the
bulk of its work is done by its subordinates. There are four major steps in this strategy:
• Restate the problem as a data flow diagram
• Identify the input and output data elements
• First-level factoring
• Factoring of input, output, and transform branches

1) Restate the Problem as a Data Flow Diagram


To use the SDM, the first step is to construct the data flow diagram for the problem. There is a
fundamental difference between the DFDs drawn during requirements analysis and structured design. The
general rules of drawing a DFD remain the same.

2) Identify the Most Abstract Input and Output Data Elements


The most abstract input data elements (MAI) are those data elements in the data flow diagram that are
furthest removed from the physical inputs but can still be considered inputs to the system.
Similarly, the most abstract output data elements (MAO) are the data elements that are most removed
from the actual outputs but can still be considered outgoing. The MAO data elements may also be
considered the logical output data items. The transforms that lie between the MAI and MAO in the data
flow diagram form the central transforms, which perform the main processing of the system.

3) First-Level Factoring
Having identified the central transforms and the most abstract input and output data items, we are ready
to identify some modules for the system. We first specify the main module, whose purpose is to invoke the
subordinates. The main module is therefore a coordinate module. For each of the most abstract input data items,
an immediate subordinate module to the main module is specified. Each of these modules is an input
module, whose purpose is to deliver to the main module the most abstract data item for which it is
created.
Similarly, for each most abstract output data item, a subordinate module that is an output module that
accepts data from the main module is specified. Each of the arrows connecting these input and output
subordinate modules is labeled with the respective abstract data item flowing in the proper direction.
Finally, for each central transform, a module subordinate to the main one is specified. These modules will
be transform modules, whose purpose is to accept data from the main module, and then return the
appropriate data to the main module. The data items coming to a transform module from the main
module are on the incoming arcs of the corresponding transform in the data flow diagram. The data items
returned are on the outgoing arcs of that transform. Note that here a module is created for a transform,
while input/output modules are created for data items.

4) Factoring the Input, Output, and Transform Branches


The first-level factoring results in a very high-level structure, where each subordinate module has a lot of
processing to do. To simplify these modules, they must be factored into subordinate modules that will
distribute the work of a module. Each of the input, output and transformation modules must be
considered for factoring.
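The structure that first-level factoring produces can be sketched in C for a hypothetical "sum the numbers in some text" system; all module names here are illustrative, not prescribed by the methodology.

```c
#include <stdio.h>
#include <string.h>

/* input module: extract integers from raw text (deliver the MAI) */
static int get_numbers(const char *text, int nums[], int max) {
    int n = 0, v, used;
    while (n < max && sscanf(text, "%d%n", &v, &used) == 1) {
        nums[n++] = v;
        text += used;
    }
    return n;
}

/* central transform module: compute the sum */
static int sum(const int nums[], int n) {
    int s = 0;
    for (int i = 0; i < n; i++) s += nums[i];
    return s;
}

/* output module: prepare the MAO for presentation */
static void format_result(int total, char *buf, size_t len) {
    snprintf(buf, len, "total = %d", total);
}

/* coordinate (main) module: only directs data flow between the
   input, transform, and output subordinates */
void process(const char *text, char *buf, size_t len) {
    int nums[100];
    int n = get_numbers(text, nums, 100);
    format_result(sum(nums, n), buf, len);
}
```

Note that the coordinate module does almost no computation itself; the bulk of the work is done by its subordinates, which is the intent of factoring.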


14) Write short note on Data Abstraction.


Data abstraction: This involves specifying data that describes a data object. For example, the data object
window encompasses a set of attributes (window type, window dimension) that describe the window
object clearly. In this abstraction mechanism, representation and manipulation details are ignored.
Another form of information hiding is to let a module see only those data items needed by it. The other
data items should be "hidden" from such modules and the modules should not be allowed to access these
data items. Thus, each module is given access to data items on a "need-to-know" basis. Many modern
programming languages provide the information hiding principle in the form of data abstraction. With support
for data abstraction, a package or a module is defined that encapsulates the data. Some operations are
defined by the module on the encapsulated data. Other modules that are outside this module can only
invoke these predefined operations on the encapsulated data. The advantage of this form of data
abstraction is that the data is entirely in the control of the module in which the data is encapsulated. Other
modules cannot access or modify the data; the operations that can access and modify it are also a part of
this module. Many of the older languages, like Pascal, C, and FORTRAN, do not provide mechanisms to
support data abstraction. With such languages, data abstraction can be supported only by a disciplined use
of the language. For example, to implement a data abstraction of a stack in C, one method is to define a
struct containing all the data items needed to implement the stack and then to define functions and
procedures on variables of this type. A possible definition of the struct and the interface of the "push"
operation is given next:
typedef struct {
    int elts[100];
    int top;
} stack;

void push(stack *s, int i)
{
    ......
}
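One way the sketch might be completed into a working module is shown below; the pointer parameter and the extra operations (init, pop, is_empty, is_full) are additions for illustration, not part of the original sketch.

```c
#include <assert.h>

/* A completed version of the stack abstraction: outside code is
   expected to use only these operations, never the fields directly. */
typedef struct {
    int elts[100];
    int top;        /* number of elements currently stored */
} stack;

void init(stack *s)           { s->top = 0; }
int  is_empty(const stack *s) { return s->top == 0; }
int  is_full(const stack *s)  { return s->top == 100; }

void push(stack *s, int i) {
    assert(!is_full(s));      /* disciplined use: caller must check */
    s->elts[s->top++] = i;
}

int pop(stack *s) {
    assert(!is_empty(s));
    return s->elts[--s->top];
}
```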

15) What is Functional Abstraction? Discuss its role in System Design.


Functional abstraction: This involves the use of parameterized subprograms. Functional abstraction can be
generalized as collections of subprograms referred to as 'groups'. Within these groups there exist routines
that may be visible or hidden. Visible routines can be used within the containing groups as well as within
other groups, whereas hidden routines are hidden from other groups and can be used within the
containing group only.
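In C, for instance, a source file can serve as such a group: routines declared static are hidden inside the file, while non-static routines are visible to other groups. The conversion routines below are purely illustrative.

```c
/* Hidden routine: static limits its visibility to this file (group). */
static double celsius_to_kelvin(double c) {
    return c + 273.15;
}

/* Visible routine: usable from other groups; it uses the hidden
   routine internally without exposing it. */
double fahrenheit_to_kelvin(double f) {
    return celsius_to_kelvin((f - 32.0) * 5.0 / 9.0);
}
```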

16) Write a note on Design Heuristics.

The strategy requires the designer to exercise sound judgment and common sense. The basic objective is
to make the program structure reflect the problem as closely as possible. Here we mention some heuristics
that can be used to modify the structure, if necessary.

Module size is often considered the indication of module complexity. In terms of the structure of the
system, very large modules may not be implementing a single function and can therefore be broken into
many modules, each implementing a different function. On the other hand, modules that are too small
may not require any additional identity and can be combined with other modules.

However, the decision to split a module or combine different modules should not be based on size alone.
Cohesion and coupling of modules should be the primary guiding factors. A module should be split into
separate modules only if the cohesion of the original module was low, the resulting modules have a higher
degree of cohesion, and the coupling between modules doesn’t increase. Similarly, two or more modules
should be combined only if the resulting module has a high degree of cohesion and the coupling of the


resulting module is not greater than the coupling of the sub-modules. In general, a module should contain
between 5 and 100 LOC; modules larger than 100 LOC or smaller than 5 LOC are not desirable.

Another factor to be considered is “fan-in” and “fan-out” of modules. Fan-in of a module is the number of
arrows coming towards the module indicating the number of superordinates. Fan-out of a module is the
number of arrows going out of that module, indicating the number of subordinates for that module. A
very high fan-out is not desirable as it means that the module has to control and coordinate too many
modules. Whenever possible, fan-in should be maximized. In general, the fan-out should not be more than
6.

Another important factor that should be considered is the correlation of the scope of effect and scope of
control. The scope of effect of a decision is the collection of all the modules that contain any processing
that is conditional on that decision or whose invocation depends on the outcome of the decision. The
scope of control of a module is the module itself and all its subordinates. The system is usually simpler
when the scope of effect of a decision is a subset of the scope of control of the module in which the
decision is located.

17) Write a note on Transaction Analysis


In the transform analysis, most of the transforms in the data flow diagram have a few inputs and a few
outputs. There may be situations where a transform splits an input stream into many different substreams.
For example, this is the case with systems where there are many different sets of possible actions and the
actions to be performed depend upon the input command specified. In such situations, the transform
analysis can be supplemented by transaction analysis. The detailed data flow diagram of the transform
splitting the input may look like the DFD shown below.

DFD for transaction analysis


The module splitting the input is called the transaction center; it need not be a central transform and may
occur on the input branch or output branch of the DFD of the system. One of the standard ways to convert
the above DFD into a structure chart is to have an input module that gets the analysed transaction and a
dispatch module that invokes the modules for the different transactions. This is shown in the structure
chart below.


Factored transaction center


For smaller systems the analysis and the dispatching can be done in the transaction center module itself,
giving rise to a flatter structure.
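In code, the transaction center typically becomes a dispatch on the analysed transaction type. The following C fragment is a hypothetical sketch for a toy account system; the transaction names and modules are illustrative only.

```c
#include <string.h>

/* The analysed transaction types this toy system recognizes. */
typedef enum { T_DEPOSIT, T_WITHDRAW, T_QUERY, T_UNKNOWN } trans_t;

/* Input module: analyse the raw command into a transaction type. */
static trans_t analyse(const char *cmd) {
    if (strcmp(cmd, "deposit")  == 0) return T_DEPOSIT;
    if (strcmp(cmd, "withdraw") == 0) return T_WITHDRAW;
    if (strcmp(cmd, "query")    == 0) return T_QUERY;
    return T_UNKNOWN;
}

/* Subordinate modules, one per transaction. */
static int do_deposit(int bal, int amt)  { return bal + amt; }
static int do_withdraw(int bal, int amt) { return bal - amt; }

/* Dispatch module: invoke the module for the analysed transaction. */
int dispatch(int balance, const char *cmd, int amount) {
    switch (analyse(cmd)) {
    case T_DEPOSIT:  return do_deposit(balance, amount);
    case T_WITHDRAW: return do_withdraw(balance, amount);
    case T_QUERY:    return balance;
    default:         return balance;   /* ignore unknown transactions */
    }
}
```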
UNIT III
QUESTIONS CARRYING TWO MARKS

1) List the desirable properties of module specification.


(a) the interface of the module (all data items, their types, and whether they are for input and/or output),
(b) the abstract behavior of the module (what the module does) by specifying the module's functionality or
its input/output behavior, and
(c) all other modules used by the module being specified-this information is quite useful in maintaining and
understanding the design.

2) What is Design Walkthrough?


A design walkthrough is a manual method of verification. The definition and use of walkthroughs change
from organization to organization. A design walkthrough is done in an informal meeting called by the
designer or the leader of the designer's group

3) What is the purpose of Critical Design Review?


The purpose of critical design review is to ensure that the detailed design satisfies the specification laid
down by system design. Detecting errors in detailed design is the aim of critical design review.
4) What is PDL? Why it is useful?
Program Design Language (or PDL, for short) is a method for designing and documenting methods and
procedures in software.
PDL is used to express the design in a language that is as precise and unambiguous as possible without
having too much detail and that can be easily converted into an implementation.

5) What is the primary goal of Coding phase?


The goal of the coding or programming phase is to translate the design of the system produced during the
design phase into code in a given programming language, which can be executed by a computer and that
performs the computation specified by the design.


6) What is Structured Programming?


Structured programming is a programming paradigm aimed at improving the clarity, quality, and
development time of a computer program by making extensive use of the structured control flow
constructs of selection (if/then/else) and repetition (while and for), block structures, and subroutines.

7) What is Information Hiding?


Information hiding is the principle of segregation of the design decisions in a computer program that are
most likely to change, thus protecting other parts of the program from extensive modification if the design
decision is changed.

8) What do you mean by Prologue?


Comments for a module are often called prologue for the module, which describes the functionality and
the purpose of the module, its public interface and how the module is to be used, parameters of the
interface, assumptions it makes about the parameters, and any side effects it has.

9) What are the two categories of program verification methods?


Program verification methods fall into two categories—static and dynamic methods

10) What do you mean by Static Analysis with respect to Coding?


Analysis of programs by methodically analyzing the program text is called static analysis. Static analysis is
usually performed mechanically with the aid of software tools.

11) Define Code Inspection or Reviews.


The review process was started with the purpose of detecting errors in the code. Code inspection or
reviews are usually held after the successful completion of the coding phase.

12) What is Unit Testing?


Unit testing is a dynamic method for verification where the program is actually compiled and executed.

13) What is State Diagram?


A state diagram for an object does not represent all the actual states of the object, as there are many
possible states. A state diagram attempts to represent only the logical states of the object.

14) What is FSA?


A finite state automaton (FSA) includes the concept of state and input data streams. An FSA has a finite set
of states and specifies transitions between the states. The transition from one state to another is based on
the input.

LONG ANSWER QUESTIONS

1) Explain module specifications in detailed design. (4)

2) Explain PDL with suitable example. (6)


Program Design Language (or PDL, for short) is a method for designing and documenting methods and
procedures in software. It is related to pseudocode, but unlike pseudocode, it is written in plain language
without any terms that could suggest the use of any programming language or library.
PDL has an overall outer syntax of a structured programming language and has a vocabulary of a natural
language (English in our case). It can be thought of as "structured English". Because the structure of a
design expressed in PDL is formal, using the formal language constructs, some amount of automated
processing can be done on such designs. As an example, consider the problem of finding the minimum and
maximum of a set of numbers in a file and outputting these numbers; the PDL design is shown below.


minmax (infile)
    ARRAY a
    DO UNTIL end of input
        READ an item into a
    ENDDO
    max, min := first item of a
    DO FOR each item in a
        IF max < item THEN set max to item
        IF min > item THEN set min to item
    ENDDO
END
PDL description of the minmax program.

Notice that in the PDL program we have the entire logic of the procedure, but little about the details of
implementation in a particular language. To implement this in a language, each of the PDL statements will
have to be converted into programming language statements. With PDL, a design can be expressed in
whatever level of detail that is suitable for the problem. One way to use PDL is to first generate a rough
outline of the entire solution at a given level of detail. When the design is agreed on at this level, more
detail can be added.

The structured outer syntax of PDL also encourages the use of structured language constructs while
implementing the design. The basic constructs of PDL are similar to those of a structured language.

PDL provides an IF construct which is similar to the if-then-else construct of Pascal. Conditions and the
statements to be executed need not be stated in a formal language. For a general selection, there is a CASE
statement. The DO construct is used to indicate repetition. The construct is indicated by:
DO iteration-criteria
one or more statements
ENDDO
The iteration criteria can be chosen to suit the problem, and unlike a formal programming language, they
need not be formally stated. Examples of valid uses are:

DO WHILE there are characters in input file

DO UNTIL the end of file is reached


A variety of data structures can be defined and used in PDL, such as lists, tables, scalars, and integers.
Variations of PDL, along with some automated support, are used extensively for communicating designs.
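To illustrate the conversion, here is one possible C translation of the minmax PDL above; the int item type, the fixed-size array, and stream-based input are implementation choices the PDL deliberately left open.

```c
#include <stdio.h>

/* One possible C realization of the minmax PDL: each PDL statement
   becomes one or more C statements. */
void minmax(FILE *infile, int *min, int *max) {
    int a[1000], n = 0, item;

    /* DO UNTIL end of input: READ an item into a */
    while (n < 1000 && fscanf(infile, "%d", &item) == 1)
        a[n++] = item;

    if (n == 0)
        return;                     /* empty input: outputs unchanged */

    *max = *min = a[0];             /* max, min := first item of a */
    for (int i = 1; i < n; i++) {   /* DO FOR each item in a       */
        if (*max < a[i]) *max = a[i];
        if (*min > a[i]) *min = a[i];
    }
}
```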

3) Write a note on Logic/Algorithm design. (5)


The basic goal in detailed design is to specify the logic for the different modules. Specifying the logic will
require developing an algorithm that will implement the given specifications.
An algorithm is a sequence of steps that need to be performed to solve a given problem. The problem need
not be a programming problem.

A procedure is a finite sequence of well-defined steps or operations, each of which requires a finite
amount of memory and time to complete.

The starting step in the design of algorithms is statement of the problem. The problem for which an
algorithm is being devised has to be precisely and clearly stated and properly understood by the person
responsible for designing the algorithm. The next step is development of a mathematical model where one
has to select the mathematical structures that are best suited for the problem.


The next step is the design of the algorithm- the data structure and program structure are decided. Once
the algorithm is designed, correctness should be verified.

The most common method for designing algorithms or the logic for a module is to use the stepwise
refinement technique.

The stepwise refinement technique breaks the logic design problem into a series of steps, so that the
development can be done gradually.

The process starts by converting the specifications of the module into an abstract description of an
algorithm containing a few abstract statements.

In each step, one or several statements in the algorithm developed so far are decomposed into more
detailed instructions.

The successive refinement terminates when all instructions are sufficiently precise that they can easily be
converted into programming language statements. The stepwise refinement technique is a top-down
method for developing detailed design.
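As a small illustration of the technique, the abstract statement "scan the items, remembering the largest seen so far" refines into concrete C statements; the function and its name are hypothetical.

```c
/* Hypothetical result of stepwise refinement. Initial abstract
   algorithm: "scan all items; remember the largest seen so far."
   Each comment marks the refined form of one abstract statement. */
int largest(const int a[], int n) {
    int best = a[0];                /* "remember" := first item      */
    for (int i = 1; i < n; i++)     /* "scan" := loop over the rest  */
        if (a[i] > best)            /* "larger seen" := comparison   */
            best = a[i];
    return best;
}
```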

4) Explain State Modeling of Classes with example (5)


A state diagram for an object does not represent all the actual states of the object, as there are many
possible states. A state diagram attempts to represent only the logical states of the object. A logical state
of an object is a combination of all those states from which the behavior of the object is similar for all
possible events. Two logical states will have different behavior for at least one event. For example, for an
object that represents a stack, all states that represent a stack of size more than 0 and less than some
defined maximum are similar as the behavior of all operations defined on the stack will be similar in all
such states. However, the state representing an empty stack is different as the behavior of top and pop
operations are different now. Similarly, the state representing a full stack is different. The state model for
this bounded size stack is shown in Figure

The finite state modeling of objects is an aid to understand the effect of various operations defined on the
class on the state of the object. To develop the logic of operations, regular approaches for algorithm
development can be used. The model can also be used to validate if the logic for an operation is correct.

5) Explain the verification method of a detailed design. (6)


Verification
There are a few techniques available to verify that the detailed design is consistent with the system
design. The focus of verification in the detailed design phase is on showing that the detailed design
meets the specifications laid down in the system design. The three verification methods we consider
are design walkthrough, critical design review, and consistency checkers.


Design Walkthroughs
A design walkthrough is a manual method of verification. The definition and use of walkthroughs change
from organization to organization. A design walkthrough is done in an informal meeting called by the
designer or the leader of the designer's group. The walkthrough group is usually small and contains, along
with the designer, the group leader and/or another designer of the group.

The designer might just get together with a colleague for the walkthrough or the group leader might
require the designer to have the walkthrough with him. In a walkthrough the designer explains the logic
step by step, and the members of the group ask questions, point out possible errors or seek clarification. A
beneficial side effect of walkthroughs is that in the process of articulating and explaining the design in
detail, the designer himself can uncover some of the errors. Walkthroughs are essentially a form of peer
review. Due to its informal nature, they are usually not as effective as the design review.

Critical Design Review


The purpose of critical design review is to ensure that the detailed design satisfies the specification laid
down by system design. It is very desirable to detect and remove design error, as the cost of removing
them later can be considerably more than the cost of removing them at design time. Detecting errors in
detailed design is the aim of critical design review.
The critical design review process is similar to the other reviews, in that groups of people get together to
discuss the design with the aim of revealing design errors or undesirable properties. The review groups
include, besides the author of the detailed design, a member of the system design team, the programmer
responsible for ultimately coding the module(s) under review, and an independent software quality
engineer.

It should be kept in mind that the aim of the meeting is to uncover design errors, not to fix them. Fixing is
done later. Also, the psychological frame of mind should be healthy, and the designer should not be put in
a defensive position. The meeting should end with a list of action items, to be acted on later by the
designer.

Consistency Checkers
Design reviews and walkthrough are manual processes; the people involved in the review and walkthrough
determine the error in the design .If the design is specified in PDL or some other formally defined design
language, it is possible to detect some design defects by using consistency checkers.

Consistency checkers are essentially compilers that take as input the design specified in a design language
(PDL in our case). Clearly, they cannot produce executable code because the inner syntax of PDL allows
natural language and many activities are specified in the natural language. A consistency checker can
ensure that any modules invoked or used by a given module actually exist in the design and that the
interface used by the caller is consistent with the interface definition of the called module.

6) What are the activities that are undertaken during critical design review? (5)
The purpose of critical design review is to ensure that the detailed design satisfies the specification laid
down by system design. It is very desirable to detect and remove design error, as the cost of removing
them later can be considerably more than the cost of removing them at design time. Detecting errors in
detailed design is the aim of critical design review.

The critical design review process is similar to the other reviews, in that groups of people get together to
discuss the design with the aim of revealing design errors or undesirable properties. The review groups
include, besides the author of the detailed design, a member of the system design team, the programmer
responsible for ultimately coding the module(s) under review, and an independent software quality
engineer.


It should be kept in mind that the aim of the meeting is to uncover design errors, not to fix them. Fixing is
done later. Also, the psychological frame of mind should be healthy, and the designer should not be put in
a defensive position. The meeting should end with a list of action items, to be acted on later by the
designer.
A Sample Checklist
Does each of the modules in the system design exist in detailed design?
Are there analyses to demonstrate the performance requirement can be met?
Are all the assumptions explicitly stated and are they acceptable?
Are all relevant aspects of system design reflected in detailed design?

7) Write a note on
i. Design Walkthroughs ii. Consistency Checkers (6)

Design Walkthroughs: A design walkthrough is a manual method of verification. The definition and use of
walkthroughs change from organization to organization. A design walkthrough is done in an informal
meeting called by the designer or the leader of the designer's group. The walkthrough group is usually
small and contains, along with the designer, the group leader and/or another designer of the group.

Consistency Checkers : Consistency checkers are essentially compilers that take as input the design
specified in a design language (PDL in our case). Clearly, they cannot produce executable code because the
inner syntax of PDL allows natural language and many activities are specified in the natural language. A
consistency checker can ensure that any modules invoke or used by a given module actually exist in the
design and that the interface used by the called is consistent with the interface definition of the called
module.

8) Write a note on top-down and bottom-up approaches in coding. (5)

9) Explain the concept of Structured Programming (5)


Structured Programming
The basic objective of the coding activity is to produce programs that are easy to understand. It has been
argued by many that structured programming practice helps develop programs that are easier to
understand. Structured programming is often regarded as "goto-less" programming. Although extensive
use of gotos is certainly not desirable, structured programs can be written with the use of gotos.

A program has a static structure as well as a dynamic structure. The static structure is the structure of the
text of the program, which is usually just a linear organization of statements of the program, The dynamic
structure of the program is the sequences of statements executed during the execution of the program. In
other words, both the static structure and the dynamic behavior are sequences of statements; where the
sequence representing the static structure of a program is fixed, the sequence of statements it executes
can change from execution to execution.

It will be easier to understand the dynamic behavior if the structure in the dynamic behavior resembles the
static structure. The closer the correspondence between execution and text structure, the easier the
program is to understand, and the more different the structure during execution, the harder it will be to
argue about the behavior from the program text. The goal of structured programming is to ensure that the
static structure and the dynamic structures are the same. That is, the objective of structured programming
is to write programs so that the sequence of statements executed during the execution of a program is the
same as the sequence of statements in the text of that program. As the statements in a program text are
linearly organized, the objective of structured programming becomes developing programs whose control
flow during execution is linearized and follows the linear organization of the program text. Clearly, no
meaningful program can be written as a sequence of simple statements without any branching or
repetition. In structured programming, a statement is not a simple assignment statement; it is a structured
statement. The key property of a structured statement is that it has a single entry and a single exit. That is,
during execution, the execution of the (structured) statement starts from one defined point and the
execution terminates at one defined point. With single-entry and single-exit statements, we can view a
program as a sequence of (structured) statements. And if all statements are structured statements, then
during execution, the sequence of execution of these statements will be the same as the sequence in the
program text. Hence, by using single-entry and single-exit statements, the correspondence between the
static and dynamic structures can be obtained. The most commonly used single-entry and single-exit
statements are:
Selection:   if B then S1 else S2
             if B then S1

Iteration:   while B do S
             repeat S until B

Sequencing:  S1; S2; S3; ...

It can be shown that these three basic constructs are sufficient to program any conceivable algorithm.
Modern languages have other such constructs that help linearize the control flow of a program, which
makes it easier to understand a program. Hence, programs should be written so that, as far as possible,
single-entry, single-exit control constructs are used. The basic goal, as we have tried to emphasize, is to
make the logic of the program simple to understand. The basic objective of using structured constructs is
to linearize the control flow so that the execution behavior is easier to understand. In linearized control
flow, if we understand the behavior of each of the basic constructs properly, the behavior of the program
can be considered a composition of the behaviors of the different statements. Overall, it can be said that
structured programming, in general, leads to programs that are easier to understand than unstructured
programs.

10) Explain the Information Hiding with an example (5)


Information Hiding
When the information is represented as data structures, only some defined operations should be
performed on the data structures. This, essentially, is the principle of information hiding. The information
captured in the data structures should be hidden from the rest of the system, and only the access functions
on the data structures that represent the operations performed on the information should be visible. The
other modules access the data only with the help of these access functions.

For example, a ledger in an accountant's office has some defined uses: debit, credit, check the current
balance, etc. An operation where all debits are multiplied together and then divided by the sum of all
credits is typically not performed. So, any information in the problem domain typically has a small number
of defined operations performed on it.

Information hiding can reduce the coupling between modules and make the system more maintainable.
With information hiding, the impact on the modules using the data needs to be evaluated only when the
data structure or its access functions are changed. Otherwise, as the other modules are not directly
accessing the data, changes in these modules will have little direct effect on other modules using the data.
Also, when a data structure is changed, the effect of the change is generally limited to the access functions
if information hiding is used. Otherwise, all modules using the data structure may have to be changed.

Information hiding is also an effective tool for managing the complexity of developing software. As we
have seen, whenever possible, problem partitioning must be used so that concerns can be separated and
different parts solved separately.


11) Explain common errors that occur during Coding (5) [for detailed ans refer notes]

➢ Memory Leaks
A memory leak is a situation where memory allocated to the program is not freed subsequently. This error
is a common source of software failures and occurs frequently in languages which do not have automatic
garbage collection (like C and C++). A software system with memory leaks keeps consuming memory, till at
some point of time the program may come to an exceptional halt because of the lack of free memory.
➢ Freeing an Already Freed Resource
In general, in programs, resources are first allocated and then freed. For example, memory is first allocated
and then deallocated. This error occurs when the programmer tries to free an already freed resource. The
impact of this common error can be catastrophic.

➢ NULL Dereferencing
This error occurs when we try to access the contents of a location that points to NULL. It is a commonly
occurring error which can bring a software system down. It is also difficult to detect, as the NULL
dereference may occur only in some paths and only under certain situations.

➢ Lack of Unique Addresses


Aliasing creates many problems, and among them is the violation of the assumption of unique addresses:
two different names may refer to the same location when we expect them to refer to different addresses.
➢ Array Index Out of Bounds
Array index often goes out of bounds, leading to exceptions. Care needs to be taken to see that the array
index values are not negative and do not exceed their bounds.
➢ Arithmetic exceptions
These include errors like divide by zero and floating point exceptions. The result of these may vary from
getting unexpected results to termination of the program.
➢ Off by One
This is one of the most common errors and can be caused in many ways. For example, starting at 1 when
we should start at 0 or vice versa, writing <= N instead of < N or vice versa, and so on.
➢ Enumerated data types
Overflow and underflow errors can easily occur when working with enumerated types, and care should be
taken when assuming the values of enumerated data types.

➢ Illegal use of & instead of &&:


This bug arises if non-short-circuit logic (like & or |) is used instead of short-circuit logic (&& or ||). Non-
short-circuit logic will evaluate both sides of the expression. A short-circuit operator, on the other hand,
evaluates one side and, based on the result, decides whether it has to evaluate the other side or not.

➢ String handling errors


There are a number of ways in which string handling functions like strcpy, sprintf, gets, etc. can fail:
one of the operands is NULL, the string is not NUL-terminated, or the source operand has a greater size
than the destination. String handling errors are quite common.

12) Write a note on the importance of Comments and Layout in Coding (4)
➢ Commenting and Layout
Comments are textual statements that are meant for the program reader to aid the understanding of code.
The purpose of comments is not to explain in English the logic of the program; if the logic is so complex
that it requires comments to explain it, it is better to rewrite and simplify the code instead. Some such
guidelines are:
• Single line comments for a block of code should be aligned with the code they are meant for.


• There should be comments for all major variables explaining what they represent.
• A block of comments should be preceded by a blank comment line with just "/*" and ended with a line
containing just "*/".
• Trailing comments after statements should be short, on the same line, and shifted far enough to separate
them from statements.

13) Explain static analysis and its uses. (5)[detailed ans in notes]
➢ Static Analysis
Analysis of programs by methodically analyzing the program text is called static analysis. Static analysis is
usually performed mechanically with the aid of software tools. During static analysis the program itself is
not executed; the program text is the input to the tools. The aim of static analysis tools is to detect errors
or potential errors, or to generate information about the structure of the program that can be useful for
documentation or understanding of the program. An advantage is that static analysis sometimes detects
the errors themselves, not just their presence. This saves the effort of tracing the error from the data that
reveals the presence of errors. Furthermore, static analysis can provide "warnings" against potential errors
and can provide insight into the structure of the program. It is also useful for determining violations of
local programming standards, which the standard compilers will be unable to detect. Extensive static
analysis can considerably reduce the effort later needed during testing.

Examples of data flow anomalies are unreachable code, unused variables, and unreferenced labels.
Unreachable code is that part of the code to which there is no feasible path; there is no possible
execution in which it can be executed. Technically this is not an error, and a compiler will at most generate
a warning.
Unreferenced labels and unused variables are like unreachable code in that they are technically not errors,
but often are symptoms of errors; thus their presence often implies the presence of errors. Data flow
analysis is usually performed by representing a program as a graph, sometimes called the flow graph.

14) List out the various items in the checklist while reviewing the code. (5)
A Sample Checklist
• Does each of the modules in the system design exist in detailed design?
• Are there analyses to demonstrate the performance requirement can be met?
• Are all the assumptions explicitly stated and are they acceptable?
• Are all relevant aspects of system design reflected in detailed design?
• Are all the data formats consistent with the system design?

15) Explain any two code verification methods(5)


➢ Code Reading

Code reading involves careful reading of the code by the programmer to detect any discrepancies between
the design specifications and the actual implementation. It involves determining the abstraction of a
module and then comparing it with its specifications. The process of code reading is best done by reading
the code inside-out, starting with the innermost structure of the module. First determine its abstract
behavior and specify the abstraction. Then the higher-level structure is considered, with the inner
structure replaced by its abstraction. This process is continued until we reach the module or program being
read. At that time the abstract behavior of the program/module will be known, which can then be
compared to the specifications to determine any discrepancies. Code reading is very useful and can detect
errors often not revealed by testing. Reading in the manner of stepwise abstraction also forces the
programmer to code in a manner conducive to this process, which leads to well-structured programs. Code
reading is sometimes called desk review.


➢ Static Analysis

Analysis of programs by methodically analyzing the program text is called static analysis. Static analysis is
usually performed mechanically with the aid of software tools. During static analysis the program itself is
not executed; the program text is the input to the tools. The aim of the static analysis tools is to detect
for documentation or understanding of the program. An advantage is that static analysis sometimes
detects the errors themselves, not just the presence of errors, as in testing. This saves the effort of tracing
the error from the data that reveals the presence of errors. Furthermore, static analysis can provide
"warnings" against potential errors and can provide insight into the structure of the program. It is also
useful for determining violations of local programming standards, which the standard compilers will be
unable to detect. Extensive static analysis can considerably reduce the effort later needed during testing.

Data flow anomalies are "suspicious" use of data in a program. In general, data flow anomalies are
technically not errors, and they may go undetected by the compiler. However, they are often a symptom of
an error, caused due to carelessness in typing or error in coding. At the very least, presence of data flow
anomalies implies poor coding. Hence, if a program has data flow anomalies, they should be properly
addressed.

x = a;
/* x does not appear in any right-hand side here */
x = b;
An example of the data flow anomaly is the live variable problem, in which a variable is assigned some
value but then the variable is not used in any later computation. Such an assignment to the variable is
clearly redundant. Another simple example of this is having two assignments to a variable without using
the value of the variable between the two assignments. In this case the first assignment is redundant. For
example, consider the simple case of the code segment shown in Figure 8.2. Clearly, the first assignment
statement is useless. Perhaps the programmer meant to say y = b in the second statement, and mistyped y
as x. In that case, detecting this anomaly and directing the programmer's attention to it can save
considerable effort in testing and debugging. In addition to revealing anomalies, data flow analysis can
provide valuable information for documentation of programs. For example, data flow analysis can provide
information about which variables are modified on invoking a procedure in the caller program and the
value of the variables used in the called procedure (this can also be used to make sure that the interface of
the procedure is minimum, resulting in lower coupling). This information can be useful during maintenance
to ensure that there are no undesirable side effects of some modifications to a procedure.

Other examples of data flow anomalies are unreachable code, unused variables, and unreferenced labels.
Unreachable code is that part of the code to which there is no feasible path; there is no possible
execution in which it can be executed. Technically this is not an error, and a compiler will at most generate
a warning. The program behavior during execution may also be consistent with its specifications. However,
often the presence of unreachable code is a sign of lack of proper understanding of the program by the
programmer, which suggests that the presence of errors is likely. Often, unreachable code comes into
existence when an existing program is modified. In that situation unreachable code may signify undesired
or unexpected side effects of the modifications. Unreferenced labels and unused variables are like
unreachable code in that they are technically not errors, but often are symptoms of errors; thus their
presence often implies the presence of errors. Data flow analysis is usually performed by representing a
program as a graph, sometimes called the flow graph.


UNIT IV
QUESTIONS CARRYING TWO MARKS

1) Define error, fault and failure.


The term error refers to the discrepancy between a computed, observed, or measured value and the true value.
A fault is the basic reason for software malfunction and is synonymous with the commonly used term bug.
Failure is the inability of a system or its components to perform a required function according to its
specifications.

2) Define Testing.
Software testing is a method to check whether the actual software product matches the expected
requirements and to ensure that the software product is defect-free.

3) What are Test Oracles?


Test Oracle is a mechanism, different from the program itself, that can be used to test the accuracy of a
program's output for test cases.

4) Which are the basic approaches in Testing?
There are two basic approaches to testing: functional testing (black-box), in which test cases are decided
from the specifications of the program, and structural testing (white-box), in which test cases are decided
from the actual code of the program.

5) What are Test Cases?
A test case is a document, which has a set of test data, preconditions, expected results and postconditions,
developed for a particular test scenario in order to verify compliance against a specific requirement.
6) Define Software Maintenance.
Software maintenance is the process of changing, modifying, and updating software to keep up with
customer needs. Software maintenance is done after the product has launched for several reasons
including improving the software overall, correcting issues or bugs, to boost performance, and more.

7) What is Mutation Testing?


Mutation Testing is a type of software testing that is performed to design new software tests and also to
evaluate the quality of already existing software tests. Mutation testing involves modifying a program in
small ways. It focuses on helping the tester develop effective tests or locate weaknesses in the test data
used for the program.

8) What is Black Box Testing?


Black box testing involves testing a system with no prior knowledge of its internal workings. A tester
provides an input, and observes the output generated by the system under test.

9) What is White Box Testing?


White box testing is an approach that allows testers to inspect and verify the inner workings of a software
system—its code, infrastructure, and integrations with external systems.

10) Define Single-mode fault?


Many of the defects in software generally involve one condition, that is, some special value of one of the
parameters. Such a defect is called single-mode fault. Simple examples of single mode fault are a software
not able to print for a particular type of printer, a software that cannot compute fare properly when the
traveller is a minor, a telephone billing software that does not compute the bill properly for a particular
country.

11) Define Multi-mode fault.
A multi-mode fault is one that gets triggered only by a combination of conditions, that is, only when two or
more parameters simultaneously take certain special values. (In contrast, a single-mode fault involves a
special value of only one parameter.)

12) Define Pair-wise Testing.
Pairwise testing, also known as all-pairs testing, is a combinatorial testing approach in which test cases are
selected so that, for each pair of input parameters, all combinations of their values are covered by at least
one test case.


13) What is Error-guessing?


Error guessing is a type of testing method in which prior experience in testing is used to uncover the
defects in software. It is an experience-based test technique in which the tester uses his/her past
experience or intuition to gauge the problematic areas of a software application.

14) Write any two important features of SilkTest.


• It can access the database and validation can be done.
• For test creation and customization, workflow elements are available.

15) Write a note on SQA Robot.


SQA Robot is a tool from IBM Rational for carrying out functional/regression testing. This tool is a part of
a test suite that contains:
a. SQA manager, to manage testing process.
b. LoadTest, to test networking and web applications
c. SiteCheck, to check websites

16) Write any two important aspects of WinRunner.


• We can add checkpoints to compare actual and expected results. The checkpoints can be GUI
checkpoints, bitmap checkpoints and web links.
• It provides a facility for synchronization of test cases.

17) What is a use of LoadRunner?


Mercury Interactive’s LoadRunner is used to test client/server applications such as database
management systems and websites.
LoadRunner accurately measures and analyzes the performance of the client/server application.

18) What is Apache JMeter? Why it is used?


JMeter is a test tool from Apache used to analyze and measure the performance of applications, different
software services and products. It is open source software entirely written in Java, used to test both web
and FTP applications as long as the system supports a Java Virtual Machine (JVM).

LONG ANSWER QUESTIONS

1) Explain error, fault, and failure. (6)


The term error is used in two different ways:
i) It refers to the difference between the actual output of the software and the correct output.
Here the error is a measure of the difference between the actual and the ideal.
ii) It refers to human actions that result in software containing a defect or fault.
A fault is a condition that causes a system to fail in performing its required function. It is the basic reason
for software malfunction and is synonymous with the commonly used term bug.
Failure is the inability of a system or component to perform a required function according to its
specifications. A software failure occurs if the behavior of the software is different from the specified
behavior. Failures may be caused due to functional or performance reasons. A failure is produced only
when there is a fault in the system.
For a system,
➢ Faults have the potential to cause failures.


➢ Presence of an error implies that a failure must have occurred and observance of a failure implies
that a fault must be present in the system.
➢ Testing only reveals the presence of failure; the actual faults are identified by debugging.
2) What is test oracle? Explain with diagram. (4)
A test oracle is a mechanism, different from the program itself that can be used to check the correctness of
the output of the program for the test cases.

Testing and Test Oracles


Here, we can consider testing as a process in which the test cases are given to the test oracle and to the
program under testing. The outputs of the two are then compared to determine if the program behaved
correctly for the test cases.
Test oracles are necessary for testing. Ideally, we would like an automated oracle, which always gives a
correct answer. However, often the oracles are human beings, who can make mistakes. As a result, when
there is a discrepancy between the results of the program and the oracle, we have to verify the result
produced by the oracle, before declaring that there is a fault in the program.
The human oracles generally use the specifications of the program to decide what the "correct" behavior
of the program should be.
An oracle generated from the specifications will only produce correct results if the specifications are
correct.

3) Explain the test case and test criteria. (5)


• We want to determine a set of test cases such that successful execution of all of them implies that
there are no errors in the program.
• Effort is needed to generate each test case, machine time is needed to execute the program for it,
and more effort is needed to evaluate the results. Therefore, we should minimize the number of
test cases needed to detect errors.
If testing is to be successful, we should have test cases that are good at revealing the presence of faults.
Ideally, we would like a set of test cases such that successful execution of all of them implies that there are
no errors in the program, while keeping the set small, as test cases are costly to generate, execute, and
evaluate.
The fundamental goals of practical testing are:
• Maximize the number of errors detected.
• Minimize the number of test cases.
An ideal test case set is one that succeeds only if there are no errors in the program. One possible ideal set
is one that includes all the possible inputs to the program. This is often called exhaustive testing.

Test Selection Criterion (test criterion)


A test selection criterion is used for selecting test cases. For a given program P and its specification S, a
test selection criterion specifies the conditions that must be satisfied by a set of test cases T; the criterion
then becomes a basis for test case selection.
There are two fundamental properties for a testing criterion: reliability and validity.
A criterion is reliable if all the sets (of test cases) that satisfy the criterion detect the same errors. A
criterion is valid if for any error in the program there is some set satisfying the criterion that will reveal the
error.

Page | 45
SOFTWARE ENGINEERING [BCA] SDMCBM

Getting an ideal test criterion is not generally possible so that more practical properties of test criteria
have been proposed. Some axioms capturing some of the desirable properties of test criteria have been
proposed.
A) Applicability Axiom: it states that for every program there exists a test case set T that satisfies the
criterion.
B) Antiextensionality Axiom: it states that there are programs P and Q, both implementing the same
specifications, such that a test set T satisfies the criterion for P but does not satisfy the criterion for Q.
C) Antidecomposition Axiom: it states that there exists a program P and its component Q such that a test
case set T satisfies the criterion for P, T' is the set of values that variables can assume on entering Q for
some test case in T, and T' does not satisfy the criterion for Q.
D) Anticomposition Axiom: it states that there exist programs P and Q such that T satisfies the criterion
for P and the outputs of P for T satisfy the criterion for Q, but T does not satisfy the criterion for P;Q.
Getting a criterion that is reliable and valid and that can be satisfied by a manageable number of test cases
is usually not possible. Even when the criterion is specified, generating test cases to satisfy a criterion is not
simple. In general, generating test cases for most of the criteria cannot be automated.

4) Briefly explain Functional Testing. (6)


There are 2 approaches to testing
a. functional (black-box)
b. structural (white-box).
In functional testing: The structure of the program is not considered. Test cases are decided on the basis
of the requirements or specifications of the program or module, and the internals of the module or
program are not considered for selection of test cases. Due to its nature, functional testing is also called
black-box testing.
In structural testing: Test cases are generated based on the actual code of the program or module to be
tested. This structural approach is sometimes called white-box testing or glass-box testing.
The basis for deciding test cases in functional testing is requirements or specifications of the system or
module. For modules created during design, test cases for functional testing are decided from the module
specifications produced during the design.
There are no formal rules for designing test cases for functional testing. In fact there are no precise criteria
for selecting test cases. There are a number of techniques and heuristics that can be used to select test
cases that have been found to be very successful in detecting errors.

5) Explain Equivalence Class Partitioning. (4)


• As we cannot do exhaustive testing, the next best approach is to divide the input domain into a set of
  equivalence classes, so that if one test in an equivalence class succeeds, then every test in that class
  will succeed; i.e., we identify classes of test cases such that the success of one test case in a class
  implies the success of the others.

• Without looking at the internal structure of the program, it is impossible to determine such ideal
  equivalence classes. The equivalence class partitioning method tries to approximate this ideal. An
  equivalence class is formed of the inputs for which the behavior of the system is specified or expected
  to be similar. Each group of inputs for which the behavior is expected to be different from others is
  considered a separate equivalence class.

• For example, the specifications of a module that determines the absolute value for integers specify
one behavior for positive integers and another for negative integers.
In this case, we will form two equivalence classes:-

1. consisting of positive integers


2. other consisting of negative integers.
• Equivalence classes are usually formed by considering each condition specified on an input as
specifying a valid equivalence class and one or more invalid equivalence classes.


For example, if an input condition specifies a range of values (say, 0 < count < Max), then form a valid
equivalence class with that range and two invalid equivalence classes, one with values less than the lower
bound of the range (i.e., count < 0) and the other with values higher than the higher bound (count > Max).
• If the entire range of an input is not treated in the same manner, then the range should be split into
  two or more equivalence classes. Also, for each valid equivalence class, one or more invalid
  equivalence classes should be identified.

6) Explain Boundary Value Analysis. (3)


It has been observed that programs that work correctly for a set of values in an equivalence class fail on
some special values. These values often lie on the boundary of the equivalence class. Test cases that have
values on the boundaries of equivalence classes are therefore likely to be "high-yield" test cases, and
selecting such test cases is the aim of the boundary value analysis.
• In boundary value analysis, we choose an input for a test case from an equivalence class, such that the
input lies at the edge of the equivalence classes. Boundary values for each equivalence class, including
the equivalence classes of the output, should be covered. Boundary value test cases are also called
"extreme cases." Hence, we can say that a boundary value test case is a set of input data that lies on
the edge or boundary of a class of input data or that generates output that lies at the boundary of a
class of output data.

Example:
In the case of ranges, for boundary value analysis it is useful to select the boundary elements of the range
and an invalid value just beyond the two ends (for the two invalid equivalence classes). So, if the range is
0.0 <= x <= 1.0, then the test cases are 0.0 and 1.0 (valid inputs), and -0.1 and 1.1 (invalid inputs).

Similarly, if the input is a list, attention should be focused on the first and last elements of the list. We
should also consider the outputs for boundary value analysis. If an equivalence class can be identified in
the output, we should try to generate test cases that will produce the output that lies at the boundaries of
the equivalence classes. Furthermore, we should try to form test cases that will produce an output that
does not lie in the equivalence class.

7) Explain the Cause-Effect Graphing with an diagram. (5)


• One weakness of the equivalence class partitioning and boundary value methods is that they consider
  each input value separately; both concentrate on the conditions and classes of one input. They do not
  consider combinations of input circumstances.
• Cause-effect graphing is a technique that aids in selecting combinations of input conditions in a
systematic way, such that the number of test cases does not become unmanageably large.
• The technique starts with identifying causes and effects of the system under testing.
• A cause is a distinct input condition,
• and an effect is a distinct output condition.

Each condition forms a node in the cause-effect graph. The conditions should be stated such that they can
be set to either true or false.
the cause-effect graph is as shown below


In the above graph, the cause-effect relationship of this example is captured. For all effects, one can easily
determine the causes each effect depends on and the exact nature of the dependency. For example,
according to this graph the effect e5 (Credit account) depends on the causes c2 (Command is debit),
c3 (Account number is valid), and c4 (Transaction_amt is valid) in a manner such that e5 is enabled when
all of c2, c3, and c4 are true. Similarly, the effect e2 (Print "invalid account_number") is enabled if c3 is
false.
From this graph, a list of test cases can be generated. The basic strategy is to set an effect to 1 and then set
the causes that enable this condition. The combination of causes forms the test case. A cause may be set to
false, true, or don't care (in the case when the effect does not depend at all on the cause). To do this for all
the effects, it is convenient to use a decision table.

Cause-effect graphing, beyond generating high-yield test cases, also aids the understanding of the functionality of the system, because the tester must identify the distinct causes and effects. There are methods of reducing the number of test cases generated by proper traversing of the graph. Once the causes and effects are listed and their dependencies specified, much of the remaining work can also be automated.

8) Briefly explain Structural Testing. (6)


Functional testing is concerned with the functionality rather than the implementation of the program. Structural testing, on the other hand, is concerned with testing the implementation of the program. The intent of this testing is not to exercise all the different input or output conditions but to exercise the different programming structures and data structures used in the program. Structural testing is also called white-box testing.
To test the structure of a program, structural testing aims to achieve test cases that will force the desired coverage of different structures. The criteria for structural testing are generally quite precise, as they are based on program structures, which are formal and precise. There are three basic structural testing approaches:
A. Control flow based testing
B. Data flow based testing
C. Mutation testing.

9) Explain Control Flow-based testing with suitable example. (5)


Let the control flow graph of a program P be G. A node in this graph represents a block of statements that is always executed together, i.e., whenever the first statement of the block is executed, all the other statements are also executed. An edge (i, j) (from node i to node j) represents a possible transfer of control after executing the last statement of the block represented by node i to the first statement of the block represented by node j. The node corresponding to the block whose first statement is the start statement of P is called the start node of G, and a node corresponding to a block whose last statement is an exit statement is called an exit node. A path is a finite sequence of nodes (n1, n2, ..., nk), k > 1, such that there is an edge (ni, ni+1) for all nodes ni in the sequence (except the last node nk). A complete path is a path whose first node is the start node and whose last node is an exit node.

Example: consider the following function to compute the absolute value of a number.

int abs(int x)
{
    if (x >= 0)      /* bug: the condition should be (x < 0) */
        x = 0 - x;
    return (x);
}

This program is clearly wrong. Suppose we execute the function with the test set {x = 0}. The statement coverage criterion will be satisfied by testing with this set, but the error will not be revealed.


10) Explain Data Flow-based testing with an example. (5)


In data flow-based testing, criteria based on data flow analysis are used to select the paths to be executed during testing. In the data flow-based testing approaches, information about where the variables are defined and where the definitions are used is employed to specify the test cases.

A variable occurrence can be one of the following three types:


1. def: represents the definition of a variable; the variable on the left-hand side of an assignment statement is the one getting defined.
2. c-use: represents a computational use of a variable, i.e., any statement that uses the value of a variable for computational purposes. In an assignment statement, all variables on the right-hand side have a c-use occurrence.
3. p-use: represents a predicate use. These are all the occurrences of variables in a predicate, which is used for transfer of control.
• A c-use of a variable x in node i is considered a global c-use if there is no def of x before this use within the block corresponding to node i. With each node i, associate all the variables that have a global c-use in that node. A p-use is associated with edges: if the variables x1, x2, ..., xn occur in the predicate of the last statement of block i, and two edges go from i to two different blocks j and k, then x1, ..., xn are associated with both edges (i, j) and (i, k).
• A path from node i to node j is called a def-clear path with respect to (w.r.t.) a variable x if there is no def of x in the nodes along the path (other than possibly at i and j).
As mentioned earlier, a criterion C1 includes another criterion C2 (represented by C1 => C2) if any set of test cases that satisfies C1 also satisfies C2.

The relationships between the various data flow criteria and the control flow criteria form a subsumption hierarchy, shown as a figure in the full notes.

11) Write a note on WinRunner/ Write the important aspects of WinRunner. (5)
WinRunner is Mercury's legacy automated testing tool.
WinRunner is a test automation tool, designed to help customers save testing time and effort by automating the manual testing process.

• Automated testing with WinRunner addresses the problems of manual testing by speeding up the testing process.


• You can create test scripts that check all aspects of your application, and then run these tests on each new build.

• As WinRunner runs tests, it simulates a human user by moving the mouse cursor over the application, clicking Graphical User Interface (GUI) objects, and entering keyboard input.

• It creates a summary report showing the test status.

Features

➢ WinRunner is:
• a functional regression testing tool
• Windows platform dependent
• only for Graphical User Interface (GUI) based applications
• based on the Object Oriented Technology (OOT) concept
• only for static content
• a record/playback tool
➢ WinRunner environments:
• Windows - C++, Visual Basic, Java, PowerBuilder, Stingray, Smalltalk
• Web - web applications
• Other technologies - SAP, Siebel, Oracle, PeopleSoft, ActiveX

12) Write a note on SilkTest/Write the important features of SilkTest. (4)


Segue Software's SilkTest can be used for testing a variety of applications, such as:

➢ Standalone Java and Visual Basic applications.

➢ Websites written in HTML, XML, JavaScript and applets, using Internet Explorer and Netscape Navigator, as well as databases.

Features:

• SilkTest has an in-built customizable recovery system. So, while automated testing is in progress, even if the application fails in between, the test automatically continues without halting.

• It has an object-oriented scripting language called 4Test (using a script written in 4Test, an application can be tested on different platforms).

• It can access databases, and validation can be done on them.

• Test planning, management and reporting can be done by integration with other tools from Segue Software.

13) Write the important features of TestDirector. (5)

TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests and build test cycles.

➢ It is a web-based tool (client/server based).

➢ It provides features to document testing procedures.
➢ Testing can be done during night time or when the system load is less.
➢ It provides the feature of setting up groups of machines to carry out testing.
➢ It keeps the history of all test runs.
➢ It keeps a log of all defects (or bugs) found, and the status of each bug can be changed by authorized persons only.
➢ It provides the feature of creating different users with different privileges (developer, tester, manager, beta tester).
➢ It generates test reports and analyses for the manager to decide when to release the software to the market.


14) Explain the salient features of Apache JMeter. (4)

➢ The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing web applications but has since expanded to other test functions.
➢ Apache JMeter may be used to test performance on both static and dynamic resources and dynamic web applications. It can be used to simulate a heavy load on a server, group of servers, network or object, to test its strength or to analyze overall performance under different load types.

Some of the request categories it supports are:

❖ FTP
❖ HTTP request
❖ JDBC request
❖ Java object request
❖ LDAP request (Lightweight Directory Access Protocol)

15) Explain the role of LoadRunner in testing. (4)

HPE LoadRunner is a software testing tool from Hewlett Packard Enterprise.

➢ Mercury Interactive's LoadRunner is used to test client/server applications such as database management systems and websites.
➢ When multiple users access an application, testing it requires a lot of infrastructure and manpower if testing tools are not used.
➢ LoadRunner simulates multiple transactions from the same machine and hence creates a scenario of simultaneous access to the application. So instead of real users, simulated "virtual users" are used.
➢ In LoadRunner, we divide the performance testing requirements into various scenarios. (A scenario is a series of actions that are to be tested.)
➢ The virtual users (Vusers) submit requests to the server. A Vuser script is generated, and this script is executed to simulate multiple users.
➢ LoadRunner simulates user activity by generating messages between application components or by simulating interactions with the user interface, such as key presses or mouse movements. The messages and interactions to be generated are stored in scripts. LoadRunner can generate the scripts by recording them, such as by logging the HTTP requests between a client web browser and an application's web server.

16) Write the important features of SQA Robot. (4)

SQA Robot is a tool from IBM Rational for carrying out functional/regression testing.

• SQA Manager is used to manage the testing process.
• SQA Robot is used for testing applications written in VB, Delphi, C++, Java, etc., as well as ERP packages and applications built using IDEs such as Visual Studio, JBuilder, etc.
• LoadTest is used to test networking and web applications.
• SiteCheck is used to check websites.
• PurifyPlus is used to check code coverage for C and C++, to analyze the performance of the code, and to detect bottlenecks in the code.

