
Gokula Krishna College of Engineering

Department of CSE
20A05403T-Software Engineering
TOP PRIORITY QUESTIONS-2023

Top 15 - 2 Marks Questions


1) What is Software? What is Software Engineering? U1
 Software is a program or set of programs containing instructions that provide
desired functionality.
 Engineering is the process of designing and building something that serves a
particular purpose and finds a cost-effective solution to problems.
 Software engineering is the process of designing, developing, testing, and
maintaining software.
 It is a systematic and disciplined approach to software development that aims to
create high-quality, reliable, and maintainable software.

2) What is COCOMO model? U1

 COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the
number of Lines of Code.
 It is a procedural cost-estimation model for software projects and is often used to
reliably predict the various parameters associated with a project, such as size, effort,
cost, time, and quality.
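The basic COCOMO relationships are Effort = a·(KLOC)^b person-months and Time = c·(Effort)^d months. A minimal sketch in Python, using the published coefficients for the "organic" project class (the 32 KLOC project size is a made-up example):

```python
# Basic COCOMO estimate (sketch). Coefficients below are the standard
# values for an "organic" project; the 32 KLOC size is hypothetical.
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    effort = a * (kloc ** b)   # effort in person-months
    time = c * (effort ** d)   # development time in months
    staff = effort / time      # average number of persons needed
    return effort, time, staff

effort, time, staff = basic_cocomo(32)  # estimate a 32 KLOC organic project
```

Semi-detached and embedded projects use the same formulas with different coefficient values.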

3) What are the various phases of SDLC? U1

 Requirements & Analysis.
 Project Planning.
 Design.
 Coding & Implementation.
 Testing.
 Deployment.
 Maintenance.

4) What are the characteristics of good SRS document? U2


1) Correctness
2) Completeness
3) Consistency
4) Unambiguousness
5) Ranking for importance and stability
6) Modifiability
7) Verifiability
8) Traceability

5) Write short note on requirements specification? U2


 A software requirements specification (SRS) is a document that describes what the
software will do and how it will be expected to perform.
 Requirement specification, also known as documentation, is a process of jotting down all
the system and user requirements in the form of a document. These requirements must be
clear, complete, comprehensive, and consistent. 
 During the capturing activity, we gather all the requirements from various sources.

6) Define Software Myths? U2

 Software myths are preconceived notions about software and its creation that people hold
to be true but are in fact untrue.
 Professionals in Software Engineering have now identified the software myths that have
persisted throughout the years.
 These fallacies are common knowledge to managers and software developers. However, it
might be challenging to change old behaviors.
 Types of Software Myths
1) Management Myths
2) Customer Myths
3) Practitioner’s Myths

7) What is Cohesion and coupling? U3


 Cohesion is the measure of degree of relationship between elements of a module.
 Coupling is the measure of degree of relationship between different modules.
 Both coupling and cohesion are important factors in determining the maintainability,
scalability, and reliability of a software system.
 High coupling and low cohesion can make a system difficult to change and test, while low
coupling and high cohesion make a system easier to maintain and improve.
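The low-coupling/high-cohesion goal can be sketched in code. In this hypothetical example (all class names are invented for illustration), each class handles one concern, and the two interact only through a small injected interface:

```python
# Sketch: high cohesion (each class does one job) and low coupling
# (the classes interact only through a narrow interface).
class InterestCalculator:
    """Cohesive: concerned only with interest arithmetic."""
    def __init__(self, rate):
        self.rate = rate

    def yearly_interest(self, balance):
        return balance * self.rate


class Account:
    """Cohesive: concerned only with balance bookkeeping."""
    def __init__(self, balance, calculator):
        self.balance = balance
        # Loose coupling: the calculator is injected, so Account does not
        # depend on how interest is computed and either class can change alone.
        self.calculator = calculator

    def add_interest(self):
        self.balance += self.calculator.yearly_interest(self.balance)


acct = Account(1000.0, InterestCalculator(0.05))
acct.add_interest()
```

Because the modules touch only through `yearly_interest`, replacing the interest logic would not require changing `Account` at all.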

8) Mention some software analysis & design tools? U3

 Data Flow Diagram
 Structure Charts
 HIPO Diagram
 Structured English
 Pseudo-Code
 Decision Tables
 Entity-Relationship Model
 Data Dictionary

9) What are the Characteristics of a good user interface? U3


 User interface is the front-end application view to which user interacts in order to use the
software.
 The software becomes more popular if its user interface is:
1) Attractive
2) Simple to use
3) Responsive in short time
4) Clear to understand
5) Consistent on all interface screens

 There are two types of User Interface:


 Command Line Interface: A command line interface provides a command prompt, where the
user types a command and feeds it to the system. The user needs to remember the syntax of
the command and its use.
 Graphical User Interface: A graphical user interface provides a simple interactive interface to
interact with the system. A GUI can be a combination of both hardware and software. Using a
GUI, the user interprets the software.

10) What is Testing? List out Level of Testing. U4

 Software testing is the process of evaluating and verifying that a software product or
application does what it is supposed to do.
 The benefits of testing include preventing bugs, reducing development costs, and improving
performance.

Levels of Testing

1. Unit Testing: checks whether individual software components fulfill their functionalities or not.
2. Integration Testing: checks the data flow from one module to other modules.
3. System Testing: evaluates both functional and non-functional needs for the testing.
4. Acceptance Testing: checks whether the requirements of a specification or contract are
met as per its delivery.
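A unit-level test exercises one component in isolation against its specification. As a minimal sketch (the `apply_discount` function and its values are invented for illustration; any framework such as unittest or pytest would do):

```python
# Component under test (hypothetical example).
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

# Unit tests: check the component alone, against its stated behavior.
assert apply_discount(200.0, 50) == 100.0   # normal case
assert apply_discount(50.0, 0) == 50.0      # boundary case: no discount
```

Integration, system, and acceptance tests then check progressively larger assemblies of such verified units.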

11) Differentiate between Black Box Testing and White Box Testing? U4

Difference between Black Box Testing and White Box Testing

Black Box Testing                                White Box Testing

 This test is a functional test of the          This test is a structural test of the
  software.                                        software.

 It can be initiated on the basis of the        It is started after a detailed design
  requirement specifications document.             document.

 No knowledge of programming is                 Knowledge of programming is
  required.                                        mandatory.

 It is the behavior testing of the software.    It is the logic testing of the software.

 This testing applies to the higher levels      This testing applies to the lower levels
  of software testing.                             of software testing.

 Example: searching something on Google         Example: giving inputs to check and
  by using keywords.                               verify loops.

12) What is Debugging? U4

 Debugging is the process of finding and fixing errors or bugs in the source code of any
software.
 When software does not work as expected, computer programmers study the code to
determine why any errors occurred.
 They use debugging tools to run the software in a controlled environment, check the code
step by step, and analyze and fix the issue. 
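Python's built-in `pdb` module is one such debugging tool: a breakpoint pauses execution so the programmer can inspect values step by step. A minimal sketch (the `average` function is a made-up example):

```python
# Sketch: step-by-step debugging with Python's built-in pdb module.
import pdb

def average(values):
    total = sum(values)
    # pdb.set_trace()  # uncomment to pause here and inspect 'total' interactively
    return total / len(values)  # an empty list would surface a ZeroDivisionError here

print(average([2, 4, 6]))
```

At the breakpoint, commands like `n` (next line), `s` (step into), and `p total` (print a variable) let the programmer check the code in a controlled environment.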

13) What is the Difference Between Quality Assurance and Quality Control? U5

Quality Assurance (QA)                           Quality Control (QC)

It focuses on providing assurance that the       It focuses on fulfilling the quality
quality requested will be achieved.              requested.

It is the technique of managing quality.         It is the technique of verifying quality.

It is process oriented.                          It is product oriented.

It is responsible for the entire software        It is responsible for the software
development life cycle.                          testing life cycle.

Its main focus is on the intermediate            Its primary focus is on the final
process.                                         product.

It is a less time-consuming activity.            It is a more time-consuming activity.

14) What is Six sigma? U5

 Six Sigma is the process of producing high and improved quality output.
 This can be done in two phases – identification and elimination.
 The cause of defects is identified, and appropriate elimination is done, which reduces
variation in the whole process.

15) What is Software reverse engineering? U5

 Software Reverse Engineering is a process of recovering the design, requirement
specifications, and functions of a product from an analysis of its code. It builds a program
database and generates information from this.

 The purpose of reverse engineering is to facilitate maintenance work by improving the
understandability of a system and producing the necessary documents for a legacy system.

Top 5 - 10 Marks Questions

1) Explain Software development life cycle (SDLC) models. U1


SDLC Models

 Software Development Life Cycle (SDLC) is a conceptual model used in project management that
defines the stages involved in an information system development project, from an initial
feasibility study to the maintenance of the completed application.

 There are different software development life cycle models specified and designed, which are
followed during the software development phase. These models are also called "Software
Development Process Models." Each process model follows a series of phases unique to its type
to ensure success in the steps of software development.

Here are some important SDLC models:

Waterfall Model

The waterfall is a universally accepted SDLC model. In this method, the whole process of software
development is divided into various phases.

The waterfall model is a sequential software development model in which development is seen as
flowing steadily downwards (like a waterfall) through the steps of requirements analysis, design,
implementation, testing (validation), integration, and maintenance.

Linear ordering of activities has some significant consequences. First, to identify the end of a phase and
the beginning of the next, some certification techniques have to be employed at the end of each step.
Verification and validation activities usually serve this purpose, ensuring that the output of a stage is
consistent with its input (which is the output of the previous step), and that the output of the stage is
consistent with the overall requirements of the system.

RAD Model

RAD or Rapid Application Development process is an adaptation of the waterfall model; it targets
developing software in a short period. The RAD model is based on the concept that a better system can
be developed in lesser time by using focus groups to gather system requirements.

o Business Modeling
o Data Modeling
o Process Modeling
o Application Generation
o Testing and Turnover

Spiral Model

The spiral model is a risk-driven process model. This SDLC model helps the team adopt elements of
one or more process models, such as the waterfall and incremental models. The spiral technique is a
combination of rapid prototyping and concurrency in design and development activities.

Each cycle in the spiral begins with the identification of objectives for that cycle, the different
alternatives that are possible for achieving the goals, and the constraints that exist. This is the first
quadrant of the cycle (upper-left quadrant).

The next step in the cycle is to evaluate these different alternatives based on the objectives and
constraints. The focus of evaluation in this step is based on the risk perception for the project.

The next step is to develop strategies that solve uncertainties and risks. This step may involve activities
such as benchmarking, simulation, and prototyping.

V-Model

In this SDLC model, testing and development are planned in parallel. So, there are verification
phases on one side and validation phases on the other. The V-Model joins them at the coding
phase.

Incremental Model

The incremental model is not a separate model. It is essentially a series of waterfall cycles. The
requirements are divided into groups at the start of the project. For each group, the SDLC model is
followed to develop software. The SDLC process is repeated, with each release adding more
functionality until all requirements are met. In this method, each cycle acts as the maintenance phase
for the previous software release. A modification to the incremental model allows development cycles
to overlap, so that a subsequent cycle may begin before the previous cycle is complete.

Agile Model

 Agile methodology is a practice which promotes continuous interaction of development and
testing during the SDLC process of any project. In the Agile method, the entire project is divided
into small incremental builds. All of these builds are provided in iterations, and each iteration
lasts from one to three weeks.

 Any agile software process is characterized in a manner that addresses several key assumptions
about the bulk of software projects:

1. It is difficult to predict in advance which software requirements will persist and which will
change. It is equally difficult to predict how user priorities will change as the project proceeds.
2. For many types of software, design and development are interleaved. That is, both activities
should be performed in tandem so that design models are proven as they are created. It is
difficult to predict how much design is necessary before construction is used to test the design.
3. Analysis, design, development, and testing are not as predictable (from a planning point of view)
as we might like.

Iterative Model

 It is a particular implementation of a software development life cycle that focuses on an initial,
simplified implementation, which then progressively gains more complexity and a broader
feature set until the final system is complete. In short, iterative development is a way of
breaking down the software development of a large application into smaller pieces.

Big bang model

 The Big Bang model focuses all resources on software development and coding, with little or
no planning. The requirements are understood and implemented as they come.

 This model works best for small projects with a smaller development team working together.
It is also useful for academic software development projects. It is an ideal model where
requirements are either unknown or a final release date is not given.
Prototype Model

 The prototyping model starts with the requirements gathering. The developer and the user
meet and define the purpose of the software, identify the needs, etc.

 A 'quick design' is then created. This design focuses on those aspects of the software that will be
visible to the user. It then leads to the development of a prototype. The customer then checks
the prototype, and any modifications or changes that are needed are made to the prototype.

 Looping takes place in this step, and better versions of the prototype are created. These are
continuously shown to the user so that any new changes can be updated in the prototype. This
process continues until the customer is satisfied with the system. Once the user is satisfied, the
prototype is converted to the actual system with all considerations for quality and security.

2) Explain a) Characteristics of a Good SRS Document b) IEEE 830 guidelines of SRS Document.
U2
a) Characteristics of a Good SRS Document

1. Correctness: 
User review is used to ensure the correctness of requirements stated in the SRS. SRS is
said to be correct if it covers all the requirements that are actually expected from the
system. 

2. Completeness: 
Completeness of SRS indicates every sense of completion including the numbering of all
the pages, resolving the to be determined parts to as much extent as possible as well as
covering all the functional and non-functional requirements properly. 
3. Consistency: 
Requirements in SRS are said to be consistent if there are no conflicts between any set
of requirements. Examples of conflict include differences in terminologies used at
separate places, logical conflicts like time period of report generation, etc. 
4. Unambiguous: 
An SRS is said to be unambiguous if all the requirements stated have only one
interpretation. Some of the ways to prevent ambiguity include the use of modelling
techniques like ER diagrams, proper reviews and buddy checks, etc. 
5. Ranking for importance and stability: 
There should be a criterion to classify the requirements as less or more important, or more
specifically as desirable or essential. An identifier mark can be used with every
requirement to indicate its rank or stability. 
6. Modifiability: 
SRS should be made as modifiable as possible and should be capable of easily accepting
changes to the system to some extent. Modifications should be properly indexed and
cross-referenced. 
7. Verifiability: 
An SRS is verifiable if there exists a specific technique to quantifiably measure the extent
to which every requirement is met by the system. For example, a requirement stating
that the system must be user-friendly is not verifiable, and listing such requirements
should be avoided. 
8. Traceability: 
One should be able to trace a requirement to a design component and then to a code
segment in the program. Similarly, one should be able to trace a requirement to the
corresponding test cases. 

9. Design Independence: 
There should be an option to choose from multiple design alternatives for the final
system. More specifically, the SRS should not include any implementation details.  
10. Test-ability: 
A SRS should be written in such a way that it is easy to generate test cases and test
plans from the document. 
11. Understandable by the customer: 
An end user may be an expert in his/her specific domain but might not be an expert in
computer science. Hence, the use of formal notations and symbols should be avoided to
as much extent as possible. The language should be kept easy and clear. 
12. Right level of abstraction: 
If the SRS is written for the requirements phase, the details should be explained
explicitly. Whereas, for a feasibility study, fewer details can be used. Hence, the level of
abstraction varies according to the purpose of the SRS. 
B) IEEE 830 guidelines of SRS Document.

1. Introduction

1. Purpose
2. Scope
3. Definition, Acronyms and abbreviations
4. References
5. Overview

2. The Overall Description

1. Product Perspective

 System Interfaces
 Interfaces
 Hardware Interfaces
 Software Interfaces
 Communication Interfaces
 Memory Constraints
 Operations
 Site Adaptation Requirements

2. Product Functions
3. User Characteristics
4. Constraints
5. Assumptions for dependencies
6. Apportioning of requirements

3. Specific Requirements

1. External Interfaces
2. Functions
3. Performance requirements
4. Logical database requirements
5. Design Constraints
6. Software System attributes
7. Organization of specific requirements
8. Additional Comments.
3) Explain- Software Design Process U3

 Software Design Process is a high-level, technology-independent concept that describes
a system that will be able to accomplish the tasks established in the requirement
analysis phase.

Principles of good Software Design

 Many principles are employed to organize, coordinate, classify, and set up software
design’s structural components.

 Software Designs become some of the most convenient designs when the following
principles are applied. They help to generate remarkable User Experiences and
customer loyalty.

The principles of a good software design are:

1. Modularity
2. Coupling
3. Abstraction
4. Anticipation of change
5. Simplicity
6. Sufficiency and completeness
 Modularity
Dividing a large software project into smaller portions/modules is known as modularity. It
is the key to scalable and maintainable software design. The project is divided into various
components and work on one component is done at once. It becomes easy to test each
component due to modularity. It also makes integrating new features more accessible.
 Coupling
Coupling refers to the extent of interdependence between software modules and how
closely two modules are connected. Low coupling is a feature of good design. With low
coupling, changes can be made in each module individually, without changing the other
modules.
 Abstraction
The process of identifying the essential behavior by separating it from its implementation
and removing irrelevant details is known as Abstraction. The inability to separate essential
behavior from its implementation will lead to unnecessary coupling.
 Anticipation of Change
The demands of software keep on changing, resulting in continuous changes in
requirements as well. Building a good software design consists of its ability to
accommodate and adjust to change comfortably.
 Simplicity
The aim of good software design is simplicity. Each task has its own module, which can be
utilized and modified independently. It makes the code easy to use and minimizes the
number of setbacks.
 Sufficiency and Completeness
A good software design ensures the sufficiency and completeness of the software
concerning the established requirements. It makes sure that the software has been
adequately and wholly built.
Stages of the Software Design Process

Stage 1: Understanding project requirements

Stage 2: Research and Analysis

 Interviews
 Focus groups
 Survey
Stage 3: Design

 Wireframing
 Creating user stories
 Data flow diagrams
 Technical Design
 User Interface
Stage 4: Prototyping
 Low Fidelity Prototyping
 Medium Fidelity Prototyping
 High Fidelity Prototyping
Stage 5: Evaluation
Many tools can be used for designing software. The top 6 most effective and commonly used tools are:-

1. Draw.io
2. Jira
3. Mockflow
4. Sketch
5. Marvel
6. Zeplin

4) What is testing? Explain the different Types of Testing. U4

 Software testing is a procedure of executing software or an application to identify
the defects or bugs.
Manual Testing

In software testing, manual testing can be further classified into three different types of testing, which
are as follows:

o White Box Testing
o Black Box Testing
o Grey Box Testing

White Box Testing

 In white-box testing, the developer will inspect every line of code before handing it over to the
testing team or the concerned test engineers.

 Subsequently, the code is visible to developers throughout testing; that's why this process is
known as WBT (White Box Testing).
 In other words, we can say that the developer will execute the complete white-box testing for the
particular software and send the specific application to the testing team.

 The purpose of implementing the white box testing is to emphasize the flow of inputs and outputs
over the software and enhance the security of an application.

White box testing is also known as open box testing, glass box testing, structural testing, clear box
testing, and transparent box testing.
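Because white-box tests are derived from the code's structure, inputs are chosen so that every statement and branch executes at least once. A sketch with a hypothetical function (names invented for illustration):

```python
# White-box illustration: tests chosen so that every branch of the
# code under test runs at least once (branch coverage).
def classify(n):
    if n % 2 == 0:      # branch 1: even numbers
        return "even"
    return "odd"        # branch 2: odd numbers

# One test case per branch gives full branch coverage of this function.
assert classify(4) == "even"
assert classify(7) == "odd"
```

A black-box tester, by contrast, would pick the same kinds of inputs from the specification alone, without seeing the `if` statement.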

Black Box Testing

 Another type of manual testing is black-box testing. In this testing, the test engineer will analyze
the software against requirements, identify the defects or bugs, and send it back to the
development team.

 Then, the developers will fix those defects, do one round of White box testing, and send it to the
testing team.

 Here, fixing the bugs means the defect is resolved, and the particular feature is working according
to the given requirement.

 The main objective of implementing the black box testing is to specify the business needs or the
customer's requirements.

 In other words, we can say that black box testing is a process of checking the functionality of an
application as per the customer requirement. The source code is not visible in this testing; that's
why it is known as black-box testing.
Types of Black Box Testing

Black box testing further categorizes into two parts, which are as discussed below:

o Functional Testing
o Non-function Testing

Functional Testing

 Checking all the components systematically against the requirement specifications is known as
functional testing. Functional testing is also known as Component testing.

 In functional testing, all the components are tested by giving the value, defining the output, and
validating the actual output with the expected value.

 Functional testing is a part of black-box testing as it emphasizes application requirements rather
than the actual code. The test engineer has to test only the program, not the system.

Types of Functional Testing

Just like another type of testing is divided into several parts, functional testing is also classified into
various categories.

The diverse types of Functional Testing contain the following:

o Unit Testing
o Integration Testing
o System Testing

1. Unit Testing
Unit testing is the first level of functional testing used to test any software. Testing the modules of
an application independently, or testing each module's functionality on its own, is called unit
testing.

The primary objective of executing the unit testing is to confirm the unit components with their
performance. Here, a unit is defined as a single testable function of a software or an application. And it
is verified throughout the specified application development phase.

2. Integration Testing

Once we have successfully completed unit testing, we move on to integration testing. It is the second
level of functional testing; testing the data flow between dependent modules, or the interface
between two features, is called integration testing.

The purpose of executing integration testing is to test the accuracy of communication between the
modules.

Types of Integration Testing

Integration testing is also further divided into the following parts:

o Incremental Testing
o Non-Incremental Testing

Incremental Integration Testing

Whenever there is a clear relationship between modules, we go for incremental integration testing.
Suppose we take two modules and analyze the data flow between them to see whether they are
working fine or not.

If these modules are working fine, then we can add one more module and test again. And we can
continue with the same process to get better results.

In other words, incrementally adding up the modules and testing the data flow between them is
known as incremental integration testing.

Types of Incremental Integration Testing

Incremental integration testing can further classify into two parts, which are as follows:

1. Top-down Incremental Integration Testing


2. Bottom-up Incremental Integration Testing

1. Top-down Incremental Integration Testing


In this approach, we will add the modules step by step or incrementally and test the data flow between
them. We have to ensure that the modules we are adding are the child of the earlier ones.

2. Bottom-up Incremental Integration Testing

In the bottom-up approach, we will add the modules incrementally and check the data flow between
modules. And also, ensure that the module we are adding is the parent of the earlier ones.
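In both directions, modules that are not yet integrated are replaced by stand-ins: stubs for missing children (top-down) or drivers for missing parents (bottom-up). A minimal top-down sketch, with all names (`total_with_tax`, `child_stub`) invented for illustration:

```python
# Top-down incremental integration sketch: the parent module is tested
# with a stub standing in for a child module that is not yet integrated.
def child_stub(order_id):
    # Stub: returns a canned price instead of the real child's lookup logic.
    return 100.0

def total_with_tax(order_id, price_lookup, tax=0.25):
    # Parent module under test; the child is passed in, so a stub can
    # replace it until the real module is ready.
    return price_lookup(order_id) * (1 + tax)

# Data flow between parent and (stubbed) child is checked here.
assert total_with_tax(1, child_stub) == 125.0
```

When the real child module is ready, it replaces `child_stub` and the same test is re-run to check the actual data flow.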

Non-Incremental Integration Testing/ Big Bang Method

Whenever the data flow is complex and it is very difficult to classify a parent and a child, we go for the
non-incremental integration approach. The non-incremental method is also known as the Big Bang
method.

3. System Testing

Whenever we are done with the unit and integration testing, we can proceed with the system testing.

In system testing, the test environment is parallel to the production environment. It is also known
as end-to-end testing.

In this type of testing, we go through each attribute of the software and test whether the end feature
works according to the business requirement, and we analyze the software product as a complete
system.

Non-function Testing

The next part of black-box testing is non-functional testing. It provides detailed information on software
product performance and used technologies.

Non-functional testing will help us minimize the risk of production and related costs of the software.

Non-functional testing is a combination of performance, load, stress, usability, and compatibility
testing.

1. Performance Testing

In performance testing, the test engineer will test the working of an application by applying some load.

In this type of non-functional testing, the test engineer will only focus on several aspects, such
as Response time, Load, scalability, and Stability of the software or an application.

Classification of Performance Testing

Performance testing includes the various types of testing, which are as follows:
o Load Testing
o Stress Testing
o Scalability Testing
o Stability Testing

o Load Testing

While executing the performance testing, we will apply some load on the particular application to check
the application's performance, known as load testing. Here, the load could be less than or equal to the
desired load.

It will help us to detect the highest operating volume of the software and bottlenecks.

o Stress Testing

It is used to analyze the user-friendliness and robustness of the software beyond the common functional
limits.

Primarily, stress testing is used for critical software, but it can also be used for all types of software
applications.

o Scalability Testing

Analyzing the application's performance by increasing or reducing the load in particular balances is
known as scalability testing.

In scalability testing, we can also check the system, processes, or database's ability to meet an upward
need. And in this, the Test Cases are designed and implemented efficiently.

o Stability Testing

Stability testing is a procedure where we evaluate the application's performance by applying the load for
a precise time.

It mainly checks the constancy problems of the application and the efficiency of a developed product. In
this type of testing, we can rapidly find the system's defect even in a stressful situation.

2. Usability Testing

Another type of non-functional testing is usability testing. In usability testing, we will analyze the user-
friendliness of an application and detect the bugs in the software's end-user interface.
 Here, the term user-friendliness defines the following aspects of an application:

 The application should be easy to understand, which means that all the features must be visible to
end-users.
 The application's look and feel should be good, which means the application should be pleasant
looking and make the end-user feel like using it.

3. Compatibility Testing

 In compatibility testing, we will check the functionality of an application in specific hardware and
software environments. Once the application is functionally stable then only, we go
for compatibility testing.

 Here, software means we can test the application on the different operating systems and other
browsers, and hardware means we can test the application on different sizes.

Grey Box Testing

 Another part of manual testing is Grey box testing. It is a collaboration of black box and white box
testing.

 The grey box testing includes access to internal coding for designing test cases. Grey box
testing is performed by a person who knows coding as well as testing.

 In other words, we can say that if a single-person team does both white box and black-box testing,
it is considered grey box testing.

Automation Testing

 Whenever we are testing an application by using some tools, it is known as automation testing.

 We go for automation testing when various releases or several regression cycles go on for the
application or software. We cannot write the test script or perform automation testing without
understanding a programming language.
Some other types of Software Testing

In software testing, we also have some other types of testing that are not part of any above discussed
testing, but those testing are required while testing any software or an application.

o Smoke Testing
o Sanity Testing
o Regression Testing
o User Acceptance Testing
o Exploratory Testing
o Adhoc Testing
o Security Testing
o Globalization Testing

Regression Testing

 Regression testing is the most commonly used type of software testing. Here, the
term regression implies that we have to re-test those parts of the application that were
unaffected by a change, to confirm they still work.

 Regression testing is the most suitable testing for automation tools. As per the project type and
accessibility of resources, regression testing can be similar to Retesting.

 Whenever a bug is fixed by the developers, testing the other features of the application that
might be affected because of the bug fix is known as regression testing.

 In other words, we can say that whenever there is a new release for some project, we perform
Regression Testing, because a new feature may affect the old features of the earlier releases.

User Acceptance Testing

 User acceptance testing (UAT) is done by an individual team known as the domain
expert/customer or the client. Getting to know the application before accepting the final product is
called user acceptance testing.

 In user acceptance testing, we analyze the business scenarios and real-time scenarios in a
distinct environment called the UAT environment. In this testing, we test the application before
UAT sign-off for customer approval.
5) a) What is software maintenance? Explain in detail. b) Explain SEI capability maturity model
(CMM)? U5

A) Software Maintenance

 Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to
modify and update a software application after delivery to correct errors and to improve
performance. Software is a model of the real world; when the real world changes, the software
requires alteration wherever applicable.

 Software maintenance is an inclusive activity that includes error correction, enhancement of
capabilities, deletion of obsolete capabilities, and optimization.

Need for Maintenance

Software maintenance is needed to:

o Correct errors
o Accommodate changes in user requirements over time
o Accommodate changing hardware/software requirements
o Improve system efficiency
o Optimize the code to run faster
o Modify components
o Reduce unwanted side effects

Types of Software Maintenance

1. Corrective Maintenance

Corrective maintenance aims to correct any remaining errors, regardless of whether they occur in the
specifications, design, coding, testing, or documentation.

2. Adaptive Maintenance

It involves modifying the software to match changes in the ever-changing environment.


3. Preventive Maintenance

It is the process by which we prevent our system from becoming obsolete. It involves the concepts of
re-engineering and reverse engineering, in which an old system built with old technology is re-engineered using
new technology. This maintenance prevents the system from dying out.

4. Perfective Maintenance

It involves improving processing efficiency or performance, or restructuring the software to enhance
changeability. This may include enhancement of existing system functionality, improvement in
computational efficiency, etc.

B) SEI Capability Maturity Model (CMM)

 The Capability Maturity Model (CMM) is a procedure used to develop and refine an organization's
software development process.

 The model describes a five-level evolutionary path of increasingly organized and systematically more
mature processes.

 CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and
development center sponsored by the U.S. Department of Defense (DoD).

 The Capability Maturity Model is used as a benchmark to measure the maturity of an organization's
software process.

Methods of SEICMM

There are two methods of SEICMM:

 Capability Evaluation: Capability evaluation provides a way to assess the software process
capability of an organization. The results of a capability evaluation indicate the likely contractor
performance if the contractor is awarded the work. Therefore, the results of the software process
capability assessment can be used to select a contractor.
 Software Process Assessment: Software process assessment is used by an organization to improve
its process capability. Thus, this type of evaluation is for purely internal use.

 SEI CMM categorizes software development organizations into the following five maturity levels. The
various levels of SEI CMM have been designed so that an organization can gradually build its
quality system, starting from scratch.

Level 1: Initial

Ad hoc activities characterize a software development organization at this level. Very few or no
processes are defined and followed. Since software production processes are not defined, different
engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also
called the chaotic level.

Level 2: Repeatable

At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are used.

Level 3: Defined

At this level, the processes for both management and development activities are defined and
documented. There is a common organization-wide understanding of activities, roles, and
responsibilities. Although the processes are defined, the process and product qualities are not yet
measured. ISO 9000 aims at achieving this level.

Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.

Product metrics measure the features of the product being developed, such as its size, reliability, time
complexity, understandability, etc.

Process metrics reflect the effectiveness of the process being used, such as average defect correction
time, productivity, the average number of defects found per hour of inspection, the average number of
failures detected during testing per LOC, etc. The software process and product quality are measured,
and quantitative quality requirements for the product are met. Various tools like Pareto charts, fishbone
diagrams, etc. are used to measure product and process quality. The process metrics are used to
check whether a project performed satisfactorily. Thus, the outcome of process measurements is used to
evaluate project performance rather than improve the process.

Level 5: Optimizing

At this level, process and product metrics are collected. Process and product measurement data are
analyzed for continuous process improvement.

Top 10 - 10 Marks Questions


1) Define software engineering. What are the challenges of software engineering? U1

 Software engineering provides a standard procedure to design and develop software.

What is Software Engineering?

 The term software engineering is the product of two words, software, and engineering.

 The software is a collection of integrated programs.

 Software consists of carefully organized instructions and code written by developers in any of
various particular computer languages.

 Computer programs and related documentation such as requirements, design models and user
manuals.

 Engineering is the application of scientific and practical knowledge to invent, design, build,
maintain, and improve frameworks, processes, etc.

Characteristics of a good software engineer

The features that good software engineers should possess are as follows:

 Exposure to systematic methods, i.e., familiarity with software engineering principles.


 Good technical knowledge of the project range (Domain knowledge).

 Good programming abilities.

 Good communication skills. These skills comprise oral, written, and interpersonal skills.

 High motivation.

 Sound knowledge of fundamentals of computer science.

 Intelligence.

 Ability to work in a team

 Discipline, etc.

Importance of Software Engineering

The importance of Software engineering is as follows:

1. Reduces complexity: Big software is always complicated and challenging to develop. Software
engineering provides a great solution to reduce the complexity of any project. It
divides big problems into various small issues and then solves each small issue one by
one. All these small problems are solved independently of each other.
2. To minimize software cost: Software needs a lot of hard work, and software engineers are highly
paid experts. A lot of manpower is required to develop software with a large number of lines of code.
But in software engineering, programmers plan everything and remove the things that
are not needed. In turn, the cost of software production becomes less compared to
software that does not use a software engineering method.
3. To decrease time: Anything that is not made according to a plan always wastes time. If
you are making great software, you may need to write and run a lot of code to get the definitive
running code. This is a very time-consuming procedure, and if it is not well handled, it
can take a lot of time. Building your software according to the software engineering
method will therefore save a lot of time.
4. Handling big projects: Big projects are not done in a couple of days; they need lots of
patience, planning, and management. For a company to invest six or seven months in a project
requires a great deal of planning, direction, testing, and maintenance. No one can say that
four months of a company's effort have gone into a task while the project is still in its first
stage, because the company has committed many resources to the plan and it should be
completed. So, to handle a big project without any problem, the company has to adopt a
software engineering method.
5. Reliable software: Software should be reliable, meaning that once you have delivered the software,
it should work for at least its given time or subscription period. If any bugs appear in the software,
the company is responsible for fixing them. Because testing and
maintenance are part of software engineering, there is no worry about its reliability.
6. Effectiveness: Effectiveness comes when something is made according to standards. Meeting
software standards is a big target for companies. So software becomes more
effective with the help of software engineering.

2) Explain risk management, configuration management in Software Engineering? U1

Risk Management Activities

Risk management consists of the following main activities:


Risk Assessment

The objective of risk assessment is to prioritize the risks in terms of their loss-causing
potential. For risk assessment, first, every risk should be rated in two ways:

o The probability of the risk coming true (denoted as r).

o The consequence of the problems associated with that risk (denoted as s).

Based on these two factors, the priority of each risk can be estimated:

          p=r*s

where p is the priority with which the risk must be handled, r is the probability of the risk
becoming true, and s is the severity of loss caused if the risk becomes true. Once all identified
risks are prioritized, the most likely and damaging risks can be handled first, and more
comprehensive risk abatement methods can be designed for these risks.
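The p = r * s prioritization above can be sketched directly in code. The risk names and the r (probability) and s (severity) values below are illustrative assumptions, not from the text:

```python
def prioritize(risks):
    """Sort risks by priority p = r * s, highest (most urgent) first."""
    for risk in risks:
        risk["p"] = risk["r"] * risk["s"]  # probability * severity
    return sorted(risks, key=lambda risk: risk["p"], reverse=True)

# Illustrative risks; the r and s values are assumed for the example.
example = [
    {"name": "key developer leaves", "r": 0.3, "s": 8},
    {"name": "requirements change",  "r": 0.6, "s": 5},
    {"name": "hardware unavailable", "r": 0.1, "s": 9},
]

for risk in prioritize(example):
    print(f'{risk["name"]}: p = {risk["p"]:.2f}')
```

Here "requirements change" ranks first (p = 3.0) even though it is less severe than "hardware unavailable", because its probability is much higher.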

1. Risk Identification: The project organizer needs to anticipate the risk in the project as early
as possible so that the impact of risk can be reduced by making effective risk management
planning.

A project can be affected by a large variety of risks. To identify the significant risks that might affect
a project, it is necessary to categorize risks into different classes.

There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies that
are used to develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the
software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used
to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements
and the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources
required to build the system.

2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and
make a judgment about the probability and seriousness of that risk.

There is no simple way to do this. You have to rely on your own judgment and experience of
previous projects and the problems that arose in them.

It is not possible to make an exact numerical estimate of the probability and seriousness of
each risk. Instead, you should assign the risk to one of several bands:

1. The probability of the risk might be determined as very low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the
plan), serious (would cause significant delays), tolerable (delays are within allowed
contingency), or insignificant.
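A minimal sketch of mapping a probability to these bands follows. The treatment of boundary values (assigned to the lower band) is an assumption, since the text does not specify where an exact boundary falls:

```python
def probability_band(p):
    """Map a probability in [0.0, 1.0] to the bands used in risk analysis.

    Boundary values go to the lower band by assumption; the source
    text does not say which band an exact boundary belongs to.
    """
    if p <= 0.10:
        return "very low"
    if p <= 0.25:
        return "low"
    if p <= 0.50:
        return "moderate"
    if p <= 0.75:
        return "high"
    return "very high"

print(probability_band(0.40))  # moderate
```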

Risk Control

It is the process of managing risks to achieve desired outcomes. After all the identified risks of a
plan are assessed, plans must be made to contain the most harmful and the most likely
risks. Different risks need different containment methods; in fact, most risks require ingenuity on
the part of the project manager in tackling them.

There are three main methods to plan for risk management:


1. Avoid the risk: This may be done in several ways, such as discussing with the client to change
the requirements to reduce the scope of the work, giving incentives to the engineers
to avoid the risk of human-resource turnover, etc.
2. Transfer the risk: This method involves getting the risky element developed by a third
party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss due to a risk. For instance,
if there is a risk that some key personnel might leave, new recruitment can be planned.

Risk Leverage: To choose between the various methods of handling a risk, the project plan must
weigh the cost of controlling the risk against the corresponding reduction in risk. For this,
the risk leverage of the various risks can be estimated.

Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.

Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of
reduction)
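The leverage formula can be applied directly; the dollar figures below are illustrative assumptions:

```python
def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    """Risk leverage = (exposure before - exposure after) / cost of reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# Illustrative figures: cutting exposure from 50,000 to 20,000 at a cost
# of 10,000 yields a leverage of 3.0; a value below 1.0 would mean the
# reduction costs more than it saves.
print(risk_leverage(50_000, 20_000, 10_000))  # 3.0
```

Comparing leverage values across risks tells the project manager which reductions give the most benefit per unit of cost.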

1. Risk planning: The risk planning process considers each of the key risks that have been
identified and develops ways to manage these risks.

For each of the risks, you have to think of the actions that you may take to minimize the
disruption to the plan if the issue identified in the risk occurs.

You should also think about the data that you might need to collect while monitoring the plan so
that issues can be anticipated.

Again, there is no easy process that can be followed for contingency planning. It relies on the
judgment and experience of the project manager.

2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the
product, process, and business risks have not changed.

3) How complex requirements are representing using decision tables and decision trees? U2

 Decision tables and trees are useful tools for documenting and analyzing functional
requirements that involve complex or conflicting rules.
 They help you to visualize the logic, identify gaps or errors, and communicate the
requirements clearly and consistently.
 The following sections explain how to handle some common challenges when creating and
using decision tables and trees.
What are decision tables and trees?

 Decision tables and trees are graphical representations of the conditions and actions that
make up a business rule or a use case scenario.

 A decision table consists of rows and columns that show the combinations of conditions
and the corresponding actions.

 A decision tree is a diagram that shows the branching paths of
conditions and actions. Both tools can help you to simplify and organize the
requirements, as well as to test and verify them.

How to create decision tables and trees?

 To create a decision table or a tree, you need to identify input conditions, output
actions, and rules or scenarios.

 Input conditions can be data, events, user inputs, or other factors that affect the
outcome of the rule or scenario. Output actions are the results or effects of the rule or
scenario and can include tasks, messages, calculations, or other actions.

 Rules or scenarios are combinations of conditions and actions that define the logic and
behavior of the system or process. To list all possible combinations of conditions and
actions, you can use a matrix or table and assign a rule or scenario number to each row.

 Alternatively, you can use a tree diagram to show the hierarchy and sequence of
conditions and actions with each node labeled with a rule or scenario number.
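The matrix of condition/action combinations described above can be generated programmatically. This sketch uses a hypothetical free-shipping rule (membership OR order over $50), not a rule from the text:

```python
from itertools import product

# Hypothetical business rule (not from the text): free shipping is offered
# when the customer is a member OR the order total exceeds $50.
CONDITIONS = ["is_member", "order_over_50"]

def build_decision_table():
    """Enumerate every combination of condition values with its action."""
    table = []
    combos = product([True, False], repeat=len(CONDITIONS))
    for rule_no, (is_member, order_over_50) in enumerate(combos, start=1):
        table.append({
            "rule": rule_no,
            "is_member": is_member,
            "order_over_50": order_over_50,
            "free_shipping": is_member or order_over_50,  # the action
        })
    return table

for row in build_decision_table():
    print(row)
```

With n binary conditions the table has 2^n rule columns, which is exactly why the enumeration makes gaps or missing combinations easy to spot.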

How to handle complex rules or scenarios?

 Sometimes, you may encounter rules or scenarios that are too complex or ambiguous to
fit into a single row or node of a decision table or tree.
 For example, you might have multiple actions for the same condition, or multiple
conditions for the same action, or nested conditions that depend on each other.
 To handle these cases, you can use techniques like splitting the rule or scenario into sub-
rules or sub-scenarios and creating separate decision tables or trees.
 Additionally, you can use extended entries or connectors to indicate that there are more
conditions or actions that are not shown in the table or tree.
 Conditional expressions and operators such as AND, OR, NOT, IF, THEN, and ELSE can
also be used to combine or modify conditions and actions.
 Finally, variables and parameters can be used to represent values and states of
conditions and actions; these should be defined clearly in the table or tree.

How to handle conflicting rules or scenarios?

 When using decision tables or trees, you may come across conflicting rules or scenarios.
 These involve different rules or scenarios that have the same or overlapping conditions,
but different or contradictory actions.
 For instance, a rule may state that a 10% discount should be given to VIP customers,
while another rule may say that a 15% discount should be given to VIP customers who
have orders over $1000.
 To resolve these cases, you can prioritize the rules or scenarios based on their
importance, urgency, frequency, or specificity and apply them in that order.
Additionally, exceptions or exclusions can be used to specify the conditions or actions
that override or cancel out the other rules or scenarios.
 Default or fallback actions can also be employed to handle cases where none of the
rules or scenarios match. Lastly, feedback or confirmation can be requested from the
user or system to choose or verify the actions in case of conflict.
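The prioritization strategy above can be sketched in code, reusing the VIP-discount example. The rule representation and the `discount_for` helper are illustrative assumptions:

```python
# Rules from the VIP-discount example: (specificity, condition, discount %).
# Higher specificity wins when several conditions match.
RULES = [
    (2, lambda c: c["vip"] and c["order_total"] > 1000, 15),  # more specific
    (1, lambda c: c["vip"], 10),                              # less specific
]

def discount_for(customer, default=0):
    """Apply rules from most to least specific; fall back to a default."""
    for _, condition, pct in sorted(RULES, key=lambda r: r[0], reverse=True):
        if condition(customer):
            return pct
    return default  # default/fallback action when no rule matches

print(discount_for({"vip": True, "order_total": 1500}))  # 15
print(discount_for({"vip": True, "order_total": 200}))   # 10
print(discount_for({"vip": False, "order_total": 50}))   # 0
```

Ordering by specificity resolves the overlap deterministically, and the `default` argument implements the fallback action mentioned above.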

How to test and validate decision tables and trees?

 Once you have created your decision tables or trees, it is essential to test and validate them to
guarantee they are complete, correct, consistent, and clear.
 You can review the tables or trees with stakeholders, users, developers, testers, and other
relevant parties to get their feedback and approval.
 Additionally, it is important to check for any errors, gaps, redundancies, ambiguities, or
contradictions and revise them accordingly.
 You can also use test cases or scenarios based on the rules or scenarios in the tables or trees to
compare the expected and actual outcomes. Additionally, you can use tools or software that can
generate, execute, or automate the tests based on the tables or trees.

4) Explain about Software Requirement Specifications (SRS) Document.U2

 The output of the requirements phase of the software development process
is the Software Requirements Specification (SRS) (also called a requirements document).

 This report lays a foundation for software engineering activities and is constructed
once the entire set of requirements has been elicited and analyzed. The SRS is a formal report that acts
as a representation of the software, enabling customers to review whether it meets
their requirements.

 Also, it comprises the user requirements for a system as well as detailed specifications of
the system requirements.

Following are the features of a good SRS document:

1. Correctness: User review is used to ensure the accuracy of requirements stated in the SRS.
The SRS is said to be correct if it covers all the needs that are truly expected from the system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:

(1). All essential requirements, whether relating to functionality, performance, design
constraints, attributes, or external interfaces.

(2). Definition of the responses of the software to all realizable classes of input data in all
available categories of situations.

(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all
terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflict. There are three types of possible conflict in the SRS:

(1). The specified characteristics of real-world objects may conflict. For example,

(a) The format of an output report may be described in one requirement as tabular but in
another as textual.

(b) One condition may state that all lights shall be green while another states that all lights shall
be blue.

(2). There may be a logical or temporal conflict between two specified actions. For
example,

(a) One requirement may determine that the program will add two inputs, and another may
determine that the program will multiply them.

(b) One condition may state that "A" must always follow "B," while another requires that "A"
and "B" occur simultaneously.

(3). Two or more requirements may define the same real-world object but use different terms
for that object. For example, a program's request for user input may be called a "prompt" in
one requirement and a "cue" in another. The use of standard terminology and descriptions
promotes consistency.

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. In case a
term is used with multiple definitions, the requirements report should clarify the
intended meaning so that the SRS is clear and simple to understand.

5. Ranking for importance and stability: The SRS is ranked for importance and stability if each
requirement in it has an identifier to indicate either the significance or stability of that
particular requirement.

Typically, all requirements are not equally important. Some prerequisites may be essential,
especially for life-critical applications, while others may be desirable. Each element should be
identified to make these differences clear and explicit. Another way to rank requirements is to
distinguish classes of items as essential, conditional, and optional.

6. Modifiability: The SRS should be made as modifiable as possible and should be capable of quickly
incorporating changes to the system to some extent. Modifications should be properly indexed and
cross-referenced.

7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a cost-
effective process to determine whether the final software meets those requirements. The
requirements are verified with the help of reviews.

8. Traceability: The SRS is traceable if the origin of each of its requirements is clear and if it
facilitates the referencing of each requirement in future development or enhancement
documentation.

There are two types of Traceability:

1. Backward Traceability: This depends upon each requirement explicitly referencing its source
in earlier documents.

2. Forward Traceability: This depends upon each element in the SRS having a unique name or
reference number.

The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As code and design documents are modified, it is necessary to
be able to ascertain the complete set of requirements that may be affected by those
modifications.

9. Design Independence: There should be an option to select from multiple design alternatives
for the final system. More specifically, the SRS should not contain any implementation details.
10. Testability: An SRS should be written in such a way that it is simple to generate test
cases and test plans from the report.

11. Understandable by the customer: An end user may be an expert in his/her explicit domain
but might not be trained in computer science. Hence, the use of formal notations and
symbols should be avoided as much as possible. The language should be kept simple
and clear.

12. The right level of abstraction: If the SRS is written for the requirements stage, the details
should be explained explicitly, whereas for a feasibility study, less detail is needed.
Hence, the level of abstraction varies according to the objective of the SRS.

Properties of a good SRS document

The essential properties of a good SRS document are the following:

Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and
complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.

Structured: It should be well-structured. A well-structured document is simple to understand
and modify. In practice, the SRS document undergoes several revisions to cope with user
requirements, which often evolve over a period of time. Therefore, to make
modifications to the SRS document easy, it is vital to make the report well-structured.

Black-box view: It should only define what the system should do and refrain from stating how
to do it. This means that the SRS document should define the external behavior of the
system and not discuss the implementation issues. The SRS report should view the system to be
developed as a black box and should define the externally visible behavior of the system. For
this reason, the SRS report is also known as the black-box specification of a system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it. Response to undesired events: It should characterize acceptable responses to
undesired events. These are called system responses to exceptional conditions.

Verifiable: All requirements of the system, as documented in the SRS document, should be
verifiable. This means that it should be possible to determine whether or not the requirements have been
met in an implementation.

5) What is Dataflow Diagram? Explain Different Level of DFD? U3

 DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a process is
represented by DFD.
 It also gives insight into the inputs and outputs of each entity and the process itself. DFD does
not have control flow and no loops or decision rules are present. Specific operations
depending on the type of data can be explained by a flowchart.

 It is a graphical tool, useful for communicating with users, managers, and other personnel. It is
useful for analyzing existing as well as proposed systems.
It provides an overview of 
 What data the system processes.
 What transformations are performed.
 What data is stored.
 What results are produced, etc.
 Data Flow Diagram can be represented in several ways. The DFD belongs to structured-
analysis modeling tools. Data Flow diagrams are very popular because they help us to visualize
the major steps and data involved in software-system processes. 
Components of DFD
The Data Flow Diagram has 4 components:
 Process Input-to-output transformation in a system takes place because of the process
function. The symbol of a process is a rectangle with rounded corners, an oval, a rectangle,
or a circle. The process is named in a short sentence, one word, or a phrase to express its
essence.
 Data Flow Data flow describes the information transferring between different parts of the
systems. The arrow symbol is the symbol of data flow. A relatable name should be given
to the flow to determine the information which is being moved. Data flow also represents
material along with information that is being moved. Material shifts are modeled in
systems that are not merely informative. A given flow should only transfer a single type of
information. The direction of flow is represented by the arrow which can also be bi-
directional.
 Warehouse The data is stored in the warehouse for later use. Two horizontal lines
represent the symbol of the store. The warehouse is not restricted to being merely a data
file; it can be anything, such as a folder with documents, an optical disc, or a filing cabinet.
The data warehouse can be viewed independently of its implementation. When data
flows from the warehouse it is considered data reading, and when data flows to the
warehouse it is called data entry or data updating.
 Terminator The Terminator is an external entity that stands outside of the system and
communicates with the system. It can be, for example, organizations like banks, groups of
people like customers or different departments of the same organization, which is not a
part of the model system and is an external entity. Modeled systems also communicate
with terminator.
Rules for creating DFD
 The name of the entity should be easy and understandable without any extra
assistance (like comments).
 The processes should be numbered or put in ordered list to be referred easily.
 The DFD should maintain consistency across all the DFD levels.
 A single DFD can have a maximum of nine processes and a minimum of three
processes.
Symbols Used in DFD
 Square Box: A square box defines a source or destination of data in the system. It is also
called an entity. It is represented by a rectangle.
 Arrow or Line: An arrow identifies the data flow, i.e., it represents the data
that is in motion.
 Circle or bubble chart: It represents a process that gives us information. It is
also called a processing box.
 Open Rectangle: An open rectangle is a data store. In this, data is stored either
temporarily or permanently.
Levels of DFD
DFD uses hierarchy to maintain transparency; thus, multilevel DFDs can be created. Levels of
DFD are as follows:
 0-level DFD: It represents the entire system as a single bubble and provides an
overall picture of the system.
 1-level DFD: It represents the main functions of the system and how they interact
with each other. 
 2-level DFD: It represents the processes within each function of the system and
how they interact with each other.
 3-level DFD: It represents the data flow within each process and how the data is
transformed and stored.
Advantages of DFD
 It helps us to understand the functioning and the limits of a system.
 It is a graphical representation which is very easy to understand as it helps
visualize contents.
 Data flow diagrams represent a detailed and well-explained diagram of system
components.
 It is used as part of the system documentation file.
 Data flow diagrams can be understood by both technical and non-technical persons
because they are very easy to understand.
Disadvantages of DFD
 At times DFD can confuse the programmers regarding the system.
 Data flow diagrams take a long time to be generated, and many times, for this
reason, analysts are denied permission to work on them.

6) Explain the components of GUI development in software engineering? U3


 The user interface is the most common form of front-end application view and directs human-
computer interaction, through which the user can manipulate and control software as well as
hardware.

 It can include all the methods and devices that are used to accommodate interaction between
machines and the user.

 A user interface can take many forms, but always accomplishes two fundamental
tasks: 

1. Communicating information from the machine to the user.
2. Communicating information from the user to the machine.

User Interface Design

Important qualities of User Interface Design are following: 


 
1. Simplicity: 
 User Interface design should be simple.
 Fewer mouse clicks and keystrokes should be required to accomplish a
task.
 It is important that new features are added only if there is a compelling need for
them and they add significant value to the application.
2. Consistency : 
 The user interface should be consistent.
 Consistency also prevents online designers’ information chaos,
ambiguity and instability.
 We should apply typeface, style, and size conventions in a consistent
manner to all screen components; this will aid screen learning and
improve screen readability. We can also provide permanent objects as
unchanging reference points around which the user can navigate.
3. Intuitiveness: 
 The most important quality of good user interface design is intuitiveness.
 Intuitive user interface design is one that is easy to learn so that user
can pick it up quickly and easily.
 Icons and labels should be concise and cogent. A clear unambiguous
icon can help to make user interface intuitive and a good practice is
making labels conform to the terminology that the application supports.

4. Prevention: 
 A good user interface design should prevent users from performing an
inappropriate task; this is accomplished by disabling or "graying
out" certain elements under certain conditions.
5. Forgiveness: 
 This quality encourages users to use the software to its full extent.
 Designers should provide users with a way out when users find
themselves somewhere they should not go.
6. Graphical User Interface Design: 
 A graphic user interface design provides screen displays that create an
operating environment for the user and form an explicit visual and
functional context for user’s actions.
 It includes standard objects like buttons, icons, text fields, windows,
images, and pull-down and pop-up menus.
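The "prevention" quality above can be illustrated with a small sketch. This is a hypothetical example in Python, not taken from any particular toolkit: the function decides whether a Submit control should be enabled, and the UI layer would gray the control out whenever it returns False.

```python
# Hypothetical sketch: a Submit button stays disabled ("grayed out")
# until every required form field contains text.

def submit_enabled(form):
    """Return True only when all required fields are filled in."""
    required = ("username", "password")
    return all(form.get(field, "").strip() for field in required)

# The UI layer would call this on every change event:
print(submit_enabled({"username": "alice", "password": ""}))    # False
print(submit_enabled({"username": "alice", "password": "x1"}))  # True
```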

7) a) Explain Coding standards and guidelines


1. Limited use of globals:
These rules specify which kinds of data may be declared global and which may
not.

2. Standard headers for different modules:


For better understanding and maintenance of the code, the headers of different
modules should follow a standard format. The header format used in many
companies contains the following:
 Name of the module
 Date of module creation
 Author of the module
 Modification history
 Synopsis of the module about what the module does
 Different functions supported in the module along with their input output
parameters
 Global variables accessed or modified by the module
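As a sketch, such a standard header can be kept at the top of each module; shown here as a Python module docstring, with a hypothetical module name, author and dates:

```python
# Hypothetical module header following the standard fields listed above.

MODULE_HEADER = """
Module    : payroll_report
Created   : 2023-01-15
Author    : A. Developer
History   : 2023-02-01 corrected rounding of totals
Synopsis  : Generates the monthly payroll summary report.
Functions : build_report(records) -> str
Globals   : REPORT_CURRENCY (read only)
"""

# A simple review check can confirm every required field is present:
REQUIRED_FIELDS = ("Module", "Created", "Author", "History", "Synopsis")
missing = [f for f in REQUIRED_FIELDS if f not in MODULE_HEADER]
print(missing)  # []
```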

3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
 Meaningful, understandable variable names help anyone understand
why a variable is used.
 Local variables should be named in camel case starting with a
small letter (e.g. localData), whereas global variable names should start
with a capital letter (e.g. GlobalData). Constant names should be formed
using capital letters only (e.g. CONSDATA).
 It is better to avoid digits in variable names.
 Function names should be written in camel case starting with a
small letter.
 A function's name must describe the purpose of the function
clearly and briefly.
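The naming rules above can be illustrated in Python. Note that these follow the conventions described in the text (camel-case locals, capitalized globals, all-caps constants); Python's own PEP 8 style differs, so treat this purely as an illustration of the rules:

```python
MAXRETRY = 3        # constant: capital letters only

GlobalCounter = 0   # global: name starts with a capital letter

def computeMonthlyTotal(amounts):
    """Function name in camel case; it states what the function does."""
    runningTotal = 0                 # local: camel case, small first letter
    for singleAmount in amounts:
        runningTotal += singleAmount
    return runningTotal

print(computeMonthlyTotal([10, 20, 30]))  # 60
```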

4. Indentation:
Proper indentation is very important to increase the readability of the code. To
make the code readable, programmers should use white space properly. Some of
the spacing conventions are given below:
 There must be a space after each comma separating function
arguments.
 Each nested block should be properly indented and spaced.
 Proper indentation should appear at the beginning and at the end of
each block in the program.
 All braces should start on a new line, and the code following a
closing brace should also start on a new line.
5. Error return values and exception handling conventions:
Every function that can encounter an error condition should return 0 or 1 to
simplify debugging.
On the other hand, coding guidelines give general suggestions about the
coding style to be followed to improve the understandability and
readability of the code. Some of the coding guidelines are given below:

6. Avoid using a coding style that is too difficult to understand:


Code should be easily understandable. Complex code makes maintenance and
debugging difficult and expensive.

7. Avoid using an identifier for multiple purposes:


Each variable should be given a descriptive and meaningful name indicating the
reason for using it. This is not possible if an identifier is used for multiple
purposes, which leads to confusion for the reader and to greater difficulty
during future enhancements.

8. Code should be well documented:


The code should be properly commented so that it can be understood easily.
Comments on the statements increase the understandability of the code.

9. Length of functions should not be very large:


Lengthy functions are very difficult to understand. Functions should therefore
be small enough to carry out a single small task, and lengthy functions should
be broken into smaller ones.

10. Try not to use GOTO statement:


The GOTO statement makes a program unstructured, which reduces the
understandability of the program and also makes debugging difficult.
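Python, the language used for the sketches in these notes, has no GOTO statement at all; the structured alternative to a goto-style jump is an explicit loop with return/break. A hypothetical example:

```python
def first_valid(values, attempts=3):
    """Return the first non-negative value among the first `attempts`
    entries, or None if there is none."""
    for value in values[:attempts]:
        if value >= 0:
            return value   # structured exit instead of 'goto done'
    return None            # instead of 'goto error'

print(first_valid([-1, -5, 7]))   # 7
print(first_valid([-1, -2, -3]))  # None
```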

b) Explain code review and software documentation. U4

Software documentation is a written piece of text that often accompanies a software
program. It makes life easier for all the members associated with the project. It may
contain anything from API documentation and build notes to help content. Documentation
is a very critical process in software development and an integral part of any software
development method.
Types Of Software Documentation :
1. Requirement Documentation: A description of what the software shall do and
which environment setup is appropriate to get the best out of it. These
documents are generated while the software is under development and are also
supplied to the test groups.
2. Architectural Documentation: Architecture documentation is a special type of
documentation that concerns the design. It contains very little code and is more
focused on the components of the system, their roles, and working. It also shows
the data flow throughout the system.
3. Technical Documentation: These contain the technical aspects of the software like
APIs, algorithms, etc. It is prepared mostly for software developers.
4. End-user Documentation: As the name suggests these are made for the end user.
It contains support resources for the end user.
Purpose of Documentation:
Because software requirements keep growing in importance, the process of determining
them has to be effective in order to achieve the desired results. Requirements are
usually determined under certain regulations and guidelines that are central to
reaching a given goal. Software requirements are expected to change owing to
ever-changing technology, and because the knowledge gained during development
changes both user needs and the environment, such change is inevitable.
Furthermore, documented requirements support verification and testing, along with
prototyping, meetings, focus groups and observations.
For a software engineer, reliable documentation is a must. Its presence helps keep
track of all aspects of an application and improves the quality of the product. Its
main focus areas are development, maintenance, and knowledge transfer to other
developers. Successful documentation makes information easily accessible, provides
a limited number of user entry points, helps new users learn quickly, simplifies
the product and helps cut down cost.
Importance of software documentation : 
For a programmer, reliable documentation is always a must; it keeps track of all
aspects of an application and helps in keeping the software updated.
Advantages of software documentation : 
 The presence of documentation helps in keeping the track of all aspects of an
application and also improves the quality of the software product.
 The main focus is based on the development, maintenance, and knowledge
transfer to other developers.
 Helps development teams during development.
 Helps end-users in using the product.
 Improves overall quality of software product
 It cuts down duplicative work.
 Makes the code easier to understand.
 Helps in establishing internal coordination in work.
Disadvantages of software documentation :
 Documenting code is time-consuming.
 The software development process often takes place under time pressure, so
the documentation updates often do not match the updated code.
 The documentation has no influence on the performance of an application.
 Documenting is not much fun and can be boring to a certain extent.

8) Explain about Debugging.U4

Debugging is the process of identifying and resolving errors, or bugs, in a software system. It
is an important aspect of software engineering because bugs can cause a software system to
malfunction, and can lead to poor performance or incorrect results. Debugging can be a time-
consuming and complex task, but it is essential for ensuring that a software system is
functioning correctly.
There are several common methods and techniques used in debugging, including:
1. Code Inspection: This involves manually reviewing the source code of a software
system to identify potential bugs or errors.
2. Debugging Tools: There are various tools available for debugging such as
debuggers, trace tools, and profilers that can be used to identify and resolve bugs.
3. Unit Testing: This involves testing individual units or components of a software
system to identify bugs or errors.
4. Integration Testing: This involves testing the interactions between different
components of a software system to identify bugs or errors.
5. System Testing: This involves testing the entire software system to identify bugs or
errors.
6. Monitoring: This involves monitoring a software system for unusual behavior or
performance issues that can indicate the presence of bugs or errors.
7. Logging: This involves recording events and messages related to the software
system, which can be used to identify bugs or errors.
Debugging Process: Steps involved in debugging are:
 Problem identification and report preparation.
 Assigning the report to a software engineer to verify that the defect is genuine.
 Defect Analysis using modeling, documentation, finding and testing candidate
flaws, etc.
 Defect Resolution by making required changes to the system.
 Validation of corrections.
Debugging Approaches/Strategies: 
1. Brute Force: Study the system for a longer duration in order to understand it.
This helps the debugger construct different representations of the system to
be debugged, depending on the need. The system is also actively studied to
find recent changes made to the software.
2. Backtracking: Backward analysis of the problem which involves tracing the
program backward from the location of the failure message in order to identify the
region of faulty code. A detailed study of the region is conducted to find the cause
of defects.
3. Forward analysis of the program involves tracing the program forwards using
breakpoints or print statements at different points in the program and studying
the results. The region where the wrong outputs are obtained is the region that
needs to be focused on to find the defect.
4. Using past experience: Debug the software using past experience with problems
of a similar nature. The success of this approach depends on the expertise of the debugger.
5. Cause elimination: Introduces the concept of binary partitioning. Data related to
the error occurrence are organized to isolate potential causes.
6. Static analysis: Analyzing the code without executing it to identify potential bugs
or errors. This approach involves analyzing code syntax, data flow, and control
flow.
7. Dynamic analysis: Executing the code and analyzing its behavior at runtime to
identify errors or bugs. This approach involves techniques like runtime debugging
and profiling.
8. Collaborative debugging: Involves multiple developers working together to debug
a system. This approach is helpful in situations where multiple modules or
components are involved, and the root cause of the error is not clear.
9. Logging and Tracing: Using logging and tracing tools to identify the sequence of
events leading up to the error. This approach involves collecting and analyzing logs
and traces generated by the system during its execution.
10. Automated Debugging: The use of automated tools and techniques to assist in the
debugging process. These tools can include static and dynamic analysis tools, as
well as tools that use machine learning and artificial intelligence to identify errors
and suggest fixes.
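Strategy 3 (forward analysis) can be sketched with a toy example: checkpoints (print or log statements) are placed at different points in the program, and the defect lies in the region where the output first goes wrong. The buggy function here is invented purely for illustration:

```python
def buggy_average(numbers):
    total = 0
    for n in numbers:
        total += n
    print(f"checkpoint A: total={total}")   # value is correct here
    avg = total / (len(numbers) - 1)        # bug: off-by-one divisor
    print(f"checkpoint B: avg={avg}")       # value is wrong here
    return avg                              # => defect lies between A and B

buggy_average([2, 4, 6])  # checkpoint B shows 6.0 instead of the expected 4.0
```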
Debugging Tools: 
A debugging tool is a computer program used to test and debug other programs. A lot of
public-domain software like gdb and dbx is available for debugging; these offer console-
based command-line interfaces. Examples of automated debugging tools include code-based
tracers, profilers, interpreters, etc. Some of the widely used debuggers are:
 Radare2
 WinDbg
 Valgrind

Advantages of Debugging:

Debugging has several advantages in software engineering:

1. Improved system quality


2. Reduced system downtime
3. Increased user satisfaction
4. Reduced development costs
5. Increased security
6. Facilitates change
7. Better understanding of the system
8. Facilitates testing

Disadvantages of Debugging:

1. Time-consuming
2. Requires specialized skills
3. Can be difficult to reproduce
4. Can be difficult to diagnose
5. Can be difficult to fix
6. Limited insight
7. Can be expensive

9) Explain CASE environment c) CASE support in software life cycle U5


 A CASE (Computer-Aided Software Engineering) tool is a generic term used to indicate
any form of automated support for software engineering.
 In a more restrictive sense, a CASE tool means any tool used to automate some activity
related to software development. Many CASE tools are available to make software engineering
development easy and efficient.
 Some of these CASE tools support phase-related tasks such as specification,
structured analysis, design, testing, coding and feedback, while others support non-phase
activities such as project management and configuration management.

Why do we need CASE tools?

The major objectives of using CASE tools are:

1. To increase efficiency and productivity
2. To make software cost-efficient
3. To produce good-quality software

CASE Tools: CASE Environment


Benefits of using CASE tools

There are several benefits of using Case tools and working with the case environment,

1. A key advantage of using a CASE environment is cost saving through all software
development phases. Various studies carried out to quantify the impact of CASE put
the cost reduction between 30% and 40%.
2. The use of CASE tools leads to considerable improvements in quality. This is mainly
because one can easily iterate through the various phases of software development,
and the chances of human error are considerably reduced.
3. CASE tools help produce high-quality and consistent documents. Since the significant
information relating to a software product is maintained in a central repository,
redundancy in the stored data is reduced, and therefore the chances of inconsistent
documentation are diminished.
4. The introduction of a CASE environment affects the working style of an organization
and orients it towards a structured and methodical approach.
5. CASE tools have led to major cost savings in software maintenance efforts. This
arises not only from the great value of a CASE environment in error traceability and
consistency checks, but also from the systematic information capture during the various
phases of software development that results from adhering to a CASE environment.

10) Explain Basic issues in any reuse program, Reuse approach, Reuse at organization level. U5

Advantages of software reuse

1. Software products are costly.
2. Software project managers are worried about expensive software development and are
desperately looking for ways to cut development cost:
a. A possible way to reduce development cost is to reuse parts from previously
developed software.
b. Besides reducing development cost and time, reuse also leads to higher
quality of the developed products.

What can be reused?

It is useful to know which artifacts associated with software development can be reused.
Almost all artifacts, including the project plan and test plan, can be reused. However,
the items that can be most effectively reused are:

 Requirements specification
 Design
 Code
 Test cases
 Knowledge

Basic issues in any reuse program

The following are some of the basic issues that must be addressed before starting any reuse program:

1. Component creation
2. Component indexing and storing
3. Component search
4. Component understanding
5. Component adaptation
6. Repository maintenance

1) Component creation

Reusable components have to be identified first. Selecting the right kinds of components,
those with genuine reuse potential, is essential.

2) Component indexing and storing

Indexing requires classifying the reusable components so that they can be found easily when
searching for a component to reuse. The components need to be stored in a Relational Database
Management System (RDBMS) or an Object-Oriented Database System (ODBMS) for efficient access
when the number of components becomes large.

3) Component searching

Programmers need to search a database of components for the components that match their
needs. To search efficiently, programmers require a precise way to describe the
components they are looking for.
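A minimal sketch of indexing and keyword search follows; the repository contents and classification keywords are hypothetical, and a real system would use an RDBMS or ODBMS as noted above:

```python
# Hypothetical component repository: each entry is indexed by keywords.
REPOSITORY = [
    {"name": "DateParser",  "keywords": {"date", "parsing", "utility"}},
    {"name": "CsvReader",   "keywords": {"file", "parsing", "io"}},
    {"name": "RetryPolicy", "keywords": {"network", "utility"}},
]

def search_components(*wanted):
    """Return names of components indexed under every given keyword."""
    wanted_set = set(wanted)
    return [c["name"] for c in REPOSITORY if wanted_set <= c["keywords"]]

print(search_components("parsing"))          # ['DateParser', 'CsvReader']
print(search_components("parsing", "date"))  # ['DateParser']
```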
4) Component understanding

Programmers need a precise and sufficiently complete understanding of what a component does
in order to decide whether they can reuse it. For this, components should be well documented,
and their code should do something simple.

5) Component adaptation

Before they can be reused, components may need adaptation, since a selected component may not
exactly fit the problem at hand. However, tinkering with the code is not a good solution
either, because it is very likely to introduce faults.

6) Repository maintenance

A repository, once created, requires continual maintenance. New components, as and when
created, have to be entered into the repository. Faulty components have to be tracked.

Further, as new applications are developed, older components may become obsolete; the
obsolete components might then have to be deleted from the repository.

Note 1: Read all 15 - 2 Mark questions.


Note 2: First prepare top 5 – 10 Marks questions completely after that prepare top 10 - 10 Marks
questions then finally focus on top 15 - 10 Marks questions. Later read other questions from the
subject.
Note 3: First write the answers which you know well. (Prefer the questions with
diagrams/derivations)
Note 4: If you don’t know any answer in the question paper, write a related answer which you
know from the same unit (only start this after completing all the known answers from the
question paper).
Note 5: Use decent pen, pencil, and stick pens for the exams. (Presentation is very important)
Note 6: Try to have some pencil work/equation/formula per answer.
Note 7: If you make a mistake, don't strike out the answer randomly; just put a single line through it.
Note 7: Don’t stop writing till you fill the booklet completely. Stay in the exam hall until 3hrs
completed.
****************************** ALL THE BEST *****************************
