Department of CSE
20A05403T-Software Engineering
TOP PRIORITY QUESTIONS-2023
Software myths are preconceived notions about software and its creation that people hold
to be true but are in fact untrue.
Professionals in Software Engineering have now identified the software myths that have
persisted throughout the years.
These fallacies are commonly believed by managers and software developers. However, it
might be challenging to change old behaviors.
Types of Software Myths
1) Management Myths
2) Customer Myths
3) Practitioner’s Myths
Level of Testing
11) Difference between Black Box Testing and White Box Testing? U4
Black Box Testing:
o No knowledge of programming is mandatory.
o It is the behavior testing of the software.
o This testing is applicable to the higher levels of software testing.
White Box Testing:
o Knowledge of programming is mandatory.
o It is the logic testing of the software.
o This testing is applicable to the lower levels of software testing.
Debugging is the process of finding and fixing errors or bugs in the source code of any
software.
When software does not work as expected, computer programmers study the code to
determine why any errors occurred.
They use debugging tools to run the software in a controlled environment, check the code
step by step, and analyze and fix the issue.
13) What is the Difference Between Quality Assurance and Quality Control? U5
Quality Assurance (QA): It is responsible for the entire software development life cycle.
Quality Control (QC): It is responsible for the software testing life cycle.
Six Sigma is the process of producing high and improved quality output.
This can be done in two phases – identification and elimination.
The cause of defects is identified and appropriate elimination is done, which reduces variation in
the whole process.
The purpose of reverse engineering is to facilitate the maintenance work by improving the
understandability of a system and producing the necessary documents for a legacy system.
Software Development Life Cycle (SDLC) is a conceptual model used in project management that
defines the stages included in an information system development project, from an initial
feasibility study to the maintenance of the completed application.
There are different software development life cycle models specified and designed, which are
followed during the software development phase. These models are also called "Software
Development Process Models." Each process model follows a series of phases unique to its type
to ensure success in the process of software development.
Waterfall Model
The waterfall is a universally accepted SDLC model. In this method, the whole process of software
development is divided into various phases.
The waterfall model is a continuous software development model in which development is seen as
flowing steadily downwards (like a waterfall) through the steps of requirements analysis, design,
implementation, testing (validation), integration, and maintenance.
Linear ordering of activities has some significant consequences. First, to identify the end of a phase and
the beginning of the next, some certification techniques have to be employed at the end of each step.
Some verification and validation at the end of each phase ensures that the output of the stage is
consistent with its input (which is the output of the previous step), and that the output of the stage is
consistent with the overall requirements of the system.
RAD Model
RAD or Rapid Application Development is an adaptation of the waterfall model; it targets
developing software in a short period. The RAD model is based on the concept that a better system can
be developed in lesser time by using focus groups to gather system requirements.
o Business Modeling
o Data Modeling
o Process Modeling
o Application Generation
o Testing and Turnover
Spiral Model
The spiral model is a risk-driven process model. This SDLC model helps the group to adopt elements of
one or more process models like waterfall, incremental, etc. The spiral technique is a
combination of rapid prototyping and concurrency in design and development activities.
Each cycle in the spiral begins with the identification of objectives for that cycle, the different
alternatives that are possible for achieving the goals, and the constraints that exist. This is the first
quadrant of the cycle (upper-left quadrant).
The next step in the cycle is to evaluate these different alternatives based on the objectives and
constraints. The focus of evaluation in this step is based on the risk perception for the project.
The next step is to develop strategies that solve uncertainties and risks. This step may involve activities
such as benchmarking, simulation, and prototyping.
V-Model
In this type of SDLC model, the testing and development steps are planned in parallel. So, there are
verification phases on one side and validation phases on the other side, and the two sides are joined
by the coding phase.
Incremental Model
The incremental model is not a separate model. It is essentially a series of waterfall cycles. The
requirements are divided into groups at the start of the project. For each group, the SDLC model is
followed to develop software. The SDLC process is repeated, with each release adding more
functionality until all requirements are met. In this method, each cycle acts as the maintenance phase
for the previous software release. Modification to the incremental model allows development cycles to
overlap, so that a subsequent cycle may begin before the previous cycle is complete.
Agile Model
Any agile software process is characterized in a manner that addresses several key assumptions
about the majority of software projects:
1. It is difficult to predict in advance which software requirements will persist and which will change.
It is equally difficult to predict how user priorities will change as the project proceeds.
2. For many types of software, design and development are interleaved. That is, both activities
should be performed in tandem so that design models are proven as they are created. It is
difficult to predict how much design is necessary before construction is used to prove the design.
3. Analysis, design, development, and testing are not as predictable (from a planning point of view)
as we might like.
Big Bang Model
The Big Bang model focuses all available resources on software development and coding, with
no or very little planning. The requirements are understood and implemented as they come.
This model works best for small projects with smaller development teams which are working
together. It is also useful for academic software development projects. It is an ideal model
where requirements are either unknown or a final release date is not given.
Prototype Model
The prototyping model starts with the requirements gathering. The developer and the user
meet and define the purpose of the software, identify the needs, etc.
A 'quick design' is then created. This design focuses on those aspects of the software that will be
visible to the user. It then leads to the development of a prototype. The customer then checks
the prototype, and any modifications or changes that are needed are made to the prototype.
Looping takes place in this step, and better versions of the prototype are created. These are
continuously shown to the user so that any new changes can be updated in the prototype. This
process continues until the customer is satisfied with the system. Once a user is satisfied, the
prototype is converted to the actual system with all considerations for quality and security.
2) Explain a) Characteristics of a Good SRS Document b) IEEE 830 guidelines of SRS Document.
U2
a) Characteristics of a Good SRS Document
1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is
said to be correct if it covers all the requirements that are actually expected from the
system.
2. Completeness:
Completeness of SRS indicates every sense of completion including the numbering of all
the pages, resolving the to be determined parts to as much extent as possible as well as
covering all the functional and non-functional requirements properly.
3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set
of requirements. Examples of conflict include differences in terminologies used at
separate places, logical conflicts like time period of report generation, etc.
4. Unambiguous:
An SRS is said to be unambiguous if all the requirements stated have only one
interpretation. Some of the ways to prevent ambiguity include the use of modelling
techniques like ER diagrams, proper reviews and buddy checks, etc.
5. Ranking for importance and stability:
There should be a criterion to classify the requirements as less or more important or more
specifically as desirable or essential. An identifier mark can be used with every
requirement to indicate its rank or stability.
6. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily accepting
changes to the system to some extent. Modifications should be properly indexed and
cross-referenced.
7. Verifiability:
A SRS is verifiable if there exists a specific technique to quantifiably measure the extent
to which every requirement is met by the system. For example, a requirement stating
that the system must be user-friendly is not verifiable and listing such requirements
should be avoided.
8. Traceability:
One should be able to trace a requirement to design component and then to code
segment in the program. Similarly, one should be able to trace a requirement to the
corresponding test cases.
9. Design Independence:
There should be an option to choose from multiple design alternatives for the final
system. More specifically, the SRS should not include any implementation details.
10. Test-ability:
A SRS should be written in such a way that it is easy to generate test cases and test
plans from the document.
11. Understandable by the customer:
An end user may be an expert in his/her specific domain but might not be an expert in
computer science. Hence, the use of formal notations and symbols should be avoided to
as much extent as possible. The language should be kept easy and clear.
12. Right level of abstraction:
If the SRS is written for the requirements phase, the details should be explained
explicitly. Whereas, for a feasibility study, fewer details can be used. Hence, the level of
abstraction varies according to the purpose of the SRS.
B) IEEE 830 guidelines of SRS Document.
1. Introduction
   1. Purpose
   2. Scope
   3. Definitions, Acronyms and Abbreviations
   4. References
   5. Overview
2. Overall Description
   1. Product Perspective
      System Interfaces
      User Interfaces
      Hardware Interfaces
      Software Interfaces
      Communication Interfaces
      Memory Constraints
      Operations
      Site Adaptation Requirements
   2. Product Functions
   3. User Characteristics
   4. Constraints
   5. Assumptions and Dependencies
   6. Apportioning of Requirements
3. Specific Requirements
   1. External Interfaces
   2. Functions
   3. Performance Requirements
   4. Logical Database Requirements
   5. Design Constraints
   6. Software System Attributes
   7. Organization of Specific Requirements
   8. Additional Comments
3) Explain- Software Design Process U3
Many principles are employed to organize, coordinate, classify, and set up software
design’s structural components.
Software Designs become some of the most convenient designs when the following
principles are applied. They help to generate remarkable User Experiences and
customer loyalty.
1. Modularity
2. Coupling
3. Abstraction
4. Anticipation of change
5. Simplicity
6. Sufficiency and completeness
Modularity
Dividing a large software project into smaller portions/modules is known as modularity. It
is the key to scalable and maintainable software design. The project is divided into various
components, and work is done on one component at a time. It becomes easy to test each
component due to modularity. It also makes integrating new features more accessible.
Coupling
Coupling refers to the extent of interdependence between software modules and how
closely two modules are connected. Low coupling is a feature of good design. With low
coupling, changes can be made in each module individually, without changing the other
modules.
Abstraction
The process of identifying the essential behavior by separating it from its implementation
and removing irrelevant details is known as Abstraction. The inability to separate essential
behavior from its implementation will lead to unnecessary coupling.
Anticipation of Change
The demands of software keep on changing, resulting in continuous changes in
requirements as well. Building a good software design consists of its ability to
accommodate and adjust to change comfortably.
Simplicity
The aim of good software design is simplicity. Each task has its own module, which can be
utilized and modified independently. It makes the code easy to use and minimizes the
number of setbacks.
Sufficiency and Completeness
A good software design ensures the sufficiency and completeness of the software
concerning the established requirements. It makes sure that the software has been
adequately and wholly built.
Stages of the Software Design Process
Interviews
Focus groups
Survey
Stage 3: Design
Wireframing
Creating user stories
Data flow diagrams
Technical Design
User Interface
Stage 4: Prototyping
Low Fidelity Prototyping
Medium Fidelity Prototyping
High Fidelity Prototyping
Stage 5: Evaluation
Many tools can be used for designing software. The top 6 most effective and commonly used tools are:-
1. Draw.io
2. Jira
3. Mockflow
4. Sketch
5. Marvel
6. Zeplin
In software testing, manual testing can be further classified into three different types of testing, which
are as follows:
In white-box testing, the developer will inspect every line of code before handing it over to the
testing team or the concerned test engineers.
Since the code is visible to the developers throughout testing, this process is known as WBT
(White Box Testing).
In other words, we can say that the developer will execute the complete white-box testing for the
particular software and send the specific application to the testing team.
The purpose of implementing the white box testing is to emphasize the flow of inputs and outputs
over the software and enhance the security of an application.
White box testing is also known as open box testing, glass box testing, structural testing, clear box
testing, and transparent box testing.
Another type of manual testing is black-box testing. In this testing, the test engineer will analyze
the software against requirements, identify the defects or bugs, and send them back to the
development team.
Then, the developers will fix those defects, do one round of White box testing, and send it to the
testing team.
Here, fixing the bugs means the defect is resolved, and the particular feature is working according
to the given requirement.
The main objective of implementing black box testing is to verify that the software satisfies the
business needs or the customer's requirements.
In other words, we can say that black box testing is a process of checking the functionality of an
application as per the customer requirement. The source code is not visible in this testing; that's
why it is known as black-box testing.
Types of Black Box Testing
Black box testing further categorizes into two parts, which are as discussed below:
o Functional Testing
o Non-functional Testing
Functional Testing
The testing in which the test engineer checks all the components systematically against requirement
specifications is known as functional testing. Functional testing is also known as Component testing.
In functional testing, all the components are tested by giving the value, defining the output, and
validating the actual output with the expected value.
Functional testing is a part of black-box testing as it emphasizes the application requirements rather
than the actual code. The test engineer has to test only the program's functionality, not the system internals.
Just as other types of testing are divided into several parts, functional testing is also classified into
various categories.
o Unit Testing
o Integration Testing
o System Testing
1. Unit Testing
Unit testing is the first level of functional testing used to test any software. In this, the test engineer
tests each module of an application independently; testing all the module functionality in isolation is
called unit testing.
The primary objective of executing unit testing is to confirm that each unit component performs as
expected. Here, a unit is defined as a single testable function of a software or an application. And it
is verified throughout the specified application development phase.
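A minimal sketch of a unit test, assuming a hypothetical calculate_discount function as the single testable unit (Python's standard unittest module is used here):

import unittest

def calculate_discount(amount, is_vip):
    # Hypothetical unit under test: a single testable function.
    return amount * 0.10 if is_vip else 0.0

class TestCalculateDiscount(unittest.TestCase):
    def test_vip_gets_ten_percent(self):
        self.assertEqual(calculate_discount(1000, True), 100.0)

    def test_non_vip_gets_no_discount(self):
        self.assertEqual(calculate_discount(1000, False), 0.0)

if __name__ == "__main__":
    unittest.main()

Each test exercises the unit in isolation, so a failing test points directly at the module under test.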
2. Integration Testing
Once we have successfully completed unit testing, we will go for integration testing. It is the second
level of functional testing, where we test the data flow between dependent modules or the interface
between two features.
The purpose of executing integration testing is to test the accuracy of the communication between the
modules.
o Incremental Testing
o Non-Incremental Testing
Whenever there is a clear relationship between modules, we go for incremental integration testing.
Suppose we take two modules and analyze the data flow between them to check whether they are working fine or not.
If these modules are working fine, then we can add one more module and test again. And we can
continue with the same process to get better results.
In other words, we can say that incrementally adding up the modules and testing the data flow between
the modules is known as incremental integration testing.
Incremental integration testing can further classify into two parts, which are as follows:
In the bottom-up approach, we will add the modules incrementally and check the data flow between
modules. And also, ensure that the module we are adding is the parent of the earlier ones.
Whenever the data flow is complex and it is very difficult to classify a parent and a child, we will go for the
non-incremental integration approach. The non-incremental method is also known as the Big Bang
method.
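As a hedged sketch of the incremental approach described above, two hypothetical modules (validate and save) are combined and the data flow between them is checked; once this pair works, a further module could be added and tested in the same way:

import unittest

def validate(record):
    # Hypothetical lower-level module.
    return bool(record.get("name"))

def save(record, store):
    # Hypothetical higher-level module that depends on validate().
    if not validate(record):
        raise ValueError("invalid record")
    store.append(record)
    return len(store)

class TestIncrementalIntegration(unittest.TestCase):
    def test_valid_record_flows_through_both_modules(self):
        store = []
        self.assertEqual(save({"name": "Ada"}, store), 1)

    def test_invalid_record_is_rejected_before_saving(self):
        with self.assertRaises(ValueError):
            save({}, [])

if __name__ == "__main__":
    unittest.main()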
3. System Testing
Whenever we are done with the unit and integration testing, we can proceed with the system testing.
In system testing, the test environment is parallel to the production environment. It is also known
as end-to-end testing.
In this type of testing, we go through each attribute of the software and test whether the end feature
works according to the business requirement, and we analyze the software product as a complete system.
Non-functional Testing
The next part of black-box testing is non-functional testing. It provides detailed information on software
product performance and the technologies used.
Non-functional testing will help us minimize the risk of production and related costs of the software.
1. Performance Testing
In performance testing, the test engineer will test the working of an application by applying some load.
In this type of non-functional testing, the test engineer will only focus on several aspects, such
as Response time, Load, scalability, and Stability of the software or an application.
Performance testing includes the various types of testing, which are as follows:
o Load Testing
o Stress Testing
o Scalability Testing
o Stability Testing
o Load Testing
While executing the performance testing, we will apply some load on the particular application to check
the application's performance, known as load testing. Here, the load could be less than or equal to the
desired load.
It will help us to detect the highest operating volume of the software and bottlenecks.
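As an illustrative sketch only (the simulated handler below stands in for a real call to the application under test), a load can be applied with concurrent workers and the throughput measured:

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Stand-in for one real request to the application under test.
    time.sleep(0.01)
    return i

def run_load(users=50, requests_per_user=10):
    total = users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(handle_request, range(total)))
    elapsed = time.perf_counter() - start
    print(f"{total} requests in {elapsed:.2f}s ({total / elapsed:.1f} req/s)")

if __name__ == "__main__":
    run_load()

Increasing the number of users until the request rate stops improving gives a rough indication of the highest operating volume and of bottlenecks.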
o Stress Testing
It is used to analyze the user-friendliness and robustness of the software beyond the common functional
limits.
Primarily, stress testing is used for critical software, but it can also be used for all types of software
applications.
o Scalability Testing
Analyzing the application's performance by increasing or reducing the load to particular limits is
known as scalability testing.
In scalability testing, we can also check the system, processes, or database's ability to meet an upward
need. And in this, the Test Cases are designed and implemented efficiently.
o Stability Testing
Stability testing is a procedure where we evaluate the application's performance by applying the load for
a precise time.
It mainly checks the stability problems of the application and the efficiency of a developed product. In
this type of testing, we can rapidly find the system's defect even in a stressful situation.
2. Usability Testing
Another type of non-functional testing is usability testing. In usability testing, we will analyze the user-
friendliness of an application and detect the bugs in the software's end-user interface.
Here, the term user-friendliness defines the following aspects of an application:
The application should be easy to understand, which means that all the features must be visible to
end-users.
The application's look and feel should be good, which means the application should be pleasant
looking and should make the end-user feel like using it.
3. Compatibility Testing
In compatibility testing, we will check the functionality of an application in specific hardware and
software environments. Once the application is functionally stable then only, we go
for compatibility testing.
Here, software means we can test the application on the different operating systems and other
browsers, and hardware means we can test the application on devices of different sizes.
Another part of manual testing is Grey box testing. It is a collaboration of black box and white box
testing.
Grey box testing includes access to internal coding for designing test cases. Grey box
testing is performed by a person who knows coding as well as testing.
In other words, we can say that if a single person performs both white box and black-box testing,
it is considered grey box testing.
Automation Testing
Whenever we are testing an application by using some tools, it is known as automation testing.
We will go for automation testing when various releases or several regression cycles go on the
application or software. We cannot write the test script or perform automation testing without
understanding a programming language.
Some other types of Software Testing
In software testing, we also have some other types of testing that are not part of any above discussed
testing, but those testing are required while testing any software or an application.
o Smoke Testing
o Sanity Testing
o Regression Testing
o User Acceptance Testing
o Exploratory Testing
o Adhoc Testing
o Security Testing
o Globalization Testing
Regression Testing
Regression testing is the most commonly used type of software testing. Here, the
term regression implies that we have to re-test those parts of the application that are otherwise unaffected by a change.
Regression testing is the most suitable testing for automation tools. As per the project type and
accessibility of resources, regression testing can be similar to Retesting.
Whenever a bug is fixed by the developers, testing the other features of the application that might be
affected because of the bug fix is known as regression testing.
In other words, we can say that whenever there is a new release for some project, then we can
perform Regression Testing, because a new feature may affect the old features of the earlier
releases.
User acceptance testing (UAT) is done by an individual team known as the domain
expert/customer or the client. Getting to know the application before accepting the final product is
called user acceptance testing.
In user acceptance testing, we analyze the business scenarios and real-time scenarios on a
distinct environment called the UAT environment. In this testing, the application is tested in the UAT
environment before customer approval.
5) a) What is software maintenance? Explain in detail. b) Explain SEI capability maturity model
(CMM)? U5
A) Software Maintenance
Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to
modify and update software applications after delivery to correct errors and to improve
performance. Software is a model of the real world. When the real world changes, the software
requires alteration wherever possible.
o Correct errors
o Change in user requirement with time
o Changing hardware/software requirements
o To improve system efficiency
o To optimize the code to run faster
o To modify the components
o To reduce any unwanted side effects.
1. Corrective Maintenance
Corrective maintenance aims to correct any remaining errors, regardless of whether they arose in the
specifications, design, coding, testing, or documentation.
2. Adaptive Maintenance
It includes modifying the software to match changes in the ever-changing environment, such as new
hardware or a new operating system.
3. Preventive Maintenance
It is the process by which we prevent our system from being obsolete. It involves the concept of
reengineering and reverse engineering in which an old system with old technology is re-engineered using
new technology. This maintenance prevents the system from dying out.
4. Perfective Maintenance
It includes modifying or enhancing the system to meet new user requirements and to improve its
performance or maintainability.
The Capability Maturity Model (CMM) is a procedure used to develop and refine an organization's
software development process.
The model defines a five-level evolutionary stage of increasingly organized and consistently more
mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and
development center sponsored by the U.S. Department of Defense (DoD).
Methods of SEICMM
SEI CMM categorizes software development organizations into the following five maturity levels. The
various levels of SEI CMM have been designed so that it is easy for an organization to slowly build its
quality system starting from scratch.
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no
processes are described and followed. Since the software production processes are not defined, different
engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also
called a chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are used.
Level 3: Defined
At this level, the methods for both management and development activities are defined and
documented. There is a common organization-wide understanding of operations, roles, and
responsibilities. Though the ways of working are defined, the process and product qualities are not
measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the features of the product being developed, such as its size, reliability, time
complexity, understandability, etc.
Process metrics measure the effectiveness of the process being used, such as average defect correction
time, productivity, the average number of defects found per hour of inspection, the average number of
failures detected during testing per LOC, etc. The software process and product quality are measured,
and quantitative quality requirements for the product are met. Various tools like Pareto charts, fishbone
diagrams, etc. are used to measure the product and process quality. The process metrics are used to
analyze whether a project performed satisfactorily. Thus, the outcome of process measurements is used to
evaluate project performance rather than to improve the process.
Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product measurement data are
evaluated for continuous process improvement.
Computer programs and related documentation such as requirements, design models and user
manuals.
The features that good software engineers should possess are as follows:
Good communication skills. These skills comprise oral, written, and interpersonal skills.
High motivation.
Intelligence.
Discipline, etc.
The objective of risk assessment is to rank the risks in terms of their loss-causing
potential. For risk assessment, first, every risk should be rated in two ways:
1. The likelihood of the risk becoming true (r).
2. The severity of the loss caused if the risk becomes true (s).
Based on these two factors, the priority of each risk can be estimated:
p=r*s
Where p is the priority with which the risk must be controlled, r is the probability of the risk
becoming true, and s is the severity of loss caused due to the risk becoming true. If all identified
risks are prioritized, then the most likely and most damaging risks can be handled first, and more
comprehensive risk abatement methods can be designed for these risks.
1. Risk Identification: The project organizer needs to anticipate the risk in the project as early
as possible so that the impact of risk can be reduced by making effective risk management
planning.
A project can be affected by a large variety of risks. To identify the significant risks that might affect
a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies that
are used to develop the system.
2. People risks: Risks that are associated with the persons in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the
software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used
to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements
and the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources
required to build the system.
2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and
make a judgment about the probability and seriousness of that risk.
There is no simple way to do this. You have to rely on your perception and experience of
previous projects and the problems that arise in them.
It is not possible to make an exact numerical estimate of the probability and seriousness of
each risk. Instead, you should assign the risk to one of several bands:
1. The probability of the risk might be determined as very low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the
plan), serious (would cause significant delays), tolerable (delays are within allowed
contingency), or insignificant.
Risk Control
It is the process of managing risks to achieve desired outcomes. After all the identified risks of a
project are assessed, plans must be made to contain the most harmful and the most likely
risks. Different risks need different containment methods. In fact, most risks need ingenuity on
the part of the project manager in tackling the risk.
Risk Leverage: To choose between the various methods of handling a risk, the project plan must
consider the cost of controlling the risk and the corresponding reduction of risk. For this,
the risk leverage of the various risks can be estimated.
Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.
Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of
reduction)
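A small sketch of the two formulas above; the sample probability, severity, exposure, and cost values are invented for illustration:

def risk_priority(probability, severity):
    # p = r * s
    return probability * severity

def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    # (risk exposure before reduction - risk exposure after reduction) / cost of reduction
    return (exposure_before - exposure_after) / cost_of_reduction

if __name__ == "__main__":
    # A risk that is 40% likely with a potential loss of 50 person-days.
    print("priority:", risk_priority(0.4, 50))      # 20.0
    # Exposure drops from 20 to 5 person-days at a cost of 3 person-days.
    print("leverage:", risk_leverage(20, 5, 3))     # 5.0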
1. Risk planning: The risk planning method considers each of the key risks that have been
identified and develops ways to manage these risks.
For each of the risks, you have to think of the actions that you may take to minimize the
disruption to the plan if the issue identified in the risk occurs.
You also should think about data that you might need to collect while monitoring the plan so
that issues can be anticipated.
Again, there is no easy process that can be followed for contingency planning. It relies on the
judgment and experience of the project manager.
2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the
product, process, and business risks have not changed.
3) How complex requirements are representing using decision tables and decision trees? U2
Decision tables and trees are useful tools for documenting and analyzing functional
requirements that involve complex or conflicting rules.
They help you to visualize the logic, identify gaps or errors, and communicate the
requirements clearly and consistently.
The following explains how to handle some common challenges when creating and
using decision tables and trees.
What are decision tables and trees?
Decision tables and trees are graphical representations of the conditions and actions that
make up a business rule or a use case scenario.
A decision table consists of rows and columns that show the combinations of conditions
and the corresponding actions.
To create a decision table or a tree, you need to identify input conditions, output
actions, and rules or scenarios.
Input conditions can be data, events, user inputs, or other factors that affect the
outcome of the rule or scenario. Output actions are the results or effects of the rule or
scenario and can include tasks, messages, calculations, or other actions.
Rules or scenarios are combinations of conditions and actions that define the logic and
behavior of the system or process. To list all possible combinations of conditions and
actions, you can use a matrix or table and assign a rule or scenario number to each row.
Alternatively, you can use a tree diagram to show the hierarchy and sequence of
conditions and actions with each node labeled with a rule or scenario number.
Sometimes, you may encounter rules or scenarios that are too complex or ambiguous to
fit into a single row or node of a decision table or tree.
For example, you might have multiple actions for the same condition, or multiple
conditions for the same action, or nested conditions that depend on each other.
To handle these cases, you can use techniques like splitting the rule or scenario into sub-
rules or sub-scenarios and creating separate decision tables or trees.
Additionally, you can use extended entries or connectors to indicate that there are more
conditions or actions that are not shown in the table or tree.
Conditional expressions and operators such as AND, OR, NOT, IF, THEN, and ELSE can
also be used to combine or modify conditions and actions.
Finally, variables and parameters can be used to represent values and states of
conditions and actions; these should be defined clearly in the table or tree.
When using decision tables or trees, you may come across conflicting rules or scenarios.
These involve different rules or scenarios that have the same or overlapping conditions,
but different or contradictory actions.
For instance, a rule may state that a 10% discount should be given to VIP customers,
while another rule may say that a 15% discount should be given to VIP customers who
have orders over $1000.
To resolve these cases, you can prioritize the rules or scenarios based on their
importance, urgency, frequency, or specificity and apply them in that order.
Additionally, exceptions or exclusions can be used to specify the conditions or actions
that override or cancel out the other rules or scenarios.
Default or fallback actions can also be employed to handle cases where none of the
rules or scenarios match. Lastly, feedback or confirmation can be requested from the
user or system to choose or verify the actions in case of conflict.
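A hypothetical sketch of the VIP discount example above, written as a small decision table in code; the conflict is resolved by ordering the rules from most specific to least specific and applying the first match, with a fallback action when no rule matches:

# Each rule pairs a set of conditions with an action (the discount rate).
RULES = [
    ({"vip": True, "order_over_1000": True}, 0.15),  # more specific rule first
    ({"vip": True},                          0.10),  # general VIP rule
]

def discount_for(vip, order_total):
    facts = {"vip": vip, "order_over_1000": order_total > 1000}
    for conditions, discount in RULES:
        if all(facts.get(name) == value for name, value in conditions.items()):
            return discount
    return 0.0  # default/fallback action when no rule matches

print(discount_for(vip=True, order_total=1500))   # 0.15
print(discount_for(vip=True, order_total=400))    # 0.10
print(discount_for(vip=False, order_total=1500))  # 0.0 (fallback)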
Once you have created your decision tables or trees, it is essential to test and validate them to
guarantee they are complete, correct, consistent, and clear.
You can review the tables or trees with stakeholders, users, developers, testers, and other
relevant parties to get their feedback and approval.
Additionally, it is important to check for any errors, gaps, redundancies, ambiguities, or
contradictions and revise them accordingly.
You can also use test cases or scenarios based on the rules or scenarios in the tables or trees to
compare the expected and actual outcomes. Additionally, you can use tools or software that can
generate, execute, or automate the tests based on the tables or trees.
This report lays a foundation for software engineering activities and is constructed
when the entire requirements are elicited and analyzed. SRS is a formal report, which acts
as a representation of software that enables the customers to review whether it (SRS) is
according to their requirements.
1. Correctness: User review is used to ensure the correctness of requirements stated in the SRS.
The SRS is said to be correct if it covers all the requirements that are actually expected from the system.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). Inclusion of all significant requirements, whether relating to functionality, performance, design
constraints, attributes, or external interfaces.
(2). Definition of the responses of the software to all realizable classes of input data in all
available categories of situations.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all
terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflict. There are three types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in
another as textual.
(b) One condition may state that all lights shall be green while another states that all lights shall
be blue.
(2). There may be a logical or temporal conflict between the two specified actions. For
example,
(a) One requirement may determine that the program will add two inputs, and another may
determine that the program will multiply them.
(b) One condition may state that "A" must always follow "B," while other requires that "A and
B" co-occurs.
(3). Two or more requirements may define the same real-world object but use different terms
for that object. For example, a program's request for user input may be called a "prompt" in
one requirement and a "cue" in another. The use of standard terminology and descriptions
promotes consistency.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if each
requirement in it has an identifier to indicate either the significance or stability of that
particular requirement.
Typically, all requirements are not equally important. Some prerequisites may be essential,
especially for life-critical applications, while others may be desirable. Each element should be
identified to make these differences clear and explicit. Another way to rank requirements is to
distinguish classes of items as essential, conditional, and optional.
7. Verifiability: The SRS is verifiable when the specified requirements can be verified with a cost-
effective process to check whether the final software meets those requirements. The
requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it
facilitates the referencing of each condition in future development or enhancement
documentation.
1. Backward Traceability: This depends upon each requirement explicitly referencing its source
in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name or
reference number.
The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As the code and design documents are modified, it is necessary to
be able to ascertain the complete set of requirements that may be affected by those
modifications.
11. Understandable by the customer: An end user may be an expert in his/her explicit domain
but might not be trained in computer science. Hence, the use of formal notations and
symbols should be avoided to as much extent as possible. The language should be kept simple
and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details
should be explained explicitly. Whereas, for a feasibility study, fewer details can be used.
Hence, the level of abstraction varies according to the objective of the SRS.
Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and
complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.
Black-box view: It should only define what the system should do and refrain from stating how
to do these. This means that the SRS document should define the external behavior of the
system and not discuss the implementation issues. The SRS report should view the system to be
developed as a black box and should define the externally visible behavior of the system. For
this reason, the SRS report is also known as the black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.
Response to undesired events: It should characterize acceptable responses to
unwanted events. These are called system responses to exceptional conditions.
DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a process is
represented by DFD.
It also gives insight into the inputs and outputs of each entity and the process itself. DFD does
not have control flow and no loops or decision rules are present. Specific operations
depending on the type of data can be explained by a flowchart.
It is a graphical tool, useful for communicating with users, managers and other personnel. It is
useful for analyzing existing as well as proposed systems.
It provides an overview of
What data the system processes.
What transformations are performed.
What data are stored.
What results are produced, etc.
Data Flow Diagram can be represented in several ways. The DFD belongs to structured-
analysis modeling tools. Data Flow diagrams are very popular because they help us to visualize
the major steps and data involved in software-system processes.
Components of DFD
The Data Flow Diagram has 4 components:
Process Input to output transformation in a system takes place because of process
function. The symbol of a process is a rectangle with rounded corners, an oval, or a circle. The
process is named in a short sentence, one word, or a phrase to express its essence.
Data Flow Data flow describes the information transferring between different parts of the
systems. The arrow symbol is the symbol of data flow. A relatable name should be given
to the flow to determine the information which is being moved. Data flow also represents
material along with information that is being moved. Material shifts are modeled in
systems that are not merely informative. A given flow should only transfer a single type of
information. The direction of flow is represented by the arrow which can also be bi-
directional.
Warehouse The data is stored in the warehouse for later use. Two horizontal lines
represent the symbol of the store. The warehouse is simply not restricted to being a data
file rather it can be anything like a folder with documents, an optical disc, a filing cabinet.
The data warehouse can be viewed independent of its implementation. When the data
flow from the warehouse it is considered as data reading and when data flows to the
warehouse it is called data entry or data updating.
Terminator The Terminator is an external entity that stands outside of the system and
communicates with the system. It can be, for example, organizations like banks, groups of
people like customers, or different departments of the same organization, which are not a
part of the modeled system and are external entities. Modeled systems also communicate
with terminators.
Rules for creating DFD
The name of the entity should be easy and understandable without any extra
assistance (like comments).
The processes should be numbered or put in an ordered list so that they can be referred to easily.
The DFD should maintain consistency across all the DFD levels.
A single DFD can have a maximum of nine processes and a minimum of three
processes.
Symbols Used in DFD
Square Box: A square box defines a source or destination of the system. It is also
called an entity and is represented by a rectangle.
Arrow or Line: An arrow identifies the data flow, i.e. it gives information about the data
that is in motion.
Circle or bubble chart: It represents a process that gives us information. It is
also called a processing box.
Open Rectangle: An open rectangle is a data store. In this, data is stored either
temporarily or permanently.
Levels of DFD
DFD uses hierarchy to maintain transparency, thus multilevel DFDs can be created. Levels of
DFD are as follows:
0-level DFD: It represents the entire system as a single bubble and provides an
overall picture of the system.
1-level DFD: It represents the main functions of the system and how they interact
with each other.
2-level DFD: It represents the processes within each function of the system and
how they interact with each other.
3-level DFD: It represents the data flow within each process and how the data is
transformed and stored.
Advantages of DFD
It helps us to understand the functioning and the limits of a system.
It is a graphical representation which is very easy to understand as it helps
visualize contents.
Data Flow Diagrams represent a detailed and well-explained diagram of system
components.
It is used as part of the system documentation.
Data Flow Diagrams can be understood by both technical and nontechnical persons
because they are very easy to understand.
Disadvantages of DFD
At times DFD can confuse the programmers regarding the system.
Data Flow Diagrams take a long time to be generated, and many times due to this
reason analysts are denied permission to work on them.
It includes all methods and devices that are used to accommodate interaction between
machines and users.
A user interface can take many forms, but it always accomplishes two fundamental
tasks: communicating information from the machine to the user, and communicating
information from the user to the machine.
4. Prevention:
A good user interface design should prevent users from performing an
inappropriate task, and this is accomplished by disabling or "graying
out" certain elements under certain conditions.
5. Forgiveness:
This quality can encourage users to use the software to its full extent.
Designers should provide users with a way out when users find
themselves somewhere they should not go.
6. Graphical User Interface Design:
A graphic user interface design provides screen displays that create an
operating environment for the user and form an explicit visual and
functional context for user’s actions.
It includes standard objects like buttons, icons, text, field, windows,
images, pull-down and pop-up screen menus.
3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
Meaningful and understandable variable names help anyone to
understand the reason for using them.
Local variables should be named using camel case lettering starting with
small letter (e.g. localData) whereas Global variables names should start
with a capital letter (e.g. GlobalData). Constant names should be formed
using capital letters only (e.g. CONSDATA).
It is better to avoid the use of digits in variable names.
The names of the function should be written in camel case starting with
small letters.
The name of the function must describe the reason of using the function
clearly and briefly.
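A short illustration of these naming rules (the names are invented, and the conventions follow the guidelines stated above rather than Python's usual PEP 8 style):

MAXRETRIES = 3                # constant: capital letters only

GlobalCounter = 0             # global variable: starts with a capital letter

def computeAverage(values):   # function: camel case, name describes its purpose
    localSum = sum(values)    # local variable: camel case, starts with a small letter
    return localSum / len(values)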
4. Indentation:
Proper indentation is very important to increase the readability of the code. For
making the code readable, programmers should use White spaces properly. Some of
the spacing conventions are given below:
There must be a space after giving a comma between two function
arguments.
Each nested block should be properly indented and spaced.
Proper Indentation should be there at the beginning and at the end of
each block in the program.
All braces should start from a new line and the code following the end of
braces should also start from a new line.
5. Error return values and exception handling conventions:
All functions that encounter an error condition should either return a 0 or 1 to
simplify the debugging.
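Two hedged sketches of this convention using a hypothetical file-copy routine: the first signals success or failure with a 0/1 return code, the second raises an exception for the caller to handle:

def copy_file_with_code(src, dst):
    # Returns 0 on success and 1 on error, which is easy to check while debugging.
    try:
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            fout.write(fin.read())
        return 0
    except OSError:
        return 1

def copy_file_with_exception(src, dst):
    # Lets OSError propagate; the caller wraps the call in try/except.
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.write(fin.read())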
On the other hand, Coding guidelines give some general suggestions regarding the
coding style that is to be followed for better understandability and
readability of the code. Some of the coding guidelines are given below:
Debugging is the process of identifying and resolving errors, or bugs, in a software system. It
is an important aspect of software engineering because bugs can cause a software system to
malfunction, and can lead to poor performance or incorrect results. Debugging can be a time-
consuming and complex task, but it is essential for ensuring that a software system is
functioning correctly.
There are several common methods and techniques used in debugging, including:
1. Code Inspection: This involves manually reviewing the source code of a software
system to identify potential bugs or errors.
2. Debugging Tools: There are various tools available for debugging such as
debuggers, trace tools, and profilers that can be used to identify and resolve bugs.
3. Unit Testing: This involves testing individual units or components of a software
system to identify bugs or errors.
4. Integration Testing: This involves testing the interactions between different
components of a software system to identify bugs or errors.
5. System Testing: This involves testing the entire software system to identify bugs or
errors.
6. Monitoring: This involves monitoring a software system for unusual behavior or
performance issues that can indicate the presence of bugs or errors.
7. Logging: This involves recording events and messages related to the software
system, which can be used to identify bugs or errors.
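A minimal sketch of the logging approach, assuming a hypothetical payroll function; the recorded events make it possible to reconstruct the sequence that led up to a failure:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("payroll")  # hypothetical module name

def net_salary(gross, tax_rate):
    log.debug("computing net salary: gross=%s tax_rate=%s", gross, tax_rate)
    if not 0 <= tax_rate < 1:
        log.error("invalid tax rate: %s", tax_rate)
        raise ValueError("tax_rate must be in [0, 1)")
    return gross * (1 - tax_rate)

net_salary(50000, 0.2)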
Debugging Process: Steps involved in debugging are:
Problem identification and report preparation.
Assigning the report to a software engineer to verify that the defect is genuine.
Defect Analysis using modeling, documentation, finding and testing candidate
flaws, etc.
Defect Resolution by making required changes to the system.
Validation of corrections.
Debugging Approaches/Strategies:
1. Brute Force: Study the system for a larger duration in order to understand the
system. It helps the debugger to construct different representations of systems to
be debugged depending on the need. A study of the system is also done actively to
find recent changes made to the software.
2. Backtracking: Backward analysis of the problem which involves tracing the
program backward from the location of the failure message in order to identify the
region of faulty code. A detailed study of the region is conducted to find the cause
of defects.
3. Forward analysis of the program involves tracing the program forwards using
breakpoints or print statements at different points in the program and studying
the results. The region where the wrong outputs are obtained is the region that
needs to be focused on to find the defect.
4. Using past experience with the software, debug problems that are similar in nature to
ones seen before. The success of this approach depends on the expertise of the debugger.
5. Cause elimination: it introduces the concept of binary partitioning. Data related to
the error occurrence are organized to isolate potential causes.
6. Static analysis: Analyzing the code without executing it to identify potential bugs
or errors. This approach involves analyzing code syntax, data flow, and control
flow.
7. Dynamic analysis: Executing the code and analyzing its behavior at runtime to
identify errors or bugs. This approach involves techniques like runtime debugging
and profiling.
8. Collaborative debugging: Involves multiple developers working together to debug
a system. This approach is helpful in situations where multiple modules or
components are involved, and the root cause of the error is not clear.
9. Logging and Tracing: Using logging and tracing tools to identify the sequence of
events leading up to the error. This approach involves collecting and analyzing logs
and traces generated by the system during its execution.
10. Automated Debugging: The use of automated tools and techniques to assist in the
debugging process. These tools can include static and dynamic analysis tools, as
well as tools that use machine learning and artificial intelligence to identify errors
and suggest fixes.
Debugging Tools:
Debugging tool is a computer program that is used to test and debug other programs. A lot of
public domain software like gdb and dbx are available for debugging. They offer console-
based command-line interfaces. Examples of automated debugging tools include code-based
tracers, profilers, interpreters, etc. Some of the widely used debuggers are:
Radare2
WinDbg
Valgrind
Advantages of Debugging:
Disadvantages of Debugging:
1. Time-consuming
2. Requires specialized skills
3. Can be difficult to reproduce:
4. Can be difficult to diagnose
5. Can be difficult to fix
6. Limited insight:
7. Can be expensive:
There are several benefits of using CASE tools and working with a CASE environment:
1. A key advantage of using a CASE environment is cost efficiency through all software development
stages. Various studies carried out to quantify the effect of CASE put the cost reduction between
30% and 40%.
2. The use of CASE tools leads to considerable improvements in quality. This is mainly due to the
fact that one can easily iterate through the various phases of software development and the
chances of human error are considerably reduced.
3. CASE tools help produce high-quality and consistent documents. Since the significant information
relating to a software product is maintained in a central repository, redundancy in the stored
information is reduced and, therefore, the chances of inconsistent documentation are reduced.
4. The introduction of a CASE environment affects the style of working of an organization and
makes it oriented towards a structured and methodical approach.
5. CASE tools have led to considerable cost savings in software maintenance efforts. This
arises not only because of the significant value of a CASE environment in traceability and
consistency checks, but also because of the systematic information capture during the different
phases of software development as a consequence of adhering to a CASE environment.
10) Explain Basic issues in any reuse program, Reuse approach, Reuse at organization level. U5
It is great to know about the kinds of artifacts associated with software development that can be used
again. Almost all artifacts associated with software development, including project plan and test plan,
can be used again. However, the important items that can be effectively used again are,
Requirements specification
Design
Code
Test cases
Knowledge
The following are some of the basic issues that must be addressed for starting any reuse program:
1. Component creation
2. Component indexing and storing
3. Component search
4. Component understanding
5. Component adaptation
6. Repository maintenance
1) Component creation
The reusable components have to be first identified. The selection of the correct kind of components
having the potential for reuse is important.
2) Component indexing and storing
Indexing requires classification of the reusable components so that they can be easily found when
looking for a component for reuse. The components need to be stored in a Relational Database
Management System (RDBMS) or an Object-Oriented Database System (ODBMS) for efficient access
when the number of components becomes large.
3) Component searching
The programmers need to search for correct components matching their needs in a database of
components. To be able to search components efficiently, the programmers require a precise way to
describe the components that they are looking for.
4) Component understanding
The programmers need a precise and sufficiently complete understanding of what the component does
to be able to decide whether they can reuse the component or not. To aid understanding, the components
should be well documented and should be kept simple.
5) Component adaptation
Before they can be reused, the components may need adaptation, since a selected component may not
exactly fit the problem at hand. However, tinkering with the code is also not the best solution
because this is very likely to be a source of faults.
6) Repository maintenance
A repository, once created, requires continuous maintenance. New components, as and when created,
have to be entered into the repository. Faulty components have to be tracked.
Further, as new applications are developed, some of the older components may become obsolete. In
this case, the obsolete components might have to be removed from the repository.