


80 marks SOFTWARE ENGINEERING

Q1.

1. TYPES OF PROCESS PATTERNS:

Page no. 1.32 Nirali

2. CORE PRINCIPLES: any two (notebook)

3. RECOVERY TESTING:
1. Many computer-based systems must recover from faults and resume
processing within a prespecified time.
2. In some cases, a system must be fault tolerant; that is, processing
faults must not cause overall system function to cease.
3. In other cases, a system failure must be corrected within a specified
period of time or severe economic damage will occur.
4. Recovery testing is a system test that forces the software to
fail in a variety of ways and verifies that recovery is properly
performed.
5. If recovery is automatic (performed by the system itself), reinitialization,
checkpointing mechanisms, data recovery, and restart are evaluated for
correctness.
6. If recovery requires human intervention, the mean-time-to-repair
(MTTR) is evaluated to determine whether it is within acceptable limits.

4. PEOPLE AND PROCESS MANAGEMENT SPECTRUM:


Effective software project management focuses on the four P’s: people,
product, process, and project.
The People
1. The “people factor” is so important that the Software Engineering Institute
has developed a people management capability maturity model (PM-CMM), to
enhance the readiness of software organizations to undertake increasingly
complex applications by helping to attract, grow, motivate, deploy, and retain
the talent needed to improve their software development capability.
2. The people management maturity model defines the following key practice
areas for software people: recruiting, selection, performance management,
training, compensation, career development, organization and work design,
and team/culture development.
3. Organizations that achieve high levels of maturity in the people
management area have a higher likelihood of implementing effective
software engineering practices.
The Process
1. A software process provides the framework from which a comprehensive
plan for software development can be established.
2. A small number of framework activities are applicable to all software
projects, regardless of their size or complexity.
3. A number of different task sets—tasks, milestones, work products, and
quality assurance points—enable the framework activities to be adapted to
the characteristics of the software project and the requirements of the
project team.
4. Finally, umbrella activities—such as software quality assurance, software
configuration management, and measurement—overlay the process
model. Umbrella activities are independent of any one framework activity
and occur throughout the process.
5. SQA (refer 20 marks).
6. ROLE OF SYSTEM ANALYST:
Pg no. 2.23 NIRALI PRAKASHAN—analysis task
Q2.
1. WATERFALL MODEL [classic life cycle, the linear sequential model]:

Software requirements analysis.


1. The requirements gathering process is intensified and focused specifically on
software. To understand the nature of the program(s) to be built, the software
engineer ("analyst") must understand the information domain for the
software, as well as required function, behavior, performance, and interface.
2. Requirements for both the system and the software are documented and
reviewed with the customer.
Design.
1. Software design is actually a multistep process that focuses on four distinct
attributes of a program: data structure, software architecture, interface
representations, and procedural (algorithmic) detail. The design process
translates requirements into a representation of the software that can be
assessed for quality before coding begins.
2. Like requirements, the design is documented and becomes part of the
software configuration.
Code generation.
1. The design must be translated into a machine-readable form. The code
generation step performs this task.
2. If design is performed in a detailed manner, code generation can be
accomplished mechanistically.
Testing.
1. Once code has been generated, program testing begins.
2. The testing process focuses on the logical internals of the software, ensuring
that all statements have been tested, and on the functional externals; that is,
conducting tests to uncover errors and ensure that defined input will produce
actual results that agree with required results.
Support.
1. Software will undoubtedly undergo change after it is delivered to the
customer (a possible exception is embedded software). Software
support/maintenance reapplies each of the preceding phases to an existing
program rather than a new one.

2. SOFTWARE PROCESS ASSESSMENTS:

Pg no 1.34,1.35,1.36 of NIRALI
3. BEHAVIORAL MODEL:

1. Most software responds to events from the outside world. This stimulus/response
characteristic forms the basis of the behavioral model.
2. A computer program always exists in some state—an externally observable
mode of behavior (e.g., waiting, computing, printing, polling) that is changed
only when some event occurs.
3. For example, software will remain in the wait state until
(1) An internal clock indicates that some time interval has passed,
(2) An external event (e.g., a mouse movement) causes an interrupt, or
(3) An external system signals the software to act in some manner.
4. A behavioral model creates a representation of the states of the software
and the events that cause the software to change state.
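The state/event idea above can be sketched as a small state machine. The states and events used here (waiting, computing, printing, a clock tick) are illustrative only, not taken from any particular system:

```python
# Minimal state machine illustrating the behavioral model: the software sits
# in an externally observable state and changes state only when an event occurs.

class Software:
    # (current state, event) -> next state; any unlisted pair leaves the
    # state unchanged, i.e. the software keeps waiting.
    TRANSITIONS = {
        ("waiting", "clock_tick"): "computing",
        ("waiting", "interrupt"): "computing",
        ("computing", "done"): "printing",
        ("printing", "done"): "waiting",
    }

    def __init__(self):
        self.state = "waiting"  # initial externally observable mode

    def on_event(self, event):
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

sw = Software()
sw.on_event("clock_tick")  # waiting -> computing
sw.on_event("done")        # computing -> printing
sw.on_event("done")        # printing -> waiting
```

The point of the sketch is that the model is just the set of (state, event, next-state) triples; the behavior of the whole program is read off that table.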

4. STEPS TO CREATE DATA FLOW MODEL[DFD]:

Page no 2.33 NIRALI

Q3.

1. TECHNIQUES OF WHITE BOX TESTING:

1. BASIS PATH TESTING:


a. The basis path method enables the test case designer to derive a
logical complexity measure of a procedural design and use this
measure as a guide for defining a basis set of execution paths.
b. Test cases derived to exercise the basis set are guaranteed to execute
every statement in the program at least one time during testing.
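As a minimal sketch of point (b): the invented function below has two decisions, so its cyclomatic complexity is V(G) = decisions + 1 = 3, and the three test cases, one per basis path, execute every statement at least once:

```python
# Module under test (illustrative): two simple decisions, so V(G) = 3 and a
# basis set of three independent paths covers every statement and edge.

def classify(x):
    if x < 0:            # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# One test case per basis path (chosen so every branch edge is taken):
assert classify(-2) == ("negative", "even")
assert classify(3) == ("non-negative", "odd")
assert classify(4) == ("non-negative", "even")
```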
2. CONDITION TESTING:
a. Condition testing is a test case design method that exercises the
logical conditions contained in a program module.
b. A simple condition is a Boolean variable or a relational expression,
possibly preceded with one NOT (¬) operator.
c. A relational expression takes the form E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator>
is one of the following: <, ≤, =, ≠ (nonequality), >, or ≥.
d. The purpose of condition testing is to detect not only errors in the
conditions of a program but also other errors in the program.
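A small invented example of the method: for a compound condition, test cases are derived so that each simple condition takes both its true and false values, not merely the overall outcome:

```python
# Illustrative compound condition C = (a > b) and (not flag).
# Condition testing exercises each simple condition in both truth values.

def should_alert(a, b, flag):
    return (a > b) and (not flag)

# (a > b) T/F crossed with (not flag) T/F:
assert should_alert(5, 3, False) is True   # T and T
assert should_alert(5, 3, True) is False   # T and F
assert should_alert(1, 3, False) is False  # F and T
assert should_alert(1, 3, True) is False   # F and F
```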
3. DATA FLOW TESTING:
a. The data flow testing method selects test paths of a program according
to the locations of definitions and uses of variables in the program.
b. Data flow testing strategies are useful for selecting test paths of a
program containing nested if and loop statements.
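A sketch of the definition/use idea, using an invented function: the variable `total` is defined once and used twice, and one test path is chosen for each definition-use (DU) pair:

```python
# Illustrative data flow testing: `total` is defined at d1 and used at u1
# and u2; a test path is selected for each DU pair of `total`.

def settle(amounts, discount):
    total = sum(amounts)     # d1: definition of total
    if discount:
        total = total * 0.9  # u1: use of total (and a redefinition)
    return total             # u2: use of total

# Path covering DU pair (d1, u2) -- the if branch not taken:
assert settle([10, 20], discount=False) == 30
# Path covering DU pair (d1, u1) -- the if branch taken:
assert settle([10, 20], discount=True) == 27.0
```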
2. CPM(Critical path method):

Page no 4.20 NIRALI

3. PROBLEM DECOMPOSITION:
1. Problem decomposition, sometimes called partitioning or problem
elaboration, is an activity that sits at the core of software requirements
analysis.
2. During the scoping activity no attempt is made to fully decompose the
problem. Rather, decomposition is applied in two major areas: (1) the
functionality that must be delivered and (2) the process that will be used to
deliver it.
3. Human beings tend to apply a divide-and-conquer strategy when they are
confronted with complex problems. Stated simply, a complex problem is
partitioned into smaller problems that are more manageable.
4. This is the strategy that applies as project planning begins.
5. Software functions, described in the statement of scope, are evaluated and
refined to provide more detail prior to the beginning of estimation. Because
both cost and schedule estimates are functionally oriented, some degree of
decomposition is often useful.

4. REACTIVE VS PROACTIVE STRATEGIES:


1. The majority of software teams rely solely on reactive risk strategies. At best,
a reactive strategy monitors the project for likely risks.
2. Resources are set aside to deal with them, should they become actual
problems. More commonly, the software team does nothing about risks until
something goes wrong.
3. Then, the team flies into action in an attempt to correct the problem rapidly.
This is often called a fire fighting mode. When this fails, “crisis management”
[CHA92] takes over and the project is in real jeopardy.
4. A considerably more intelligent strategy for risk management is to be
proactive. A proactive strategy begins long before technical work is initiated.
5. Potential risks are identified, their probability and impact are assessed, and
they are ranked by importance.
6. Then, the software team establishes a plan for managing risk. The primary
objective is to avoid risk, but because not all risks can be avoided, the team
works to develop a contingency plan that will enable it to respond in a
controlled and effective manner.

Q4. Six marks each

1. REQUIREMENT ENGINEERING TASKS:


INCEPTION:
The first meeting between a software engineer (the analyst) and the
customer can be likened to the awkwardness of a first date between two
adolescents. Communication must be initiated. The first set of context-free
questions focuses on the customer, the overall goals, and the benefits. For
example, the analyst might ask:
• Who is behind the request for this work?
• Who will use the solution?
• What will be the economic benefit of a successful solution?
• Is there another source for the solution that you need?
ELICITATION:
It certainly seems simple enough—ask the customer, the users, and others
what the objectives for the system or product are. But it isn’t simple—it’s
very hard. There are a number of problems that help us understand why
requirements elicitation is difficult:
• Problems of scope. The boundary of the system is ill-defined or the
customers/ users specify unnecessary technical detail that may confuse,
rather than clarify, overall system objectives.
• Problems of understanding. The customers/users are not completely sure of
what is needed and have a poor understanding of the capabilities and
limitations of their computing environment.
• Problems of volatility. The requirements change over time.
ELABORATION: NOTE BOOK?
NEGOTIATION: NOTE BOOK?

2. SYSTEM TESTING:
System testing is actually a series of different tests whose primary purpose is
to fully exercise the computer-based system.
Recovery Testing
• Many computer-based systems must recover from faults and resume
processing within a prespecified time.
• In some cases, a system must be fault tolerant; that is, processing faults
must not cause overall system function to cease. In other cases, a system
failure must be corrected within a specified period of time or severe economic
damage will occur.
• Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed.
• If recovery is automatic (performed by the system itself), reinitialization,
checkpointing mechanisms, data recovery, and restart are evaluated for
correctness.
• If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.
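The points above can be sketched as a tiny automated recovery test. The checkpointing service here is entirely hypothetical, invented to show the shape of such a test: force a failure mid-run, simulate a restart, and verify that the checkpointed state lets processing resume correctly:

```python
# Recovery-test sketch for a hypothetical checkpointing service: inject a
# fault partway through processing, restart from the checkpoint store, and
# verify that recovery is properly performed (all work eventually done).

class Service:
    def __init__(self, checkpoint_store):
        self.store = checkpoint_store
        # Restart: recover progress from the last checkpoint, if any.
        self.processed = checkpoint_store.get("processed", 0)

    def process(self, items, crash_after=None):
        for i, _ in enumerate(items, start=self.processed + 1):
            if crash_after is not None and i > crash_after:
                raise RuntimeError("injected fault")  # forced failure
            self.processed = i
            self.store["processed"] = i               # checkpoint each item

store = {}
svc = Service(store)
try:
    svc.process(range(10), crash_after=4)  # fail in a controlled way
except RuntimeError:
    pass                                   # 4 items checkpointed before crash
svc = Service(store)                       # simulate automatic restart
svc.process(range(10 - svc.processed))     # resume the remaining work
assert svc.processed == 10                 # recovery performed correctly
```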
Security Testing
• Any computer-based system that manages sensitive information or causes
actions that can improperly harm (or benefit) individuals is a target for
improper or illegal penetration.
• Penetration spans a broad range of activities: hackers who attempt to
penetrate systems for sport; disgruntled employees who attempt to
penetrate for revenge; dishonest individuals who attempt to penetrate for
illicit personal gain.
• Security testing attempts to verify that protection mechanisms built
into a system will, in fact, protect it from improper penetration.
• During security testing, the tester plays the role(s) of the individual who
desires to penetrate the system.
• The role of the system designer is to make penetration cost more than the
value of the information that will be obtained.
Stress Testing
• Stress testing executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume.
• For example, (1) special tests may be designed that generate ten interrupts
per second, when one or two is the average rate, (2) input data rates may be
increased by an order of magnitude to determine how input functions will
respond, (3) test cases that require maximum memory or other resources are
executed, (4) test cases that may cause thrashing in a virtual operating
system are designed, (5) test cases that may cause excessive hunting for
disk-resident data are created. Essentially, the tester attempts to break the
program.
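A minimal sketch of the "order of magnitude" idea in point (2): drive an invented bounded buffer at ten times its assumed nominal input volume and verify that it degrades gracefully (rejects overflow) rather than breaking:

```python
# Stress-test sketch: feed a bounded buffer 10x its nominal input volume and
# check that it stays within capacity and rejects overflow instead of failing.

from collections import deque

class BoundedBuffer:
    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity
        self.dropped = 0

    def put(self, item):
        if len(self.items) >= self.capacity:
            self.dropped += 1  # graceful rejection, not a crash
            return False
        self.items.append(item)
        return True

buf = BoundedBuffer(capacity=100)
NOMINAL_VOLUME = 100                 # assumed nominal inputs per run
for i in range(10 * NOMINAL_VOLUME): # abnormal volume: 10x nominal
    buf.put(i)
assert len(buf.items) == 100         # capacity respected under stress
assert buf.dropped == 900            # overflow rejected, system still up
```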
Performance Testing
• For real-time and embedded systems, software that provides required
function but does not conform to performance requirements is unacceptable.
• Performance testing is designed to test the run-time performance of
software within the context of an integrated system.
• Performance testing occurs throughout all steps in the testing process.
• Even at the unit level, the performance of an individual module may be
assessed as white-box tests are conducted.
• However, it is not until all system elements are fully integrated that the true
performance of a system can be ascertained.
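A unit-level performance check of the kind mentioned in the last two points can be sketched as below; the module under test and the 250 ms response-time budget are both invented for illustration:

```python
# Unit-level performance sketch: time an individual module against an assumed
# response-time budget (the 0.25 s figure is illustrative, not a real spec).

import time

def module_under_test(n):
    return sum(i * i for i in range(n))

start = time.perf_counter()
result = module_under_test(100_000)
elapsed = time.perf_counter() - start

assert result == sum(i * i for i in range(100_000))  # functional check
assert elapsed < 0.25, f"budget exceeded: {elapsed:.4f}s"
```

As the notes say, a passing unit-level check like this is necessary but not sufficient; true system performance only shows once all elements are integrated.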

3. DEBUGGING & PROCESS OF DEBUGGING


Debugging occurs as a consequence of successful testing. That is,
when a test case uncovers an error, debugging is the process that
results in the removal of the error.
• Referring to Figure 18.8, the debugging process begins with the execution of
a test case.
• Results are assessed and a lack of correspondence between expected and
actual performance is encountered.
• In many cases, the noncorresponding data are a symptom of an underlying
cause as yet hidden.
• The debugging process attempts to match symptom with cause, thereby
leading to error correction.
• The debugging process will always have one of two outcomes: (1) the cause
will be found and corrected, or (2) the cause will not be found.
• In the latter case, the person performing debugging may suspect a cause,
design a test case to help validate that suspicion, and work toward error
correction in an iterative fashion.
Characteristics of bugs provide some clues as to why debugging is difficult:
1. The symptom and the cause may be geographically remote.
2. The symptom may disappear (temporarily) when another error is corrected.
3. The symptom may actually be caused by nonerrors (e.g., round-off
inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing
problems.
• During debugging, we encounter errors that range from mildly annoying (e.g.,
an incorrect output format) to catastrophic (e.g. the system fails, causing
serious economic or physical damage).
• As the consequences of an error increase, the amount of pressure to find the
cause also increases.
• Pressure sometimes forces a software developer to fix one error and
at the same time introduce two more.
Q5.

1. PHASES OF SOFTWARE PROJECT PLANNING: NIRALI page no 4.16


2. COST ESTIMATION TECHNIQUES:
1. Software cost and effort estimation will never be an exact science.
2. Too many variables—human, technical, environmental, political—can
affect the ultimate cost of software and effort applied to develop it.
3. However, software project estimation can be transformed from a black art
to a series of systematic steps that provide estimates with acceptable risk.
4. To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project (obviously, we can achieve
100% accurate estimates after the project is complete!).
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project
cost and effort estimates.
4. Use one or more empirical models for software cost and effort
estimation.
5. Unfortunately, the first option, however attractive, is not practical. Cost
estimates must be provided "up front." However, we should recognize
that the longer we wait, the more we know, and the more we know, the
less likely we are to make serious errors in our estimates.
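One well-known empirical model of the kind mentioned above is Boehm's basic COCOMO, which estimates effort in person-months as E = a * (KLOC ** b), with published (a, b) coefficients per project mode. A minimal sketch:

```python
# Basic COCOMO (Boehm): effort in person-months = a * KLOC**b.
# The (a, b) pairs are the published basic-COCOMO coefficients per mode.

COEFFICIENTS = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def basic_cocomo_effort(kloc, mode="organic"):
    a, b = COEFFICIENTS[mode]
    return a * (kloc ** b)

# A 10 KLOC organic project is estimated at roughly 27 person-months:
effort = basic_cocomo_effort(10, "organic")
assert 26 < effort < 28
```

Such models only transform estimation into systematic steps with acceptable risk; they do not make it an exact science, for the reasons listed above.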
3. ISO 9000 QUALITY STANDARDS:
1. A quality assurance system may be defined as the organizational
structure, responsibilities, procedures, processes, and resources for
implementing quality management.
2. Quality assurance systems are created to help organizations ensure their
products and services satisfy customer expectations by meeting their
specifications.
3. These systems cover a wide variety of activities encompassing a
product’s entire life cycle including planning, controlling, measuring,
testing and reporting, and improving quality levels throughout the
development and manufacturing process.
4. ISO 9000 describes quality assurance elements in generic terms
that can be applied to any business regardless of the products or
services offered.
5. After adopting the standards, a country typically permits only ISO
registered companies to supply goods and services to government
agencies and public utilities. Telecommunication equipment and medical
devices are examples of product categories that must be supplied by ISO
registered companies.
6. In turn, manufacturers of these products often require their suppliers to
become registered.
7. Private companies such as automobile and computer manufacturers
frequently require their suppliers to be ISO registered as well.
8. To become registered to one of the quality assurance system models
contained in ISO 9000, a company’s quality system and operations are
scrutinized by third party auditors for compliance to the standard and for
effective operation.
9. Upon successful registration, a company is issued a certificate from a
registration body represented by the auditors. Semi-annual surveillance
audits ensure continued compliance to the standard.
[20 points (Nirali) are considered while testing the organization for ISO,
and the results should be positive.]

4. SOFTWARE FEASIBILITY:
1. Software feasibility has four solid dimensions:
a. Technology— Is a project technically feasible? Is it within the state of
the art? Can defects be reduced to a level matching the application’s
needs?
b. Finance—Is it financially feasible? Can development be completed at a
cost the software organization, its client, or the market can afford?
c. Time—Will the project’s time-to-market beat the competition?
d. Resources—Does the organization have the resources needed to
succeed?
2. The feasibility team ought to carry initial architecture and design of the
high-risk requirements to the point at which it can answer these
questions. In some cases, when the team gets negative answers, a
reduction in requirements may be negotiated.

Q6.
1. MAJOR COST COMPONENTS FOR PROJECT:
Nirali pg no 5.30 project cost estimation
2. DIFFERENT ACTIVITIES UNDER PHASE 1 :NIRALI page no 4.16

3. DATA ATTRIBUTES AND DATA RELATIONSHIP:

4. NOTATIONS USED IN DATA dictionary


The data dictionary is an organized listing of all data elements that are
pertinent to the system, with precise, rigorous definitions so that both user
and system analyst will have a common understanding of inputs, outputs,
components of stores and [even] intermediate calculations

Notations: Page no. 2.36 Nirali
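As an illustration of the kind of notation listed there, the standard structured-analysis (DeMarco) symbols are commonly written as follows; the order-entry examples on the right are invented:

```
=        is composed of           order = customer_name + items
+        and (sequence)           name = first_name + last_name
[ | ]    either/or (selection)    payment = [cash | card]
{ }      iteration (repetition)   items = {item}
( )      optional                 address = street + (apartment)
* *      comment/description      item = *one line on the order form*
```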
