Software Engineering
Prepared by
V.Prema AP/CSE
UNIT I
Internet Software:
Programs that support internet access and applications, for
example search engines, browsers, e-commerce software, and
authoring tools.
Software methods:
Software engineering methods provide the technical
“how to’s” for building software.
Software process:
The software engineering process is the glue that holds the
technology together and enables rational and timely development of
computer software.
The software engineering process is a framework built on a set of key
process areas.
It forms a basis for:
- project management, budget and schedule control
- application of technical methods
- product quality control
What is Software Engineering?
Software tools:
- programs that provide automated or semi-automated support
for the process and methods.
- programs that help engineers perform their tasks in a
systematic and/or automatic manner.
Why Software Engineering?
Objectives:
- Identify new problems and solutions in software
production.
- Study new systematic methods, principles, and approaches for
system analysis, design, implementation, testing and
maintenance.
- Provide new ways to control, manage, and monitor the
software process.
- Build new software tools and environments to support
software engineering.
Major Goals:
- To increase software productivity and quality.
- To effectively control software schedule and planning.
- To reduce the cost of software development.
- To meet the customers’ needs and requirements.
- To improve how the software engineering process is
conducted.
- To improve the current software engineering practice.
- To support the engineers’ activities in a systematic and
efficient manner.
A Process Framework
Process Framework Activities
Communication
Planning
Modeling
Construction
Deployment
Umbrella Activities
Waterfall model
Incremental process models
Incremental model
RAD model
Evolutionary Process Models
Prototyping model
Spiral model
Object oriented process model
WATERFALL MODEL
(a.k.a. linear life cycle model or classic life cycle model)
COMMUNICATION - project initiation, requirements gathering
PLANNING - estimating, scheduling, tracking
MODELING - analysis, design
CONSTRUCTION - code, test
DEPLOYMENT - delivery, support, feedback
Deployment - the system is delivered to the customer/market, with bug
fixes and version releases over time.
Strengths
Easy to understand, easy to use
Provides structure to inexperienced staff
Milestones are well understood
Sets requirements stability
Good for management control (plan, staff, track)
Works well when quality is more important than cost or schedule
Waterfall Drawbacks
All requirements must be known upfront
Little customer feedback until late in the life cycle
Working software is not produced until late in the project
Difficult and costly to accommodate change once a phase is complete
Characteristics of Good Requirements:
Correct
Unambiguous
Complete
Consistent
Coherent
Comprehensible
Ranked for importance and/or stability (prioritized)
Verifiable
Modifiable
Traceable
Credible source
Software Requirements
We should try to understand what sort of requirements may arise during
the requirement elicitation phase and what kinds of requirements are
expected from the software system.
Broadly, software requirements are categorized into two categories:
Functional Requirements
Requirements that are related to the functional aspects of the
software fall into this category.
They define functions and functionality within and from the software
system.
Examples -
A search option is given to the user to search across various invoices.
The user should be able to mail any report to management.
Users can be divided into groups, and groups can be given separate
rights.
The software should comply with business rules and administrative
functions.
The software is developed keeping downward compatibility intact.
Non-Functional Requirements
Requirements that are not related to the functional aspects of the
software fall into this category. They are implicit or expected
characteristics of the software, which users simply assume.
Non-functional requirements include -
Security
Logging
Storage
Configuration
Performance
Cost
Interoperability
Flexibility
Disaster recovery
Accessibility
User Interface requirements
UI is an important part of any software, hardware, or hybrid
system. A software system is widely accepted if it is -
easy to operate
quick in response
effective in handling operational errors
providing a simple yet consistent user interface
Software Metrics and Measures
Software Measures can be understood as a process of quantifying and symbolizing various
attributes and aspects of software.
Software Metrics provide measures for various aspects of software process and software
product.
Let us see some software metrics:
Size Metrics - LOC (Lines of Code), mostly calculated in thousands of
delivered source code lines, denoted as KLOC.
Function Point Count is a measure of the functionality provided by the
software. Function point count defines the size of the functional
aspect of the software.
Complexity Metrics - McCabe’s cyclomatic complexity quantifies the
upper bound of the number of independent paths in a program, which is
perceived as the complexity of the program or its modules. It is
represented in terms of graph theory concepts by using the control
flow graph (see the sketch after this list).
Quality Metrics - Defects, their types and causes, consequence,
intensity of severity, and their implications define the quality of
the product. The number of defects found during development and the
number of defects reported by the client after the product is
installed or delivered at the client end define the quality of the
product.
Process Metrics - In various phases of the SDLC, the methods and tools
used, the company standards, and the performance of development are
software process metrics.
Resource Metrics - Effort, time and various resources used represent
metrics for resource measurement.
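As a worked illustration of the complexity metric, this minimal Python
sketch computes McCabe’s V(G) = E - N + 2P from a control-flow graph
given as an adjacency list. The graph below is a made-up example of a
routine with one if/else and one loop.

def cyclomatic_complexity(cfg, connected_components=1):
    """cfg maps each node to the list of nodes it branches to."""
    nodes = len(cfg)
    edges = sum(len(successors) for successors in cfg.values())
    return edges - nodes + 2 * connected_components

# Hypothetical control-flow graph: entry, a branch, and a loop.
cfg = {
    "entry": ["cond"],
    "cond":  ["then", "else"],   # decision point: two outgoing edges
    "then":  ["loop"],
    "else":  ["loop"],
    "loop":  ["cond2"],
    "cond2": ["loop", "exit"],   # loop back-edge is the second decision
    "exit":  [],
}
print(cyclomatic_complexity(cfg))   # 8 edges - 7 nodes + 2 = 3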
Petri net
A Petri net, also known as a place/transition (PT) net, is one of
several mathematical modeling languages for the description of
distributed systems.
It is a class of discrete event dynamic systems. A Petri net is a
directed bipartite graph, in which the nodes represent transitions
(i.e. events that may occur, represented by bars) and places (i.e.
conditions, represented by circles).
The directed arcs describe which places are pre- and/or
postconditions for which transitions (signified by arrows). Some
sources[1] state that Petri nets were invented in August 1939 by
Carl Adam Petri—at the age of 13—for the purpose of describing
chemical processes.
Formal definition and basic terminology
Formally, a Petri net is a triple N = (P, T, F), where P is a finite
set of places, T is a finite set of transitions with P ∩ T = ∅, and
F ⊆ (P × T) ∪ (T × P) is the flow relation (the set of directed arcs).
A marking M : P → ℕ assigns a number of tokens to each place. A
transition is enabled if each of its input places holds at least one
token; firing it consumes one token from each input place and produces
one token in each output place.
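A minimal Python sketch of these firing semantics, assuming a made-up
two-place example net; the class and place names are illustrative.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1          # consume one token per input arc
        for p in outputs:
            self.marking[p] += 1          # produce one token per output arc

# Two places, one transition moving a token from "ready" to "done".
net = PetriNet({"ready": 1, "done": 0})
net.add_transition("t1", inputs=["ready"], outputs=["done"])
net.fire("t1")
print(net.marking)   # {'ready': 0, 'done': 1}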
SOFTWARE DESIGN
Software Design Basics
A well-designed user interface is:
Simple to use
Responsive in short time
Clear to understand
Consistent on all interfacing screens
Command Line Interface (CLI)
CLI was a great tool for interacting with computers until video
display monitors came into existence. CLI is the first choice of many
technical users and programmers. CLI is the minimum interface a
software can provide to its users.
CLI provides a command prompt, the place where the user types a
command and feeds it to the system. The user needs to remember the
syntax of a command and its use. Earlier CLIs were not programmed to
handle user errors effectively.
Graphical User Interface
A Graphical User Interface provides the user graphical means to
interact with the system. GUI can be a combination of both hardware
and software. Using a GUI, the user interprets the software.
Typically, a GUI is more resource-consuming than a CLI. With advancing
technology, programmers and designers create complex GUI designs that
work with more efficiency, accuracy and speed.
GUI provides a set of components to interact with software or
hardware. Every graphical component provides a way to work with the
system. A GUI system has elements such as the following (a minimal
sketch follows the list):
Window - An area where the contents of an application are displayed.
Contents in a window can be displayed in the form of icons or lists if
the window represents a file structure.
Tabs - If an application allows executing multiple instances of
itself, they appear on the screen as separate windows.
Menu - Menu is an array of standard commands, grouped together and
placed at a visible place (usually the top) inside the application
window. The menu can be programmed to appear or hide on mouse clicks.
Icon - An icon is a small picture representing an associated
application. When these icons are clicked or double-clicked, the
application window is opened.
Cursor - Interacting devices such as the mouse, touch pad, and digital
pen are represented in GUI as cursors.
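As a small illustration, the sketch below builds a few of these
elements (window, menu, button) with Python’s standard tkinter
toolkit; the application and widget names are made up for the example.

import tkinter as tk

root = tk.Tk()                      # Window: area where content is shown
root.title("Demo Application")

menubar = tk.Menu(root)             # Menu: standard commands, at the top
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label="Exit", command=root.destroy)
menubar.add_cascade(label="File", menu=filemenu)
root.config(menu=menubar)

tk.Button(root, text="OK").pack()   # a basic control
root.mainloop()                     # hand control to the GUI event loop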
COMPONENT LEVEL DESIGN
Component-level design is the definition and design of components and
modules after the architectural design phase. A complete set of
software components is defined during architectural design, but the
internal data structures and processing details of each component are
not yet represented at a level of abstraction that is close to code.
Component-level design therefore defines the data structures,
algorithms, interface characteristics, and communication mechanisms
allocated to each component for the system development.
According to the OMG UML specification, a component is expressed as “a
modular, deployable, and replaceable part of a system that
encapsulates implementation and exposes a set of interfaces.”
Component Views
• OO View – A component is a set of collaborating classes.
• Conventional View – A component is a functional element of a program that
incorporates processing logic, the internal data structures required to implement
the processing logic, and an interface that enables the component to be invoked
and data to be passed to it.
CLASS ELABORATION
Class elaboration focuses on providing a detailed description of
attributes, interfaces and methods before the development of the
system activities. The following example provides an elaborated design
class for “PrintJob”; the elaborated design class provides a detailed
description of the attributes, interfaces and operations of the class.
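The original class diagram is not reproduced in these notes, so the
Python sketch below is an assumed illustration of what an elaborated
“PrintJob” design class might look like; the attribute and operation
names are hypothetical.

class PrintJob:
    def __init__(self, number_of_pages, number_of_sides, paper_type):
        # Attributes identified during elaboration (illustrative only)
        self.number_of_pages = number_of_pages
        self.number_of_sides = number_of_sides
        self.paper_type = paper_type
        self.job_cost = 0.0

    # Operations exposed through the class interface
    def compute_job_cost(self, rate_per_page=0.05):
        self.job_cost = (self.number_of_pages * self.number_of_sides
                         * rate_per_page)
        return self.job_cost

    def pass_job_to_printer(self):
        # In a full design this would hand off to a printer-queue component.
        return f"Queued {self.number_of_pages}-page job on {self.paper_type}"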
Views of a Component
A component can have three different views − object-oriented view, conventional view,
and process-related view.
Object-oriented view
A component is viewed as a set of one or more cooperating classes.
Each problem domain class (analysis) and infrastructure class (design)
is elaborated to identify all attributes and operations that apply to
its implementation. It also involves defining the interfaces that
enable classes to communicate and cooperate.
Conventional view
It is viewed as a functional element or a module of a program that integrates the
processing logic, the internal data structures that are required to implement the
processing logic and an interface that enables the component to be invoked and data to
be passed to it.
Process-related view
In this view, instead of creating each component from scratch, the
system is built from existing components maintained in a library. As
the software architecture is formulated, components are selected from
the library and used to populate the architecture.
A user interface (UI) component includes grids and buttons, referred
to as controls, while utility components expose a specific subset of
functions used in other components.
Characteristics of Components
Reusability − Components are usually designed to be reused in
different situations in different applications. However, some
components may be designed for a specific task.
Replaceable − Components may be freely substituted with other
similar components.
Not context specific − Components are designed to operate in
different environments and contexts.
Extensible − A component can be extended from existing
components to provide new behavior.
Encapsulated − A component depicts the interfaces, which allow
the caller to use its functionality, and does not expose details of
the internal processes or any internal variables or state.
Independent − Components are designed to have minimal
dependencies on other components.
UNIT IV
Software Validation
Validation is the process of examining whether or not the software
satisfies the user requirements. It is carried out at the end of the
SDLC. If the software matches the requirements for which it was made,
it is validated.
Validation ensures the product under development meets the user
requirements.
Validation answers the question - "Are we developing the product which
attempts all that the user needs from this software?"
Validation emphasizes user requirements.
Software Verification
Verification is the process of confirming that the software meets the
business requirements and is developed by adhering to the proper
specifications and methodologies.
Verification ensures the product being developed is according to
design specifications.
Verification answers the question - "Are we developing this product by
firmly following all design specifications?"
Verification concentrates on the design and system specifications.
Manual vs Automated Testing
Testing can either be done manually or using an automated testing tool:
Manual - This testing is performed without the help of automated
testing tools. The software tester prepares test cases for different
sections and levels of the code, executes the tests and reports the
results to the manager. Manual testing is time- and
resource-consuming. The tester needs to confirm whether or not the
right test cases are used. A major portion of testing involves manual
testing.
Automated - This testing is a testing procedure done with the aid of
automated testing tools. The limitations of manual testing can be
overcome using automated test tools.
For example, a test needs to check if a webpage can be opened in
Internet Explorer. This can easily be done with manual testing. But to
check if the web server can take the load of 1 million users, it is
quite impossible to test manually. There are software and hardware
tools that help the tester in conducting load testing, stress testing,
and regression testing.
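A minimal sketch of automated testing using Python’s built-in unittest
module; the function under test (compute_discount) is a made-up
example, not part of the original notes.

import unittest

def compute_discount(amount):
    # Hypothetical rule: 10% discount on orders of 100 or more.
    return amount * 0.9 if amount >= 100 else amount

class DiscountTests(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        self.assertEqual(compute_discount(100), 90.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(compute_discount(99), 99)

if __name__ == "__main__":
    unittest.main()   # the tool runs the cases and reports the results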
Black-box testing
Black-box testing is carried out to test the functionality of the
program. It is also called ‘behavioral’ testing. The tester in this
case has a set of input values and the respective desired results. On
providing input, if the output matches the desired results, the
program is tested ‘ok’, and problematic otherwise.
In this testing method, the design and structure of the code are not
known to the tester; testing engineers and end users conduct this test
on the software.
Black-box testing techniques:
Equivalence class - The input is divided into similar classes. If one
element of a class passes the test, it is assumed that the whole class
passes (see the sketch after this list).
Boundary values - The input is divided into higher and lower end
values. If these values pass the test, it is assumed that all values
in between may pass too.
Cause-effect graphing - In both previous methods, only one input value
at a time is tested. Cause (input) - effect (output) is a testing
technique where combinations of input values are tested in a
systematic way.
Pair-wise testing - The behavior of software depends on multiple
parameters. In pairwise testing, the multiple parameters are tested
pair-wise for their different values.
State-based testing - The system changes state on provision of input.
These systems are tested based on their states and input.
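A small sketch of equivalence-class and boundary-value selection,
assuming a hypothetical rule that ages 18 to 60 (inclusive) are valid.

def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence classes: one representative is assumed to stand for
# every member of its class.
assert is_valid_age(35)        # valid class
assert not is_valid_age(10)    # invalid class (too low)
assert not is_valid_age(75)    # invalid class (too high)

# Boundary values: test at and just outside both edges of the range.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected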
White-box testing
In this testing method, the design and structure of the code are known
to the tester. Programmers of the code conduct this test on the code.
Below are some white-box testing techniques:
Control-flow testing - The purpose of control-flow testing is to set
up test cases which cover all statements and branch conditions. The
branch conditions are tested for both being true and false, so that
all statements can be covered (see the sketch after this list).
Data-flow testing - This testing technique emphasizes covering all the
data variables included in the program. It tests where the variables
were declared and defined and where they were used or changed.
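A minimal sketch of control-flow (branch) testing: each condition in
the hypothetical function below is exercised as both true and false,
which also covers every statement.

def classify(n):
    if n < 0:       # branch 1
        return "negative"
    if n == 0:      # branch 2
        return "zero"
    return "positive"

assert classify(-5) == "negative"   # branch 1 true
assert classify(0) == "zero"        # branch 1 false, branch 2 true
assert classify(7) == "positive"    # both branches false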
Testing Levels
Testing itself may be defined at various levels of the SDLC. The
testing process runs parallel to software development. Before jumping
to the next stage, a stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden
bugs or issues left in the software.
Software is tested on various levels -
Unit Testing
While coding, the programmer performs some tests on that unit of the
program to check whether it is error-free. Testing is performed under
the white-box testing approach. Unit testing helps developers verify
that individual units of the program are working as per requirement
and are error-free.
Integration Testing
Even if the units of software are working fine individually, there is
a need to find out whether the units, when integrated together, would
also work without errors - for example, argument passing and data
updating.
System Testing
The software is compiled as a product and then it is tested as a
whole. This can be accomplished using one or more of the following
tests:
Performance testing - This test proves how efficient the software is.
It tests the effectiveness and the average time taken by the software
to do a desired task. Performance testing is done by means of load
testing and stress testing, where the software is put under high user
and data load under various environmental conditions.
Security & portability - These tests are done when the software is
meant to work on various platforms and to be accessed by a number of
people.
Acceptance Testing
When the software is ready to hand over to the customer, it has to go
through the last phase of testing, where it is tested for
user-interaction and response. This is important because even if the
software matches all user requirements, if the user does not like the
way it appears or works, it may be rejected.
Beta testing - After the software is tested internally, it is handed
over to the users to use it in their production environment only for
testing purposes. This is not yet the delivered product. Developers
expect that users at this stage will surface minute problems that were
skipped earlier.
Regression Testing
Whenever a change is made to the software (a bug fix or an
enhancement), previously passed tests are executed again to confirm
that the existing functionality has not been broken by the change.
Structured Programming
In the process of coding, the lines of code keep multiplying and thus
the size of the software increases. Gradually, it becomes next to
impossible to remember the flow of the program. If one forgets how the
software and its underlying programs, files, and procedures are
constructed, it becomes very difficult to share, debug and modify the
program (see the sketch below for the kind of decomposition structured
programming encourages).
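A minimal sketch of structured decomposition in Python: the task is
split into small, single-purpose routines, each with one entry and one
exit, so the program flow stays easy to follow as the code base grows.
The file format and function names are made-up examples.

def read_scores(path):
    # One job: parse a file of integer scores, one per line.
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

def average(scores):
    # One job: compute the mean, guarding against an empty list.
    return sum(scores) / len(scores) if scores else 0.0

def report(path):
    # Top-level flow reads as a sequence of named steps.
    scores = read_scores(path)
    print(f"{len(scores)} scores, average {average(scores):.1f}")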
Software Implementation
Common Pitfalls
Refactoring does not mean:
rewriting code
fixing bugs
improving observable aspects of software such as its interface
Refactoring in the absence of safeguards against introducing defects (i.e. violating the
“behaviour preserving” condition) is risky. Safeguards include aids to regression testing
including automated unit tests or automated acceptance tests, and aids to formal
reasoning such as type systems.
Expected Benefits
The following are claimed benefits of refactoring (a small
illustration follows the list):
refactoring improves objective attributes of code (length, duplication, coupling and
cohesion, cyclomatic complexity) that correlate with ease of maintenance
refactoring helps code understanding
refactoring encourages each developer to think about and understand design decisions,
in particular in the context of collective ownership / collective code ownership
refactoring favors the emergence of reusable design elements (such as design patterns)
and code modules
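A minimal sketch of a behavior-preserving refactoring, assuming a
made-up invoicing example: duplicated tax logic is extracted into one
helper, and a check confirms the observable behavior is unchanged.

# Before: the tax factor is hard-coded inline.
def invoice_total_before(items):
    total = 0.0
    for qty, price in items:
        total += qty * price * 1.18
    return total

# After: extract-function refactoring names the tax rule once.
def with_tax(amount, rate=0.18):
    return amount * (1 + rate)

def invoice_total_after(items):
    return sum(with_tax(qty * price) for qty, price in items)

# A regression check guards the "behaviour preserving" condition.
items = [(2, 10.0), (1, 5.0)]
assert abs(invoice_total_before(items) - invoice_total_after(items)) < 1e-9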
UNIT V
PROJECT MANAGEMENT
The job pattern of an IT company engaged in software development
can be seen as split into two parts:
Software Creation
Software Project Management
Step 4 − Reconcile estimates: Compare the resulting values from Step 3
to those obtained from Step 2. If both sets of estimates agree, then
your numbers are highly reliable. Otherwise, if widely divergent
estimates occur, conduct further investigation (a small divergence
check is sketched after Step 5) concerning whether −
The scope of the project is not adequately understood or has been misinterpreted.
The function and/or activity breakdown is not accurate.
Historical data used for the estimation techniques is inappropriate for the application, or
obsolete, or has been misapplied.
Step 5 − Determine the cause of divergence and then reconcile the estimates.
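A quick sketch of the reconciliation check: flag two independently
derived effort estimates for investigation when they diverge beyond a
tolerance. The figures and the 20% tolerance are hypothetical.

def reconcile(estimate_a, estimate_b, tolerance=0.20):
    # Relative divergence measured against the smaller estimate.
    divergence = abs(estimate_a - estimate_b) / min(estimate_a, estimate_b)
    return divergence <= tolerance, divergence

ok, d = reconcile(24.0, 30.0)   # person-months from two techniques
print(f"divergence = {d:.0%}, {'reconciled' if ok else 'investigate'}")
# divergence = 25%, investigate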
Estimation Techniques - Function Points
A Function Point (FP) is a unit of measurement to express the amount
of business functionality an information system (as a product)
provides to a user. FPs measure software size. They are widely
accepted as an industry standard for functional sizing.
For sizing software based on FP, several recognized standards and/or public
specifications have come into existence. As of 2013, these are −
ISO Standards
COSMIC − ISO/IEC 19761:2011 Software engineering. A functional size
measurement method.
FiSMA − ISO/IEC 29881:2008 Information technology - Software and systems
engineering - FiSMA 1.1 functional size measurement method.
IFPUG − ISO/IEC 20926:2009 Software and systems engineering - Software
measurement - IFPUG functional size measurement method.
Mark-II − ISO/IEC 20968:2002 Software engineering - Mk II Function
Point Analysis - Counting Practices Manual.
NESMA − ISO/IEC 24570:2005 Software engineering - NESMA function size
measurement method version 2.1 - Definitions and counting guidelines for the
application of Function Point Analysis.
Object Management Group Specification for Automated Function Points
External Inputs
External Input (EI) is a transaction function in which data goes “into” the
application from outside the boundary to inside. This data comes from
outside the application.
Data may come from a data input screen or another application.
An EI is how an application gets information.
Data can be either control information or business information.
Data may be used to maintain one or more Internal Logical Files.
If the data is control information, it does not have to update an Internal Logical File.
External Outputs
External Output (EO) is a transaction function in which data comes “out” of the system.
Additionally, an EO may update an ILF. The data creates reports or output files sent to
other applications.
External Inquiries
External Inquiry (EQ) is a transaction function with both input and output components
that result in data retrieval.
Definition of RETs, DETs, FTRs
Record Element Type
A Record Element Type (RET) is the largest user identifiable subgroup of elements
within an ILF or an EIF. It is best to look at logical groupings of data to help identify
them.
The transaction functions EI, EO, EQ are measured by counting FTRs and DETs that
they contain following counting rules. Likewise, data functions ILF and EIF are
measured by counting DETs and RETs that they contain following counting rules. The
measures of transaction functions and data functions are used in FP counting which
results in the functional size or function points.
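As a worked illustration, the sketch below computes an unadjusted
function point (UFP) count as the weighted sum of the five function
types, using the IFPUG average-complexity weights; the counts
themselves are hypothetical.

AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts, weights=AVERAGE_WEIGHTS):
    # Weighted sum over the five function types.
    return sum(counts.get(kind, 0) * w for kind, w in weights.items())

counts = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}
print(unadjusted_fp(counts))   # 10*4 + 6*5 + 4*4 + 5*10 + 2*7 = 150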
COCOMO Model
COCOMO (Constructive Cost Model) is a regression model based on LOC,
i.e., the number of lines of code. It is a procedural cost estimation
model for software projects, often used to reliably predict the
various parameters associated with a project such as size, effort,
cost, time and quality. It was proposed by Barry Boehm in 1981 and is
based on a study of 63 projects, which makes it one of the
best-documented models.
The key parameters that define the quality of any software product,
and which are also outputs of COCOMO, are primarily effort and
schedule:
Effort: the amount of labor that will be required to complete a task,
measured in person-months.
Schedule: simply the amount of time required to complete the job,
which is, of course, proportional to the effort put in. It is measured
in units of time such as weeks or months.
Boehm’s definition
Boehm’s definition of organic, semidetached, and embedded systems:
Organic – A software project is said to be an organic type if the team size
required is adequately small, the problem is well understood and has been
solved in the past and also the team members have a nominal experience
regarding the problem.
Semi-detached – A software project is said to be a Semi-detached type if the
vital characteristics such as team-size, experience, knowledge of the various
programming environment lie in between that of organic and Embedded. The
projects classified as Semi-Detached are comparatively less familiar and
difficult to develop compared to the organic ones and require more experience
and better guidance and creativity. Eg: Compilers or different Embedded
Systems can be considered of Semi-Detached type.
Embedded – A software project requiring the highest level of
complexity, creativity, and experience falls under this category. Such
software requires a larger team size than the other two models, and
the developers need to be sufficiently experienced and creative to
develop such complex models.
All the above system types use different values of the constants in
the effort calculations (see the sketch below).
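A minimal sketch of the Basic COCOMO computation, effort
E = a * (KLOC)^b person-months and development time D = c * E^d
months, using the standard published constants for the three modes;
the 32 KLOC input is a made-up example.

COEFFS = {                      # mode -> (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # months
    return effort, time

effort, time = basic_cocomo(32, "organic")
print(f"effort = {effort:.1f} PM, schedule = {time:.1f} months")
# roughly 91 person-months and 14 months for this example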
Types of Models: COCOMO consists of a hierarchy of three increasingly
detailed and accurate forms. Any of the three forms can be adopted according
to our requirements. These are types of COCOMO model:
Basic COCOMO Model
Intermediate COCOMO Model
Detailed COCOMO Model
The first level, Basic COCOMO can be used for quick and slightly rough
calculations of Software Costs. Its accuracy is somewhat restricted due to the
absence of sufficient factor considerations.