Software Engineering Tutorial
Software Evolution
The process of developing a software product using software engineering
principles and methods is referred to as software evolution. This
includes the initial development of the software and its maintenance and
updates, until the desired software product is developed that satisfies
the expected requirements.
Software Paradigms
Software paradigms refer to the methods and steps taken while designing
the software. Many methods have been proposed and are in use today, but
we need to see where these paradigms stand in software engineering. They
can be combined into various categories, each nested within the other:
Design
Maintenance
Programming
Programming Paradigm
This paradigm relates closely to the programming aspect of software
development. It includes –
Coding
Testing
Integration
Well-engineered and crafted software is expected to have the following
characteristics:
Operational
Transitional
Maintenance
Operational
This tells us how well the software works in operation. It can be
measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety
Transitional
This aspect is important when the software is moved from one platform to
another:
Portability
Interoperability
Reusability
Adaptability
Maintenance
This aspect describes how well the software has the capability to
maintain itself in an ever-changing environment:
Modularity
Maintainability
Flexibility
Scalability
In short, software engineering is a branch of computer science that uses
well-defined engineering concepts to produce efficient, durable,
scalable, in-budget, and on-time software products.
Requirement Engineering
Requirements engineering (RE) refers to the process of defining,
documenting, and maintaining requirements in the engineering design
process. Requirement engineering provides the appropriate mechanism to
understand what the customer desires, analyze the need, assess
feasibility, negotiate a reasonable solution, specify the solution
clearly, validate the specification, and manage the requirements as they
are transformed into a working system. Thus, requirement engineering is
the disciplined application of proven principles, methods, tools, and
notations to describe a proposed system's intended behavior and its
associated constraints.
1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management
1. Feasibility Study:
The objective of the feasibility study is to establish the reasons for
developing software that is acceptable to users, flexible to change, and
conformant to established standards.
Types of Feasibility: technical, operational, economic, legal, and
schedule feasibility.
SDLC Activities
SDLC provides a series of steps to be followed to design and develop a
software product efficiently. The SDLC framework includes the following steps:
Communication
This is the first step, where the user initiates the request for a
desired software product. The user contacts the service provider, tries
to negotiate the terms, and submits the request to the service-providing
organization in writing.
Requirement Gathering
From this step onwards, the software development team works to carry the
project forward. The team holds discussions with various stakeholders
from the problem domain and tries to bring out as much information as
possible about their requirements. The requirements are contemplated and
segregated into user requirements, system requirements, and functional
requirements. The requirements are collected using a number of practices,
such as interviews, questionnaires, task analysis, brainstorming, and
observation (described later in this tutorial).
System Analysis
At this step the developers decide on a roadmap for their plan and try to
come up with the best software model suitable for the project. System
analysis includes understanding software product limitations, learning
about system-related problems or changes to be made in existing systems
beforehand, and identifying and addressing the impact of the project on
the organization and personnel. The project team analyzes the scope of
the project and plans the schedule and resources accordingly.
Software Design
The next step is to bring the whole body of knowledge from requirements
and analysis to the desk and design the software product. The inputs from
users and the information gathered in the requirement-gathering phase are
the inputs of this step. The output of this step comes in the form of two
designs: logical design and physical design. Engineers produce meta-data
and data dictionaries, logical diagrams, data-flow diagrams, and in some
cases pseudo code.
Coding
This step is also known as the programming phase. The implementation of
the software design starts in terms of writing program code in a suitable
programming language and developing error-free executable programs
efficiently.
Testing
An estimate says that 50% of the whole software development effort should
be spent on testing. Errors can damage the software anywhere from the
critical level down to its outright removal. Software testing is done
while coding, by the developers, and thorough testing is conducted by
testing experts at various levels of code such as module testing, program
testing, product testing, in-house testing, and testing the product at
the user's end. Early discovery of errors and their remedy is the key to
reliable software.
Integration
Software may need to be integrated with the libraries, databases and
other program(s). This stage of SDLC is involved in the integration of
software with outer world entities.
Implementation
This means installing the software on user machines. At times, software
needs post-installation configurations at user end. Software is tested for
portability and adaptability and integration related issues are solved
during implementation.
Operation and Maintenance
This phase confirms that the software operates with greater efficiency
and fewer errors. If required, the users are trained in, or aided with,
documentation on how to operate the software and how to keep it
operational. The software is maintained over time by updating the code
according to the changes taking place in the user's environment or
technology. This phase may face challenges from hidden bugs and
unidentified real-world problems.
Disposition
As time elapses, the software may decline on the performance front. It
may become completely obsolete or may need intense upgrades. Hence a
pressing need to eliminate a major portion of the system arises. This
phase includes archiving data and required software components, closing
down the system, planning disposition activity, and terminating the
system at the appropriate end-of-system time.
Waterfall Model
Winston Royce introduced the Waterfall Model in 1970. This model has five
phases: requirements analysis and specification; design; implementation
and unit testing; integration and system testing; and operation and
maintenance. The steps always follow this order and do not overlap. The
developer must complete each phase before the next phase begins. The
model is named the "Waterfall Model" because its diagrammatic
representation resembles a cascade of waterfalls.
1. Requirements analysis and specification phase: The aim
of this phase is to understand the exact requirements of the
customer and to document them properly. Both the customer
and the software developer work together to document
all the functions, performance, and interfacing requirements of
the software. It describes the "what" of the system to be
produced, not the "how." In this phase, a large document
called the Software Requirement Specification
(SRS) is created, containing a detailed description of what
the system will do, in plain, common language.
2. Design Phase: This phase aims to transform the requirements gathered
in the SRS into a suitable form that permits further coding in a
programming language. It defines the overall software architecture
together with the high-level and detailed design. All this work is
documented as a Software Design Document (SDD).
3. Implementation and unit testing: During this phase, the design is
implemented. If the SDD is complete, the implementation or coding phase
proceeds smoothly, because all the information needed by the software
developers is contained in the SDD.
During testing, the code is thoroughly examined and modified. Small
modules are tested in isolation initially. After that, these modules are
tested by writing some overhead code to check the interaction between
the modules and the flow of intermediate output.
4. Integration and System Testing: This phase is highly crucial, as the
quality of the end product is determined by the effectiveness of the
testing carried out. Better output leads to satisfied customers, lower
maintenance costs, and accurate results. Unit testing determines the
efficiency of individual modules. In this phase, however, the modules are
tested for their interactions with each other and with the system.
5. Operation and maintenance phase: Maintenance is the task performed
once the software has been delivered to the customer, installed, and made
operational.
When to use SDLC Waterfall Model?
Some circumstances where the use of the Waterfall model is most suited
are:
o When the requirements are constant and do not change regularly.
o When the project is short.
o When the environment is stable.
o Where the tools and technology used are consistent and not changing.
o When resources are well prepared and available for use.
Advantages of Waterfall model
o This model is simple to implement, and the number of resources required
for it is minimal.
o The requirements are simple and explicitly declared; they remain
unchanged during the entire project development.
o The start and end points for each phase are fixed, which makes it easy
to track progress.
o The release date for the complete product, as well as its final cost,
can be determined before development.
o It gives easy control and clarity for the customer due to a strict
reporting system.
Disadvantages of Waterfall model
o In this model, the risk factor is higher, so it is not suitable for
large and complex projects.
o This model cannot accommodate changes in requirements during
development.
o It becomes tough to go back to an earlier phase. For example, if the
application has moved on to the coding phase and there is a change in
requirements, it becomes tough to go back and change it.
o Since testing is done at a later stage, it does not allow identifying
challenges and risks in earlier phases, so a risk reduction strategy is
difficult to prepare.
Spiral Model
The spiral model, initially proposed by Boehm, is an evolutionary software
process model that couples the iterative feature of prototyping with the
controlled and systematic aspects of the linear sequential model. It
provides the potential for rapid development of new versions of the
software. Using the spiral model, the software is developed in a series
of incremental releases. During the early iterations, the incremental
release may be a paper model or prototype. During later iterations, more
and more complete versions of the engineered system are produced.
Each cycle in the spiral is divided into four parts:
Objective setting: Each cycle in the spiral starts with the identification
of the purpose for that cycle, the various alternatives possible for
achieving the targets, and the constraints that exist.
Risk assessment and reduction: The next phase in the cycle is to evaluate
these various alternatives against the goals and constraints. The focus
of evaluation in this stage is on the risk perceived for the project.
Development and validation: The next phase is to develop strategies
that resolve uncertainties and risks. This process may include activities
such as benchmarking, simulation, and prototyping.
Planning: Finally, the next step is planned. The project is reviewed, and
a choice is made whether to continue with a further cycle of the spiral.
If it is decided to continue, plans are drawn up for the next phase of
the project.
The development phase depends on the remaining risks. For example, if
performance or user-interface risks are considered more critical than the
program development risks, the next phase may be an evolutionary
development that includes developing a more detailed prototype for
resolving those risks.
The risk-driven feature of the spiral model allows it to accommodate any
mixture of specification-oriented, prototype-oriented,
simulation-oriented, or other approaches. An essential element of the
model is that each cycle of the spiral is completed by a review covering
all the products developed during that cycle, including plans for the
next cycle. The spiral model works for development as well as enhancement
projects.
V-Model
The V-Model is also referred to as the Verification and Validation Model.
In it, each phase of the SDLC must be complete before the next phase
starts. It follows a sequential design process, the same as the waterfall
model. Testing of the product is planned in parallel with the
corresponding stage of development.
Incremental Model
The Incremental Model is a process of software development where the
requirements are divided into multiple standalone modules of the software
development cycle. In this model, each module goes through the
requirements, design, implementation, and testing phases. Every
subsequent release of a module adds function to the previous release.
The process continues until the complete system is achieved.
The various phases of incremental model are as follows:
1. Requirement analysis: In the first phase of the incremental model,
product analysis experts identify the requirements, and the system's
functional requirements are understood by the requirement analysis team.
This phase plays a crucial role in developing software under the
incremental model.
2. Design & Development: In this phase of the incremental model of the
SDLC, the design of the system's functionality and the development method
are completed successfully. Whenever new functionality is added, the
increment passes through the design and development phase.
3. Testing: In the incremental model, the testing phase checks the
performance of each existing function as well as the additional
functionality. In the testing phase, various methods are used to test the
behavior of each task.
4. Implementation: The implementation phase enables the coding of the
development system. It involves the final coding of the design produced
in the design and development phase, and tests the functionality
established in the testing phase. After completion of this phase, the
working functionality of the product is enhanced and upgraded, up to the
final system product.
When do we use the Incremental Model?
o When the requirements are clearly understood.
o When a project has a lengthy development schedule.
o When the software team is not very well skilled or trained.
o When the customer demands a quick release of the product.
o When prioritized requirements can be developed first.
Advantage of Incremental Model
o Errors are easy to recognize.
o Easier to test and debug.
o More flexible.
o Risk is simpler to manage because it is handled during each iteration.
o The Client gets important functionality early.
Disadvantage of Incremental Model
o It needs good planning.
o Total cost is high.
o Well-defined module interfaces are needed.
Agile Model
The meaning of Agile is swift or versatile. "Agile process model" refers
to a software development approach based on iterative development. Agile
methods break tasks into smaller iterations, or parts, and do not
directly involve long-term planning. The project scope and requirements
are laid down at the beginning of the development process. Plans
regarding the number of iterations, and the duration and scope of each
iteration, are clearly defined in advance.
Each iteration is considered as a short time "frame" in the Agile process
model, which typically lasts from one to four weeks. The division of the
entire project into smaller parts helps to minimize the project risk and to
reduce the overall project delivery time requirements. Each iteration
involves a team working through a full software development life cycle
including planning, requirements analysis, design, coding, and testing
before a working product is demonstrated to the client.
Iterative Model
In this model, you can start with some of the software specifications and
develop the first version of the software. If, after the first version,
there is a need to change the software, then a new version of the
software is created in a new iteration. Every release of the iterative
model finishes in an exact and fixed period called an iteration.
The iterative model allows revisiting earlier phases, in which variations
are made accordingly. The final output of the project is renewed at the
end of the Software Development Life Cycle (SDLC) process.
Prototype Model
The prototype model requires that, before carrying out the development of
the actual software, a working prototype of the system should be built. A
prototype is a toy implementation of the system. A prototype usually
turns out to be a very crude version of the actual system, possibly
exhibiting limited functional capabilities, low reliability, and
inefficient performance compared to the actual software. In many
instances, the client only has a general view of what is expected from
the software product. In such a scenario, where there is an absence of
detailed information regarding the input to the system, the processing
needs, and the output requirements, the prototyping model may be
employed.
Software Project Management
A project is a well-defined task: a collection of several operations
performed in order to achieve a goal (for example, developing and
delivering software).
Software Project
A software project is the complete procedure of software development,
from requirement gathering to testing and maintenance, carried out
according to the execution methodologies in a specified period of time to
achieve the intended software product.
Managing People
Act as project leader
Liaise with stakeholders
Manage human resources
Set up the reporting hierarchy, etc.
Managing Project
Define and set up the project scope
Manage project management activities
Monitor progress and performance
Perform risk analysis at every phase
Take necessary steps to avoid or recover from problems
Act as project spokesperson
Project Planning
Scope Management
Project Estimation
Project Planning
Software project planning is a task performed before the production of
software actually starts. It is there for the software production but
involves no concrete activity that has any direct connection with
software production; rather, it is a set of multiple processes that
facilitate software production. Project planning may include the
following:
Scope Management
It defines the scope of the project; this includes all the activities and
processes that need to be done in order to make a deliverable software
product. Scope management is essential because it creates the boundaries
of the project by clearly defining what will and will not be done. This
makes the project contain limited and quantifiable tasks, which can
easily be documented, and which in turn avoids cost and time overrun.
During project scope management, it is necessary to define the scope,
decide its verification and control, divide the project into smaller
parts for ease of management, verify the scope, and control it by
incorporating changes to the scope.
Project Estimation
For effective management, accurate estimation of various measures is a
must. With correct estimation, managers can manage and control the
project more efficiently and effectively.
Project estimation may involve the following: software size estimation,
effort estimation, time estimation, and cost estimation.
Decomposition Technique
This technique assumes the software to be a product of various
compositions, which can be sized and estimated separately.
There are two main estimation models -
Putnam Model
This model was developed by Lawrence H. Putnam and is based on
Norden's frequency distribution (Rayleigh curve). The Putnam
model maps the time and effort required to the software size.
COCOMO
COCOMO stands for COnstructive COst MOdel, developed by
Barry W. Boehm. It divides the software product into three
categories of software: organic, semi-detached and
embedded.
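As a sketch of how COCOMO's basic equations are applied: effort is
estimated as a * KLOC^b person-months and development time as
c * Effort^d months, using Boehm's published constants for the basic
model. The 32-KLOC organic project below is an assumed example.
# Basic COCOMO: Effort = a * KLOC^b person-months, Time = c * Effort^d months.
# (a, b, c, d) are Boehm's basic-model constants per project category.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}
def basic_cocomo(kloc, category):
    a, b, c, d = COEFFS[category]
    effort = a * kloc ** b   # person-months
    time = c * effort ** d   # months
    return effort, time
# Assumed example: a 32-KLOC organic project
effort, time = basic_cocomo(32, "organic")
print(f"effort = {effort:.1f} person-months, schedule = {time:.1f} months")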
Project Scheduling
Project scheduling refers to the roadmap of all activities to be done, in
a specified order and within the time slot allotted to each activity.
Project managers define the various tasks and project milestones and
arrange them keeping various factors in mind. They look for tasks that
lie on the critical path in the schedule, which must be completed in a
specific manner (because of task interdependency) and strictly within the
time allocated. Tasks that lie outside the critical path are less likely
to impact the overall schedule of the project.
For scheduling a project, it is necessary to break the project down into
tasks, establish their interdependencies, estimate the time frame for
each task, and allot adequate work units to each.
Resource management
All elements used to develop a software product may be considered
resources for that project. This may include human resources, productive
tools, and software libraries.
Resources are available in limited quantities and stay in the
organization as a pool of assets. A shortage of resources hampers the
development of the project, and it can lag behind schedule. Allocating
extra resources increases development cost in the end. It is therefore
necessary to estimate and allocate adequate resources for the project.
Resource management includes defining a proper project organization,
identifying the resources required at a particular stage and their
availability, and managing resources by generating requests when they are
needed and de-allocating them when they are no longer required.
Risk Management
Risk management involves all activities pertaining to the identification
and analysis of, and provision for, predictable and non-predictable risks
in the project. Risks may include the following -
Experienced staff leaving the project and new staff coming in.
Change in organizational management.
Requirement change or misinterpreting requirement.
Under-estimation of required time and resources.
Technological changes, environmental changes, business
competition.
Configuration Management
Configuration management is a process of tracking and controlling the
changes in software in terms of the requirements, design, functions and
development of the product.
IEEE defines it as “the process of identifying and defining the items in the
system, controlling the change of these items throughout their life cycle,
recording and reporting the status of items and change requests, and
verifying the completeness and correctness of items”.
Generally, once the SRS is finalized, there is less chance of changes
being requested by the user. If changes do occur, they are addressed only
with the prior approval of higher management, as there is a possibility
of cost and time overrun.
Baseline
A phase of the SDLC is assumed to be over once it is baselined, i.e. a
baseline is a measurement that defines the completeness of a phase. A
phase is baselined when all activities pertaining to it are finished and
well documented. If it is not the final phase, its output is used in the
next immediate phase.
Configuration management is a discipline of organizational administration
that takes care of any change (process, requirement, technological,
strategic, etc.) occurring after a phase is baselined. CM keeps a check
on all changes made to the software.
Change Control
Change control is a function of configuration management, which ensures
that all changes made to the software system are consistent and made
according to organizational rules and regulations.
A change in the configuration of product goes through following steps -
Identification - A change request arrives from either an internal
or an external source. When the change request is formally
identified, it is properly documented.
Validation - The validity of the change request is checked and its
handling procedure is confirmed.
Analysis - The impact of the change request is analyzed in terms
of schedule, cost, and required effort. The overall impact of the
prospective change on the system is analyzed.
Control - If the prospective change either impacts too many
entities in the system or is unavoidable, it is mandatory to take
approval from higher authorities before the change is incorporated
into the system. It is decided whether the change is worth
incorporating. If it is not, the change request is formally
refused.
Execution - If the previous phase decides to execute the change
request, this phase takes the appropriate actions to execute the
change, carrying out a thorough revision if necessary.
Close request - The change is verified for correct implementation
and merging with the rest of the system. The newly incorporated
change in the software is documented properly and the request is
formally closed.
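The steps above can be pictured as a simple state flow. A minimal sketch
(the state names come from the list above; the transition table itself is
illustrative):
# Allowed transitions between change-request states. A request may be
# refused at the control step; otherwise it moves forward until closed.
TRANSITIONS = {
    "identified": {"validated"},
    "validated":  {"analyzed"},
    "analyzed":   {"controlled"},
    "controlled": {"executing", "refused"},
    "executing":  {"closed"},
}
def advance(state, next_state):
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state
state = "identified"
for step in ("validated", "analyzed", "controlled", "executing", "closed"):
    state = advance(state, step)
print(state)  # closed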
Gantt Chart
The Gantt chart was devised by Henry Gantt (1917). It represents the
project schedule with respect to time periods. It is a horizontal bar
chart, with bars representing the activities and the time scheduled for
each project activity.
PERT Chart
A PERT (Program Evaluation & Review Technique) chart is a tool that
depicts the project as a network diagram. It is capable of graphically
representing the main events of a project in both a parallel and a
consecutive way. Events that occur one after another show the dependency
of the later event on the previous one.
Resource Histogram
This is a graphical tool containing bars representing the number of
resources (usually skilled staff) required over time for a project event
(or phase). The resource histogram is an effective tool for staff
planning and coordination.
Critical Path Analysis
This tool is useful in recognizing interdependent tasks in the project.
It also helps find the critical path: the chain of dependent tasks that
determines the shortest possible time in which the project can be
completed. Like a PERT diagram, each event is allotted a specific time
frame. This tool shows the dependency of events, assuming an event can
proceed to the next only if the previous one is completed.
The events are arranged according to their earliest possible start time.
The path between the start and end nodes is the critical path, which
cannot be reduced further, and on which all events must be executed in
the same order.
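A minimal sketch of the underlying computation (the tasks, durations, and
dependencies below are hypothetical): each task's earliest finish time is
its duration plus the latest finish among its prerequisites, so the final
task's earliest finish is the critical-path length.
from functools import lru_cache
# Hypothetical tasks: name -> (duration in days, prerequisite tasks).
TASKS = {
    "design":  (3, ()),
    "code":    (5, ("design",)),
    "test":    (2, ("code",)),
    "docs":    (2, ("design",)),
    "release": (1, ("test", "docs")),
}
@lru_cache(maxsize=None)
def earliest_finish(name):
    duration, deps = TASKS[name]
    return duration + max((earliest_finish(d) for d in deps), default=0)
# Critical path: design -> code -> test -> release = 3 + 5 + 2 + 1 = 11 days.
print(earliest_finish("release"))  # 11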
Software Requirements
Software requirements are a description of the features and
functionalities of the target system. Requirements convey the
expectations of users from the software product. Requirements can be
obvious or hidden, known or unknown, and expected or unexpected from the
client's point of view.
Requirement Engineering
The process of gathering the software requirements from the client, and
analyzing and documenting them, is known as requirement engineering.
The goal of requirement engineering is to develop and maintain a
sophisticated and descriptive 'System Requirements Specification'
document.
Feasibility Study
Requirement Gathering
Software Requirement Specification
Software Requirement Validation
Let us see the process briefly -
Feasibility study
When the client approaches the organization to get the desired product
developed, they come with a rough idea about what functions the software
must perform and which features are expected.
Referencing this information, the analysts do a detailed study of whether
the desired system and its functionality are feasible to develop. The
feasibility study is focused on the goal of the organization. It analyzes
whether the software product can be practically materialized in terms of
implementation, contribution of the project to the organization, cost
constraints, and alignment with the values and objectives of the
organization. It also explores technical aspects of the project and
product, such as usability, maintainability, productivity, and
integration ability.
The output of this phase should be a feasibility study report containing
adequate comments and recommendations for management about whether or not
the project should be undertaken.
Requirement Gathering
If the feasibility report is positive about undertaking the project, the
next phase starts with gathering requirements from the user. Analysts and
engineers communicate with the client and end-users to learn their ideas
about what the software should provide and which features they want it to
include.
Interviews
Interviews are a strong medium for collecting requirements. An
organization may conduct several types of interviews, such as structured
(closed) interviews, non-structured (open) interviews, oral or written
interviews, and one-to-one or group interviews.
Questionnaires
A document with a pre-defined set of objective questions and respective
options is handed over to all stakeholders to answer; the responses are
collected and compiled.
A shortcoming of this technique is that if an option for some issue is
not mentioned in the questionnaire, the issue might be left unattended.
Task analysis
A team of engineers and developers may analyze the operation for which
the new system is required. If the client already has some software
performing a certain operation, it is studied and the requirements of the
proposed system are collected.
Domain Analysis
Every software product falls into some domain category. Expert people in
the domain can be a great help in analyzing general and specific
requirements.
Brainstorming
An informal debate is held among various stakeholders and all their inputs
are recorded for further requirements analysis.
Prototyping
Prototyping is building a user interface, without adding detailed
functionality, for the user to interpret the features of the intended
software product. It helps give a better idea of the requirements. If
there is no software installed at the client's end for the developer's
reference, and the client is not aware of their own requirements, the
developer creates a prototype based on the initially mentioned
requirements. The prototype is shown to the client and the feedback is
noted. The client's feedback serves as input for requirement gathering.
Observation
A team of experts visits the client's organization or workplace. They
observe the actual working of the existing installed systems, the
workflow at the client's end, and how execution problems are dealt with.
The team then draws conclusions that help form the requirements expected
from the software.
Requirement Characteristics
Gathered requirements should be -
Clear
Correct
Consistent
Coherent
Comprehensible
Modifiable
Verifiable
Prioritized
Unambiguous
Traceable
Credible source
Software Requirements
We should try to understand what sort of requirements may arise in the
requirement elicitation phase and what kinds of requirements are expected
from the software system.
Broadly, software requirements fall into two categories:
Functional Requirements
Requirements related to the functional aspect of the software fall into
this category.
They define functions and functionality within and from the software
system.
Examples -
A search option given to the user to search from various invoices.
The user should be able to mail any report to management.
Users can be divided into groups, and groups can be given
separate rights.
The software should comply with business rules and administrative
functions.
The software is developed keeping downward compatibility intact.
Non-Functional Requirements
Requirements not related to the functional aspect of the software fall
into this category. They are implicit or expected characteristics of the
software, which users simply assume.
Non-functional requirements include -
Security
Logging
Storage
Configuration
Performance
Cost
Interoperability
Flexibility
Disaster recovery
Accessibility
Requirements are categorized logically as must-have, should-have,
could-have, and wish-list items.
User Interface Requirements
The UI is an important part of any software. The software should be -
easy to operate
quick in response
effective in handling operational errors
provided with a simple yet consistent user interface
User acceptance majorly depends upon how the user can use the software.
The UI is the only way for users to perceive the system. A well-performing
software system must also be equipped with an attractive, clear,
consistent, and responsive user interface; otherwise the functionality of
the software system cannot be used in a convenient way. A system is said
to be good if it provides the means to use it efficiently. User interface
requirements are briefly mentioned below -
Content presentation
Easy Navigation
Simple interface
Responsive
Consistent UI elements
Feedback mechanism
Default settings
Purposeful layout
Strategic use of color and texture
Provision of help information
User-centric approach
Group-based view settings
Modularization
Modularization is a technique to divide a software system into multiple
discrete and independent modules, which are expected to be capable of
carrying out tasks independently. These modules may work as basic
constructs for the entire software. Designers tend to design modules such
that they can be executed and/or compiled separately and independently.
Modular design naturally follows the 'divide and conquer' problem-solving
strategy, and there are many other benefits attached to the modular
design of a software:
Smaller components are easier to maintain.
The program can be divided based on functional aspects.
The desired level of abstraction can be brought into the program.
Components with high cohesion can be re-used.
Concurrent execution can be made possible.
Modules make it easier to enforce security aspects.
Concurrency
In the past, all software was meant to be executed sequentially. By
sequential execution we mean that the coded instructions are executed one
after another, implying that only one portion of the program is active at
any given time. If a software product has multiple modules, then only one
of all the modules can be found active at any time of execution.
In software design, concurrency is implemented by splitting the software
into multiple independent units of execution, such as modules, and
executing them in parallel. In other words, concurrency provides the
software the capability to execute more than one part of its code in
parallel. It is necessary for programmers and designers to recognize
those modules that can be executed in parallel.
Example
The spell-check feature in a word processor is a module of the software
which runs alongside the word processor itself.
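A minimal sketch of this idea using Python threads (the spell-check
routine and document are illustrative): the checker module executes in
parallel with the main flow.
import threading
import time
def spell_check(words):
    # Background module: checks one word at a time while the editor runs.
    for word in words:
        time.sleep(0.1)           # simulate per-word checking work
        print(f"checked: {word}")
document = ["concurrency", "runs", "modules", "in", "parallel"]
checker = threading.Thread(target=spell_check, args=(document,))
checker.start()                   # spell check runs alongside the main flow
print("editor remains responsive while the checker runs")
checker.join()                    # wait for the background module to finish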
Cohesion
Cohesion is a measure that defines the degree of intra-dependability
within the elements of a module. The greater the cohesion, the better the
program design.
There are seven types of cohesion, namely – coincidental, logical,
temporal, procedural, communicational, sequential, and functional
cohesion (from lowest to highest).
Coupling
Coupling is a measure that defines the level of inter-dependability among
the modules of a program. It tells at what level the modules interfere
and interact with each other. The lower the coupling, the better the
program.
There are five levels of coupling, namely – content, common, control,
stamp, and data coupling (from highest to lowest).
Design Verification
The output of software design process is design documentation, pseudo
codes, detailed logic diagrams, process diagrams, and detailed description
of all functional or non-functional requirements.
The next phase, which is the implementation of software, depends on all
outputs mentioned above.
It then becomes necessary to verify the output before proceeding to the
next phase. The earlier a mistake is detected the better, as otherwise it
might not be detected until testing of the product. If the outputs of the
design phase are in formal notation, then the associated verification
tools should be used; otherwise, a thorough design review can be used for
verification and validation.
By structured verification approach, reviewers can detect defects that
might be caused by overlooking some conditions. A good design review is
important for good software design, accuracy and quality.
Types of DFD
Data Flow Diagrams are either Logical or Physical.
Levels of DFD
Level 0 - Highest abstraction level DFD is known as Level 0
DFD, which depicts the entire information system as one
diagram concealing all the underlying details. Level 0 DFDs are
also known as context level DFDs.
Level 1 - The Level 0 DFD is broken down into more specific,
Level 1 DFD. Level 1 DFD depicts basic modules in the system
and flow of data among various modules. Level 1 DFD also
mentions basic processes and sources of information.
Level 2 - At this level, DFD shows how data flows inside the
modules mentioned in Level 1.
Higher-level DFDs can be transformed into more specific lower-level
DFDs, with a deeper level of understanding, until the desired level of
specification is achieved.
Structure Charts
A structure chart is a chart derived from the data flow diagram. It
represents the system in more detail than a DFD. It breaks down the
entire system into the lowest functional modules, and describes the
functions and sub-functions of each module of the system in greater
detail than a DFD.
Structure chart represents hierarchical structure of modules. At each layer
a specific task is performed.
Here are the symbols used in construction of structure charts -
HIPO Diagram
A HIPO (Hierarchical Input Process Output) diagram is a combination of
two organized methods to analyze the system and provide the means of
documentation. The HIPO model was developed by IBM in 1970.
HIPO diagram represents the hierarchy of modules in the software system.
Analysts use HIPO diagrams to obtain a high-level view of system
functions. A HIPO diagram decomposes functions into sub-functions in a
hierarchical manner and depicts the functions performed by the system.
HIPO diagrams are good for documentation purposes. Their graphical
representation makes it easier for designers and managers to get a
pictorial idea of the system structure.
In contrast to IPO (Input Process Output) diagram, which depicts the flow
of control and data in a module, HIPO does not provide any information
about data flow or control flow.
Both parts of the HIPO diagram, the hierarchical presentation and the IPO
chart, are used for the structured design of a software program as well
as its documentation.
Structured English
Most programmers are unaware of the large picture of the software, so
they rely only on what their managers tell them to do. It is the
responsibility of higher software management to provide accurate
information to the programmers so that they can develop accurate yet fast
code.
Other methods, which use graphs or diagrams, may sometimes be interpreted
differently by different people. Both Structured English and pseudo-code
try to mitigate that understanding gap.
Structured English uses plain English words within the structured
programming paradigm. It is not the ultimate code, but a description of
what is required to code and how to code it. It helps the programmer
write error-free code. The following are some tokens of structured
programming:
IF-THEN-ELSE,
DO-WHILE-UNTIL
Analysts use the same variable and data names that are stored in the data
dictionary, making it much simpler to write and understand the code.
Example
Let us take the example of customer authentication in an online shopping
environment. The procedure to authenticate a customer can be written in
Structured English as:
Enter Customer_Name
SEEK Customer_Name in Customer_Name_DB file
IF Customer_Name found THEN
Call procedure USER_PASSWORD_AUTHENTICATE()
ELSE
PRINT error message
Call procedure NEW_CUSTOMER_REQUEST()
ENDIF
The code written in Structured English is more like day-to-day spoken
English. It cannot be implemented directly as software code. Structured
English is independent of programming language.
Pseudo-Code
Pseudo code is written closer to the programming language. It may be
considered an augmented programming language, full of comments and
descriptions.
Pseudo code avoids variable declarations, but it is written using the
constructs of an actual programming language, like C, Fortran, or Pascal.
Pseudo code contains more programming details than Structured English. It
provides a method to perform the task, as if a computer were executing
the code.
Example
Program to print Fibonacci up to n numbers.
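One possible pseudo code for this, as a sketch using the C-like
constructs described above:
// Print the first n Fibonacci numbers
READ n
SET a = 0, b = 1
FOR i = 1 TO n
    PRINT a
    SET next = a + b
    SET a = b
    SET b = next
END FOR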
Decision Tables
A Decision table represents conditions and the respective actions to be
taken to address them, in a structured tabular format.
It is a powerful tool to debug and prevent errors. It helps group similar
information into a single table, and then, by combining tables, it
delivers easy and convenient decision-making.
Example
Let us take a simple example of a day-to-day problem with our Internet
connectivity. We begin by identifying all the problems that can arise
while starting the Internet and their respective possible solutions.
We list all possible problems under the Conditions column and the
prospective actions under the Actions column.
Conditions        Rules:  1  2  3  4  5  6  7  8
Shows Connected           N  N  N  N  Y  Y  Y  Y
Opens Website             Y  N  Y  N  Y  N  Y  N
Actions
Do no action
Table : Decision Table – In-house Internet Troubleshooting
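Such a table can also be encoded directly in code. In the sketch below,
each rule maps condition values to an action; the actions shown are
illustrative, since the action rows of the original table are only
partially preserved.
# Each rule maps (shows_connected, opens_website) to an action.
RULES = {
    (True,  True):  "do no action",
    (True,  False): "restart web browser",
    (False, True):  "check connection indicator",
    (False, False): "contact service provider",
}
print(RULES[(True, False)])  # restart web browser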
Entity-Relationship Model
The Entity-Relationship model is a type of database model based on the
notion of real-world entities and the relationships among them. We can
map a real-world scenario onto the ER database model. The ER model
creates a set of entities with their attributes, a set of constraints,
and the relations among them.
The ER model is best used for the conceptual design of a database.
Data Dictionary
A data dictionary is the centralized collection of information about
data. It stores the meaning and origin of data, its relationships with
other data, its format for usage, etc. A data dictionary has rigorous
definitions of all names in order to facilitate users and software
designers.
A data dictionary is often referred to as a meta-data (data about data)
repository. It is created along with the DFD (Data Flow Diagram) model of
the software program and is expected to be updated whenever the DFD is
changed or updated.
Requirement of Data Dictionary
The data is referenced via the data dictionary while designing and
implementing software. The data dictionary removes any chance of
ambiguity and helps keep the work of programmers and designers
synchronized, by ensuring the same object reference is used everywhere in
the program.
The data dictionary provides a way of documenting the complete database
system in one place. Validation of the DFD is carried out using the data
dictionary.
Contents
Data dictionary should contain information about the following
Data Flow
Data Structure
Data Elements
Data Stores
Data Processing
Data flow is described by means of DFDs, as studied earlier, and is
represented in algebraic form as described below.
= Composed of
{} Repetition
() Optional
+ And
[/] Or
Example
Address = House No + (Street / Area) + City + State
Course ID = Course Number + Course Name + Course Level + Course
Grades
Data Elements
Data elements consist of Name and descriptions of Data and Control
Items, Internal or External data stores etc. with the following details:
Primary Name
Secondary Name (Alias)
Use-case (How and where to use)
Content Description (Notation etc. )
Supplementary Information (preset values, constraints etc.)
Data Store
It stores the information about where data enters into the system and
exits out of the system. The data store may include -
Files
o Internal to software.
o External to software but on the same machine.
o External to software and system, located on
different machine.
Tables
o Naming convention
o Indexing property
Data Processing
There are two types of data processing: logical (as the user sees it) and
physical (as the software sees it).
Structured Design
Structured design is a conceptualization of a problem into several
well-organized elements of solution. It is basically concerned with the
solution design. A benefit of structured design is that it gives a better
understanding of how the problem is being solved. Structured design also
makes it simpler for the designer to concentrate on the problem more
accurately.
Structured design is mostly based on the 'divide and conquer' strategy,
where a problem is broken into several small problems and each small
problem is individually solved until the whole problem is solved.
The small pieces of the problem are solved by means of solution modules.
Structured design emphasizes that these modules should be well organized
in order to achieve a precise solution.
These modules are arranged in a hierarchy and communicate with each
other. A good structured design always follows some rules for
communication among multiple modules, namely -
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling
arrangements.
Design Process
The whole system is seen in terms of how data flows in the system, by
means of the data flow diagram.
The DFD depicts how functions change the data and state of the entire
system.
The entire system is logically broken down into smaller units, known as
functions, on the basis of their operation in the system.
Each function is then described at large.
Bottom-up Design
The bottom-up design model starts with the most specific and basic
components. It proceeds by composing higher-level components from the
basic or lower-level ones. It keeps creating higher-level components
until the desired system evolves as one single component. With each
higher level, the amount of abstraction increases.
The bottom-up strategy is most suitable when a system needs to be created
from an existing system, whose basic primitives can be used in the newer
system.
Both the top-down and bottom-up approaches are impractical individually;
instead, a good combination of both is used.
GUI Elements
A GUI provides a set of components to interact with software or hardware.
Every graphical component provides a way to work with the system. A GUI
system has elements such as:
Sliders
Combo-box
Data-grid
Drop-down list
Example
Mobile GUI, computer GUI, touch-screen GUI, etc. Here is a list of a few
tools that come in handy to build GUIs:
FLUID
AppInventor (Android)
LucidChart
Wavemaker
Visual Studio
When we select a source file to view its complexity details in a metric
viewer, the metric report shows parameters such as the following:
Parameter   Meaning
N           Size: N1 + N2 (total operators plus total operands, per
            Halstead's metrics)
Cyclomatic complexity is computed as:
V(G) = e – n + 2
where
e is the total number of edges in the control flow graph
n is the total number of nodes in the control flow graph
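As a small worked illustration, the edge list below is a hypothetical
control-flow graph for a function containing a single IF; applying the
formula agrees with the rule of thumb "number of decisions + 1".
# V(G) = e - n + 2 over an explicit control-flow graph.
edges = [
    ("entry", "if"), ("if", "then"), ("if", "else"),
    ("then", "merge"), ("else", "merge"), ("merge", "exit"),
]
nodes = {endpoint for edge in edges for endpoint in edge}
v_g = len(edges) - len(nodes) + 2
print(v_g)  # 6 edges - 6 nodes + 2 = 2: one decision point, complexity 2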
Function Point
Function points are widely used to measure the size of software. Function
point analysis concentrates on the functionality provided by the system.
The features and functionality of the system are used to measure the
software complexity. Function point analysis counts five parameters:
external inputs, external outputs, logical internal files, external
interface files, and external inquiries. To consider the complexity of
the software, each parameter is further categorized as simple, average,
or complex, with the following weights:
Parameter    Simple   Average   Complex
Inputs       3        4         6
Outputs      4        5         7
Enquiry      3        4         6
Files        7        10        15
Interfaces   5        7         10
The table above yields raw function points. These function points are
adjusted according to the environment complexity. The system is described
using fourteen different characteristics:
Data communications
Distributed processing
Performance objectives
Operation configuration load
Transaction rate
Online data entry
End user efficiency
Online update
Complex processing logic
Re-usability
Installation ease
Operational ease
Multiple sites
Desire to facilitate changes
Each characteristic factor is then rated from 0 to 5, as mentioned
below:
No influence
Incidental
Moderate
Average
Significant
Essential
All ratings are then summed up as N. The value of N ranges from 0 to 70
(14 characteristics × a maximum rating of 5). It is used to calculate the
Complexity Adjustment Factor (CAF), using the following formula:
CAF = 0.65 + 0.01N
Then,
Delivered Function Points (FP)= CAF x Raw FP
This FP can then be used in various metrics, such as:
Cost = $ / FP
Quality = Errors / FP
Productivity = FP / person-month
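A quick worked sketch putting these formulas together; the parameter
counts and the rating total N below are assumed values, with all
parameters rated 'average' using the weight table above.
# Assumed counts, all rated 'average' (weights from the table above).
counts  = {"inputs": 10, "outputs": 7, "enquiries": 5, "files": 4, "interfaces": 2}
weights = {"inputs": 4,  "outputs": 5, "enquiries": 4, "files": 10, "interfaces": 7}
raw_fp = sum(counts[k] * weights[k] for k in counts)  # 40+35+20+40+14 = 149
N = 42                               # assumed sum of the 14 ratings (0..70)
caf = 0.65 + 0.01 * N                # = 1.07
fp = caf * raw_fp                    # delivered FP ~= 159.4
print(raw_fp, caf, round(fp, 1))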
Software Implementation
In this chapter, we will study about programming methods,
documentation and challenges in software implementation.
Structured Programming
In the process of coding, the lines of code keep multiplying, and thus
the size of the software increases. Gradually, it becomes next to
impossible to remember the flow of the program. If one forgets how the
software and its underlying programs, files, and procedures are
constructed, it becomes very difficult to share, debug, and modify the
program. The solution to this is structured programming. It encourages
the developer to use subroutines and loops instead of simple jumps in the
code, thereby bringing clarity to the code and improving its efficiency.
Structured programming also helps the programmer reduce coding time and
organize code properly.
Structured programming states how the program shall be coded.
Structured programming uses three main concepts:
Top-down analysis - A software is always made to perform
some rational work. This rational work is known as problem in
the software parlance. Thus it is very important that we
understand how to solve the problem. Under top-down
analysis, the problem is broken down into small pieces where
each one has some significance. Each problem is individually
solved and steps are clearly stated about how to solve the
problem.
Modular Programming - While programming, the code is
broken down into smaller groups of instructions. These groups
are known as modules, subprograms, or subroutines. Modular
programming is based on the understanding of top-down
analysis. It discourages jumps via 'goto' statements in the
program, which often make the program flow non-traceable.
Jumps are prohibited, and a modular format is encouraged in
structured programming (a small sketch follows this list).
Structured Coding - In reference with top-down analysis,
structured coding sub-divides the modules into further smaller
units of code in the order of their execution. Structured
programming uses control structure, which controls the flow of
the program, whereas structured coding uses control structure
to organize its instructions in definable patterns.
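A minimal sketch of these ideas in Python (the payroll task and names are
illustrative): the problem is analyzed top-down and coded as small
modules called in order, with loops rather than jumps.
# Top-down decomposition: the payroll problem is split into small
# subroutines; control flows through calls and loops, never jumps.
def read_records():
    # Stand-in for real input: (name, hours, hourly rate).
    return [("Asha", 40, 20.0), ("Ravi", 35, 22.5)]
def gross_pay(hours, rate):
    return hours * rate
def print_report(records):
    for name, hours, rate in records:
        print(f"{name}: {gross_pay(hours, rate):.2f}")
def main():
    print_report(read_records())
main()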
Functional Programming
Functional programming is a style of programming that uses the concepts
of mathematical functions. A function in mathematics always produces the
same result on receiving the same argument. In procedural languages, the
flow of the program runs through procedures, i.e. the control of the
program is transferred to the called procedure. While control flow is
transferred from one procedure to another, the program changes its state.
In procedural programming, it is possible for a procedure to produce
different results when it is called with the same argument, as the program
itself can be in different state while calling it. This is a property as well as
a drawback of procedural programming, in which the sequence or timing
of the procedure execution becomes important.
Functional programming provides means of computation as mathematical
functions, which produces results irrespective of program state. This
makes it possible to predict the behavior of the program.
Functional programming uses the following concepts:
First-class and higher-order functions - These functions
have the capability to accept another function as an argument,
or they return other functions as results.
Pure functions - These functions do not include destructive
updates; that is, they do not affect any I/O or memory, and if
they are not in use, they can easily be removed without
hampering the rest of the program.
Recursion - Recursion is a programming technique where a
function calls itself and repeats the program code in it unless
some pre-defined condition matches. Recursion is the way of
creating loops in functional programming.
Strict evaluation - It is a method of evaluating the
expression passed to a function as an argument. Functional
programming has two types of evaluation methods, strict
(eager) or non-strict (lazy). Strict evaluation always evaluates
the expression before invoking the function. Non-strict
evaluation does not evaluate the expression unless it is
needed.
λ-calculus - Most functional programming languages are based on
λ-calculus, which also underpins their type systems. λ-expressions
are executed by evaluating them as they occur.
Common Lisp, Scala, Haskell, Erlang and F# are some examples of
functional programming languages.
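Although the languages above express these ideas natively, a small Python
sketch can illustrate pure functions, recursion, and a higher-order
function:
# A pure function: same argument, same result, no external state touched.
def square(x):
    return x * x
# Recursion takes the place of an explicit loop.
def total(values):
    return 0 if not values else values[0] + total(values[1:])
# A higher-order function: accepts another function as an argument.
def apply_to_all(fn, values):
    return [fn(v) for v in values]
print(total(apply_to_all(square, [1, 2, 3])))  # 1 + 4 + 9 = 14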
Programming style
Programming style is a set of coding rules followed by all the
programmers writing the code. When multiple programmers work on the same
software project, they frequently need to work with program code written
by some other developer. This becomes tedious, or at times impossible, if
all developers do not follow a standard programming style to code the
program.
An appropriate programming style includes using function and variable
names relevant to the intended task, using well-placed indentation,
commenting code for the convenience of the reader, and good overall
presentation of the code. This makes the program code readable and
understandable by all, which in turn makes debugging and error-solving
easier. Also, proper coding style helps ease documentation and updating.
Coding Guidelines
Practice of coding style varies with organizations, operating systems and
language of coding itself.
The following coding elements may be defined under coding guidelines of
an organization:
Naming conventions - This section defines how to name
functions, variables, constants and global variables.
Indenting - This is the space left at the beginning of a line,
usually 2-8 whitespace characters or a single tab.
Whitespace - It is generally omitted at the end of a line.
Operators - Defines the rules of writing mathematical,
assignment and logical operators. For example, assignment
operator ‘=’ should have space before and after it, as in “x =
2”.
Control Structures - The rules of writing if-then-else, case-
switch, while-until and for control flow statements solely and in
nested fashion.
Line length and wrapping - Defines how many characters
should be on one line; mostly a line is 80 characters long.
Wrapping defines how a line should be wrapped if it is too long.
Functions - This defines how functions should be declared
and invoked, with and without parameters.
Variables - This mentions how variables of different data
types are declared and defined.
Comments - This is one of the important coding components,
as the comments included in the code describe what the code
actually does and all other associated descriptions. This
section also helps creating help documentations for other
developers.
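A short sketch of code following such guidelines (the retry helper is
illustrative, not taken from any standard): descriptive names, consistent
indentation, spaced operators, and intent-revealing comments.
MAX_RETRIES = 3  # constant: upper-case name, per the naming convention
def fetch_with_retry(fetch, max_retries=MAX_RETRIES):
    """Call fetch() up to max_retries times before giving up."""
    for attempt in range(1, max_retries + 1):
        result = fetch()  # assignment operator spaced, as in "x = 2"
        if result is not None:
            return result
    return None  # the caller decides how to handle exhaustion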
Software Documentation
Software documentation is an important part of the software process. A
well-written document provides a great tool and a means of information
repository necessary to know about the software process. Software
documentation also provides information about how to use the product.
A well-maintained documentation should involve the following documents:
Requirement documentation - This documentation works
as key tool for software designer, developer and the test team
to carry out their respective tasks. This document contains all
the functional, non-functional and behavioral description of the
intended software.
The sources for this document can be previously stored data about
the software, already-running software at the client's end,
client interviews, questionnaires, and research. Generally it is
stored in the form of a spreadsheet or word-processing
document with the high-end software management team.
This documentation works as foundation for the software to be
developed and is majorly used in verification and validation
phases. Most test-cases are built directly from requirement
documentation.
Software Design documentation - These documentations
contain all the necessary information, which are needed to
build the software. It contains: (a) High-level software
architecture, (b) Software design details, (c) Data flow
diagrams, (d) Database design
These documents work as repository for developers to
implement the software. Though these documents do not give
any details on how to code the program, they give all
necessary information that is required for coding and
implementation.
Technical documentation - These documentations are
maintained by the developers and actual coders. These
documents, as a whole, represent information about the code.
While writing the code, the programmers also mention the
objective of the code, who wrote it, where it will be required,
what it does and how it does it, and what other resources the
code uses, etc.
The technical documentation increases the understanding
between various programmers working on the same code. It
enhances re-use capability of the code. It makes debugging
easy and traceable.
Various automated tools are available, and some come
with the programming language itself. For example, Java
comes with the JavaDoc tool to generate technical documentation of
code.
User documentation - This documentation is different from
all of the above. All previous documentation is
maintained to provide information about the software and its
development process. But user documentation explains how
the software product should work and how it should be used to
get the desired results.
This documentation may include software installation
procedures, how-to guides, user guides, uninstallation methods,
and special references to get more information, like license
updates, etc.
Software Validation
Validation is the process of examining whether or not the software
satisfies the user requirements. It is carried out at the end of the
SDLC. If the software matches the requirements for which it was made, it
is validated.
Software Verification
Verification is the process of confirming that the software is developed
adhering to the proper specifications and methodologies, i.e. that the
product is being built the right way.
Testing Approaches
Tests can be conducted based on two approaches –
Functionality testing
Implementation testing
When functionality is tested without taking the actual implementation
into consideration, it is known as black-box testing. The other side is
known as white-box testing, where not only the functionality is tested but
also the way it is implemented is analyzed.
Exhaustive testing, in which every single possible value in the range of
the input and output values is tested, would be the ideal method for
perfect testing. However, it is not possible to test each and every value
in a real-world scenario if the range of values is large.
Black-box testing
In this testing method, the design and structure of the code are not known
to the tester, and testing engineers and end users conduct this test on the
software.
Black-box testing techniques:
Equivalence class - The input is divided into classes of similar
values. If one element of a class passes the test, the whole
class is assumed to pass.
Boundary values - The input is divided into higher- and lower-
end values. If these values pass the test, it is assumed that all
values in between may pass too.
Cause-effect graphing - In both of the previous methods, only
one input value is tested at a time. Cause (input) - effect
(output) graphing is a testing technique where combinations of
input values are tested in a systematic way.
Pair-wise testing - The behavior of software depends on
multiple parameters. In pairwise testing, the parameters are
tested pair-wise across their different values.
State-based testing - The system changes state on provision
of input, and such systems are tested based on their states and
inputs.
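As a minimal sketch of the boundary-value technique, suppose a
hypothetical function is_valid_age() accepts ages from 18 to 60; the test
cases then probe the values at and just outside each boundary:
#include <assert.h>

/* Hypothetical function under test: accepts ages in the range 18..60. */
static int is_valid_age(int age) {
    return age >= 18 && age <= 60;
}

int main(void) {
    assert(!is_valid_age(17));  /* just below the lower boundary */
    assert(is_valid_age(18));   /* at the lower boundary         */
    assert(is_valid_age(60));   /* at the upper boundary         */
    assert(!is_valid_age(61));  /* just above the upper boundary */
    return 0;
}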
White-box testing
It is conducted to test the program and its implementation, in order to
improve code efficiency or structure. It is also known as 'structural'
testing.
In this testing method, the design and structure of the code are known to
the tester, and the programmers of the code conduct this test on the code.
Below are some white-box testing techniques:
Control-flow testing - The purpose of control-flow testing is
to set up test cases that cover all statements and branch
conditions. The branch conditions are tested for being both
true and false, so that all statements can be covered.
Data-flow testing - This testing technique emphasizes covering
all the data variables included in the program. It tests where
the variables were declared and defined and where they were
used or changed.
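As a minimal sketch of control-flow testing, the hypothetical function
below contains a single branch condition; a covering test suite must drive
that condition both true and false so that every statement executes:
#include <assert.h>

/* Hypothetical function under test with one if branch. */
static int absolute(int x) {
    if (x < 0)
        return -x;  /* reached only when the condition is true  */
    return x;       /* reached only when the condition is false */
}

int main(void) {
    assert(absolute(-5) == 5);  /* exercises the true branch  */
    assert(absolute(3) == 3);   /* exercises the false branch */
    return 0;
}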
Testing Levels
Testing itself may be performed at various levels of the SDLC. The testing
process runs parallel to software development: before jumping to the
next stage, each stage is tested, validated and verified.
Testing separately is done just to make sure that there are no hidden
bugs or issues left in the software. Software is tested at various levels -
Unit Testing
While coding, the programmer performs some tests on each unit of the
program to know whether it is error-free. Testing is performed under the
white-box testing approach. Unit testing helps developers verify that the
individual units of the program work as per requirement and are error-free.
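A minimal sketch of a unit test in C, using a hand-rolled check() helper;
the unit under test, add(), is a hypothetical example:
#include <stdio.h>

/* Hypothetical unit under test. */
static int add(int a, int b) { return a + b; }

static int failures = 0;

/* Report each check so a failing unit is easy to locate. */
static void check(int condition, const char *name) {
    printf("%s: %s\n", condition ? "PASS" : "FAIL", name);
    if (!condition) failures++;
}

int main(void) {
    check(add(2, 3) == 5,  "add handles positive operands");
    check(add(-2, 2) == 0, "add handles a negative operand");
    return failures ? 1 : 0;  /* non-zero exit signals a failed test run */
}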
Integration Testing
Even if the units of software are working fine individually, there is a need
to find out whether the units, when integrated together, would also work
without errors; this covers, for example, argument passing and data
updates.
System Testing
The software is compiled as a product and then tested as a whole. This
can be accomplished using one or more of the following tests:
Functionality testing - Tests all functionalities of the
software against the requirements.
Performance testing - This test measures how efficient the
software is. It tests the effectiveness and the average time taken
by the software to do a desired task. Performance testing is
done by means of load testing and stress testing, where the
software is put under high user and data load under various
environment conditions.
Security & Portability - These tests are done when the
software is meant to work on various platforms and to be
accessed by a number of persons.
Acceptance Testing
When the software is ready to be handed over to the customer, it has to
go through the last phase of testing, where it is tested for user interaction
and response. This is important because even if the software matches all
user requirements, it may be rejected if the user does not like the way it
appears or works.
Alpha testing - The team of developers themselves perform
alpha testing by using the system as if it were being used in a
work environment. They try to find out how a user would react
to some action in the software and how the system should
respond to inputs.
Beta testing - After the software is tested internally, it is
handed over to the users to use under their production
environment, only for testing purposes. This is not yet the
delivered product. Developers expect that users at this stage
will surface minute problems that were previously overlooked.
Regression Testing
Whenever a software product is updated with new code, features or
functionality, it is tested thoroughly to detect any negative impact of the
added code. This is known as regression testing.
Testing Documentation
Testing documents are prepared at different stages -
Before Testing
Testing starts with test case generation. The following documents are
needed for reference:
SRS document - Functional Requirements document
Test Policy document - This describes how far testing should
take place before releasing the product.
Test Strategy document - This mentions detailed aspects of
the test team, the responsibility matrix, and the rights and
responsibilities of the test manager and test engineers.
Traceability Matrix document - This is an SDLC document
related to the requirement-gathering process. As new
requirements come in, they are added to this matrix. These
matrices help testers know the source of a requirement, and
requirements can be traced forward and backward.
While Being Tested
The following documents may be required while testing is underway:
Test Case document - This document contains the list of tests
required to be conducted. It includes the unit test plan, integration
test plan, system test plan and acceptance test plan.
Test description - This document is a detailed description of
all test cases and procedures to execute them.
Test case report - This document contains test case report
as a result of the test.
Test logs - This document contains test logs for every test
case report.
After Testing
The following documents may be generated after testing:
Test summary - The test summary is a collective analysis of all
test reports and logs. It summarizes and concludes whether the
software is ready to be launched. If it is, the software is released
under a version control system.
Types of maintenance
In a software's lifetime, the type of maintenance may vary based on its
nature. It may be just a routine maintenance task, such as a bug
discovered by some user, or it may be a large event in itself, based on
the size or nature of the maintenance. Following are some types of
maintenance based on their characteristics:
Corrective Maintenance - This includes modifications and
updates done in order to correct or fix problems that are either
discovered by users or concluded from user error reports.
Adaptive Maintenance - This includes modifications and
updates applied to keep the software product up to date and
tuned to the ever-changing world of technology and business
environments.
Perfective Maintenance - This includes modifications and
updates done in order to keep the software usable over a long
period of time. It includes new features and new user
requirements for refining the software and improving its
reliability and performance.
Preventive Maintenance - This includes modifications and
updates to prevent future problems in the software. It aims to
attend to problems that are not significant at this moment but
may cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on
estimating software maintenance found that it can be as high as 67% of
the cost of the entire software process cycle.
Maintenance Activities
IEEE provides a framework for sequential maintenance process activities.
It can be used in an iterative manner and can be extended so that
customized items and processes can be included.
Software Re-engineering
When we need to update software to keep it current with the market
without impacting its functionality, it is called software re-engineering. It
is a thorough process in which the design of the software is changed and
programs are re-written.
Legacy software cannot keep tuning with the latest technology available
in the market. As hardware becomes obsolete, updating the software
becomes a headache. Even if software grows old with time, its
functionality does not.
For example, Unix was initially developed in assembly language. When
the language C came into existence, Unix was re-engineered in C, because
working in assembly language was difficult.
Other than this, programmers sometimes notice that a few parts of the
software need more maintenance than others, and these also need
re-engineering.
Re-Engineering Process
Decide what to re-engineer. Is it the whole software or a part
of it?
Perform reverse engineering, in order to obtain specifications
of the existing software.
Restructure the program if required; for example, changing
function-oriented programs into object-oriented programs.
Re-structure data as required.
Apply forward engineering concepts in order to get the
re-engineered software.
There are a few important terms used in software re-engineering:
Reverse Engineering
It is a process to recover the system specification by thoroughly analyzing
and understanding the existing system. This process can be seen as a
reverse SDLC model, i.e., we try to get a higher abstraction level by
analyzing lower abstraction levels.
An existing system is a previously implemented design about which we
know nothing. Designers do reverse engineering by looking at the code
and trying to recover the design. With the design in hand, they try to
conclude the specifications; thus, they go in reverse, from code to system
specification.
Program Restructuring
It is a process to re-structure and re-construct the existing software. It is
all about re-arranging the source code, either in the same programming
language or from one programming language to a different one.
Restructuring can involve source-code restructuring, data restructuring,
or both.
Re-structuring does not impact the functionality of the software but
enhances reliability and maintainability. Program components that cause
errors very frequently can be changed or updated with re-structuring.
The dependency of the software on an obsolete hardware platform can be
removed via re-structuring, as the small sketch below illustrates.
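As a small illustration, the sketch below restructures goto-based control
flow into an equivalent structured loop; both functions print 0 to 4, so the
functionality is unchanged while readability improves (the code is
illustrative, not taken from any particular system):
#include <stdio.h>

/* Before: unstructured control flow expressed with goto. */
void count_goto(void) {
    int i = 0;
top:
    if (i >= 5) goto done;
    printf("%d\n", i);
    i++;
    goto top;
done:
    return;
}

/* After: the same behavior expressed as a structured loop. */
void count_loop(void) {
    for (int i = 0; i < 5; i++)
        printf("%d\n", i);
}

int main(void) {
    count_goto();
    count_loop();
    return 0;
}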
Forward Engineering
Forward engineering is the process of obtaining the desired software from
the specifications in hand, which were brought down by means of reverse
engineering. It assumes that some software engineering was already done
in the past.
Forward engineering is the same as the software engineering process,
with only one difference: it is always carried out after reverse engineering.
Component reusability
A component is a part of the software program code that executes an
independent task in the system. It can be a small module or a sub-system
in itself.
Example
The login procedures used on the web can be considered components; the
printing system in software can be seen as a component of the software.
Components have high cohesion of functionality and a lower rate of
coupling, i.e., they work independently and can perform tasks without
depending on other modules.
In OOP, objects are designed very specifically to their concern and have
fewer chances of being used in other software.
In modular programming, modules are coded to perform specific tasks
that can be used across a number of other software programs.
There is a whole new vertical, which is based on re-use of software
component, and is known as Component Based Software Engineering
(CBSE).
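A minimal sketch of a reusable component in C (all names are illustrative):
a small counter module that exposes a narrow interface and depends on no
other modules, so any client program can reuse it as-is:
#include <stdio.h>

/* The component: a self-contained counter with a narrow interface. */
typedef struct { int value; } Counter;

void counter_init(Counter *c)      { c->value = 0; }
void counter_increment(Counter *c) { c->value++; }
int  counter_get(const Counter *c) { return c->value; }

/* A client program reusing the component. */
int main(void) {
    Counter c;
    counter_init(&c);
    counter_increment(&c);
    counter_increment(&c);
    printf("count = %d\n", counter_get(&c));
    return 0;
}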
CASE Tools
CASE tools are a set of software application programs used to automate
SDLC activities. CASE tools are used by software project managers,
analysts and engineers to develop software systems.
There are a number of CASE tools available to simplify various stages of
the Software Development Life Cycle, such as analysis tools, design tools,
project management tools, database management tools and
documentation tools, to name a few.
The use of CASE tools accelerates the development of a project to produce
the desired result and helps to uncover flaws before moving ahead with
the next stage of software development.
Diagram tools
These tools are used to represent system components, data and control
flow among various software components and system structure in a
graphical form. For example, Flow Chart Maker tool for creating state-of-
the-art flowcharts.
Documentation Tools
Documentation in a software project starts prior to the software process,
continues throughout all phases of the SDLC, and extends beyond the
completion of the project.
Documentation tools generate documents for technical users and end
users. Technical users are mostly in-house professionals of the
development team who refer to system manual, reference manual,
training manual, installation manuals etc. The end user documents
describe the functioning and how-to of the system such as user manual.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
Analysis Tools
These tools help to gather requirements, automatically check for any
inconsistency, inaccuracy in the diagrams, data redundancies or
erroneous omissions. For example, Accept 360, Accompa, CaseComplete
for requirement analysis, Visible Analyst for total analysis.
Design Tools
These tools help software designers to design the block structure of the
software, which may further be broken down into smaller modules using
refinement techniques. These tools provide detailing of each module and
the interconnections among modules. For example, Animated Software Design.
Programming Tools
These tools consist of programming environments such as an IDE (Integrated
Development Environment), built-in module libraries and simulation tools.
These tools provide comprehensive aid in building the software product and
include features for simulation and testing. For example, Cscope to search
code in C, and Eclipse.
Prototyping Tools
A software prototype is a simulated version of the intended software product.
A prototype provides the initial look and feel of the product and simulates a
few aspects of the actual product.
Prototyping CASE tools essentially come with graphical libraries. They can
create hardware-independent user interfaces and designs. These tools help
us to build rapid prototypes based on existing information. In addition,
they provide simulation of the software prototype. For example, Serena
Prototype Composer and Mockup Builder.
Maintenance Tools
Software maintenance includes modifications in the software product after
it is delivered. Automatic logging and error-reporting techniques,
automatic error-ticket generation and root-cause analysis are a few CASE
tools that help software organizations in the maintenance phase of the
SDLC. For example, Bugzilla for defect tracking, and HP Quality Center.
Debugging
The goal of debugging is to catch and correct errors, especially at an early
stage, and to provide tools to support bug finding should bugs happen later
in production. Debugging is primarily performed in relation to coding. For
example:
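One common debugging aid is conditional trace output that can be compiled
out of production builds; the sketch below is illustrative (the DEBUG macro
and the divide() function are assumptions made for this example):
#include <stdio.h>

#define DEBUG 1  /* set to 0 to silence trace output in production builds */

int divide(int a, int b) {
#if DEBUG
    fprintf(stderr, "divide: a=%d, b=%d\n", a, b);  /* trace the inputs */
#endif
    return a / b;  /* the trace above helps locate a crash when b == 0 */
}

int main(void) {
    printf("%d\n", divide(10, 2));
    return 0;
}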
Antibugging
The goal of antibugging is to prevent bugs from happening. This activity is
performed throughout the whole development process. For example:
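A typical antibugging measure is validating inputs before use, so that an
invalid state is rejected rather than propagated; a minimal, illustrative
sketch (safe_divide() is a hypothetical name):
#include <stdio.h>
#include <stddef.h>

/* Defensive division: refuses bad arguments instead of failing later. */
int safe_divide(int a, int b, int *result) {
    if (result == NULL || b == 0)
        return -1;      /* report an error instead of dividing by zero */
    *result = a / b;
    return 0;
}

int main(void) {
    int value;
    if (safe_divide(10, 0, &value) != 0)
        printf("invalid division rejected\n");
    return 0;
}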
Computer Languages
A computer is simply a machine; it cannot perform any work on its own.
To make it functional, different languages have been developed, which are
known as programming languages or simply computer languages.
Over the last two decades, dozens of computer languages have been
developed. Each of these languages comes with its own vocabulary and
rules, better known as its syntax. Furthermore, when writing in a computer
language, the syntax has to be followed exactly, as even a small mistake
will result in an error and fail to generate the required output.
Machine Language
Assembly Language
System Language
Scripting Language
Machine Language
This is the language written for the computer hardware; it is executed
directly by the central processing unit (CPU) of the computer system.
High-Level Language
High-level languages are very important, as they help in developing complex software and they have
the following advantages −
Unlike assembly language or machine language, users do not need to know
the details of the underlying hardware in order to work with a high-level
language.
High-level languages are similar to natural languages and are therefore
easy to learn and understand.
A high-level language is designed in such a way that many errors are
detected immediately, at compile time.
Although a high-level language has many benefits, it also has a drawback:
it offers poor control over the machine/hardware.
.NET Framework
.NET is a framework for developing software applications. It was designed
and developed by Microsoft, and the first beta version was released in 2000.
It is used to develop applications for the web, Windows, and phones.
Moreover, it provides a broad range of functionality and support.
The .NET Framework stack comprises the following modules and
components.
ASP.NET
ASP.NET is a web framework designed and developed by Microsoft. It is
used to develop websites, web applications, and web services. It provides
a fantastic integration of HTML, CSS, and JavaScript. It was first released
in January 2002.
ADO.NET
ADO.NET is a module of the .NET Framework which is used to establish a
connection between an application and data sources, such as SQL Server
and XML. ADO.NET consists of classes that can be used to connect,
retrieve, insert, and delete data.
WF (Workflow Foundation)
Windows Workflow Foundation (WF) is a Microsoft technology that
provides an API, an in-process workflow engine, and a rehostable designer
to implement long-running processes as workflows within .NET
applications.
Entity Framework
It is an ORM-based open-source framework which is used to work with a
database using .NET objects. It eliminates much of the developers' effort
to handle the database, and it is Microsoft's recommended technology for
dealing with databases.
Parallel LINQ
Parallel LINQ, or PLINQ, is a parallel implementation of LINQ to Objects. It
combines the simplicity and readability of LINQ with the power of parallel
programming.
It can speed up the execution of a LINQ query by using all available
computer capabilities.
Apart from the above features and libraries, .NET includes other APIs and
models that improve and enhance the framework. The Task Parallel Library
was introduced in .NET 4.0, and a task-based asynchronous model was
added in .NET 4.5.
Why use C?
C was initially used for system development work, particularly for the
programs that make up the operating system. C was adopted as a system
development language because it produces code that runs nearly as fast
as code written in assembly language. Some examples of the use of C
might be:
Operating Systems
Language Compilers
Assemblers
Text Editors
Print Spoolers
Network Drivers
Modern Programs
Databases
Language Interpreters
Utilities
C Programs
A C program can vary from 3 lines to millions of lines, and it should be
written in one or more text files with the extension ".c"; for
example, hello.c. You can use "vi", "vim" or any other text editor to write
your C program into a file.
This tutorial assumes that you know how to edit a text file and how to
write source code inside a program file.
C - Environment Setup
Text Editor
This will be used to type your program. Examples of a few editors include
Windows Notepad, the OS Edit command, Brief, Epsilon, EMACS, and vim
or vi. The name and version of text editors can vary across operating
systems. For example, Notepad is used on Windows, and vim or vi can be
used on Windows as well as on Linux or UNIX.
The files you create with your editor are called the source files and they
contain the program source codes. The source files for C programs are
typically named with the extension ".c".
Before starting your programming, make sure you have one text editor in
place and you have enough experience to write a computer program, save
it in a file, compile it and finally execute it.
The C Compiler
The source code written in a source file is the human-readable source for
your program. It needs to be "compiled" into machine language so that
your CPU can actually execute the program as per the instructions given.
The compiler compiles the source code into the final executable program.
The most frequently used and freely available compiler is the GNU C/C++
compiler; otherwise you can have compilers from HP or Solaris if you have
the respective operating systems.
The following section explains how to install GNU C/C++ compiler on
various OS. We keep mentioning C/C++ together because GNU gcc
compiler works for both C and C++ programming languages.
Installation on UNIX/Linux
If you are using Linux or UNIX, then check whether GCC is installed on
your system by entering the following command from the command line −
$ gcc -v
If you have GNU compiler installed on your machine, then it should print a
message as follows −
Using built-in specs.
Target: i386-redhat-linux
Configured with: ../configure --prefix=/usr .......
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
If GCC is not installed, then you will have to install it yourself using the
detailed instructions available at https://gcc.gnu.org/install/
This tutorial has been written based on Linux, and all the given examples
have been compiled on the CentOS flavor of the Linux system.
Installation on Mac OS
If you use Mac OS X, the easiest way to obtain GCC is to download the
Xcode development environment from Apple's web site and follow the
simple installation instructions. Once you have Xcode setup, you will be
able to use GNU compiler for C/C++.
Xcode is currently available at developer.apple.com/technologies/tools/.
Installation on Windows
To install GCC on Windows, you need to install MinGW. To install MinGW,
go to the MinGW homepage, www.mingw.org, and follow the link to the
MinGW download page. Download the latest version of the MinGW
installation program, which should be named MinGW-<version>.exe.
While installing MinGW, at a minimum, you must install gcc-core, gcc-g++,
binutils, and the MinGW runtime, but you may wish to install more.
Add the bin subdirectory of your MinGW installation to
your PATH environment variable, so that you can specify these tools on
the command line by their simple names.
After the installation is complete, you will be able to run gcc, g++, ar,
ranlib, dlltool, and several other GNU tools from the Windows command
line.
Features of C Language
C is a widely used language. It provides many features, which are given
below.
1. Simple
2. Machine Independent or Portable
3. Mid-level programming language
4. Structured programming language
5. Rich Library
6. Memory Management
7. Fast Speed
8. Pointers
9. Recursion
10. Extensible
1) Simple
C is a simple language in the sense that it provides a structured
approach (to break the problem into parts), a rich set of library
functions, data types, etc.
5) Rich Library
C provides a lot of inbuilt functions that make development fast.
6) Memory Management
It supports the feature of dynamic memory allocation. In C language,
we can free the allocated memory at any time by calling
the free() function.
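A minimal sketch of dynamic allocation and release:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *numbers = (int *) malloc(5 * sizeof(int));  /* allocate at run time */
    if (numbers == NULL)
        return 1;                                    /* allocation failed */
    for (int i = 0; i < 5; i++)
        numbers[i] = i * i;
    printf("numbers[4] = %d\n", numbers[4]);
    free(numbers);                                   /* release the memory */
    return 0;
}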
7) Speed
The compilation and execution time of the C language is fast, since there
are fewer inbuilt functions and hence lower overhead.
8) Pointer
C provides the feature of pointers. We can interact directly with memory
by using pointers. We can use pointers for memory, structures, functions,
arrays, etc.
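A minimal sketch of direct memory access through a pointer:
#include <stdio.h>

int main(void) {
    int value = 42;
    int *ptr = &value;                  /* ptr holds the address of value   */
    printf("address: %p\n", (void *) ptr);
    printf("content: %d\n", *ptr);      /* dereference to read the memory   */
    *ptr = 100;                         /* modify value through the pointer */
    printf("value is now %d\n", value);
    return 0;
}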
9) Recursion
In C, we can call a function within the same function; this provides code
reusability for every function. Recursion also enables us to use the
approach of backtracking.
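A minimal sketch of recursion, computing a factorial:
#include <stdio.h>

/* The function calls itself until the base case is reached. */
long factorial(int n) {
    if (n <= 1)
        return 1;                 /* base case stops the recursion       */
    return n * factorial(n - 1);  /* recursive call on a smaller problem */
}

int main(void) {
    printf("5! = %ld\n", factorial(5));
    return 0;
}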
10) Extensible
C language is extensible because it can easily adopt new features.
First C Program
Before starting with the ABC of the C language, you need to learn how to
write, compile and run your first C program.
To write the first C program, open the C console and write the following
code:
#include <stdio.h>
int main() {
    printf("Hello C Language");
    return 0;
}
return 0 - The return 0 statement returns the execution status to the OS.
The value 0 indicates successful execution, while a non-zero value
indicates unsuccessful execution.
By menu
Now click on the Compile menu and then the Compile submenu to compile
the C program.
Then click on the Run menu and then the Run submenu to run the C
program.
By shortcut
Or press the Ctrl+F9 keys to compile and run the program directly.
You can view the user screen at any time by pressing the Alt+F5 keys.
Compilation process in c
What is a compilation?
Compilation is the process of converting source code into object code, and
it is done with the help of the compiler. The compiler checks the source
code for syntactical or structural errors, and if the source code is
error-free, it generates the object code.
The C compilation process converts the source code taken as input into
object code or machine code. The compilation process can be divided into
four steps, i.e., preprocessing, compiling, assembling, and linking.
The preprocessor takes the source code as input and removes all
comments from it. It also interprets the preprocessor directives; for
example, if the directive #include <stdio.h> is present in the program,
the preprocessor replaces it with the content of the 'stdio.h' file.
The following are the phases through which our program passes before
being transformed into an executable form:
o Preprocessor
o Compiler
o Assembler
o Linker
Preprocessor
The source code is the code written in a text editor, and the source code
file is given the extension ".c". This source code is first passed to the
preprocessor, which expands it. After the code has been expanded, it is
passed to the compiler.
Compiler
The code which is expanded by the preprocessor is passed to the compiler,
which converts it into assembly code. In other words, the C compiler
converts the pre-processed code into assembly code.
Assembler
The assembly code is converted into object code by the assembler. The
name of the object file generated by the assembler is the same as that of
the source file. The extension of the object file in DOS is '.obj', and in
UNIX it is '.o'. If the name of the source file is 'hello.c', then the name of
the object file will be 'hello.obj' (or 'hello.o').
Linker
Mainly, all programs written in C use library functions. These library
functions are pre-compiled, and their object code is stored in library files
with a '.lib' (or '.a') extension. The main job of the linker is to combine
the object code of these library files with the object code of our program.
Sometimes our program refers to functions defined in other files; the
linker then links the object code of those files to our program as well.
Therefore, the job of the linker is to link the object code of our program
with the object code of the library files and other files. The output of the
linker is the executable file, whose name is the same as that of the source
file but with a different extension: in DOS, the extension of the executable
file is '.exe', while in UNIX the executable can be named 'a.out'. For
example, if we use the printf() function in a program, the linker adds its
associated code to the output file.
hello.c
#include <stdio.h>
int main()
{
    printf("Hello javaTpoint");
    return 0;
}
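For the hello.c program above, each phase can be run separately with
standard gcc flags (file names follow the usual conventions):
$ gcc -E hello.c -o hello.i    # preprocessing: expands #include and macros
$ gcc -S hello.i -o hello.s    # compiling: produces assembly code
$ gcc -c hello.s -o hello.o    # assembling: produces the object file
$ gcc hello.o -o hello         # linking: produces the executable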
Tokens in C
A C program consists of various tokens and a token is either a keyword,
an identifier, a constant, a string literal, or a symbol. For example, the
following C statement consists of five tokens −
printf("Hello, World! \n");
The individual tokens are −
printf
(
"Hello, World! \n"
)
;
Semicolons
In a C program, the semicolon is a statement terminator. That is, each
individual statement must be ended with a semicolon. It indicates the end
of one logical entity.
Given below are two different statements −
printf("Hello, World! \n");
return 0;
Comments
Comments are like helping text in your C program and they are ignored by
the compiler. They start with /* and terminate with the characters */ as
shown below −
/* my first program in C */
You cannot have comments within comments, and they do not occur
within string or character literals.
Identifiers
A C identifier is a name used to identify a variable, function, or any other
user-defined item. An identifier starts with a letter A to Z, a to z, or an
underscore '_' followed by zero or more letters, underscores, and digits (0
to 9).
C does not allow punctuation characters such as @, $, and % within
identifiers. C is a case-sensitive programming language.
Thus, Manpower and manpower are two different identifiers in C. Here are
some examples of acceptable identifiers −
mohd zara abc move_name a_123
myname50 _temp j a23b9 retVal
Keywords
The following list shows the reserved words in C. These reserved words
may not be used as constants or variables or any other identifier names.
auto break case char const continue default do double else enum extern
float for goto if int long register return short signed sizeof static struct
switch typedef union unsigned void volatile while
Whitespace in C
A line containing only whitespace, possibly with a comment, is known as a
blank line, and a C compiler totally ignores it.
Whitespace is the term used in C to describe blanks, tabs, newline
characters and comments. Whitespace separates one part of a statement
from another and enables the compiler to identify where one element in a
statement, such as int, ends and the next element begins. Therefore, in
the following statement −
int age;
there must be at least one whitespace character (usually a space)
between int and age for the compiler to be able to distinguish them. On
the other hand, in the following statement −
fruit = apples + oranges; // get the total fruit
no whitespace characters are necessary between fruit and =, or between
= and apples, although you are free to include some if you wish to
increase readability.
C - Data Types
Data types in C refer to an extensive system used for declaring variables
or functions of different types. The type of a variable determines how
much space it occupies in storage and how the bit pattern stored is
interpreted.
The types in C can be classified as follows –
1. Basic types - These are arithmetic types, further classified
into (a) integer types and (b) floating-point types.
2. Enumerated types - These are again arithmetic types, used
to define variables that can only be assigned certain discrete
integer values throughout the program.
3. The type void - The type specifier void indicates that no
value is available.
4. Derived types - These include (a) pointer types, (b) array
types, (c) structure types, (d) union types and (e) function
types.
The array types and structure types are referred to collectively as the
aggregate types. The type of a function specifies the type of the function's
return value. We will see the basic types in the following section, whereas
the other types will be covered in the upcoming chapters.
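For instance, an enumerated type constrains a variable to a fixed set of
discrete integer values:
#include <stdio.h>

enum weekday { MON, TUE, WED, THU, FRI };  /* MON = 0, TUE = 1, and so on */

int main(void) {
    enum weekday today = WED;
    printf("day index: %d\n", today);      /* prints 2 */
    return 0;
}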
Integer Types
The following table provides the details of standard integer types with
their storage sizes and value ranges −
int - storage size 2 or 4 bytes - value range -32,768 to 32,767 (2 bytes)
or -2,147,483,648 to 2,147,483,647 (4 bytes)
To get the exact range of each type on a particular platform, you can print
the macros defined in the limits.h header, as the following program does:
#include <stdio.h>
#include <limits.h>

int main(void) {
   printf("CHAR_BIT : %d\nCHAR_MAX : %d\nCHAR_MIN : %d\n", CHAR_BIT, CHAR_MAX, CHAR_MIN);
   printf("INT_MAX : %d\nINT_MIN : %d\n", INT_MAX, INT_MIN);
   printf("LONG_MAX : %ld\nLONG_MIN : %ld\n", (long) LONG_MAX, (long) LONG_MIN);
   printf("SCHAR_MAX : %d\nSCHAR_MIN : %d\nSHRT_MAX : %d\nSHRT_MIN : %d\n", SCHAR_MAX, SCHAR_MIN, SHRT_MAX, SHRT_MIN);
   printf("UCHAR_MAX : %d\nUINT_MAX : %u\n", UCHAR_MAX, UINT_MAX);
   printf("ULONG_MAX : %lu\nUSHRT_MAX : %d\n", (unsigned long) ULONG_MAX, USHRT_MAX);
   return 0;
}
When you compile and execute the above program, it produces the
following result on Linux −
CHAR_BIT : 8
CHAR_MAX : 127
CHAR_MIN : -128
INT_MAX : 2147483647
INT_MIN : -2147483648
LONG_MAX : 9223372036854775807
LONG_MIN : -9223372036854775808
SCHAR_MAX : 127
SCHAR_MIN : -128
SHRT_MAX : 32767
SHRT_MIN : -32768
UCHAR_MAX : 255
UINT_MAX : 4294967295
ULONG_MAX : 18446744073709551615
USHRT_MAX : 65535
What is Correctness?
Correctness, from a software engineering perspective, can be defined as
adherence to the specifications that determine how users can interact
with the software and how the software should behave when it is used
correctly.
If the software behaves incorrectly, it might take a considerable amount
of time to achieve the task, or it may sometimes be impossible to achieve
it.
Important rules:
Below are some of the important rules for effective programming, which
are consequences of the program correctness theory:
Define the problem completely.
Develop the algorithm and then the program logic.
Reuse proven models as much as possible.
Prove the correctness of algorithms during the design phase.
Pay attention to the clarity and simplicity of the program.
Verify each part of a program as soon as it is developed.